WO2007092512A2 - Driver drowsiness and distraction monitor - Google Patents

Driver drowsiness and distraction monitor

Info

Publication number
WO2007092512A2
WO2007092512A2 PCT/US2007/003287
Authority
WO
WIPO (PCT)
Prior art keywords
image
wavelength
driver
eye
light
Prior art date
Application number
PCT/US2007/003287
Other languages
French (fr)
Other versions
WO2007092512A3 (en)
Inventor
Richard Grace
Original Assignee
Attention Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Attention Technologies, Inc. filed Critical Attention Technologies, Inc.
Publication of WO2007092512A2 publication Critical patent/WO2007092512A2/en
Publication of WO2007092512A3 publication Critical patent/WO2007092512A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/06 Alarms for ensuring the safety of persons indicating a condition of sleep, e.g. anti-dozing alarms

Definitions

  • the present invention relates to an apparatus and method for monitoring a subject's eyes and, more particularly, to an apparatus and method in which light of two different wavelengths is analyzed to monitor a subject's eyes while driving.
  • Perclos, generally, is a measure of the proportion of time that a subject's eyes are closed, either completely or beyond a predetermined point. For example, perclos may be the proportion of time that a subject's eyes are between 80% and 100% closed.
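The P80 variant of perclos described above can be computed directly from per-frame eye closure estimates. The following sketch is illustrative only: the function name, the one-sample-per-frame scheme, and the 80% default threshold are assumptions, not details from the patent.

```python
import numpy as np

def perclos(closure_fractions, threshold=0.8):
    """Proportion of samples in which the eye is at least `threshold`
    closed (the P80 definition when threshold is 0.8)."""
    closures = np.asarray(closure_fractions, dtype=float)
    return float(np.mean(closures >= threshold))

# One hypothetical sample per video frame: fraction of the eye closed.
samples = [0.1, 0.2, 0.9, 1.0, 0.85, 0.3, 0.1, 0.95, 0.2, 0.4]
print(perclos(samples))  # 4 of 10 samples are at least 80% closed -> 0.4
```

In a real system the closure fractions would be derived from the pupil area measured in successive difference images rather than supplied by hand.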
  • the driver's eyelids can be closed for a period of between 2 and 30 seconds. This eyelid closure often repeats many times over the period of an hour or more before a driver stops and rests. Often, the measurement of perclos must be done manually, such as by videotaping the subject, reviewing the tape, and measuring the subject's perclos. Such a method, of course, is not practical for many applications, such as the detection of a fatigued driver.
  • Another method of determining perclos involves the use of an image sensor, such as a video camera, and image processing software to monitor the subject, determine the location of the subject's eyes, and determine the subject's perclos.
  • Such a method is time consuming and often cannot be performed in real time, thereby prohibiting it from being used to determine the drowsiness of a driver of a motor vehicle.
  • One attempt to overcome this problem is to monitor only a portion of the subject's face, the portion containing the subject's eyes, thereby reducing the amount of processing required to determine perclos.
  • This approach creates other difficulties because the subject's eyes must be tracked as they move to monitor the road. As the subject's eyes move, the subject's head and body also move.
  • Video-based monitoring systems have the potential to identify slow eyelid closures. These systems can be categorized as systems using structured illumination sources and systems not using structured illumination sources. Structured illumination sources are designed to accentuate objects of interest in a scene. In eye tracking systems, structured illumination is often an infrared illumination source used to accentuate the image of the subject's eyes. Systems using structured illumination need less processing and are less susceptible to sudden changes in environmental lighting conditions. However, although structured illumination source based systems are effective for nighttime operation, they typically will not function well in daylight.
  • United States Patent No. 6,082,858 describes a structured illumination-based system that can detect the position of a driver's eyes in low light conditions using two images.
  • a first image is captured of a driver's eyes reflecting a first illumination source that produces bright pupils.
  • a second image is also captured of the driver's eyes reflecting a second illumination source that produces dark pupils.
  • the second image is subtracted from the first image to produce a third image that shows just the bright pupils with all other image features eliminated.
  • a bright pupil image is collected using an 850 nm illumination source mounted close to the camera lens. This image includes a bright object 1 caused by reflection of the light source on the subject's glasses, and bright pupils 2 and 3.
  • a dark pupil image is collected using a 950 nm light source mounted slightly farther away from the camera lens.
  • the image includes a bright object 4 caused by the reflection of the light source from the subject's glasses and dark pupils 5 and 6.
  • in FIG. 1c, a difference image is produced by subtracting the image in FIG. 1b from the image in FIG. 1a.
  • the image in FIG. 1c is black except for the two bright eye objects 7 and 8 corresponding to the subject's eyes.
  • the bright objects 1 and 4 are eliminated in the subtraction process because the bright objects 1 and 4 are the same shape, have the same brightness, and are located at the same coordinates within the two images.
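The cancellation described above can be illustrated with a toy example: a glasses glint that is identical in both frames subtracts to zero, while the pupils, bright only in the first frame, survive. The array sizes and brightness values below are illustrative assumptions.

```python
import numpy as np

# Two 8-bit grayscale frames of the same scene.
bright = np.zeros((6, 6), dtype=np.uint8)  # bright pupil image
dark = np.zeros((6, 6), dtype=np.uint8)    # dark pupil image
bright[1, 1] = dark[1, 1] = 250    # glint: same shape, brightness, coords
bright[3, 2] = bright[3, 4] = 220  # pupils: bright only in first frame

# Subtract the dark pupil image from the bright pupil image, clipping
# at zero so common features cancel instead of wrapping around.
diff = np.clip(bright.astype(int) - dark.astype(int), 0, 255).astype(np.uint8)

print(diff[1, 1])              # 0 -> glint eliminated
print(diff[3, 2], diff[3, 4])  # 220 220 -> only the pupils remain
```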
  • the bright pupil image and the dark pupil image must be nearly identically oriented in order for the stray bright images to be eliminated.
  • the need for nearly identically oriented images can be problematic in the case where a single camera is used to collect consecutive images separated in time. In this case, the coordinates of the images can be quite different if the subject moves or if there is a rapid change in environmental lighting. In practice, drivers often move their heads at a rate that can cause substantial changes in the two consecutively collected images. Also, rapidly moving headlights from oncoming traffic can cause substantial changes in the orientation of the bright pupil image and the dark pupil image.
  • Prior art devices and methods for determining perclos typically have difficulty finding and monitoring the subject's eyes.
  • the prior art devices often cannot distinguish between the subject's eyes and other sources of light and reflected light, such as is caused by dashboard lights, lights from other vehicles, and street lights. Problems arising from reflected lights are often further exaggerated when the subject is wearing glasses.
  • Example prior art devices have been described in United States Patent Nos. 4,953,111; 5,801,390; and 5,231,674, as well as Japanese Patent Nos. 2-138673; 52-54291; and 9-62828. Accordingly, there is a need for a device and method for monitoring the eyes of a subject, such as can be used to determine perclos, which can operate in real time, can account for subject movement, and is insensitive to other sources of light.
  • An image sensor has been developed which includes a first source of light having a first wavelength, and a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength.
  • the image sensor further includes at least one optics module for receiving reflected light having the first wavelength, and producing a first image corresponding to the size and location of the reflected light having the first wavelength.
  • the optics module is also capable of receiving a subsequent reflected light having the second wavelength, and producing a second image corresponding to the reflected light having the second wavelength.
  • the image sensor also includes a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, screening the portions of the third image that are above a brightness threshold, and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
  • the image sensor may utilize a first wavelength that is highly reflected when optically aligned with a human eye and a second wavelength that is highly absorbed when optically aligned with the human eye.
  • the first wavelength may be up to about 915 nm, such as from about 820-915 nm, and the second wavelength can be from about 940-960 nm.
  • the first source of light and the second source of light may be optically aligned with the eye of a driver, and the image sensor may receive light having the first wavelength and the second wavelength reflected from the eye of the driver.
  • the controller may produce a third image indicative of the first image subtracted from the second image, and may select portions of the third image by determining the presence of a corneal reflection within the second image substantially adjacent a bright object in a third image.
  • the controller may also select portions of the third image by determining the presence of a bright object within the first image adjacent a bright object within the third image that correspond substantially to the size and/or shape of a human eye.
  • the controller may further utilize informational differences between successively captured third images to calculate perclos of the subject.
  • the image sensor may further include a driver interface for providing a visual, audible, and/or tactile feedback mechanism in response to the detected perclos.
  • a driver alert system has also been developed which includes a first source of light having a first wavelength optically aligned with a driver's eye and a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, with the second source of light also optically aligned with the driver's eye.
  • the system further includes at least one image sensor for receiving reflected light from the driver's eye having the first wavelength and producing a first image, and for receiving a subsequent reflected light from the driver's eye having the second wavelength and producing a second image.
  • the system further includes a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, detecting eye closure and calculating perclos based on the informational differences between successively captured third images.
  • the system may further include a driver interface for providing a visual, audible and/or tactile feedback mechanism in response to an elevated level of perclos.
  • the controller may screen portions of the third image that are above a brightness threshold.
  • the controller may also select portions of the third image that are within a nearness parameter to reflections in the first image or second image.
  • the driver alert system may further include a driver interface which utilizes at least two visual, audible and/or tactile feedback mechanisms.
  • a method has also been developed which includes the steps of providing a source of light toward a subject's eyes, the light having first and second wavelengths, wherein the first wavelength does not equal the second wavelength; producing a first image corresponding to the size and location of light reflected from the subject's eyes having the first wavelength; producing a second image corresponding to the size and location of light reflected from the subject's eyes having the second wavelength; producing a third image indicative of the first image subtracted from the second image; screening portions of the third image that are above a brightness threshold; and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
  • the method step of selecting portions may further include determining the presence of a corneal reflection in the second image substantially adjacent a bright object in the third image.
  • the method step of selecting may also include determining the presence of a bright object in the first image adjacent a bright object in the third image that corresponds substantially to the size and/or shape of a human eye.
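The claimed sequence of steps, subtracting one image from the other, screening by a brightness threshold, and selecting portions within a nearness parameter of reflections in the first or second image, can be sketched as follows. The threshold and nearness values, and the subtraction order (dark pupil image subtracted from bright pupil image, per the detailed description), are illustrative assumptions.

```python
import numpy as np

def select_eye_portions(first, second, brightness_threshold=100,
                        nearness_px=2.0):
    # Third image: dark pupil (second) image subtracted from bright
    # pupil (first) image, clipped so common features cancel to zero.
    third = np.clip(first.astype(int) - second.astype(int), 0, 255)
    # Screen: keep only portions above the brightness threshold.
    candidates = np.argwhere(third > brightness_threshold)
    # Reflections present in either source image.
    reflections = np.argwhere((first > brightness_threshold) |
                              (second > brightness_threshold))
    # Select: candidates within the nearness parameter of a reflection.
    return [tuple(int(v) for v in c) for c in candidates
            if reflections.size and np.any(
                np.linalg.norm(reflections - c, axis=1) <= nearness_px)]
```

A common glint (bright at the same coordinates in both images) cancels in the subtraction and never becomes a candidate; a pupil bright only in the first image survives all three steps.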
  • the method may further include the step of determining a distraction measure.
  • Another method has been developed which includes the steps of providing a first source of light having a first wavelength optically aligned with a driver's eye; providing a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, the second source of light optically aligned with the driver's eye; receiving reflected light from the driver's eye having the first wavelength and producing a first image; receiving reflected light from the driver's eye having the second wavelength and producing a second image; detecting closure of the driver's eye based on informational differences between successive third images; calculating perclos based on the detected closure of the driver's eye; and activating a driver-responsive visual, audible and/or tactile feedback mechanism in response to an elevated level of perclos.
  • the method may also include the step of screening portions of the third image that are above a brightness threshold.
  • the method step of detecting perclos may further include the step of selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
  • FIG. 1a is a photographic representation of a bright pupil image of the prior art
  • FIG. 1b is a photographic representation of a dark pupil image of the prior art
  • FIG. 1c is a calculated representation of a difference image of the prior art produced by subtracting the dark pupil image of FIG. 1b from the bright pupil image of FIG. 1a
  • FIG. 2 is a schematic representation of an image system including an optics module, a controller, and a driver interface in accordance with an embodiment of the present invention
  • FIG. 3 illustrates a process of analyzing information from a subject for determining inattentiveness and drowsiness in accordance with an embodiment of the present invention
  • FIG. 4a is a photographic representation of a bright pupil image utilized in determining whether motion of the subject has occurred
  • FIG. 4b is a photographic representation of a dark pupil image utilized in determining whether motion of the subject has occurred
  • FIG. 4c is a calculated representation of the difference image obtained by subtracting the image in FIG. 4b from the image in FIG. 4a;
  • FIG. 4d is an image created by applying a threshold to the bright pupil image in FIG. 4a
  • FIG. 4e is an image created by applying a threshold to the dark pupil image in FIG. 4b
  • FIG. 5 is a schematic representation of a driver interface having feedback mechanisms in accordance with an embodiment of the present invention.
  • FIG. 6 illustrates a process of automatically calibrating the bright pupil illumination source and the dark pupil illumination source.
  • the present invention measures a subject's eye using two or more different wavelengths of light.
  • the present invention will be described in terms of two different wavelengths of light, although more than two wavelengths may also be used.
  • light is reflected by the different components of the eye.
  • Most wavelengths of light, such as up to about 915 nm, such as from about 820-915 nm, are largely reflected by the human eye, while other wavelengths demonstrate significant absorption.
  • wavelengths of from about 940-960 nm, such as about 950 nm, are largely absorbed by the human eye.
  • two representations formed from light of 950 nm and 880 nm, respectively, are approximately identical to each other except that the image formed from light having a wavelength of about 950 nm will not have an image, or will have only a very faint image, of the subject's pupils.
  • the wavelengths of 950 nm and 880 nm are only an example of two wavelengths that may be used in the present invention and are not limitations of the invention. Other wavelengths, having different reflection/absorption characteristics, may also be used. As a general guideline, the light used should not be pupil restricting, should not be damaging to the subject, and should not be distracting (e.g., it should not be visible, or just slightly visible to the subject). Infrared light generally is a good choice, although other wavelengths may also be used. The extent to which the reflection/absorption characteristics of two wavelengths must differ for use with the present invention depends on the sensitivity of the equipment being used.
  • the retina generally provides the greatest variance of reflection/absorption
  • the other parts of the eye such as the lens, vitreous and the aqueous portions, also exhibit reflection/absorption characteristics that may be used with the present invention.
  • the present invention will often be described with respect to the retina and infrared light, the present invention may be used with the other portions of the eye and with other wavelengths.
  • the present invention may be used in many ways, including to determine perclos, to determine the direction of a driver's gaze, and to provide a warning of driver inattentiveness and/or drowsiness.
  • the present invention has many applications, including use in automobiles to reduce the risk that the driver will fall asleep at the wheel. Another application is in commercial motor vehicles, such as large trucks and vehicles carrying hazardous materials.
  • the present application may also be used for paraplegic communications and human factors studies.
  • an image system 30a of the present invention includes an optics module 9, a controller 18, and a driver interface 20a.
  • the optics module 9 includes an image sensor 14, an infrared filter 10, a first source of light, such as a bright pupil illumination source 11, and a second source of light, such as a dark pupil illumination source 13.
  • the optics module 9 may be used to monitor eyes 16 of a subject 17. Specifically, the optics module 9 may be optically aligned with the eyes 16 of the subject 17 to capture a bright pupil image and a dark pupil image, as will be discussed herein.
  • the bright pupil illumination source 11 may have a wavelength of about 880 nm and can be positioned close to the image sensor 14, such as from about 0 mm to about 20 mm from the optical center of the image sensor 14. In one embodiment, the bright pupil illumination source 11 will be positioned as close to the view axis 12 as permitted by the lens of the image sensor 14. Alternatively, the bright pupil illumination source 11 can be mounted on a view axis 12 of the image sensor 14. In one embodiment, the focused optical spot of the bright pupil illumination source 11 is very small in comparison to the diameter of the lens of the image sensor 14.
  • the bright pupil illumination source 11 may be a light emitting diode (LED) or a plurality of LEDs.
  • the dark pupil illumination source 13 may have a wavelength of about 950 nm and can be mounted farther away from the image sensor 14 than the bright pupil illumination source 11. In one embodiment, the dark pupil illumination source 13 can be positioned from about 20 mm to about 30 mm from the optical center of the image sensor 14. In another embodiment, the dark pupil illumination source 13 is positioned from about 30 mm to about 100 mm farther from the optical center of the image sensor 14 than the bright pupil illumination source 11.
  • the bright pupil illumination source 11 and the dark pupil illumination source 13 may produce the respective wavelengths at the same intensity or at differing intensities.
  • the bright pupil illumination source 11 and the dark pupil illumination source 13 can be provided by a single light source 11a including a plurality of LEDs producing each respective wavelength, such as a first LED producing a wavelength of light of about 950 nm and a second LED producing a wavelength of light of about 880 nm. In this embodiment, more than one LED for each wavelength may also be included.
  • An eye 16 tends to reflect light at approximately the same angle at which the light is incident onto the eye 16. As a result, the reflected light tends to follow a path that is very similar to the path of the incident light.
  • the bright pupil illumination source 11 and the dark pupil illumination source 13 may both be positioned close to and around the view axis 12 so that there is only a very small angle between the incident light and the reflected light. It has been found that the bright pupil illumination source 11 may be positioned adjacent the view axis 12 and the dark pupil illumination source 13 may be positioned farther away from the view axis 12 in order to reduce the overall brightness of the resulting image of the dark pupil, as will be described herein.
  • the image sensor 14 is a camera, such as a thermographic camera, a forward-looking infrared (FLIR) camera, a complementary metal oxide semiconductor (CMOS) camera, or a charge-coupled device (CCD), having an integral lens.
  • the image sensor 14 has an integral lens that may be optically coupled with an infrared filter 10.
  • the infrared filter 10 provides that only selected light is passed to the integral lens and image sensor 14.
  • the infrared filter may be, for example, a dielectric filter and may have a 50 nm half power bandwidth.
  • the image sensor 14 can be coupled to at least one camera adjustment mechanism 15 that the subject 17 can use to direct the focal point of the image sensor in line with his/her eyes 16.
  • the camera adjustment mechanism 15 can be a two-axis adjustment mechanism capable of fine adjustment and locking of the pan and tilt axes of the image sensor 14.
  • the controller 18 of the present invention is a microcontroller, or digital signal processor, with appropriate electronics to interface to the other system components, as are conventionally known.
  • the controller 18 includes sufficient memory for storing multiple images acquired from the image sensor 14 and non-volatile memory for storing programs, data, and/or calibration parameters.
  • the controller 18 includes a communications port 19, such as a calibration and/or diagnostic interface, for exchanging information.
  • the communications port 19 can provide diagnostic information or data from the controller 18 to an external evaluation module or to allow programming information or calibration information to be uploaded into the controller 18.
  • the communications port 19 can be any standard communications protocol such as EIA 232 or USB.
  • the controller 18 is capable of pulsing each of the bright pupil illumination source 11 and the dark pupil illumination source 13 synchronously with the image sensor 14 exposure periods.
  • the controller 18 is also capable of reading the reflected light corresponding to the wavelength produced by the bright pupil illumination source 11, and the reflected light corresponding to the wavelength produced by the dark pupil illumination source 13 into memory for further processing.
  • the controller 18 is also capable of receiving a first image of the eyes 16 of the subject 17 corresponding to the reflected light originating from the bright pupil illumination source 11 and captured by the optics module 9, and a second image of the eyes 16 of the subject 17 corresponding to the reflected light originating from the dark pupil illumination source 13 and captured by the optics module 9.
  • the image sensor 14 and the controller 18 are structured to provide for the capture and subsequent analysis of multiple images of the subject's eyes 16. The controller 18 then subtracts one of the first and second images, corresponding to the wavelength of light produced by the bright pupil illumination source 11 or the dark pupil illumination source 13, from the other of the first and second images to produce a third image.
  • because the first and second images should be substantially the same, except that one of the images should include an image of the retina while the other should not (or should include only a fainter image of the retina), when one of the first or second images is subtracted from the other to produce the third image, the third image should be an image of only the retina of the subject 17.
  • stray images can still be present in the third image due to extraneous sources of light. Correction of the third image and elimination of these unwanted images will be discussed herein.
  • the subtraction of one image from the other may be done, for example, by comparing corresponding pixels of each of the first and second images and determining the state of the corresponding pixel in the third image. For example, if the corresponding pixels in both the first and second images are the same, either on or off, then the corresponding pixel in the third image should be off. If, however, the pixels are different, then the corresponding pixel in the third image should be on.
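For binary (on/off) images, the pixel comparison described above is exactly an exclusive OR: equal pixels yield an off pixel in the third image, and differing pixels yield an on pixel. A minimal illustration, with arbitrary example values:

```python
import numpy as np

# Binary (on/off) first and second images.
first  = np.array([[1, 0, 1],
                   [0, 1, 0]], dtype=bool)
second = np.array([[1, 0, 0],
                   [0, 0, 0]], dtype=bool)

# Equal pixels -> off in the third image; differing pixels -> on.
third = first ^ second  # exclusive OR implements the stated rule
print(third.astype(int))
# [[0 0 1]
#  [0 1 0]]
```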
  • the controller 18 utilizes different absorption properties of the eye 16 to measure closure of the eye 16.
  • the present invention utilizes the fact that the eye 16 will generally absorb light at a wavelength of about 950 nm, and the retina of the eye will generally reflect light at other wavelengths such as about 880 nm.
  • the present invention illuminates the subject's eyes 16 with both wavelengths of light at appropriate intensities to produce similar images, measures the reflected image at each of the wavelengths, and subtracts one of the images from the other to form a third image that is primarily of the subject's pupils. From the third image, the present invention can determine whether and to what extent the subject's eyes 16 are open or closed, and can determine perclos from successively captured third images.
  • the controller can be programmed to execute a series of steps to determine drowsiness and inattention.
  • the image acquisition and analysis process begins with step 70 in which the first illumination source, or bright pupil illumination source, is pulsed during the image exposure period. After exposure, the first image, or bright pupil image, is acquired by the controller and stored in memory in step 72.
  • the second illumination source or dark pupil illumination source
  • the controller subsequently generates a third image, or difference image, by subtracting the dark pupil image from the bright pupil image in step 80.
  • a minimum brightness threshold is determined in step 82 by analyzing the difference image produced in step 80. The threshold is applied to the difference image to identify the brightest pixels in the image in step 84, and the difference image is screened to permit portions of the third image that are above the brightness threshold to be analyzed and portions below the brightness threshold to be discarded.
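The patent does not specify how the minimum brightness threshold of step 82 is calculated, so the statistic below (mean plus a multiple of the standard deviation of the difference image) is purely an assumption used to make the screening step concrete.

```python
import numpy as np

def screen_difference(difference_image, k=3.0):
    """Determine a minimum brightness threshold from the difference
    image itself (mean + k standard deviations, an assumed statistic),
    then screen out everything below it."""
    diff = np.asarray(difference_image, dtype=float)
    threshold = diff.mean() + k * diff.std()
    mask = diff > threshold  # portions above the threshold survive
    return threshold, mask
```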
  • the bright pixels are next clustered to form bright objects in step 86.
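Clustering the screened bright pixels into bright objects (step 86) amounts to connected-component labeling. A simple 4-connected flood fill is shown here as an illustrative stand-in for whatever clustering the patent contemplates.

```python
import numpy as np
from collections import deque

def cluster_bright_pixels(mask):
    """Group 4-connected bright pixels into objects, returning one
    sorted list of pixel coordinates per object."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    objects = []
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        queue, pixels = deque([start]), []
        seen[start] = True
        while queue:
            r, c = queue.popleft()
            pixels.append((int(r), int(c)))
            # Visit the four edge-adjacent neighbors still inside the image.
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    queue.append((nr, nc))
        objects.append(sorted(pixels))
    return objects
```

Each resulting object can then be tested for eye-like shape and size, as the next step describes.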
  • the bright objects are analyzed based on shape and location to determine which bright objects potentially correspond to the subject's eyes in step 88. After all of the bright objects have been classified, the difference image is analyzed to determine if significant motion has occurred in step 90.
  • the subject may have moved slightly between the capture of the two images. Movement can occur due to typical variations in the head angle of the subject, such as when the subject tilts their head to check traffic or to consult instrumentation on the dash. If the subject moves between image captures, the features of the bright pupil image and the dark pupil image may not properly align. Accordingly, in order to determine whether motion of the subject has occurred between image captures, the difference image of step 80 is evaluated to determine whether there is a significant increase in the calculated pixel minimum brightness threshold of step 82. If significant motion has occurred, as determined by step 92, it is likely that some of the bright objects are incorrectly classified as corresponding to the subject's eyes. Accordingly, the classification of portions of the third image potentially corresponding to the subject's eyes must be confirmed by analyzing the bright pupil image in step 94 and the dark pupil image in step 96.
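The motion test of steps 90-92 can be sketched by recomputing the threshold statistic for a new difference image and flagging motion when it rises significantly. Both the statistic (mean + 3 standard deviations) and the 1.5x factor below are assumptions; the patent states only that a significant increase in the calculated threshold indicates motion.

```python
import numpy as np

def motion_occurred(difference_image, baseline_threshold, factor=1.5):
    """Flag motion when the brightness threshold calculated for the
    new difference image significantly exceeds the baseline."""
    diff = np.asarray(difference_image, dtype=float)
    new_threshold = diff.mean() + 3.0 * diff.std()
    return new_threshold > factor * baseline_threshold
```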
  • the analysis of the bright pupil image examines the shape and location of the bright object associated with each potential eye candidate.
  • the location and shape of the reflection of a bright object in the bright pupil image will be very similar to the location and shape of the bright object in the difference image.
  • a misclassified eye is often caused by the misalignment of bright objects between the bright pupil image and the difference image.
  • a properly classified eye is determined by the alignment of a bright object in the bright pupil image having the approximate shape of an eye within a nearness parameter of a bright object in the difference image.
  • the term "nearness parameter" means a distance of from about 1/10 to about 1/2 the diameter of a typically observed pupil.
  • the analysis of the dark pupil image examines the presence or absence of a corneal reflection associated with each potential eye candidate.
  • a corneal reflection will be present substantially adjacent the bright object within a nearness parameter.
  • the corneal reflection of the dark pupil image is within a nearness parameter of a bright object of the difference image. It is observed that for an image acquisition time of 17 milliseconds, the corneal reflection rarely moves more than 1 centimeter with respect to the bright object corresponding to the subject's pupil.
  • the classification of an eye can be confirmed by the presence of a corneal reflection within the second image within a nearness parameter, such as about 1 centimeter, of a bright object of the third image.
  • portions of the third image, or difference image, corresponding to the bright objects which can represent the subject's eyes, are selected if the portion of the third image is within a nearness parameter to reflections in the first image or the second image.
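The dark pupil confirmation of step 96 can be sketched as a distance check between each difference-image candidate and the corneal reflections found in the dark pupil image. The brightness threshold and the pixel radius standing in for "about 1 centimeter" are assumptions that would depend on camera geometry.

```python
import numpy as np

def confirm_by_corneal_reflection(candidates, dark_image,
                                  brightness_threshold=150,
                                  nearness_px=5.0):
    """Keep only difference-image candidates that have a corneal
    reflection in the dark pupil image within the nearness parameter."""
    corneal = np.argwhere(np.asarray(dark_image) > brightness_threshold)
    confirmed = []
    for cand in candidates:
        if corneal.size and np.any(
                np.linalg.norm(corneal - np.asarray(cand), axis=1)
                <= nearness_px):
            confirmed.append(tuple(cand))
    return confirmed
```

A candidate with no corneal reflection nearby, such as a stray glint difference, is rejected by this check.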
  • in FIGS. 4a-4e, five images are presented which relate to the detection of eye closure when significant motion occurs while acquiring the bright pupil image and the dark pupil image.
  • FIG. 4a is the bright pupil image. A number of bright objects are observed in FIG. 4a, including the two bright pupils 22, 26 and several bright objects 20, 21, 23, 24, 25, 27, 28, 29 caused by reflections of the illumination source from the subject's glasses.
  • FIG. 4b is the dark pupil image. Again, a number of bright objects are observed in FIG. 4b, including two corneal reflections 32, 36 that appear as small bright objects within the dark pupil regions and several bright objects 30, 31, 33, 34, 35, 37, 38 caused by reflections of the illumination source from the subject's glasses. Because the subject was moving during image acquisition, the reflections on the subject's glasses in FIG. 4b differ in size, shape and location from those in FIG. 4a.
  • FIG. 4c is the difference image calculated by subtracting the image in FIG. 4b from the image in FIG. 4a.
  • a total of six bright objects 39, 40, 41, 42, 43, 44 are shown in FIG. 4c.
  • Objects 40 and 43 are the bright pupil objects and objects 39, 41, 42, 44 are objects caused by differences in the bright objects associated with reflection from the subject's glasses.
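The difference image of FIG. 4c can be approximated by a clipped subtraction of the two frames: aligned reflections cancel, while the bright pupils survive. The array shapes and dtype handling below are illustrative assumptions:

```python
import numpy as np

def difference_image(bright_pupil_img, dark_pupil_img):
    """Subtract the dark pupil image from the bright pupil image.

    With the two frames aligned, reflections common to both images cancel
    and only the bright pupils remain; negative values are clipped to
    zero so the result stays a valid 8-bit image."""
    diff = bright_pupil_img.astype(int) - dark_pupil_img.astype(int)
    return np.clip(diff, 0, 255).astype(np.uint8)
```

When the subject moves between frames, the glasses reflections no longer cancel, producing the extra objects 39, 41, 42, 44 seen in FIG. 4c.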
  • FIG. 4d is an image created by applying a threshold to the bright pupil image in FIG. 4a. All of the pixels above the threshold are shown as white and all other pixels are shown as black. Objects 47 and 51 are the bright pupil objects and objects 45, 46, 48, 49, 50, 52, 53 are objects caused by the reflections on the subject's glasses.
  • FIG. 4e is an image created by applying a threshold to the dark pupil image in FIG. 4b. Again, all of the pixels above the threshold are shown as white and all other pixels are shown as black.
  • Objects 55 and 58 are the corneal reflection objects and objects 54, 56, 57, 59, 60, 61 are objects caused by the reflections on the subject's glasses.
  • Object 62 is associated with a bright spot on the subject's face.
  • the threshold applied to create the images in FIGS. 4d and 4e is calculated as a fraction of the brightness of the spot being analyzed. In practice, many images similar to FIG. 4d and FIG. 4e are created for a variety of thresholds associated with each bright object. FIG. 4d and FIG. 4e are examples of the many images that can be created.
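The per-spot thresholding described above can be sketched as follows; the fraction value and array handling are illustrative assumptions:

```python
import numpy as np

def threshold_image(image, spot_peak, fraction=0.5):
    """Binarize an image at a threshold computed as a fraction of the
    peak brightness of the spot being analyzed.

    Pixels above the threshold become white (255), all others black (0).
    In practice this would be repeated for several fractions per spot."""
    threshold = fraction * spot_peak
    return np.where(image > threshold, 255, 0).astype(np.uint8)
```

Because the threshold scales with each spot's own brightness, a dim pupil and a bright glasses reflection are each binarized relative to their own peak rather than a single global level.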
  • the classification of an eye starts with the difference image in FIG. 4c.
  • the bright objects in the difference image are analyzed based on the size and shape of an eye.
  • the size and shape parameters are set to be very liberal, thereby initially allowing many spots to be classified as eyes. As shown in FIG. 4c, based on the initial analysis, objects 39, 40, 41 and 43 are classified as eyes.
  • an object 47 in FIG. 4d oriented at the same location as object 40 is used.
  • Object 47 is similar in size and shape to object 40.
  • the classification of object 40 as an eye is confirmed in the bright pupil image.
  • the next step in the confirmation process for object 40 is the identification of object 55 in FIG. 4e.
  • Object 55 is consistent in location and size with an expected corneal reflection.
  • the classification of object 40 is confirmed in the dark pupil image.
  • the classification of object 43 as an eye is confirmed based on objects 51 and 58.
  • Object 51 is the appropriate size and shape for a bright pupil, and object 58 is in the proper location and of the proper size to be a corneal reflection.
  • the confirmation process for object 39 shown in FIG. 4c identifies an object 45 in FIG. 4d at the same location as object 39.
  • Object 45 is similar in size to object 39 but its shape is inconsistent with that of an eye. Accordingly, object 45 fails the symmetry test associated with eye classification.
  • the symmetry test can calculate a slope of a specified pixel group using a least mean square. Proper classification of an eye requires that the calculated slope is statistically consistent with a slope of zero.
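The least-mean-square slope test above can be sketched as follows; the pixel-group representation and the tolerance on "statistically consistent with zero" are illustrative assumptions:

```python
import numpy as np

def passes_symmetry_test(xs, ys, tol=0.5):
    """Fit a least-squares line to the pixel group of a candidate object.

    An eye-like object should yield a slope consistent with zero; a
    candidate whose fitted slope exceeds the tolerance (a hypothetical
    value here) fails the symmetry test and is rejected as an eye."""
    slope, _intercept = np.polyfit(xs, ys, 1)
    return abs(slope) <= tol
```

An elongated, tilted reflection such as object 45 produces a large slope and fails, while a roughly round pupil object passes.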
  • the classification of object 39 as an eye is also rejected based on the shape of corresponding object 54 being too large to be a corneal reflection.
  • the confirmation process for object 41 in FIG. 4c identifies an object 48 in FIG. 4d at the same location as object 41.
  • Object 48 is similar in size and shape to object 41. Hence, the classification of object 41 as an eye is incorrectly confirmed in the bright pupil image. However, object 41 is rejected as an eye because the shape of object 60 in FIG. 4e is too large to be consistent with a corneal reflection.
  • the driver's gaze angle can be measured in step 98.
  • the centroid of the subject's pupil can be calculated by analysis of the bright pupil image and the location of the corneal reflection can be determined by analysis of the dark pupil image.
  • the gaze angle is based on the relative position of the corneal reflection and the centroid of the subject's pupil. Accordingly, the subject's gaze angle can be calculated as a function of the centroid of the subject's pupil in an image and the location of the corneal reflection of an illumination source in an image.
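The gaze calculation above can be sketched as a function of the two measured points. The linear mapping and the calibration gain are illustrative assumptions; a real system would calibrate the mapping per driver and camera geometry:

```python
def gaze_angle(pupil_centroid, corneal_reflection, gain=1.0):
    """Estimate horizontal and vertical gaze components from the offset
    between the pupil centroid (from the bright pupil image) and the
    corneal reflection (from the dark pupil image).

    gain is a hypothetical per-setup calibration constant converting the
    pixel offset into an angle."""
    dx = pupil_centroid[0] - corneal_reflection[0]
    dy = pupil_centroid[1] - corneal_reflection[1]
    return gain * dx, gain * dy
```

When the driver looks straight at the camera the two points nearly coincide and both components are near zero; looking aside shifts the pupil centroid relative to the reflection.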
  • drowsiness and/or inattention of the subject is determined by calculating perclos and/or the distraction measure in step 100.
  • Perclos is determined by the percentage of time the subject's eyes are 60% - 80% closed. Accordingly, perclos can be calculated by evaluating the successive detections of the closure of the driver's eye(s).
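Evaluating successive eye-closure detections can be sketched as a rolling window over per-frame closure flags; the window length is an illustrative assumption:

```python
from collections import deque

class PerclosMonitor:
    """Rolling perclos: the fraction of recent frames in which the eyes
    were detected as closed beyond the chosen closure threshold.

    The window length (in frames) is a hypothetical value."""
    def __init__(self, window=60):
        self.flags = deque(maxlen=window)

    def update(self, eyes_closed):
        """Record one detection and return the current perclos value."""
        self.flags.append(bool(eyes_closed))
        return self.perclos()

    def perclos(self):
        if not self.flags:
            return 0.0
        return sum(self.flags) / len(self.flags)
```

Each new frame pushes the oldest flag out of the window, so the measure tracks the driver's recent state rather than the whole trip.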
  • the distraction measure has at least two components. One component is the identification of unsafe behavior. For example, if the subject takes his/her eyes off the road for a predetermined period, such as about 3 seconds or more, this lapse period can be considered unsafe. Also contemplated within the identification of unsafe behavior is inattention by the subject to the rear view or side view mirrors for a period of time.
  • a second component of driver distraction is a large variation from normal movement patterns.
  • Most alert drivers will scan the road ahead, look at the instrument panel and look at the mirrors in a pattern specific to each person.
  • the distraction measure of the present invention quantifies the pattern and looks for abrupt changes in the pattern.
  • the pattern can include the number of subject eye glances per second.
  • the pattern can also include the distribution of time spent by the subject viewing the road, the instrument panel, the side view mirrors, etc.
  • the distraction measure can be determined through video methods and/or information archived through repeated still images taken of the subject.
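The pattern quantification above can be sketched by tallying where the driver looks over a window and comparing that distribution against the driver's baseline pattern; the region labels and the deviation metric are illustrative assumptions:

```python
from collections import Counter

def gaze_distribution(regions):
    """Fraction of time spent viewing each region (road, mirrors, panel)."""
    counts = Counter(regions)
    total = len(regions)
    return {region: n / total for region, n in counts.items()}

def pattern_deviation(current, baseline):
    """Total-variation distance between the current scan pattern and the
    driver's baseline; a large value flags an abrupt change from the
    driver's normal movement pattern."""
    keys = set(current) | set(baseline)
    return 0.5 * sum(abs(current.get(k, 0.0) - baseline.get(k, 0.0)) for k in keys)
```

A drowsy or distracted driver who stops checking the mirrors shifts the distribution sharply, producing a large deviation from baseline.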
  • a distraction warning can be delivered in step 102 if the subject exhibits the above-described distracted behavior.
  • Distraction warnings can be customized to the type of distraction observed.
  • the goal of a distraction warning is to draw the driver's gaze to an area that requires attention. For example, if the driver fails to look at the roadway for a period of time, or if he/she spends less than 50% of his/her time looking at the roadway, a visual indicator located in a heads-up display could be used to bring the driver's attention to the front. If the driver has not looked at his/her right mirror for an unsafe period of time, the driver's attention could be drawn to the right using a visual indicator on or near the mirror.
  • auditory signals and/or tactile signals such as seat vibration, can be used in conjunction with the visual indicators and/or as a replacement for the visual indicators.
  • a single beep can be provided, but if the driver fails to look at his/her mirror for some time, a series of beeps that appear to come from the mirror may be used.
  • the series of beeps may also be coupled with a visual indicator mounted near or integral with the mirror.
  • the subject can be provided with drowsiness warnings that will clearly indicate his/her level of drowsiness through a driver interface 20.
  • the purpose of this warning is not to maintain alertness. Rather, the purpose of the warning is to inform the driver of his/her impaired state of alertness and to encourage the driver to stop and engage in proven alerting activities such as napping or ingesting caffeine.
  • the driver interface 20 can include a multi-media information display. In one embodiment, the driver interface 20 can include a visual, audible, and/or tactile feedback mechanism 65.
  • the driver interface 20 can be positioned to provide feedback in the form of a distraction warning to a subject during the driving process.
  • the driver interface 20 can be positioned adjacent the driver, such as on a portion of an interior 68 of an automobile, such as on the dashboard or visor of an automobile.
  • the driver interface 20 can include at least two feedback mechanisms 65 to reinforce the transfer of information to an impaired driver.
  • the feedback mechanism 65 may also have a directional component to bring the driver's attention to a particular place.
  • An example of such an interface 20 includes a visual display that includes a plurality of visually alerting devices 66 and an audibly alerting device 67.
  • the visually alerting device 66 may be an LED and/or a conventional light source.
  • the audibly alerting device 67 can comprise a speaker.
  • the driver interface 20 can communicate to the subject the number of seconds that the driver's eyes were closed. In one embodiment, if the driver's eyes are closed for a predetermined number of consecutive seconds, such as about 6 seconds, a feedback mechanism 65 can be activated. For example, a visually alerting device 66 and an audibly alerting device 67 can simultaneously be activated. In one embodiment, a series of notes, such as a scale or a well-known tune, can be played in order to alert the subject. In this embodiment, the visual information is directly reinforced by the audible information. Presenting the information through redundant feedback mechanisms 65 increases the likelihood that the information is properly delivered to an impaired driver.
  • the driver interface 20 can include an array of LEDs and a speaker capable of playing specific notes for notifying the subject when his/her eyes were closed for a period of 4 consecutive seconds.
  • a first LED can be lit and a musical note of frequency 880 Hz, corresponding to A on the musical scale, can be played for 1 second.
  • a second LED can be lit and a note of frequency 987 Hz, corresponding to B on the musical scale, can be played for 1 second.
  • a third LED can subsequently be lit and a note of frequency 1046 Hz, corresponding to C on the musical scale, can be played for 1 second.
  • a fourth LED can be lit and a note of 1173 Hz, corresponding to D on the musical scale, can be played for 1 second.
  • the four LEDs remain lit for a period of time, such as about ten seconds, to allow the driver time to absorb the information.
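The LED-and-note sequence above can be made data-driven, which is one way such an interface might be structured. The callback interfaces (play_note, light_led) and the function name are hypothetical; the frequencies are those given above:

```python
# Frequencies correspond to the notes A, B, C, D described above.
ALERT_SEQUENCE = [(1, 880), (2, 987), (3, 1046), (4, 1173)]

def run_drowsiness_alert(play_note, light_led, hold_seconds=10):
    """Light each LED in turn while playing its note for one second,
    then return how long the LEDs should remain lit so the driver can
    absorb the information."""
    for led, freq in ALERT_SEQUENCE:
        light_led(led)
        play_note(freq, duration=1.0)
    return hold_seconds
```

Driving the sequence from a table makes it easy to adjust the number of steps or the note frequencies without changing the alert logic.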
  • the system of the present invention also includes an automatic gain control that adjusts the overall brightness of an image within a set range in step 104.
  • This feature is desirable to avoid image saturation when background lighting increases rapidly due to changes in environmental lighting, such as the presence of oncoming headlights.
  • This feature is also desirable to compensate for person-to-person variations in retinal reflection producing variations of observed brightness of the subject's pupils in the image.
  • the intensity of the bright pupil illumination source and/or the dark pupil illumination source is increased as the image sensor gain decreases.
  • the intensity of the bright pupil illumination source and/or the dark pupil illumination source is decreased as the image sensor gain increases.
  • the bright pupil illumination source can be adjusted to maintain the bright pupil as the brightest object in the image while avoiding saturation of the image caused by the illumination source combined with the environmental lighting.
  • the dark pupil illumination source can be adjusted to match the overall image brightness obtained with the bright pupil illumination source.
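The inverse coupling between sensor gain and illumination intensity described above can be sketched as follows; the step size and the normalized intensity range are illustrative assumptions:

```python
def adjust_illumination(intensity, sensor_gain, prev_gain, step=0.05):
    """One automatic-gain-control step for an illumination source.

    When the image sensor gain falls (e.g., a bright environment such as
    oncoming headlights), the source intensity is raised; when the gain
    rises, the intensity is lowered. Intensity is kept in [0, 1];
    the step size is a hypothetical increment."""
    if sensor_gain < prev_gain:
        return min(1.0, intensity + step)
    if sensor_gain > prev_gain:
        return max(0.0, intensity - step)
    return intensity
```

Applying this in small steps keeps the bright pupil the brightest object in the image without saturating the frame under changing environmental lighting.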
  • the automatic calibration process is illustrated in FIG. 6.
  • the first required step 106 is to analyze the bright pupil and dark pupil images to identify if a subject's eyes are present in the images.
  • the next step 108 is to determine the confidence or accuracy of the classification of the subject's eyes. Confidence is considered high in step 110 if two eyes are found separated by an expected distance. Confidence can also include the absence of any significant detected motion. If confidence is low, no attempt is made to calibrate the system as shown in step 112.
  • the calibration is incrementally adjusted.
  • the brightness of the pupils in the bright pupil image is set within a specified calibration range, and the average brightness of the dark pupil image is set to be similar to that of the bright pupil image.
  • the brightness of the subject's pupils is determined by analyzing the bright pupil image in step 114. If the pupil brightness is below a lower calibration limit, the image exposure is increased in step 116. Next, the brightness of the subject's pupils is determined in the dark pupil image in step 118. If the pupil brightness exceeds an upper calibration limit, the image exposure is decreased in step 120. This is done in small steps to ensure that brief changes in environmental lighting do not have a large effect on the calibration.
  • the adjustment includes increasing and decreasing the exposure of the images. This can be accomplished by adjusting the brightness of the bright pupil illumination source and the dark pupil illumination source. For example, if the illumination sources were LEDs, the current in the LEDs could be varied. The exposure can also be changed by increasing or decreasing the time the illumination sources are on. Some cameras include an electronic shutter that can be set by the controller.
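One incremental calibration step (steps 114-120 above) can be sketched as follows; the brightness limits and the step size are illustrative assumptions:

```python
def calibrate_exposure(bright_pupil_brightness, dark_pupil_brightness,
                       exposure, lower=120, upper=200, step=1):
    """One small calibration step for the image exposure.

    If the pupils in the bright pupil image are too dim, raise the
    exposure; if the dark pupil image is too bright, lower it. The
    limits (on a 0-255 brightness scale) and the unit step are
    hypothetical values; small steps keep brief lighting changes from
    swinging the calibration."""
    if bright_pupil_brightness < lower:
        exposure += step
    if dark_pupil_brightness > upper:
        exposure -= step
    return exposure
```

In practice the returned exposure would be applied by varying LED current, illumination on-time, or the camera's electronic shutter, as described above.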

Abstract

An image sensor and driver alert system for detecting eye closure and/or gaze angle is disclosed. The image sensor includes a first source of light having a first wavelength and a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength. The image sensor also includes at least one optics module for receiving the first wavelength and producing a first image, and for receiving a second wavelength and producing a second image. A controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, screening portions of the third image that are above a brightness threshold, and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image is also provided.

Description

DRIVER DROWSINESS AND DISTRACTION MONITOR
BACKGROUND OF THE INVENTION Field of the Invention
[0001] The present invention relates to an apparatus and method for monitoring a subject's eyes and, more particularly, to an apparatus and method in which light of two different wavelengths is analyzed to monitor a subject's eyes while driving. Description of Related Art
[0002] Statistics have shown that 60% of adult drivers have driven a vehicle while feeling drowsy in the past year, and more than one-third have actually fallen asleep at the wheel. Approximately eleven million drivers admit they have had an accident or near accident because they dozed off or were too tired to drive. Injury, death and significant monetary losses are all unfortunate consequences of driver fatigue-related accidents each year. [0003] It is known in the art to monitor a subject's eyes, such as to measure "perclos". Perclos, generally, is a measure of the proportion of time that a subject's eyes are closed, either completely or beyond a predetermined point. For example, perclos may be the measure of the proportion of time that a subject's eyes are between 80% and 100% closed. If a driver is drowsy, the driver's eyelids can be closed for a period of between 2-30 seconds. This perclos eyelid closure often repeats many times over the period of an hour or more before a driver stops and rests. Often, the measurement of perclos must be done manually, such as by videotaping the subject, reviewing the tape, and measuring the subject's perclos. Such a method, of course, is not practical for many applications, such as the detection of a fatigued driver.
[0004] Another method of determining perclos involves the use of an image sensor, such as a video camera, and image processing software to monitor the subject, determine the location of the subject's eyes, and determine the subject's perclos. Such a method, however, is time consuming and often cannot be performed in real time, thereby prohibiting it from being used to determine the drowsiness of a driver of a motor vehicle. One attempt to overcome this problem is to monitor only a portion of the subject's face, the portion containing the subject's eyes, thereby reducing the amount of processing required to determine perclos. This approach, however, creates other difficulties because the subject's eyes must be tracked as they move to monitor the road. As the subject's eyes move, the subject's head and body also move. Often the subject moves too quickly to allow for accurate tracking of the subject's eyes to determine perclos. [0005] Video-based monitoring systems have the potential to identify slow eyelid closures. These systems can be categorized as systems using structured illumination sources and systems not using structured illumination sources. Structured illumination sources are designed to accentuate objects of interest in a scene. In eye tracking systems, structured illumination is often an infrared illumination source used to accentuate the image of the subject's eyes. Systems using structured illumination need less processing and are less susceptible to sudden changes in environmental lighting conditions. However, although structured illumination source based systems are effective for nighttime operation, they typically will not function well in daylight.
[0006] United States Patent No. 6,082,858 describes a structured illumination-based system that can detect the position of a driver's eyes in low light conditions using two images. In this system, a first image is captured of a driver's eyes reflecting a first illumination source that produces bright pupils. A second image is also captured of the driver's eyes reflecting a second illumination source that produces dark pupils. The second image is subtracted from the first image to produce a third image that shows just the bright pupils with all other image features eliminated. As shown in FIG. 1a, a bright pupil image is collected using an 850 nm illumination source mounted close to the camera lens. This image includes a bright object 1 caused by reflection of the light source on the subject's glasses, and bright pupils 2 and 3. As shown in FIG. 1b, a dark pupil image is collected using a 950 nm light source mounted slightly farther away from the camera lens. The image includes a bright object 4 caused by the reflection of the light source from the subject's glasses and dark pupils 5 and 6. As shown in FIG. 1c, a difference image is produced by subtracting the image in FIG. 1b from the image in FIG. 1a. The image in FIG. 1c is black except for the two bright eye objects 7 and 8 corresponding to the subject's eyes. The bright objects 1 and 4 are eliminated in the subtraction process because the bright objects 1 and 4 are the same shape, have the same brightness, and are located at the same coordinates within the two images. [0007] However, in order for the system identified in United States Patent No. 6,082,858 to function, the bright pupil image and the dark pupil image must be nearly identically oriented in order for the stray bright images to be eliminated. The need for nearly identically oriented images can be problematic in the case where a single camera is used to collect consecutive images separated in time.
In this case, the coordinates of the images can be quite different if the subject moves or if there is a rapid change in environmental lighting. In practice, drivers often move their heads at a rate that can cause substantial changes in the two consecutively collected images. Also, rapidly moving headlights from oncoming traffic can cause substantial changes in the orientation of the bright pupil image and the dark pupil image. [0008] Prior art devices and methods for determining perclos typically have difficulty finding and monitoring the subject's eyes. For example, the prior art devices often cannot distinguish between the subject's eyes and other sources of light and reflected light, such as is caused by dashboard lights, lights from other vehicles, and street lights. Problems arising from reflected lights are often further exaggerated when the subject is wearing glasses. Example prior art devices have been described in United States Patent Nos. 4,953,111; 5,801,390; and 5,231,674 as well as Japanese Patent Nos. 2-138673; 52-54291; and 9-62828. [0009] Accordingly, there is a need for a device and method for monitoring the eyes of a subject, such as can be used to determine perclos, which can operate in real time, can account for subject movement, and is insensitive to other sources of light.
SUMMARY OF THE INVENTION
[0010] In view of the foregoing, a system has been developed which can monitor the eyes of a subject in real time, is able to account for movement of the subject between captured images, and can eliminate unwanted images due to extraneous sources of light. [0011] An image sensor has been developed which includes a first source of light having a first wavelength, and a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength. The image sensor further includes at least one optics module for receiving reflected light having the first wavelength, and producing a first image corresponding to the size and location of the reflected light having the first wavelength. The optics module is also capable of receiving a subsequent reflected light having the second wavelength, and producing a second image corresponding to the reflected light having the second wavelength. The image sensor also includes a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, screening the portions of the third image that are above a brightness threshold, and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
[0012] The image sensor may utilize a first wavelength that is highly reflective when optically aligned with a human eye and a second wavelength that is highly absorbed when optically aligned with the human eye. The first wavelength may be up to about 915 nm, such as from about 820-915 nm, and the second wavelength can be from about 940-960 nm. The first source of light and the second source of light may be optically aligned with the eye of a driver, and the image sensor may receive light having the first wavelength and the second wavelength reflected from the eye of the driver.
[0013] The controller may produce a third image indicative of the first image subtracted from the second image, and may select portions of the third image by determining the presence of a corneal reflection within the second image substantially adjacent a bright object in a third image. The controller may also select portions of the third image by determining the presence of a bright object within the first image adjacent a bright object within the third image that correspond substantially to the size and/or shape of a human eye. The controller may further utilize informational differences between successively captured third images to calculate perclos of the subject. Optionally, the image sensor may further include a driver interface for providing a visual, audible, and/or tactile feedback mechanism in response to the detected perclos.
[0014] A driver alert system has also been developed which includes a first source of light having a first wavelength optically aligned with a driver's eye and a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, with the second source of light also optically aligned with the driver's eye. The system further includes at least one image sensor for receiving reflected light from the driver's eye having the first wavelength and producing a first image, and for receiving a subsequent reflected light from the driver's eye having the second wavelength and producing a second image. The system further includes a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, detecting eye closure and calculating perclos based on the informational differences between successively captured third images. The system may further include a driver interface for providing a visual, audible and/or tactile feedback mechanism in response to an elevated level of perclos.
[0015] The controller may screen portions of the third image that are above a brightness threshold. The controller may also select portions of the third image that are within a nearness parameter to reflections in the first image or second image. The driver alert system may further include a driver interface which utilizes at least two visual, audible and/or tactile feedback mechanisms.
[0016] A method has also been developed which includes the steps of providing a source of light toward a subject's eyes, the light having first and second wavelengths, wherein the first wavelength does not equal the second wavelength; producing a first image corresponding to the size and location of light reflected from the subject's eyes having the first wavelength; producing a second image corresponding to the size and location of light reflected from the subject's eyes having the second wavelength; producing a third image indicative of the first image subtracted from the second image; screening portions of the third image that are above a brightness threshold; and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image. [0017] The method step of selecting portions may further include determining the presence of a corneal reflection in the second image substantially adjacent a bright object in the third image. The method step of selecting may also include determining the presence of a bright object in the first image adjacent a bright object in the third image that corresponds substantially to the size and/or shape of a human eye. The method may further include the step of determining a distraction measure.
[0018] Another method has been developed which includes the steps of providing a first source of light having a first wavelength optically aligned with a driver's eye; providing a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, the second source of light optically aligned with the driver's eye; receiving reflected light from the driver's eye having the first wavelength and producing a first image; receiving reflected light from the driver's eye having the second wavelength and producing a second image; detecting closure of the driver's eye based on informational differences between successive third images; calculating perclos based on the detected closure of the driver's eye; and activating a driver-responsive visual, audible and/or tactile feedback mechanism in response to an elevated level of perclos.
[0019] The method may also include the step of screening portions of the third image that are above a brightness threshold. The method step of detecting perclos may further include the step of selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] FIG. 1a is a photographic representation of a bright pupil image of the prior art; [0021] FIG. 1b is a photographic representation of a dark pupil image of the prior art; [0022] FIG. 1c is a calculated representation of a difference image of the prior art produced by subtracting the dark pupil image of FIG. 1b from the bright pupil image of FIG. 1a; [0023] FIG. 2 is a schematic representation of an image system including an optics module, a controller, and a driver interface in accordance with an embodiment of the present invention;
[0024] FIG. 3 illustrates a process of analyzing information from a subject for determining inattentiveness and drowsiness in accordance with an embodiment of the present invention;
[0025] FIG. 4a is a photographic representation of a bright pupil image utilized in determining whether motion of the subject has occurred;
[0026] FIG. 4b is a photographic representation of a dark pupil image utilized in determining whether motion of the subject has occurred;
[0027] FIG. 4c is a calculated representation of the difference image obtained by subtracting the image in FIG. 4b from the image in FIG. 4a;
[0028] FIG. 4d is an image created by applying a threshold to the bright pupil image in FIG. 4a;
[0029] FIG. 4e is an image created by applying a threshold to the dark pupil image in FIG. 4b;
[0030] FIG. 5 is a schematic representation of a driver interface having feedback mechanisms in association with an embodiment of the present invention; and
[0031] FIG. 6 illustrates a process of automatically calibrating the bright pupil illumination source and the dark pupil illumination source.
DESCRIPTION OF THE INVENTION
[0032] The present invention measures a subject's eye using two or more different wavelengths of light. The present invention will be described in terms of two different wavelengths of light, although more than two wavelengths may also be used. Generally, light is reflected by the different components of the eye. However, in the light spectrum there are peaks and valleys in the reflection/absorption characteristics. Most wavelengths of light, such as up to about 915 nm, such as from about 820-915 nm, are largely reflected by the human eye, while other wavelengths demonstrate significant absorption. Particularly, wavelengths of from about 940-960 nm, such as about 950 nm, are largely absorbed by the human eye. It has been found that by using light having two different wavelengths, with each wavelength having different reflection/absorption characteristics, useful measurements, such as driver eye closure, can be obtained. It has also been found that two wavelengths, about 950 nm and about 880 nm, are particularly useful in that regard, although other wavelengths may provide superior results. A wavelength of about 880 nm produces a representation of the eye having a bright pupil, whereas a wavelength of about 950 nm produces a representation of the eye having a dark pupil. Aside from the significantly different retinal reflection/absorption characteristics of 950 nm light and 880 nm light, however, they produce images of a human face that are nearly identical. As a result, two representations formed from light of 950 nm and 880 nm, respectively, are approximately identical to each other except that the image formed from light having a wavelength of about 950 nm will not have an image, or will have only a very faint image, of the subject's pupils.
[0033] The wavelengths of 950 nm and 880 nm are only an example of two wavelengths that may be used in the present invention and are not limitations of the invention. Other wavelengths, having different reflection/absorption characteristics, may also be used. As a general guideline, the light used should not be pupil restricting, should not be damaging to the subject, and should not be distracting (e.g., it should not be visible, or just slightly visible to the subject). Infrared light generally is a good choice, although other wavelengths may also be used. The extent to which the reflection/absorption characteristics of two wavelengths must differ for use with the present invention depends on the sensitivity of the equipment being used. Furthermore, although the retina generally provides the greatest variance of reflection/absorption, the other parts of the eye, such as the lens, vitreous and the aqueous portions, also exhibit reflection/absorption characteristics that may be used with the present invention. Although the present invention will often be described with respect to the retina and infrared light, the present invention may be used with the other portions of the eye and with other wavelengths.
[0034] The present invention may be used in many ways, including to determine perclos, to determine the direction of a driver's gaze, and to provide a warning of driver inattentiveness and/or drowsiness. The present invention has many applications, including use in automobiles to reduce the risk that the driver will fall asleep at the wheel. Another application is in commercial motor vehicles, such as large trucks and vehicles carrying hazardous materials. The present invention may also be used for paraplegic communications and human factors studies.
[0035] As shown in FIG. 2, an image system 30a of the present invention includes an optics module 9, a controller 18, and a driver interface 20a. The optics module 9 includes an image sensor 14, an infrared filter 10, a first source of light, such as a bright pupil illumination source 11, and a second source of light, such as a dark pupil illumination source 13. The optics module 9 may be used to monitor eyes 16 of a subject 17. Specifically, the optics module 9 may be optically aligned with the eyes 16 of the subject 17 to capture a bright pupil image and a dark pupil image, as will be discussed herein.
[0036] The bright pupil illumination source 11 may have a wavelength of about 880 nm and can be positioned close to the image sensor 14, such as from about 0 mm to about 20 mm from the optical center of the image sensor 14. In one embodiment, the bright pupil illumination source 11 will be positioned as close to the view axis 12 as permitted by the lens of the image sensor 14. Alternatively, the bright pupil illumination source 11 can be mounted on a view axis 12 of the image sensor 14. In one embodiment, the focused optical spot of the bright pupil illumination source 11 is very small in comparison to the diameter of the lens of the image sensor 14. The bright pupil illumination source 11 may be a light emitting diode (LED) or a plurality of LEDs. The dark pupil illumination source 13 may have a wavelength of about 950 nm and can be mounted farther away from the image sensor 14 than the bright pupil illumination source 11. In one embodiment, the dark pupil illumination source 13 can be positioned from about 20 mm to about 30 mm from the optical center of the image sensor 14. In another embodiment, the dark pupil illumination source 13 is positioned from about 30 mm to about 100 mm farther away from the optical center of the image sensor 14 than the bright pupil illumination source 11.
[0037] The bright pupil illumination source 11 and the dark pupil illumination source 13 may produce the respective wavelengths at the same intensity or at differing intensities. In yet another embodiment, the bright pupil illumination source 11 and the dark pupil illumination source 13 can be provided by a single light source 11a including a plurality of LEDs producing each respective wavelength, such as a first LED producing a wavelength of light of about 950 nm and a second LED producing a wavelength of light of about 880 nm. In this embodiment, more than one LED for each wavelength may also be included.
[0038] An eye 16 tends to reflect light at approximately the same angle at which the light is incident onto the eye 16. As a result, the reflected light tends to follow a path that is very similar to the path of the incident light. Accordingly, the bright pupil illumination source 11 and the dark pupil illumination source 13 may both be positioned close to and around the view axis 12 so that there is only a very small angle between the incident light and the reflected light. It has been found that the bright pupil illumination source 11 may be positioned adjacent the view axis 12 and the dark pupil illumination source 13 may be positioned farther away from the view axis 12 in order to reduce the overall brightness of the resulting image of the dark pupil, as will be described herein.
[0039] In one embodiment, the image sensor 14 is a camera, such as a thermographic camera, a forward looking infrared (FLIR) camera, a complementary metal oxide semiconductor (CMOS) camera, or a charge coupled device (CCD), having an integral lens. The integral lens may be optically coupled with an infrared filter 10. The infrared filter 10 provides that only selected light is passed to the integral lens and image sensor 14. The infrared filter may be, for example, a dielectric filter and may have a 50 nm half power bandwidth.
In one embodiment, the image sensor 14 can be coupled to at least one camera adjustment mechanism 15 that the subject 17 can use to direct the focal point of the image sensor in line with his/her eyes 16. In another embodiment, the camera adjustment mechanism 15 can be a two-axis adjustment mechanism capable of fine adjustment and locking of the pan and tilt axes of the image sensor 14.
[0040] The controller 18 of the present invention is a microcontroller, or digital signal processor, with appropriate electronics to interface to the other system components, as are conventionally known. The controller 18 includes sufficient memory for storing multiple images acquired from the image sensor 14 and non-volatile memory for storing programs, data, and/or calibration parameters. In one embodiment, the controller 18 includes a communications port 19, such as a calibration and/or diagnostic interface, for exchanging information. The communications port 19 can provide diagnostic information or data from the controller 18 to an external evaluation module or can allow programming information or calibration information to be uploaded into the controller 18. The communications port 19 can use any standard communications protocol, such as EIA 232 or USB.
[0041] In one embodiment, the controller 18 is capable of pulsing each of the bright pupil illumination source 11 and the dark pupil illumination source 13 synchronously with the image sensor 14 exposure periods. The controller 18 is also capable of reading the reflected light corresponding to the wavelength produced by the bright pupil illumination source 11, and the reflected light corresponding to the wavelength produced by the dark pupil illumination source 13, into memory for further processing. The controller 18 is also capable of receiving a first image of the eyes 16 of the subject 17 corresponding to the reflected light originating from the bright pupil illumination source 11 and captured by the optics module 9, and a second image of the eyes 16 of the subject 17 corresponding to the reflected light originating from the dark pupil illumination source 13 and captured by the optics module 9. The image sensor 14 and the controller 18 are structured to provide for the capture and subsequent analysis of multiple images of the subject's eyes 16.
[0042] The controller 18 then subtracts one of the first or second images, corresponding to the wavelength of light produced by the bright pupil illumination source 11 or the dark pupil illumination source 13, from the other of the first or second images, corresponding to the other of the wavelengths of light produced by the bright pupil illumination source 11 or the dark pupil illumination source 13, to produce a third image. Because the first and second images should be substantially the same, except that one of the images should include an image of the retina while the other should not include an image of the retina (or should include a fainter image of the retina), when one of the first or second images is subtracted from the other to produce the third image, the third image should be an image of only the retina of the subject 17. However, stray images can still be present in the third image due to extraneous sources of light. Correction of the third image and elimination of these unwanted images will be discussed herein. The subtraction of one image from the other may be done, for example, by comparing corresponding pixels of each of the first and second images and determining the state of the corresponding pixel in the third image. For example, if the corresponding pixels in both the first and second images are the same, either on or off, then the corresponding pixel in the third image should be off. If, however, the pixels are different, then the corresponding pixel in the third image should be on.
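The image subtraction described above can be sketched as follows. This is a minimal illustration, assuming 8-bit grayscale images and assuming that negative differences are clipped to zero; the patent itself also describes the simpler case of comparing binary (on/off) pixels.

```python
import numpy as np

def difference_image(bright, dark):
    """Subtract the dark pupil image from the bright pupil image.

    Both inputs are assumed to be grayscale arrays of equal shape.
    Negative differences are clipped to zero, so only regions that are
    brighter in the bright pupil image (chiefly the retro-reflecting
    pupils) survive in the result.
    """
    b = bright.astype(np.int16)
    d = dark.astype(np.int16)
    return np.clip(b - d, 0, 255).astype(np.uint8)

# Two nearly identical 4x4 "faces"; only the pupil pixel differs.
bright = np.full((4, 4), 100, dtype=np.uint8)
dark = bright.copy()
bright[1, 2] = 250  # pupil retro-reflection present only under the bright pupil source
diff = difference_image(bright, dark)
```

Because the two source images are nearly identical everywhere except the pupils, the difference image carries far less data than either input, which is what makes the subsequent analysis fast.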
[0043] The controller 18 utilizes different absorption properties of the eye 16 to measure closure of the eye 16. In particular, the present invention utilizes the fact that the eye 16 will generally absorb light at a wavelength of about 950 nm, and the retina of the eye will generally reflect light at other wavelengths such as about 880 nm. The present invention illuminates the subject's eyes 16 with both wavelengths of light at appropriate intensities to produce similar images, measures the reflected image at each of the wavelengths, and subtracts one of the images from the other to form a third image that is primarily only of the subject's pupils. From the third image, the present invention can determine whether and to what extent the subject's eyes 16 are open or closed, and can determine perclos from successively captured third images. Because the third image contains much less data than a normal image of the subject, it can be processed more easily and more quickly than a conventional image of the subject, making it capable of delivering real-time information.
[0044] As shown in FIG. 3, the controller can be programmed to execute a series of steps to determine drowsiness and inattention. The image acquisition and analysis process begins with step 70, in which the first illumination source, or bright pupil illumination source, is pulsed during the image exposure period. After exposure, the first image, or bright pupil image, is acquired by the controller and stored in memory in step 72. Next, the second illumination source, or dark pupil illumination source, is pulsed during the next image exposure period in step 74; the second image, or dark pupil image, is acquired by the controller and stored in memory in step 76; and the second illumination source is turned off in step 78. The controller subsequently generates a third image, or difference image, by subtracting the dark pupil image from the bright pupil image in step 80.
[0045] Referring again to FIG. 3, a minimum brightness threshold is determined in step 82 by analyzing the difference image produced in step 80. The threshold is applied to the difference image to identify the brightest pixels in the image in step 84, and the difference image is screened to permit portions of the third image that are above the brightness threshold to be analyzed and portions below the brightness threshold to be discarded. The bright pixels are next clustered to form bright objects in step 86. The bright objects are analyzed based on shape and location to determine which bright objects potentially correspond to the subject's eyes in step 88. After all of the bright objects have been classified, the difference image is analyzed to determine if significant motion has occurred in step 90.
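The thresholding and clustering of steps 82 through 86 can be sketched as follows. The 4-connectivity flood fill used to group bright pixels into objects is an assumption made for illustration; the patent states only that bright pixels are clustered into bright objects.

```python
import numpy as np

def bright_objects(diff, threshold):
    """Cluster above-threshold pixels of a difference image into
    connected bright objects (4-connectivity flood fill).

    Returns a list of pixel-coordinate lists, one per bright object.
    The connectivity rule is an assumed detail.
    """
    mask = diff >= threshold
    seen = np.zeros_like(mask, dtype=bool)
    objects = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                stack, pixels = [(r, c)], []
                seen[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                objects.append(pixels)
    return objects

# Example: two separated bright blobs yield two candidate objects.
demo = np.zeros((5, 5), dtype=np.uint8)
demo[0, 0] = 200
demo[3, 3:5] = 220
objs = bright_objects(demo, 100)
```

Each returned object would then be screened by the shape and location tests of step 88 to decide whether it could correspond to an eye.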
[0046] Since there is a time lapse between the capture of the bright pupil image and the capture of the dark pupil image, the subject may have moved slightly between the capture of the two images. Movement can occur due to typical variations in the head angle of the subject, such as when the subject tilts his/her head to check traffic or to consult instrumentation on the dash. If the subject moves between image captures, the features of the bright pupil image and the dark pupil image may not properly align. Accordingly, in order to determine whether motion of the subject has occurred between image captures, the difference image of step 80 is evaluated to determine whether there is a significant increase in the calculated pixel minimum brightness threshold of step 82. If significant motion has occurred, as determined by step 92, it is likely that some of the bright objects are incorrectly classified as corresponding to the subject's eyes. Accordingly, the classification of portions of the third image potentially corresponding to the subject's eyes must be confirmed by analyzing the bright pupil image in step 94 and the dark pupil image in step 96.
[0047] The analysis of the bright pupil image examines the shape and location of the bright object associated with each potential eye candidate. For a properly classified eye, the location and shape of the reflection of a bright object in the bright pupil image will be very similar to the location and shape of the bright object in the difference image. A misclassified eye is often caused by the misalignment of bright objects between the bright pupil image and the difference image. Accordingly, a properly classified eye is determined by the alignment of a bright object in the bright pupil image having the approximate shape of an eye within a nearness parameter of a bright object in the difference image. As used herein, the term "nearness parameter" means a distance of from about 1/10 to about 1/4 the diameter of a typically observed pupil.
[0048] The analysis of the dark pupil image examines the presence or absence of a corneal reflection associated with each potential eye candidate. For a true eye image, a corneal reflection will be present substantially adjacent the bright object, within a nearness parameter. Accordingly, for a true eye, the corneal reflection of the dark pupil image is within a nearness parameter of a bright object of the difference image. It is observed that for an image acquisition time of 17 milliseconds, the corneal reflection rarely moves more than 1 centimeter with respect to the bright object corresponding to the subject's pupil. Hence, the classification of an eye can be confirmed by the presence of a corneal reflection within the second image that is within a nearness parameter, such as about 1 centimeter, of a bright object of the third image.
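The confirmation test just described reduces to a distance check. The sketch below assumes coordinates already expressed in millimetres; `confirm_eye` and its parameter names are hypothetical, with the default nearness parameter set to the approximately 1 centimeter figure given above.

```python
import math

def confirm_eye(candidate_xy, corneal_reflections, nearness_mm=10.0):
    """Confirm a candidate eye object from the difference image by
    checking for a corneal reflection in the dark pupil image within
    the nearness parameter (about 1 cm by default).

    Coordinates are assumed to already be in millimetres; in practice
    pixel coordinates would be scaled by the camera geometry.
    """
    cx, cy = candidate_xy
    return any(math.hypot(cx - rx, cy - ry) <= nearness_mm
               for rx, ry in corneal_reflections)

# A reflection 5 mm away confirms the candidate; one 50 mm away does not.
near = confirm_eye((100.0, 100.0), [(104.0, 103.0)])
far = confirm_eye((100.0, 100.0), [(150.0, 100.0)])
```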
[0049] Accordingly, portions of the third image, or difference image, corresponding to the bright objects which can represent the subject's eyes, are selected if the portion of the third image is within a nearness parameter to reflections in the first image or the second image. As shown in FIGS. 4a-4e, five images are presented which relate to the detection of eye closure when significant motion occurs while acquiring the bright pupil image and the dark pupil image. FIG. 4a is the bright pupil image. A number of bright objects are observed in FIG. 4a, including the two bright pupils 22, 26 and several bright objects 20, 21, 23, 24, 25, 27, 28, 29 caused by reflections of the illumination source from the subject's glasses. FIG. 4b is the dark pupil image. Again, a number of bright objects are observed in FIG. 4b, including two corneal reflections 32, 36 that appear as small bright objects within the dark pupil region and several bright objects 30, 31, 33, 34, 35, 37, 38 caused by reflections of the illumination source from the subject's glasses. Because the subject was moving during image acquisition, the reflections on the subject's glasses in FIG. 4b differ in size, shape and location from those in FIG. 4a.
[0050] FIG. 4c is the difference image calculated by subtracting the image in FIG. 4b from the image in FIG. 4a. A total of six bright objects 39, 40, 41, 42, 43, 44 are shown in FIG. 4c. Objects 40 and 43 are the bright pupil objects and objects 39, 41, 42, 44 are objects caused by differences in the bright objects associated with reflection from the subject's glasses.
[0051] FIG. 4d is an image created by applying a threshold to the bright pupil image in FIG. 4a. All of the pixels above the threshold are shown as white and all other pixels are shown as black. Objects 47 and 51 are the bright pupil objects and objects 45, 46, 48, 49, 50, 52, 53 are objects caused by the reflections on the subject's glasses.
[0052] FIG. 4e is an image created by applying a threshold to the dark pupil image in FIG. 4b. Again, all of the pixels above the threshold are shown as white and all other pixels are shown as black. Objects 55 and 58 are the corneal reflection objects and objects 54, 56, 57, 59, 60, 61 are objects caused by the reflections on the subject's glasses. Object 62 is associated with a bright spot on the subject's face.
[0053] The threshold applied to create the images in FIGS. 4d and 4e is calculated as a fraction of the brightness of the spot being analyzed. In practice, many images similar to FIG. 4d and FIG. 4e are created for a variety of thresholds associated with each bright object. FIG. 4d and FIG. 4e are examples of the many images that can be created.
[0054] The classification of an eye starts with the difference image in FIG. 4c. The bright objects in the difference image are analyzed based on the size and shape of an eye. The size and shape parameters are set to be very liberal, thereby initially allowing many spots to be classified as eyes. As shown in FIG. 4c, based on the initial analysis, objects 39, 40, 41 and 43 are classified as eyes.
[0055] In order to confirm that object 40 shown in FIG. 4c is in fact an eye of the subject, an object 47 in FIG. 4d at the same location as object 40 is identified. Object 47 is similar in size and shape to object 40. Hence, the classification of object 40 as an eye is confirmed in the bright pupil image. The next step in the confirmation process for object 40 is the identification of object 55 in FIG. 4e. Object 55 is consistent in location and size with an expected corneal reflection. Hence, the classification of object 40 is confirmed in the dark pupil image. Similarly, the classification of object 43 as an eye is confirmed based on objects 51 and 58. Object 51 is the appropriate size and shape for a bright pupil, and object 58 is in the proper location and of the proper size to be a corneal reflection.
[0056] The confirmation process for object 39 shown in FIG. 4c identifies an object 45 in FIG. 4d at the same location as object 39. Object 45 is similar in size to object 39, but its shape is inconsistent with that of an eye. Accordingly, object 45 fails the symmetry test associated with eye classification. In one embodiment, the symmetry test can calculate a slope of a specified pixel group using a least mean square fit. Proper classification of an eye requires that the calculated slope is statistically consistent with a slope of zero. Likewise, the classification of object 39 as an eye is also rejected because corresponding object 54 is too large to be a corneal reflection.
[0057] The confirmation process for object 41 in FIG. 4c identifies an object 48 in FIG. 4d at the same location as object 41. Object 48 is similar in size and shape to object 41. Hence, the classification of object 41 as an eye is incorrectly confirmed in the bright pupil image. The classification of object 41 as an eye is nonetheless rejected because object 60 in FIG. 4e is too large to be consistent with a corneal reflection.
[0058] Referring again to FIG. 3, if there is little or no motion during the acquisition of the bright pupil image and the dark pupil image, the driver's gaze angle can be measured in step 98. The centroid of the subject's pupil can be calculated by analysis of the bright pupil image and the location of the corneal reflection can be determined by analysis of the dark pupil image. The gaze angle is based on the relative position of the corneal reflection and the centroid of the subject's pupil. Accordingly, the subject's gaze angle can be calculated as a function of the centroid of the subject's pupil in an image and the location of the corneal reflection of an illumination source in an image. By utilizing the system of the present invention, practical eye gaze monitoring techniques can be implemented within a vehicle in which it is necessary to identify the corneal reflection in rapidly changing environmental lighting conditions. Previous methods of determining corneal reflection required the use of a single image for detecting both a bright pupil and a corneal reflection, thereby requiring that the corneal reflection be significantly brighter than the bright pupil. According to the present invention, by using one image to find the bright pupil and another to find the corneal reflection, the accuracy of the gaze angle measurement is greatly increased. The corneal reflection method for determining the orientation of the eye is described in United States Patent No. 5,231,674.
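The gaze computation of step 98 can be illustrated with a simplified sketch. The linear degrees-per-pixel mapping below is an assumption made for illustration only; the full corneal reflection method is described in U.S. Patent No. 5,231,674.

```python
def gaze_offset_deg(pupil_centroid, corneal_reflection, deg_per_px=0.5):
    """Estimate gaze angle from the pupil centroid (taken from the
    bright pupil image) and the corneal reflection location (taken
    from the dark pupil image).

    The linear degrees-per-pixel scale is a simplifying assumption;
    a real calibration would map the offset through the camera and
    eye geometry.
    """
    dx = corneal_reflection[0] - pupil_centroid[0]
    dy = corneal_reflection[1] - pupil_centroid[1]
    return (dx * deg_per_px, dy * deg_per_px)

# Example: reflection 6 px right and 4 px above the pupil centroid.
offset = gaze_offset_deg((320.0, 240.0), (326.0, 236.0))
```

Note that the two features come from two different images, which is what removes the requirement that the corneal reflection be brighter than the bright pupil.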
[0059] Referring once again to FIG. 3, drowsiness and/or inattention of the subject is determined by calculating perclos and/or the distraction measure in step 100. Perclos is determined by the percentage of time the subject's eyes are 60% - 80% closed. Accordingly, perclos can be calculated by evaluating the successive detections of the closure of the driver's eye(s). The distraction measure has at least two components. One component is the identification of unsafe behavior. For example, if the subject takes his/her eyes off the road for a predetermined period, such as about 3 seconds or more, this lapse period can be considered unsafe. Also contemplated within the identification of unsafe behavior is inattention by the subject to the rear view or side view mirrors for a period of time. A second component of driver distraction is a large variation from normal movement patterns. Most alert drivers will scan the road ahead, look at the instrument panel and look at the mirrors in a pattern specific to each person. The distraction measure of the present invention quantifies the pattern and looks for abrupt changes in the pattern. For example, the pattern can include the number of subject eye glances per second. The pattern can also include the distribution of time spent by the subject viewing the road, the instrument panel, the side view mirrors, etc. The distraction measure can be determined through video methods and/or information archived through repeated still images taken of the subject.
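The perclos portion of step 100 can be sketched as a running fraction over successive eye closure detections. The per-frame boolean closure flags are an assumed representation of those detections.

```python
def perclos(closure_flags):
    """Fraction of recent frames in which the driver's eyes were
    judged closed, computed over successive closure detections.

    `closure_flags` is an assumed representation: one boolean per
    captured difference image, True when the eye closure criterion
    is met for that frame.
    """
    if not closure_flags:
        return 0.0
    return sum(closure_flags) / len(closure_flags)

# Example: eyes judged closed in 2 of 4 successive frames.
drowsiness_level = perclos([True, False, True, False])
```

A drowsiness warning could then be issued when this fraction exceeds a predetermined standard.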
[0060] Once the distraction measure for a subject is determined in step 100, a distraction warning can be delivered in step 102 if the subject exhibits the above-described distracted behavior. Distraction warnings can be customized to the type of distraction observed. The goal of a distraction warning is to draw the driver's gaze to an area that requires attention. For example, if the driver fails to look at the roadway for a period of time, or if he/she spends less than 50% of his/her time looking at the roadway, a visual indicator located in a heads-up display could be used to bring the driver's attention to the front. If the driver has not looked at his/her right mirror for an unsafe period of time, the driver's attention could be drawn to the right using a visual indicator on or near the mirror. Alternatively, auditory signals and/or tactile signals, such as seat vibration, can be used in conjunction with the visual indicators and/or as a replacement for the visual indicators.
[0061] For example, if the driver fails to look forward for several seconds, a single beep can be provided, but if the driver fails to look at his mirror for some time, a series of beeps that appear to come from the mirror may be used. The series of beeps may also be coupled with a visual indicator mounted near or integral with the mirror.
[0062] When drowsiness is detected, such as by a raised level of perclos, or a level of perclos in excess of a predetermined standard, the subject can be provided with drowsiness warnings that will clearly indicate his/her level of drowsiness through a driver interface 20. The purpose of this warning is not to maintain alertness. Rather, the purpose of the warning is to inform the driver of his/her impaired state of alertness and to encourage the driver to stop and engage in proven alerting activities such as napping or ingesting caffeine.
[0063] As shown in FIG. 5, the driver interface 20 can include a multi-media information display. In one embodiment, the driver interface 20 can include a visual, audible, and/or tactile feedback mechanism 65. The driver interface 20 can be positioned to provide feedback in the form of a distraction warning to a subject during the driving process. In one embodiment, the driver interface 20 can be positioned adjacent the driver, such as on a portion of an interior 68 of an automobile, such as on the dashboard or visor of an automobile. The driver interface 20 can include at least two feedback mechanisms 65 to reinforce the transfer of information to an impaired driver. The feedback mechanism 65 may also have a directional component to bring the driver's attention to a particular place. An example of such an interface 20 includes a visual display that includes a plurality of visually alerting devices 66 and an audibly alerting device 67. In one embodiment, the visually alerting device 66 may be an LED and/or a conventional light source. In another embodiment, the audibly alerting device 67 can comprise a speaker.
[0064] The driver interface 20 can communicate to the subject the number of seconds that the driver's eyes were closed. In one embodiment, if the driver's eyes are closed for a predetermined number of consecutive seconds, such as about 6 seconds, a feedback mechanism 65 can be activated. For example, a visually alerting device 66 and an audibly alerting device 67 can simultaneously be activated. In one embodiment, a series of notes, such as a scale or a well-known tune, can be played in order to alert the subject. In this embodiment, the visual information is directly reinforced by the audible information. Presenting the information through redundant feedback mechanisms 65 increases the likelihood that the information is properly delivered to an impaired driver.
[0065] In one embodiment, the driver interface 20 can include an array of LEDs and a speaker capable of playing specific notes for notifying the subject when his/her eyes were closed for a period of 4 consecutive seconds. When the warning is first triggered, a first LED can be lit and a musical note of frequency 880 Hz, corresponding to A on the musical scale, can be played for 1 second. After playing the first note, a second LED can be lit and a note of frequency 987 Hz, corresponding to B on the musical scale, can be played for 1 second. A third LED can subsequently be lit and a note of frequency 1046 Hz, corresponding to C on the musical scale, can be played for 1 second. Finally, a fourth LED can be lit and a note of frequency 1173 Hz, corresponding to D on the musical scale, can be played for 1 second. In this embodiment, the four LEDs remain lit for a period of time, such as about ten seconds, to allow the driver time to absorb the information.
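The LED-and-tone warning of this embodiment is essentially a fixed sequence, and can be sketched as data plus a playback loop. The `light_led` and `play_tone` callbacks are hypothetical stand-ins for the actual display and speaker drivers; the frequencies and LED order are taken from the text above.

```python
# Frequencies and LED indices as described for this embodiment;
# the playback interface is a hypothetical abstraction.
WARNING_SEQUENCE = [
    (1, 880, 1.0),   # LED 1, note A
    (2, 987, 1.0),   # LED 2, note B
    (3, 1046, 1.0),  # LED 3, note C
    (4, 1173, 1.0),  # LED 4, note D
]

def run_warning(light_led, play_tone):
    """Step through the drowsiness warning: light each LED and play
    the matching note for its duration, reinforcing the visual cue
    with the audible one."""
    for led, freq, secs in WARNING_SEQUENCE:
        light_led(led)
        play_tone(freq, secs)

# Example with recording callbacks instead of real hardware drivers.
events = []
run_warning(lambda led: events.append(("led", led)),
            lambda freq, secs: events.append(("tone", freq)))
```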
[0066] Referring again to FIG. 3, the system of the present invention also includes an automatic gain control that adjusts the overall brightness of an image within a set range in step 104. This feature is desirable to avoid image saturation when background lighting increases rapidly due to changes in environmental lighting, such as the presence of oncoming headlights. This feature is also desirable to compensate for person-to-person variations in retinal reflection, which produce variations in the observed brightness of the subject's pupils in the image. To maintain optimal performance of the system, the intensity of the bright pupil illumination source and/or the dark pupil illumination source is increased as the image sensor gain decreases. Conversely, the intensity of the bright pupil illumination source and/or the dark pupil illumination source is decreased as the image sensor gain increases.
[0067] The bright pupil illumination source can be adjusted to maintain the bright pupil as the brightest object in the image while avoiding saturation of the image caused by the illumination source combined with the environmental lighting. The dark pupil illumination source can be adjusted to match the overall image brightness obtained with the bright pupil illumination source.
[0068] Automatic adjustment of the brightness of the illumination sources can be controlled based on the value of the camera gain, available as a signal from the image sensor, or directly based on the brightness of the images. In the direct method, the bright pupil illumination source is incrementally adjusted after each successful eye classification until the bright pupil image is within a given range of brightness. The dark pupil illumination source is subsequently adjusted to match the overall image intensity of the bright pupil image.
[0069] The automatic calibration process is illustrated in FIG. 6. The first required step 106 is to analyze the bright pupil and dark pupil images to identify if a subject's eyes are present in the images. The next step 108 is to determine the confidence or accuracy of the classification of the subject's eyes. Confidence is considered high in step 110 if two eyes are found separated by an expected distance. Confidence can also include the absence of any significant detected motion. If confidence is low, no attempt is made to calibrate the system, as shown in step 112.
[0070] Referring again to FIG. 6, if confidence is high, the calibration is incrementally adjusted. To accomplish this, the brightness of the pupils in the bright pupil image is set within a specified calibration range, and the average brightness of the dark pupil image is set to be similar to that of the bright pupil image. The brightness of the subject's pupils is determined by analyzing the bright pupil image in step 114. If the pupil brightness is below a lower calibration limit, the image exposure is increased in step 116. Next, the brightness of the subject's pupils is determined in the dark pupil image in step 118. If the pupil brightness exceeds an upper calibration limit, the image exposure is decreased in step 120. This is done in small steps to ensure that brief changes in environmental lighting do not have a large effect on the calibration.
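One incremental calibration step of FIG. 6 (steps 114 through 120) can be sketched as follows. The calibration limits, step size, and exposure units are illustrative assumptions; the patent specifies only that adjustments are made in small steps.

```python
def adjust_exposure(bright_pupil_brightness, dark_pupil_brightness,
                    exposure, lower=120, upper=200, step=2):
    """One incremental calibration step.

    Exposure is raised if the pupils in the bright pupil image fall
    below the lower calibration limit (step 116), and lowered if the
    pupils in the dark pupil image exceed the upper limit (step 120).
    The limits and step size are assumed values; small steps keep
    brief lighting changes from dominating the calibration.
    """
    if bright_pupil_brightness < lower:
        exposure += step
    if dark_pupil_brightness > upper:
        exposure -= step
    return exposure
```

For example, dim pupils in the bright pupil image raise the exposure by one step, while an over-bright dark pupil image lowers it; when both conditions fire, the adjustments cancel.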
[0071] The adjustment includes increasing and decreasing the exposure of the images. This can be accomplished by adjusting the brightness of the bright pupil illumination source and the dark pupil illumination source. For example, if the illumination sources were LEDs, the current in the LEDs could be varied. The exposure can also be changed by increasing or decreasing the time the illumination sources are on. Some cameras include an electronic shutter that can be set by the controller.
[0072] While several embodiments of an image system and method for detecting eye closure and gaze angles therewith were described in the foregoing detailed description, those skilled in the art may make modifications and alterations to these embodiments without departing from the scope and spirit of the invention. Accordingly, the foregoing description is intended to be illustrative rather than restrictive. The invention described hereinabove is defined by the appended claims and all changes to the invention that fall within the meaning and the range of equivalency of the claims are embraced within their scope.

Claims

THE INVENTION CLAIMED IS:
1. An image sensor, comprising: a first source of light having a first wavelength; a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength; at least one optics module for receiving reflected light having the first wavelength and producing a first image corresponding to the size and location of the reflected light having the first wavelength, and for receiving a subsequent reflected light having the second wavelength and producing a second image corresponding to the reflected light having the second wavelength; and a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image, screening portions of the third image that are above a brightness threshold, and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
2. The image sensor of claim 1, wherein the first wavelength is highly reflective when optically aligned with a human eye and the second wavelength is highly absorbed when optically aligned with a human eye.
3. The image sensor of claim 2, wherein the first wavelength is from about 820 nm to about 915 nm and the second wavelength is from about 940 nm to about 960 nm.
4. The image sensor of claim 1, wherein the first source of light and the second source of light are optically aligned with the eye of a driver, and the image sensor receives reflected light having the first wavelength and the second wavelength from the eye of the driver.
5. The image sensor of claim 1, wherein the controller selects portions of the third image by determining the presence of a corneal reflection within the second image substantially adjacent a bright object within the third image.
6. The image sensor of claim 1, wherein the controller selects portions of the third image by determining the presence of a bright object within the first image adjacent a bright object within the third image that corresponds substantially to the size and/or shape of a human eye.
7. The image sensor of claim 1, wherein the controller detects an eye closure based on the informational differences within successive third images.
8. The image sensor of claim 7, further comprising a driver interface for providing a visual, audible, and/or tactile feedback mechanism in response to detected perclos.
9. A driver alert system, comprising: a first source of light having a first wavelength optically aligned with a driver's eye; a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, the second source of light optically aligned with the driver's eye; at least one image sensor for receiving reflected light from the driver's eye having the first wavelength and producing a first image, and for receiving a subsequent reflected light from the driver's eye having the second wavelength and producing a second image; a controller for receiving the first image and the second image, producing a third image indicative of the first image subtracted from the second image to detect eye closure and calculate perclos based on the informational differences between successive third images; and a driver interface for providing a visual, audible, and/or tactile feedback mechanism in response to an elevated level of perclos.
10. The driver alert system of claim 9, wherein the first wavelength is from about 820 nm to about 915 nm and the second wavelength is from about 940 nm to about 960 nm.
11. The driver alert system of claim 9, wherein the controller screens portions of the third image that are above a brightness threshold.
12. The driver alert system of claim 9, wherein the controller selects portions of the third image that are within a nearness parameter to reflections in the first image or second image.
13. The driver alert system of claim 9, wherein the controller selects portions of the third image by determining the presence of a corneal reflection within the second image substantially adjacent a bright object within the third image.
14. The driver alert system of claim 9, wherein the controller selects portions of the third image by determining the presence of a bright object within the first image adjacent a bright object within the third image that corresponds substantially to the size and/or shape of a human eye.
15. The driver alert system of claim 9, wherein the driver interface includes at least two visual, audible, and/or tactile feedback mechanisms.
16. A method, comprising the steps of: providing a source of light toward a subject's eyes, the light having first and second wavelengths, wherein the first wavelength does not equal the second wavelength; producing a first image corresponding to the size and location of light reflected from the subject's eyes having the first wavelength; producing a second image corresponding to the size and location of light reflected from the subject's eyes having the second wavelength; producing a third image indicative of the first image subtracted from the second image; screening portions of the third image that are above a brightness threshold; and selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
17. The method of claim 16, wherein the step of selecting portions further includes determining the presence of a corneal reflection within the second image substantially adjacent a bright object within the third image.
18. The method of claim 16, wherein the step of selecting portions further includes determining the presence of a bright object within the first image adjacent a bright object in the third image that corresponds substantially to the size and/or shape of a human eye.
19. The method of claim 16, further comprising the step of producing a screened third image and utilizing the screened third image to determine a distraction measure.
20. A method, comprising the steps of: providing a first source of light having a first wavelength optically aligned with a driver's eye; providing a second source of light having a second wavelength, the second wavelength not being equal to the first wavelength, the second source of light optically aligned with the driver's eye; receiving reflected light from the driver's eye having the first wavelength and producing a first image; receiving reflected light from the driver's eye having the second wavelength and producing a second image; producing a third image indicative of the first image subtracted from the second image; detecting closure of the driver's eye based on informational differences between successive third images; calculating perclos based on the detected closure of the driver's eye; and activating a driver-responsive visual, audible, and/or tactile feedback mechanism in response to an elevated level of perclos.
21. The method of claim 20, further comprising the step of screening portions of the third image that are above a brightness threshold.
22. The method of claim 21, wherein the step of detecting closure of the driver's eye includes the step of selecting portions of the third image that are within a nearness parameter to reflections in the first image or second image.
23. The method of claim 22, wherein the step of selecting portions further includes determining the presence of a corneal reflection in the second image substantially adjacent a bright object in the third image.
24. The method of claim 22, wherein the step of selecting portions further includes determining the presence of a bright object within the first image adjacent a bright object in the third image that corresponds substantially to the size and/or shape of a human eye.
25. The method of claim 20, further comprising the step of producing a screened third image and utilizing the screened third image to determine a distraction measure.
26. The method of claim 25, further comprising the step of activating the driver-responsive visual, audible, and/or tactile feedback mechanism in response to an elevated distraction measure.
PCT/US2007/003287 2006-02-07 2007-02-07 Driver drowsiness and distraction monitor WO2007092512A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US77106306P 2006-02-07 2006-02-07
US60/771,063 2006-02-07

Publications (2)

Publication Number Publication Date
WO2007092512A2 true WO2007092512A2 (en) 2007-08-16
WO2007092512A3 WO2007092512A3 (en) 2009-04-09

Family

ID=38345782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/003287 WO2007092512A2 (en) 2006-02-07 2007-02-07 Driver drowsiness and distraction monitor

Country Status (1)

Country Link
WO (1) WO2007092512A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263749A (en) * 2015-07-14 2019-09-20 原相科技股份有限公司 Eye state method for detecting and eye state detecting system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030209893A1 (en) * 1992-05-05 2003-11-13 Breed David S. Occupant sensing system
US20050100191A1 (en) * 2003-11-11 2005-05-12 Harbach Andrew P. Imaging system and method for monitoring an eye

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10039445B1 (en) 2004-04-01 2018-08-07 Google Llc Biosensors, communicators, and controllers monitoring eye movement and methods for using them
CN102918834A (en) * 2010-03-01 2013-02-06 艾肯姆有限公司 Systems and methods for spatially controlled scene illumination
KR101829850B1 (en) * 2010-03-01 2018-03-29 아이플루언스, 인크. Systems and methods for spatially controlled scene illumination
EP2543187A4 (en) * 2010-03-01 2017-12-20 Google LLC Systems and methods for spatially controlled scene illumination
US20150181100A1 (en) * 2010-03-01 2015-06-25 Eyefluence, Inc. Systems and methods for spatially controlled scene illumination
WO2011149577A2 (en) 2010-03-01 2011-12-01 Eye-Com Corporation Systems and methods for spatially controlled scene illumination
US8890946B2 (en) 2010-03-01 2014-11-18 Eyefluence, Inc. Systems and methods for spatially controlled scene illumination
US8911087B2 (en) 2011-05-20 2014-12-16 Eyefluence, Inc. Systems and methods for measuring reactions of head, eyes, eyelids and pupils
US8885877B2 (en) 2011-05-20 2014-11-11 Eyefluence, Inc. Systems and methods for identifying gaze tracking scene reference locations
CN103718134A (en) * 2011-05-25 2014-04-09 索尼电脑娱乐公司 Eye gaze to alter device behavior
US10120438B2 (en) 2011-05-25 2018-11-06 Sony Interactive Entertainment Inc. Eye gaze to alter device behavior
CN103718134B (en) * 2011-05-25 2017-02-22 索尼电脑娱乐公司 Eye gaze to alter device behavior
EP2560391A1 (en) * 2011-08-17 2013-02-20 Autoliv Development AB imaging system in a motor vehicle, and corresponding imaging method
US8929589B2 (en) 2011-11-07 2015-01-06 Eyefluence, Inc. Systems and methods for high-resolution gaze tracking
EP2731049A1 (en) 2012-11-13 2014-05-14 Tobii Technology AB Eye-tracker
CN103071247A (en) * 2013-01-31 2013-05-01 郎锦义 Induced respiration controller and implementation method thereof
WO2014146199A1 (en) 2013-03-18 2014-09-25 Mirametrix Inc. System and method for on-axis eye gaze tracking
US9733703B2 (en) 2013-03-18 2017-08-15 Mirametrix Inc. System and method for on-axis eye gaze tracking
EP2975997A4 (en) * 2013-03-18 2016-12-14 Mirametrix Inc System and method for on-axis eye gaze tracking
JP2016512765A (en) * 2013-03-18 2016-05-09 ミラメトリックス インコーポレイテッド On-axis gaze tracking system and method
CN105431078A (en) * 2013-03-18 2016-03-23 视译公司 System and method for on-axis eye gaze tracking
US10860852B2 (en) 2015-07-06 2020-12-08 Pixart Imaging Inc. Eye state detecting method and eye state detecting system
US10137893B2 (en) * 2016-09-26 2018-11-27 Keith J. Hanna Combining driver alertness with advanced driver assistance systems (ADAS)
CN113743232A (en) * 2021-08-09 2021-12-03 广州铁路职业技术学院(广州铁路机械学校) Fatigue detection method for urban rail driver

Also Published As

Publication number Publication date
WO2007092512A3 (en) 2009-04-09

Similar Documents

Publication Publication Date Title
WO2007092512A2 (en) Driver drowsiness and distraction monitor
CA2554905C (en) Device for determining the driving capability of a driver in a vehicle
US7202793B2 (en) Apparatus and method of monitoring a subject and providing feedback thereto
US7199767B2 (en) Enhanced vision for driving
US6926429B2 (en) Eye tracking/HUD system
US9460601B2 (en) Driver distraction and drowsiness warning and sleepiness reduction for accident avoidance
US6873714B2 (en) Auto calibration and personalization of eye tracking system using larger field of view imager with higher resolution
US6952498B2 (en) Face portion detecting apparatus
EP2288287B1 (en) Driver imaging apparatus and driver imaging method
JP4593942B2 (en) Pupil detection apparatus and method
EP2564777B1 (en) Method for classification of eye closures
US7553021B2 (en) Optical system for monitoring eye movement
EP1701319A1 (en) Illuminating apparatus, image capturing apparatus, and monitoring apparatus, for vehicle driver
US7091867B2 (en) Wavelength selectivity enabling subject monitoring outside the subject's field of view
JPH03254291A (en) Monitor for automobile driver
JP3296119B2 (en) Gaze direction measuring device for vehicles
KR102072910B1 (en) A drowsiness detection method using the drowsiness detection apparatus
CN116946006B (en) Control method of matrix type car lamp and car
JPH0761256A (en) Forward carelessness detecting device for vehicle
KR20230050550A (en) AEB system and control method using adaptive careless judgment function
KR20230099497A (en) Headlamp position control system and control method using adaptive careless judgment function
KR20230072579A (en) Headlamp position control system and control method using adaptive careless judgment function
KR20230050146A (en) Driver carelessness judgment system and method
KR20230050551A (en) Headlamp position control system and control method using adaptive careless judgment function
KR20230050147A (en) Parking assistance control system and control method using adaptive careless judgment function

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07763236

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 07763236

Country of ref document: EP

Kind code of ref document: A2