JP4715262B2 - Gaze direction detection apparatus and method for vehicle

Publication number: JP4715262B2
Application number: JP2005089125A
Other publication: JP2006263334A (Japanese)
Authority: JP (Japan)
Inventors: 真知子 平松, 光明 萩野
Original assignee: 日産自動車株式会社 (Nissan Motor Co., Ltd.)
Prior art keywords: vehicle, driver, correction value, line, object
Legal status: Expired - Fee Related

Description

  The present invention relates to a gaze direction detecting device and method for a vehicle.

  Conventionally, various devices for detecting the direction in which a person is looking are known. One of them is an optical device with a line-of-sight detection function used in camera autofocus technology. In this device, a correction value for correcting the user's line-of-sight direction is stored at the time of factory shipment, and when the device is actually used, the factory-shipment correction value is changed in accordance with the user's usage conditions. Thereby, after the correction value has been changed, the line-of-sight direction can be detected accurately for each user (see, for example, Patent Document 1).

In addition, there is known a gaze direction measuring device that stores in advance the head position and eyeball direction observed when a specific device such as a navigation screen or a door mirror is operated, corrects the head position and eyeball direction in accordance with the user's usage conditions, and accurately detects the user's gaze direction from the corrected head position and eyeball direction (see, for example, Patent Document 2).
Patent Document 1: JP 7-151958 A
Patent Document 2: JP 9-238905 A

  However, human eyes generally move differently when viewing a moving object than when viewing a stationary object. For this reason, even if the gaze direction is calibrated using a stationary object (such as an object visible through a camera viewfinder or an in-vehicle device), the gaze direction of a vehicle driver, who needs to recognize moving objects such as a preceding vehicle or a pedestrian, may not be detected accurately.

  A gaze direction detecting device for a vehicle according to the present invention includes a line-of-sight detection means, an in-vehicle visual recognition target specifying means, a static correction value calculation means, a static correction means, an obstacle detection means, a vehicle exterior visual recognition target specifying means, a dynamic correction value calculation means, and a dynamic correction means. The line-of-sight detection means detects the driver's line-of-sight direction from an image of the vehicle driver's face. The in-vehicle visual recognition target specifying means specifies, among the in-vehicle objects that are stationary in the vehicle interior space, an in-vehicle visual recognition object that the driver is estimated to be viewing. When the line-of-sight direction detected by the line-of-sight detection means differs from the direction connecting the driver's eye position to the in-vehicle visual recognition object specified by the in-vehicle visual recognition target specifying means, the static correction value calculation means obtains a static correction value corresponding to the difference. The static correction means corrects the line-of-sight direction detected by the line-of-sight detection means according to the static correction value obtained by the static correction value calculation means. The obstacle detection means detects obstacles around the vehicle. The vehicle exterior visual recognition target specifying means specifies, among the obstacles detected by the obstacle detection means, an outside-vehicle visual recognition object that is moving in real space and that the driver is estimated to be viewing. When the line-of-sight direction corrected by the static correction means differs from the direction connecting the driver's eye position to the outside-vehicle visual recognition object specified by the vehicle exterior visual recognition target specifying means, the dynamic correction value calculation means obtains a dynamic correction value corresponding to the difference. The dynamic correction means corrects the line-of-sight direction detected by the line-of-sight detection means according to the static correction value obtained by the static correction value calculation means and the dynamic correction value obtained by the dynamic correction value calculation means.

  According to the present invention, when the line-of-sight direction detected from the image differs from the direction connecting the driver's eye position to the in-vehicle visual recognition object, a static correction value corresponding to the difference is obtained, and the line-of-sight direction is corrected according to the obtained static correction value. Therefore, once a static correction value has been obtained, correction can be performed based on it, and an accurate line-of-sight direction can be obtained for a stationary object.

  Further, when the line-of-sight direction corrected by the static correction value differs from the direction connecting the driver's eye position to the visual recognition object outside the vehicle, a dynamic correction value corresponding to the difference is obtained, and the line-of-sight direction detected from the image is corrected using the obtained dynamic correction value together with the already obtained static correction value. Here, the dynamic correction value is a correction value that includes characteristics peculiar to viewing moving objects, which cannot be obtained from a stationary object. For example, when a driver looks at a moving object, the black-eye (iris and pupil) portion moves in accordance with the movement of the object, and this movement varies with factors such as age and driving skill. The dynamic correction value reflects such characteristics. Therefore, by correcting the gaze direction detected from the image using the dynamic correction value in addition to the static correction value, the gaze direction of a vehicle driver who is viewing a moving object can be detected accurately.

  Therefore, the detection accuracy of the visual recognition target of the vehicle driver can be improved.

  DESCRIPTION OF EXEMPLARY EMBODIMENTS  Hereinafter, preferred embodiments of the invention will be described with reference to the drawings. FIG. 1 is a configuration diagram showing an outline of a vehicle gaze direction detecting device according to a first embodiment of the present invention. As shown in the figure, the vehicle gaze direction detection device 1 captures an image of the vehicle driver's face and detects the driver's gaze direction from the obtained image. The vehicle gaze direction detection device 1 includes an imaging unit 10, a calculation unit 20, a warning unit (warning means) 30, an obstacle detection unit (obstacle detection means) 40, a vehicle motion information acquisition unit 50, and a vehicle surrounding environment information acquisition unit 60.

  The imaging unit 10 captures an image of the seated driver's face and is provided in the instrument panel portion in front of the driver's seat. Two imaging units 10 are provided in front of the driver's seat and acquire images of the driver's face from different angles. Although not shown, two infrared light emitting elements are provided in the vicinity of the imaging units 10. One infrared light emitting element is provided in the vicinity of one imaging unit 10, and that imaging unit 10 performs imaging while this infrared light emitting element is lit. The other infrared light emitting element is provided in the vicinity of the other imaging unit 10, and that imaging unit 10 performs imaging while the other infrared light emitting element is lit.

  The calculation unit 20 receives the image data captured by the imaging unit 10 and obtains the driver's gaze direction. Further, the calculation unit 20 stores a correction value in advance, for example at the time of factory shipment, and corrects the gaze direction obtained from the image data with this correction value so as to obtain a more accurate gaze direction. The factory-shipment correction value is obtained from data measured on a plurality of subjects.

  Furthermore, the calculation unit 20 obtains driver-specific correction values from the captured images. In particular, in the present embodiment, the driver-specific correction values consist of two values, a static correction value and a dynamic correction value. The line-of-sight direction of a driver who is looking at a stationary object is accurately determined using the static correction value, and the line-of-sight direction of a driver who is looking at a moving object is accurately determined based on the static correction value and the dynamic correction value.

  The warning unit 30 warns the driver when it is determined that an object that may collide with the vehicle does not match the driver's line-of-sight direction, that is, when the driver is not viewing the object. The warning unit 30 may alert the driver by voice or by an image (on a navigation screen or a head-up display).

  The obstacle detection unit 40 detects obstacles around the vehicle. Here, the obstacles include moving objects that are moving in real space, such as a preceding vehicle or a pedestrian (that is, objects whose movement is not merely apparent movement caused by the motion of the host vehicle but which are actually moving), and stationary objects that are stationary in real space, such as traffic signs and signboards (that is, objects that are absolutely stationary). The obstacle detection unit 40 is constituted by, for example, a laser radar; it receives reflected light of laser light irradiated forward and detects an obstacle from the distance from the host vehicle to the obstacle ahead and the width of the obstacle.

  The vehicle motion information acquisition unit 50 acquires information related to vehicle motion, such as vehicle speed information, acceleration information (such as front and rear and lateral directions), and steering amount of the steering wheel. For this reason, the vehicle motion information acquisition unit 50 includes a sensor that detects the vehicle speed, acceleration, steering amount, and the like.

  The vehicle surrounding environment information acquisition unit 60 acquires environment information around the vehicle, for example, type information or shape information of the road on which the vehicle is traveling. The vehicle surrounding environment information acquisition unit 60 includes, for example, a CCD camera provided on the back side of the rearview mirror. The vehicle surrounding environment information acquisition unit 60 can thereby acquire road type information, shape information, and the like from an image of the area ahead of the host vehicle captured by the CCD camera.

  Here, the configuration of the gaze direction detecting device 1 for a vehicle will be described in further detail. FIG. 2 is a configuration diagram showing details of the vehicle gaze direction detecting device according to the first embodiment. As shown in the figure, the vehicle gaze direction detection device 1 includes an imaging unit 10, a calculation unit 20, a warning unit 30, an obstacle detection unit 40, a vehicle motion information acquisition unit 50, and a vehicle surrounding environment information acquisition unit 60. In addition, a driver operation detection unit 70 is further provided.

  The driver operation detection unit 70 detects operations by the driver seated in the driver's seat. In the present embodiment, the driver operation detection unit 70 detects whether switches of in-vehicle devices such as the air conditioner or audio system, the door mirror angle adjustment switch, and the like have been operated. That is, the driver operation detection unit 70 monitors whether a switch of an in-vehicle device or the like has been operated.

  The calculation unit 20 described above includes a line-of-sight detection unit (line-of-sight detection means) 21, an in-vehicle visual recognition target specifying unit (in-vehicle visual recognition target specifying means) 22, a correction value calculation unit 23, a correction value storage unit 24, a correction unit 25, a vehicle exterior visual recognition target specifying unit (vehicle exterior visual recognition target specifying means) 26, and a visual recognition determination unit (visual recognition determination means) 27.

  The line-of-sight detection unit 21 processes the driver's face image captured by the imaging unit 10 to detect the driver's line-of-sight direction. Specifically, the line-of-sight detection unit 21 detects the driver's corneal sphere center position (eye position) and the driver's pupil center position (black-eye position) from the driver's face image. The line-of-sight detection unit 21 then obtains the rotation angle of the black eye relative to the eye from the corneal sphere center position and the pupil center position, and detects the driver's line-of-sight direction from this rotation angle.
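
As an illustrative sketch (not part of the patent disclosure; the coordinate conventions, function names, and numerical values are assumptions), the geometry of this step can be expressed in Python as follows: a gaze vector is taken along the line through the corneal sphere center and the pupil center, and horizontal and vertical gaze angles are derived from it.

import numpy as np

def gaze_direction(corneal_center: np.ndarray, pupil_center: np.ndarray):
    """Return a unit gaze vector and its horizontal/vertical angles (rad).

    Assumes both points are in the same 3-D coordinate system, with x to the
    right, y up and z pointing from the imaging units toward the driver, and
    that the visual axis passes through the corneal sphere center and the
    pupil center, as described for the line-of-sight detection unit 21.
    """
    v = pupil_center - corneal_center
    v = v / np.linalg.norm(v)        # unit vector along the visual axis
    phi = np.arctan2(v[0], -v[2])    # horizontal angle (toward the cameras)
    xi = np.arcsin(v[1])             # vertical angle
    return v, phi, xi

# Example: eye 0.6 m from the cameras, pupil displaced slightly right and down.
vec, phi, xi = gaze_direction(np.array([0.0, 0.0, 0.6]),
                              np.array([0.003, -0.001, 0.588]))
print(vec, np.degrees(phi), np.degrees(xi))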

  The in-vehicle visual recognition target specifying unit 22 specifies, among the in-vehicle objects that are stationary in the vehicle interior space, the in-vehicle visual recognition object that the driver is estimated to be viewing. Specifically, the in-vehicle visual recognition target specifying unit 22 is connected to the driver operation detection unit 70 and receives information on whether the driver has operated a switch. When information on a switch operation by the driver is input, the in-vehicle visual recognition target specifying unit 22 determines that the driver is viewing the in-vehicle device whose switch was operated, and specifies that in-vehicle device as the in-vehicle visual recognition object.

  The correction value calculation unit 23 calculates correction values for correcting the gaze direction detected by the line-of-sight detection unit 21 so as to obtain an accurate gaze direction, and includes a static correction value calculation unit (static correction value calculation means) 23a. When the line-of-sight direction detected by the line-of-sight detection unit 21 differs from the direction connecting the driver's eye position to the in-vehicle visual recognition object specified by the in-vehicle visual recognition target specifying unit 22, the static correction value calculation unit 23a obtains a static correction value corresponding to the difference. Here, since the driver looks at the in-vehicle device while operating it, the correct line-of-sight direction is the direction connecting the eye position to the in-vehicle object. For this reason, when the direction detected by the line-of-sight detection unit 21 differs from that connecting direction, the static correction value calculation unit 23a obtains the difference as the static correction value.
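
A minimal sketch of this comparison (illustrative only; the angle convention and names are assumptions) is shown below. The reference direction is taken from the eye position to the operated in-vehicle device, and the static correction value is the angular difference between that direction and the detected gaze direction.

import numpy as np

def static_correction(detected_dir, eye_pos, device_pos):
    """Return (d_phi, d_xi): the angular difference between the detected gaze
    direction and the direction from the eye to the operated in-vehicle device.

    detected_dir : (phi, xi) in radians, output of the line-of-sight detection
    eye_pos, device_pos : 3-D positions in the common coordinate system
    """
    d = np.asarray(device_pos, float) - np.asarray(eye_pos, float)
    d /= np.linalg.norm(d)
    phi_sw = np.arctan2(d[0], -d[2])   # direction connecting eye to the device
    xi_sw = np.arcsin(d[1])
    return phi_sw - detected_dir[0], xi_sw - detected_dir[1]

# Example: detected gaze slightly off the direction toward an audio switch.
print(static_correction((np.radians(12.0), np.radians(-8.0)),
                        eye_pos=[0.0, 0.0, 0.6],
                        device_pos=[0.12, -0.09, 0.05]))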

  The correction value storage unit 24 stores a correction value for obtaining an accurate line-of-sight direction by correcting the line-of-sight direction detected by the line-of-sight detection unit 21. The correction value storage unit 24 stores general correction values in advance at the time of factory shipment. Further, the correction value storage unit 24 includes a static correction value storage unit 24a, and is configured to store the static correction value calculated by the static correction value calculation unit 23a.

  The correction unit 25 corrects the line-of-sight direction detected by the line-of-sight detection unit 21 according to the correction value stored in the correction value storage unit 24. Initially, the correction unit 25 corrects the line-of-sight direction detected by the line-of-sight detection unit 21 with the factory-shipment correction value stored in the correction value storage unit 24. The correction unit 25 includes a static correction unit (static correction means) 25a; once a static correction value has been obtained, the static correction unit 25a corrects the line-of-sight direction detected by the line-of-sight detection unit 21 according to the static correction value stored in the static correction value storage unit 24a.

  The vehicle exterior visual recognition target specifying unit 26 specifies, among the obstacles detected by the obstacle detection unit 40, the outside-vehicle visual recognition object that is moving in real space and that the driver is estimated to be viewing. That is, the vehicle exterior visual recognition target specifying unit 26 specifies, from among the obstacles detected by the obstacle detection unit 40, moving objects such as a preceding vehicle or a pedestrian that the driver is estimated to be viewing, and does not specify stationary objects such as traffic signs and signboards. Further, the vehicle exterior visual recognition target specifying unit 26 identifies the outside-vehicle visual recognition object based on information from the vehicle surrounding environment information acquisition unit 60 and the driver operation detection unit 70. For example, when the driver brakes suddenly, the driver is likely to be viewing a moving object ahead of the host vehicle. In such a case, the vehicle exterior visual recognition target specifying unit 26 specifies the moving object ahead of the host vehicle as the outside-vehicle visual recognition object.

  The correction value calculation unit 23 includes a dynamic correction value calculation unit (dynamic correction value calculation unit) 23b, the correction value storage unit 24 includes a dynamic correction value storage unit 24b, and the correction unit 25 includes It has a dynamic correction unit (dynamic correction means) 25b.

  When the line-of-sight direction corrected by the static correction unit 25a differs from the direction connecting the driver's eye position to the outside-vehicle visual recognition object specified by the vehicle exterior visual recognition target specifying unit 26, the dynamic correction value calculation unit 23b obtains a dynamic correction value corresponding to the difference. Here, the dynamic correction value is a correction value that includes characteristics peculiar to viewing moving objects, which cannot be obtained from a stationary object. For example, when a driver looks at a moving object, the black-eye portion moves in accordance with the movement of the object, and this movement varies with factors such as age and driving skill. For this reason, the dynamic correction value calculation unit 23b obtains a dynamic correction value that includes such characteristics.

  The dynamic correction value storage unit 24b stores the dynamic correction value calculated by the dynamic correction value calculation unit 23b. Once a dynamic correction value has been obtained, the dynamic correction unit 25b corrects the line-of-sight direction detected by the line-of-sight detection unit 21 according to the static correction value stored in the static correction value storage unit 24a and the dynamic correction value stored in the dynamic correction value storage unit 24b.

  The visual recognition determination unit 27 determines whether the driver is visually recognizing a collision object that may collide with the host vehicle by determining whether the line-of-sight direction corrected by the dynamic correction unit 25b matches the direction connecting the driver's eye position to the collision object. When the visual recognition determination unit 27 determines that the driver is not visually recognizing the collision object, it transmits information to that effect to the warning unit 30. The warning unit 30 therefore warns the vehicle driver when the visual recognition determination unit 27 determines that the driver is not visually recognizing the collision object.
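
The following sketch illustrates this match test (illustrative only; the 5-degree tolerance, names, and coordinate convention are assumptions, as the description only states that the two directions are compared).

import numpy as np

def driver_sees_object(corrected_gaze, eye_pos, obj_pos, tol_deg=5.0):
    """Judge, as the visual recognition determination unit 27 does, whether the
    corrected gaze direction matches the direction from the driver's eye to a
    collision object.  The tolerance value is an assumption."""
    d = np.asarray(obj_pos, float) - np.asarray(eye_pos, float)
    d /= np.linalg.norm(d)
    phi_obj = np.arctan2(d[0], -d[2])
    xi_obj = np.arcsin(d[1])
    err = np.hypot(corrected_gaze[0] - phi_obj, corrected_gaze[1] - xi_obj)
    return np.degrees(err) <= tol_deg

# Gaze well off the direction toward an object 20 m ahead -> warn the driver.
if not driver_sees_object((0.30, 0.02), eye_pos=[0, 0, 0.6], obj_pos=[-1.5, 0.1, -20.0]):
    print("warning unit 30: alert the driver (voice or display)")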

  Next, a vehicle gaze direction detection method according to this embodiment will be described with reference to FIGS. 1 and 2. First, the imaging unit 10 acquires an image of the face of the vehicle driver (image acquisition step). Next, the gaze detection unit 21 detects the gaze direction of the vehicle driver from the acquired face image (gaze detection step). At this time, the gaze detection unit 21 obtains the rotation angle of the black eye relative to the eye from the eye position and the black eye position, and detects the driver's gaze direction from this rotation angle.

  Next, the in-vehicle visual recognition target specifying unit 22 specifies, among the in-vehicle objects that are stationary in the vehicle interior space, the in-vehicle visual recognition object that the driver is estimated to be viewing (in-vehicle visual recognition object specifying step). At this time, when the driver performs a switch operation, the in-vehicle visual recognition target specifying unit 22 specifies the in-vehicle device whose switch was operated as the in-vehicle visual recognition object. Then, the static correction value calculation unit 23a determines whether the line-of-sight direction detected as described above differs from the direction connecting the driver's eye position to the in-vehicle visual recognition object, and if they differ, obtains a static correction value corresponding to the difference (static correction value calculating step).

  Thereafter, the imaging unit 10 acquires again an image of the face of the vehicle driver (image reacquisition step). Next, the gaze detection unit 21 obtains the gaze direction from the acquired face image (gaze direction redetection step). Next, the static correction unit 25a corrects the line-of-sight direction obtained by the line-of-sight detection unit 21 using the static correction value (static correction step). As a result, an accurate line-of-sight direction can be detected for a static object.

  On the other hand, the obstacle detection unit 40 detects obstacles around the vehicle (obstacle detection step). Next, the vehicle exterior visual recognition target specifying unit 26 specifies, among the detected obstacles, the outside-vehicle visual recognition object that is moving in real space and that the driver is estimated to be viewing (outside-vehicle visual recognition object specifying step). At this time, the vehicle exterior visual recognition target specifying unit 26 sets a moving object as the outside-vehicle visual recognition object when, for example, the driver brakes suddenly and is therefore likely to be viewing the moving object ahead of the host vehicle.

  Then, the dynamic correction value calculation unit 23b determines whether the line-of-sight direction corrected using the static correction value differs from the direction connecting the driver's eye position to the object visually recognized outside the vehicle, and if they differ, obtains a dynamic correction value corresponding to the difference (dynamic correction value calculating step).

  Next, the imaging unit 10 further acquires an image of the face of the vehicle driver (image re-acquisition step). Next, the line-of-sight detection unit 21 detects the line-of-sight direction of the vehicle driver from the acquired face image (line-of-sight direction re-detection step).

  The dynamic correction unit 25b corrects the detected driver's line-of-sight direction using the static correction value and the dynamic correction value (dynamic correction step). Here, the dynamic correction value is a correction value that includes characteristics peculiar to viewing moving objects, which cannot be obtained from a stationary object. Therefore, by correcting the line-of-sight direction using the dynamic correction value, the vehicle gaze direction detecting device 1 can accurately detect the gaze direction of a vehicle driver who is viewing a moving object.

  Next, a detailed operation of the vehicle gaze direction detecting device 1 according to the present embodiment will be described. FIG. 3 is a flowchart showing the operation of the vehicle gaze direction detecting device 1 according to the first embodiment. Note that the process shown in FIG. 3 is repeated until the vehicle gaze direction detecting device 1 is turned off, such as when the ignition key of the vehicle is turned off.

  As shown in the figure, the vehicular gaze direction detecting device 1 first causes the infrared light emitting element to emit light (ST1), and the imaging unit 10 performs imaging. Then, the calculation unit 20 reads two pieces of image data having different shooting angles (ST2). Next, the vehicle gaze direction detecting device 1 extinguishes the infrared light emitting element (ST3).

  Next, the calculation unit 20 determines whether it is in the initial detection mode (ST4). Here, the calculation unit 20 is in the initial detection mode, for example, within a predetermined time (for example, 10 seconds) after the ignition key is turned on, or when a predetermined initial detection mode switch is turned on (ST4: YES).

  Then, the line-of-sight detection unit 21 obtains the line-of-sight direction, and the static correction value calculation unit 23a calculates a static correction value (ST5). At this time, the line-of-sight detection unit 21 obtains the line-of-sight direction from the eye position and the black-eye position obtained from the driver's face image, and the static correction value calculation unit 23a obtains the static correction value by comparing that direction with the direction connecting the eye position to the in-vehicle visual recognition object. Next, the static correction value storage unit 24a stores the static correction value, and the process illustrated in FIG. 3 ends.

  On the other hand, the calculation unit 20 determines that it is not in the initial detection mode when the predetermined time has elapsed since the ignition key was turned on and the initial detection mode switch is not turned on (ST4: NO). The calculation unit 20 then detects the driver's gaze direction (ST7) using the static correction value stored in step ST6 and the dynamic correction value stored in step ST11 described later.

  Next, the calculation unit 20 determines the operation mode (ST8). In this process, the calculation unit 20 determines whether the mode is the emergency mode or the normal mode. Here, the emergency mode is the mode used when it is determined that the possibility of a collision of the host vehicle is high, and the normal mode is the mode used when that possibility is determined to be low. The calculation unit 20 therefore determines the mode by judging the possibility of a collision.

  Specifically, the calculation unit 20 receives information from the obstacle detection unit 40 and information from the vehicle motion information acquisition unit 50, and obtains the possibility of a collision using the following relational expression:

  TTC = Lo / Vr … (1)

Here, Lo is the distance from the host vehicle to the obstacle, and Vr is the relative speed of the host vehicle with respect to the obstacle. When the TTC is smaller than a specified value To (for example, 2 seconds) and the host vehicle speed Vm is smaller than a specified value Vo (for example, 20 km/h), the calculation unit 20 determines that the possibility of a collision is high and selects the emergency mode.
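
A minimal sketch of this mode decision follows (illustrative only; the function name is an assumption, and the 2 s and 20 km/h thresholds are the example values given in the description).

def operation_mode(distance_m: float, closing_speed_mps: float, vehicle_speed_kmh: float,
                   ttc_threshold_s: float = 2.0, speed_threshold_kmh: float = 20.0) -> str:
    """Decide between the emergency mode and the normal mode of step ST8.

    TTC = Lo / Vr as in relational expression (1).
    """
    if closing_speed_mps <= 0:          # not closing in on the obstacle
        return "normal"
    ttc = distance_m / closing_speed_mps
    if ttc < ttc_threshold_s and vehicle_speed_kmh < speed_threshold_kmh:
        return "emergency"
    return "normal"

print(operation_mode(distance_m=8.0, closing_speed_mps=5.0, vehicle_speed_kmh=15.0))  # emergency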

  Next, the calculation unit 20 determines whether the mode determined in step ST8 is the emergency mode (ST9). When the mode determined in step ST8 is not the emergency mode (ST9: NO), the calculation unit 20 determines whether a dynamic correction value can be calculated (ST10). When it is determined that a dynamic correction value cannot be calculated (ST10: NO), the process illustrated in FIG. 3 ends.

  On the other hand, when it is determined that a dynamic correction value can be calculated (ST10: YES), the dynamic correction value calculation unit 23b obtains the dynamic correction value, and the dynamic correction value storage unit 24b stores the obtained dynamic correction value (ST11). Then, the process shown in FIG. 3 ends.

  When the mode determined in step ST8 is the emergency mode (ST9: YES), the visual recognition determination unit 27 determines whether the driver is visually recognizing a collision object that may collide with the host vehicle (ST12). At this time, the visual recognition determination unit 27 determines whether the gaze direction obtained in step ST7 (that is, the gaze direction corrected by the static correction value and the dynamic correction value) matches the direction connecting the driver's eye position to the collision object detected by the obstacle detection unit 40. When the two directions match, the visual recognition determination unit 27 determines that the driver is visually recognizing the collision object (ST12: YES). Thereafter, the process shown in FIG. 3 ends.

  On the other hand, if the two directions do not match, the visual recognition determination unit 27 determines that the driver is not visually recognizing the collision object (ST12: NO). Then, the visual recognition determination unit 27 transmits information to that effect to the warning unit 30. As a result, the warning unit 30 issues a warning to the driver (ST13), and then the processing shown in FIG. 3 ends.

  FIG. 4 is a flowchart showing details of step ST5 shown in FIG. 3. As shown in the figure, first, the line-of-sight detection unit 21 obtains the driver's corneal sphere center position Peye (eye position) from the image obtained in step ST2 (ST21). At this time, the line-of-sight detection unit 21 determines the corneal sphere center position Peye as shown in FIG. 5.

  FIG. 5 is an explanatory diagram showing details of step ST21 shown in FIG. 4. As shown in the figure, the line-of-sight detection unit 21 determines the corneal sphere center position Peye in a three-dimensional coordinate system whose origin is the midpoint between the two imaging units 10. Further, in the present embodiment, the line-of-sight detection unit 21 obtains the corneal sphere center position Peye from whichever of the driver's left and right eyes is closer to the center of the vehicle interior, where the lighting environment is relatively stable.

  The line-of-sight detection unit 21 performs the processing shown in FIG. 6 when determining the corneal sphere center position Peye in the three-dimensional coordinate system. FIGS. 6A to 6C are explanatory diagrams showing details of how the corneal sphere center position Peye is obtained: FIG. 6A shows an image captured by one imaging unit 10, FIG. 6B shows an image captured by the other imaging unit 10, and FIG. 6C shows the line-of-sight detection unit 21 determining the corneal sphere center position Peye in three dimensions.

  First, the line-of-sight detection unit 21 extracts eye features from each of the captured images, as shown in FIGS. 6A and 6B, and obtains the corneal sphere position in two dimensions. Next, for each captured image, the line-of-sight detection unit 21 obtains a straight line from the virtual camera position, taken as the position of the imaging unit 10 that produced that image, to the two-dimensional corneal sphere position. Then, as shown in FIG. 6C, the line-of-sight detection unit 21 treats the two obtained straight lines in three dimensions and sets their intersection as the corneal sphere center position Peye. The details of the calculation of the corneal sphere center position Peye are described in Japanese Patent Application Laid-Open No. 9-238905 and the like, and the description thereof is therefore omitted.
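
A sketch of the two-ray intersection is given below. It is illustrative only; the closest-point formulation and all names are assumptions (in practice the two rays rarely intersect exactly), and the patent refers to JP 9-238905 A for the actual calculation.

import numpy as np

def corneal_center_from_two_views(cam1, dir1, cam2, dir2):
    """Estimate the corneal sphere center Peye as the point closest to both
    viewing rays (virtual camera position + direction toward the 2-D corneal
    position found in that camera's image)."""
    d1 = np.asarray(dir1, float) / np.linalg.norm(dir1)
    d2 = np.asarray(dir2, float) / np.linalg.norm(dir2)
    p1, p2 = np.asarray(cam1, float), np.asarray(cam2, float)
    # Solve for the ray parameters minimising the distance between the rays.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return (p1 + t1 * d1 + p2 + t2 * d2) / 2.0   # midpoint of the shortest segment

# Two imaging units 0.3 m apart, both aimed at a point 0.6 m in front of the origin.
print(corneal_center_from_two_views([-0.15, 0, 0], [0.15, 0, 0.6],
                                    [0.15, 0, 0], [-0.15, 0, 0.6]))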

  Reference is again made to FIG. 4. After obtaining the corneal sphere center position Peye as described above, the line-of-sight detection unit 21 obtains the pupil center position Ppup (black-eye position) (ST22). At this time, the line-of-sight detection unit 21 detects the outer peripheral edge of the pupil, performs circular approximation, and sets the center of the resulting circle as the pupil center position Ppup.

  Then, the line-of-sight detection unit 21 and the correction unit 25 cooperate to calculate the line-of-sight direction (ST23). At this time, the line-of-sight detection unit 21 obtains the rotation angle θeye1 of the pupil with respect to the eye from the corneal sphere center position Peye and the pupil center position Ppup. That is, the line-of-sight detection unit 21 obtains the rotation angle θeye1 of the pupil with respect to the eye from the following relational expression:

  θeye1 = sin⁻¹( fscale · |Ppup − Peye1| / (αeye · L1) ) … (2)

Here, L1 is a standard distance from the center of the corneal sphere to the center of the pupil and is a predetermined value. Peye1 is the corneal sphere center position obtained from one captured image, and |Ppup − Peye1| represents the magnitude of the vector from the corneal sphere center position Peye1 to the pupil center position Ppup. Further, fscale is a correction coefficient (corresponding to an optical magnification) determined by the positional relationship between the lens system of the camera that captures the eyeball image and the corneal sphere center position Peye1. αeye is a correction coefficient for correcting individual differences with respect to L1, and the factory-shipment correction value αeye0 is initially substituted for αeye.

  Next, the correction unit 25 corrects the rotation angle θeye1 of the pupil to obtain the corrected rotation angle θeye using relational expression (3). Here, θo is an angle correction term (default value) for θeye1, and βeye(Peye) is a correction coefficient; the factory-shipment correction value βeye0(Peye) is initially substituted for βeye(Peye). The correction unit 25 obtains from expression (3) the rotation angle θeye corrected by the factory-shipment correction values.

  Then, after obtaining the rotation angle θeye, the correction unit 25 converts the rotation angle θeye into the three-dimensional coordinate system shown in FIG. 5 to obtain the line-of-sight direction. At this time, the correction unit 25 expresses the line-of-sight direction as a horizontal component Φlv and a vertical component ξlv. Thus, in step ST23 of FIG. 4, the rotation angle θeye1 of the pupil with respect to the eye is obtained, the rotation angle is corrected, the corrected rotation angle is converted into the three-dimensional coordinate system, and the line-of-sight direction is obtained as the horizontal direction Φlv and the vertical direction ξlv.
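
The computation of steps ST21 to ST23 can be summarised by the following sketch. The sin⁻¹ form of expression (2) and the linear form assumed here for expression (3) are reconstructions from the definitions of L1, fscale, αeye, θo and βeye given above, not the patent's verbatim formulas; the value of L1 is likewise an assumption.

import numpy as np

L1 = 0.0045           # corneal-sphere-center-to-pupil-center distance [m] (assumed value)

def pupil_rotation_angle(p_eye1, p_pup, f_scale, alpha_eye):
    """Expression (2): rotation angle of the pupil (black eye) relative to the eye.

    |p_pup - p_eye1| is the image-plane offset between the corneal sphere
    center and the pupil center; f_scale converts it to a physical length and
    alpha_eye corrects individual differences in L1.
    """
    offset = np.linalg.norm(np.asarray(p_pup, float) - np.asarray(p_eye1, float))
    return np.arcsin(f_scale * offset / (alpha_eye * L1))

def corrected_rotation_angle(theta_eye1, theta_o, beta_eye):
    """Expression (3), assumed here to be a linear correction of theta_eye1."""
    return beta_eye * (theta_eye1 + theta_o)

theta1 = pupil_rotation_angle((320.0, 240.0), (328.0, 241.0), f_scale=1.2e-4, alpha_eye=1.0)
print(np.degrees(corrected_rotation_angle(theta1, theta_o=0.0, beta_eye=1.0)))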

  After the line-of-sight direction has been acquired as described above, the driver operation detection unit 70 determines whether a predetermined operation switch has been turned on (ST24). If it is determined that no predetermined operation switch has been turned on (ST24: NO), the process shown in FIG. 4 ends, and the process proceeds to step ST6 in FIG. 3. On the other hand, when it is determined that a predetermined operation switch has been turned on (ST24: YES), the in-vehicle visual recognition target specifying unit 22 specifies the operated switch as the in-vehicle visual recognition object. Then, the static correction value calculation unit 23a obtains the direction (Φsw, ξsw) from the corneal sphere center position Peye to the switch position Psw(i) (ST25).

  Thereafter, the static correction value calculation unit 23a determines whether the difference between the line-of-sight direction (Φlv, ξlv) obtained in step ST23 and the direction (Φsw, ξsw) obtained in step ST25 is within a specified value (for example, 10%) (ST26). If it is determined that the difference between the two directions is not within the specified value (ST26: NO), the driver is considered to have operated the switch without looking at it, so the static correction value calculation unit 23a does not obtain a static correction value. The process shown in FIG. 4 then ends and proceeds to step ST6 in FIG. 3.

  If it is determined that the difference between the two directions is within the specified value (ST26: YES), it can be assumed that the driver operated the switch while looking at it. The static correction value calculation unit 23a therefore obtains static correction values Δαeye and Δβeye from the difference between the two directions so that the line-of-sight direction (Φlv, ξlv) obtained in step ST23 matches the direction (Φsw, ξsw) obtained in step ST25 (ST27). Next, the static correction value storage unit 24a stores the obtained static correction values Δαeye and Δβeye.

  Next, the calculation unit 20 determines whether a specified time has elapsed since the operation switch was first turned on (ST28). If it is determined that the specified time has not elapsed (ST28: NO), the process shown in FIG. 4 ends, and the process proceeds to step ST6 in FIG. 3. On the other hand, when it is determined that the specified time has elapsed (ST28: YES), the static correction value calculation unit 23a averages the plurality of static correction values Δαeye and Δβeye obtained during the specified time and stores the averaged static correction values Δαeye and Δβeye in the static correction value storage unit 24a (ST29). The stored static correction values are used for subsequent gaze direction correction.
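
The gating and averaging of steps ST26 to ST29 can be sketched as follows (illustrative only; the class name, the angular gate, and the window length are assumptions, since the description only speaks of a specified value and a specified time).

from statistics import mean

class StaticCorrectionAccumulator:
    """Collects the per-operation static correction values of step ST27 and,
    once the specified time has elapsed (ST28), returns their average (ST29)."""

    def __init__(self, window_s: float = 60.0, gate_deg: float = 10.0):
        self.window_s = window_s
        self.gate_deg = gate_deg
        self.samples = []          # (d_phi, d_xi) pairs in degrees
        self.t_first = None

    def add(self, t: float, gaze_dir, switch_dir):
        d_phi = switch_dir[0] - gaze_dir[0]
        d_xi = switch_dir[1] - gaze_dir[1]
        if max(abs(d_phi), abs(d_xi)) > self.gate_deg:
            return None            # driver probably operated without looking (ST26: NO)
        if self.t_first is None:
            self.t_first = t
        self.samples.append((d_phi, d_xi))
        if t - self.t_first >= self.window_s:      # specified time elapsed (ST28: YES)
            return (mean(s[0] for s in self.samples),
                    mean(s[1] for s in self.samples))   # averaged values to store (ST29)
        return None

acc = StaticCorrectionAccumulator(window_s=30.0)
for t, g, s in [(0, (10.0, -5.0), (12.0, -4.0)),
                (15, (8.0, -6.0), (9.0, -5.5)),
                (31, (11.0, -4.0), (12.5, -3.0))]:
    result = acc.add(t, g, s)
print(result)   # averaged (delta_phi, delta_xi) in degrees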

  As described with reference to step ST4 of FIG. 3, the flowchart shown in FIG. 4 is executed not only within the predetermined time (for example, 10 seconds) after the ignition key is turned on, but also when the initial detection mode switch is turned on. For this reason, a static correction value may already have been obtained when step ST23 of FIG. 4 is executed. In this case, values reflecting the static correction values are substituted for αeye in expression (2) and βeye(Peye) in expression (3).

  FIG. 7 is a flowchart showing details of step ST7 shown in FIG. 3. As shown in the figure, when detecting the line-of-sight direction, the calculation unit 20 executes the same processing as steps ST21 to ST23 shown in FIG. 4. That is, the line-of-sight detection unit 21 obtains the driver's corneal sphere center position Peye (eye position) (ST31) and the pupil center position Ppup (black-eye position) (ST32). Thereafter, the line-of-sight detection unit 21 and the correction unit 25 calculate the line-of-sight direction using expressions (2) and (3) described above (ST33).

  Here, the correction unit 25 obtains αeye in expression (2) above from the following relational expression:

  αeye = αeye0 + Δαeye … (4)

Here, αeye0 is the factory-shipment correction value described above, and Δαeye is the static correction value.

  Further, the correction unit 25 obtains βeye(Peye) in expression (3) above from the following relational expression:

  βeye(Peye) = βeye0(Peye) + Δβeye + βeye_k … (5)

Here, βeye0(Peye) is the factory-shipment correction value described above, Δβeye is the static correction value, and βeye_k is the dynamic correction value obtained in step ST11 of FIG. 3.

  Thus, the calculation unit 20 obtains the gaze direction using the static correction value and the dynamic correction value. In particular, since the dynamic correction value includes characteristics that cannot be obtained from a stationary object, βeye(Peye) used in expression (3) above is an appropriate value for obtaining the line-of-sight direction in the vehicle environment. When the dynamic correction value βeye_k has not yet been obtained, "0" is substituted for βeye_k in expression (5).
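
A minimal sketch of how the stored correction values combine, assuming the additive forms of expressions (4) and (5) reconstructed above (function names are assumptions):

def alpha_eye(alpha_eye0: float, d_alpha: float) -> float:
    """Expression (4): factory-shipment value plus the static correction value."""
    return alpha_eye0 + d_alpha

def beta_eye(beta_eye0: float, d_beta: float, beta_eye_k: float = 0.0) -> float:
    """Expression (5): factory value + static correction + dynamic correction.
    When no dynamic correction value has been obtained yet, beta_eye_k is 0."""
    return beta_eye0 + d_beta + beta_eye_k

# Before any dynamic correction value exists:
print(beta_eye(1.00, 0.03))          # 1.03
# After step ST11 has stored a dynamic correction value:
print(beta_eye(1.00, 0.03, -0.05))   # 0.98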

  FIG. 8 is a flowchart showing details of step ST10 shown in FIG. 3. Whether a dynamic correction value can be acquired is determined as shown in FIG. 8. First, the calculation unit 20 determines, based on the signal from the obstacle detection unit 40, whether a preceding vehicle exists, and, when a preceding vehicle exists, whether the host vehicle has been following the preceding vehicle continuously for a predetermined time or more (ST41).

  Here, when it is determined that the host vehicle has not been following the preceding vehicle continuously for the predetermined time or more (ST41: NO), the calculation unit 20 stores the vehicle speed, the preceding vehicle position (the inter-vehicle distance and the lateral offset from the host vehicle), the accelerator pedal operation amount, the corneal sphere center position Peye, and the line-of-sight direction (Φlv, ξlv) in the buffer together with information on the traveling road (ST42). The calculation unit 20 then determines that a dynamic correction value cannot be acquired (ST43), and therefore determines "NO" in step ST10 of FIG. 3.

  On the other hand, when it is determined that the host vehicle has been following the preceding vehicle continuously for the predetermined time (ST41: YES), the calculation unit 20 determines whether the acquired data was obtained on a single road while traveling straight (ST44). Here, the acquired data is the data obtained in step ST42. The straight traveling in step ST44 does not have to be perfectly straight; it means that the vehicle is traveling on a section with only gentle curvature and is not making a right or left turn.

  If it is determined that the acquired data was not obtained on a single road while traveling straight (ST44: NO), the calculation unit 20 resets the stored contents of the buffer (ST45) and determines that a dynamic correction value cannot be acquired (ST43). The calculation unit 20 therefore determines "NO" in step ST10 of FIG. 3.

If it is determined that the acquired data was obtained on a single road while traveling straight (ST44: YES), the calculation unit 20 determines whether there is a predetermined correlation between the inter-vehicle distance from the host vehicle to the preceding vehicle and the driver's pedal operation amount (ST46). Specifically, the calculation unit 20 obtains the cross-correlation coefficient between Lv(t), which represents the time variation of the inter-vehicle distance obtained within the predetermined time, and Sa(t), which represents the time variation of the accelerator pedal operation amount by the driver, using a relational expression (6) evaluated over the buffer storage time T, where T corresponds to the predetermined time in step ST41. The calculation unit 20 then determines whether the obtained cross-correlation coefficient is equal to or greater than a specified value; when it is equal to or greater than the specified value, the calculation unit 20 determines that the predetermined correlation of step ST46 exists.
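
The following sketch illustrates this gate (illustrative only; the zero-lag, mean-removed form of the coefficient and the 0.8 threshold are assumptions, since the description only states that a cross-correlation coefficient over T is compared with a specified value).

import numpy as np

def following_correlation(lv: np.ndarray, sa: np.ndarray) -> float:
    """Normalised cross-correlation between the inter-vehicle distance Lv(t)
    and the accelerator pedal operation amount Sa(t) over the buffer time T."""
    lv = lv - lv.mean()
    sa = sa - sa.mean()
    return float(np.sum(lv * sa) / np.sqrt(np.sum(lv ** 2) * np.sum(sa ** 2)))

t = np.linspace(0.0, 30.0, 300)                    # 30 s buffer, sampled at 10 Hz
lv = 25.0 + 3.0 * np.sin(0.2 * t)                  # inter-vehicle distance [m]
sa = 0.3 + 0.1 * np.sin(0.2 * t - 0.2)             # accelerator pedal operation amount
can_calibrate = following_correlation(lv, sa) >= 0.8   # assumed specified value
print(can_calibrate)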

  When it is determined that the predetermined correlation exists between the inter-vehicle distance and the pedal operation amount (ST46: YES), the calculation unit 20 determines that a dynamic correction value can be acquired (ST47), and therefore determines "YES" in step ST10 of FIG. 3. On the other hand, when it is determined that the predetermined correlation does not exist (ST46: NO), the calculation unit 20 resets the stored contents of the buffer (ST45) and determines that a dynamic correction value cannot be acquired (ST43), and therefore determines "NO" in step ST10 of FIG. 3.

  Here, step ST46 will be explained supplementarily. In general, when the driver is visually recognizing the preceding vehicle, the driver operates the pedals so that the inter-vehicle distance to the preceding vehicle is maintained. For this reason, when there is a correlation between the inter-vehicle distance and the pedal operation amount, it can be estimated that the driver is viewing the preceding vehicle. Further, in order to calculate the dynamic correction value, the driver needs to be visually recognizing some moving body. Therefore, when there is the predetermined correlation between the inter-vehicle distance and the pedal operation amount, the calculation condition for the dynamic correction value is satisfied, and the calculation unit 20 determines in step ST47 that a dynamic correction value can be acquired. The predetermined correlation will be described further: it means that the pedals are operated so that the inter-vehicle distance is maintained, that is, a correlation in which, as the inter-vehicle distance decreases, the accelerator pedal depression amount decreases or the brake depression amount increases, and, as the inter-vehicle distance increases, the accelerator pedal depression amount increases.

  FIG. 9 is a flowchart showing details of step ST11 shown in FIG. 3. The dynamic correction value is obtained as shown in FIG. 9. First, the vehicle exterior visual recognition target specifying unit 26 specifies the preceding vehicle as the outside-vehicle visual recognition object that the driver is estimated to be viewing, and the dynamic correction value calculation unit 23b determines, in the three-dimensional coordinate system, the position Pob of the preceding vehicle that the host vehicle is following and obtains the direction (Φob, ξob) from the corneal sphere center position Peye to the preceding vehicle position Pob (ST51).

Next, the dynamic correction value calculation unit 23b obtains the difference between the line-of-sight direction (Φlv, ξlv) obtained in step ST7 of FIG. 3 and the direction (Φob, ξob) obtained in step ST51 as the line-of-sight deviation (Φlv(t), ξlv(t)) (ST52). Next, the dynamic correction value calculation unit 23b calculates the normalized cross-correlation coefficients between the line-of-sight deviation (Φlv(t), ξlv(t)) and the time variation Φhead(t), ξhead(t) of the head position Phead angle (ST53). That is, the dynamic correction value calculation unit 23b calculates the normalized cross-correlation coefficients Rlvh_x and Rlvh_y from two relational expressions, (7) for the horizontal component and (8) for the vertical component.

  Next, the dynamic correction value calculation unit 23b obtains the average vehicle speed Vm of the host vehicle from the vehicle speed during the time T (ST54). Then, the dynamic correction value calculation unit 23b associates Rlvh_x, Rlvh_y and Vm obtained from the equations (7) and (8) and stores them in the dynamic correction value storage unit 24b. Next, the dynamic correction value calculation unit 23b performs regression processing on the stored combination data of Rlvh_x, Rlvh_y, and Vm to obtain line-of-sight correction coefficients Rlvh_x (Vm) and Rlvh_y (Vm) for each vehicle speed (ST55). .

Thereafter, the dynamic correction value calculation unit 23b obtains the dynamic correction values βeye_k_x and βeye_k_y (ST56) from two relational expressions, (9) and (10). Here, Φ[φeye(t)] and Φ[ξeye(t)] are the autocorrelation coefficients of φeye and ξeye within a predetermined time at time t, and Kx and Ky are correction coefficients (specified values) determined from the ratio between the magnitude of the rotational motion of the straight line connecting the corneal sphere center position Peye to the origin of the three-dimensional coordinate system and the rotational motion of the corneal sphere.

  Then, βeye_k_x and βeye_k_y obtained by expressions (9) and (10) are used in step ST7 shown in FIG. 3 to detect an accurate line-of-sight direction. As shown in FIG. 9, obtaining the line-of-sight correction coefficient for each vehicle speed means obtaining the dynamic correction values of expressions (9) and (10) for each vehicle speed. That is, the vehicular gaze direction detection apparatus 1 according to the present embodiment accurately obtains the gaze direction when the driver is viewing a moving object by using the dynamic correction value, and, by calculating the dynamic correction value for each vehicle speed, can determine the line-of-sight direction even more appropriately for each speed of the host vehicle.
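
The per-speed regression of step ST55 can be sketched as follows (illustrative only; the class name, the first-order regression, and the sample values are assumptions — the description only states that regression processing is applied to the stored combinations of Rlvh_x, Rlvh_y, and Vm).

import numpy as np

class GazeHeadCorrelationTable:
    """Keeps the (Rlvh_x, Rlvh_y, Vm) combinations of step ST55 and regresses
    the line-of-sight correction coefficients against the average vehicle
    speed Vm."""

    def __init__(self):
        self.vm, self.rx, self.ry = [], [], []

    def add(self, vm_kmh: float, r_x: float, r_y: float):
        self.vm.append(vm_kmh); self.rx.append(r_x); self.ry.append(r_y)

    def coefficients_at(self, vm_kmh: float):
        cx = np.polyfit(self.vm, self.rx, 1)       # Rlvh_x(Vm)
        cy = np.polyfit(self.vm, self.ry, 1)       # Rlvh_y(Vm)
        return float(np.polyval(cx, vm_kmh)), float(np.polyval(cy, vm_kmh))

table = GazeHeadCorrelationTable()
# Coefficients tend to be large at low speed and smaller at high speed (FIG. 10).
for vm, rx, ry in [(30, 0.82, 0.75), (60, 0.61, 0.55), (100, 0.42, 0.40)]:
    table.add(vm, rx, ry)
print(table.coefficients_at(80.0))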

  FIG. 10 is an explanatory diagram showing the correlation between the line-of-sight correction coefficient and the vehicle speed obtained in step ST55 shown in FIG. 9. As shown in the figure, the line-of-sight correction coefficient has a large value when the vehicle speed is low and tends to decrease as the vehicle speed increases. This tendency can be explained as follows. When the vehicle speed is high, for example when driving on a highway, the driver tends to increase the inter-vehicle distance, and therefore often looks at distant points during high-speed traveling. A distant moving object has a smaller amount of apparent movement, as seen from the driver, than a nearby object. When looking at an object with a small apparent amount of movement, the driver is viewing a moving object, but the situation approaches that of viewing a stationary object. Therefore, the line-of-sight correction coefficient tends to decrease during high-speed traveling.

  Thus, according to the vehicle gaze direction detection device 1 and method according to the present embodiment, when the gaze direction detected from the image differs from the direction connecting the driver's eye position to the in-vehicle visual recognition object, a static correction value corresponding to the difference is obtained, and the line-of-sight direction is corrected according to the obtained static correction value. Therefore, once a static correction value has been obtained, correction can be performed based on it, and an accurate line-of-sight direction can be obtained for a stationary object.

  Further, when the line-of-sight direction corrected by the static correction value differs from the direction connecting the driver's eye position to the visual recognition object outside the vehicle, a dynamic correction value corresponding to the difference is obtained, and the line-of-sight direction detected from the image is corrected using the obtained dynamic correction value together with the already obtained static correction value. Here, the dynamic correction value is a correction value that includes characteristics peculiar to viewing moving objects, which cannot be obtained from a stationary object. For example, when a driver looks at a moving object, the black-eye portion moves in accordance with the movement of the object, and this movement varies with factors such as age and driving skill. The dynamic correction value reflects such characteristics. Therefore, by correcting the gaze direction detected from the image using the dynamic correction value in addition to the static correction value, the gaze direction of a vehicle driver who is viewing a moving object can be detected accurately.

  Therefore, the detection accuracy of the visual recognition target of the vehicle driver can be improved.

  Whether the driver is visually recognizing the collision object is determined by determining whether the line-of-sight direction corrected using the dynamic correction value matches the direction from the driver's eye position to the collision object. Since this determination is based on the line-of-sight direction accurately detected using the dynamic correction value, it can be made accurately. In addition, when it is determined that the driver is not visually recognizing the object, a warning is given to the vehicle driver, so the driver's attention can be called with high accuracy.

  In addition, when a preceding vehicle exists and there is a predetermined correlation between the inter-vehicle distance and the driver's pedal operation amount, the preceding vehicle is specified as the outside-vehicle visual recognition object that the driver is estimated to be viewing. In general, when the driver is visually recognizing the preceding vehicle, the driver operates the pedals so that the inter-vehicle distance to the preceding vehicle is maintained. For this reason, when there is a correlation between the inter-vehicle distance and the pedal operation amount, it can be estimated that the driver is viewing the preceding vehicle, and the preceding vehicle is suitable for being specified as the outside-vehicle visual recognition object. Therefore, the outside-vehicle visual recognition object that the driver is estimated to be viewing can be specified accurately.

  Since the dynamic correction value is obtained for each average vehicle speed of the host vehicle, the driver's line-of-sight direction can be appropriately corrected for each vehicle speed. Here, when the vehicle speed is high, the dynamic correction value tends to be small. For this reason, the detection accuracy of the visual recognition target of the vehicle driver can be further improved by obtaining the dynamic correction value for each vehicle speed.

  Next, a second embodiment of the present invention will be described. The vehicle gaze direction detecting device 2 according to the second embodiment is the same as that of the first embodiment, but the processing contents of steps ST10 and ST11 in FIG. 3 are different from those of the first embodiment. Hereinafter, differences from the first embodiment will be described.

  First, fluctuations of the black-eye portion (hereinafter referred to as "saccades") tend to occur when the driver comes out of a vacant, inattentive state. That is, when the black-eye portion fluctuates, the driver can be said to be actively looking at some target rather than gazing vacantly. In particular, when a preceding vehicle exists, it can be said that the driver is visually recognizing the preceding vehicle if a saccade occurs. In the second embodiment, whether the dynamic correction value can be obtained is determined based on this idea.

  FIG. 11 is a flowchart showing details of step ST10 shown in FIG. 3 according to the second embodiment. As shown in the figure, the calculation unit 20 determines whether a preceding vehicle exists (ST61). If it is determined that no preceding vehicle exists (ST61: NO), the calculation unit 20 turns off the saccade flag (ST62) and determines that a dynamic correction value cannot be acquired (ST63). Thereafter, the process ends.

  On the other hand, when it is determined that a preceding vehicle exists (ST61: YES), the calculation unit 20 determines whether the saccade flag is on (ST64). If it is determined that the saccade flag is not on (ST64: NO), the calculation unit 20 determines whether a saccade has occurred (ST65). If it is determined that no saccade has occurred (ST65: NO), the calculation unit 20 determines that a dynamic correction value cannot be acquired (ST63). Thereafter, the process ends. On the other hand, when it is determined that a saccade has occurred (ST65: YES), the calculation unit 20 turns on the saccade flag (ST66). Thereafter, the process ends.

  When it is determined that the saccade flag is on (ST64: YES), the calculation unit 20 determines whether or not the brake pedal has been operated at a predetermined vehicle speed or higher (ST67). If it is determined that the brake pedal has not been operated at a predetermined vehicle speed or higher (ST67: NO), the calculation unit 20 determines whether or not a predetermined time has elapsed since the saccade flag was turned on (ST68).

  If it is determined that the predetermined time has not elapsed (ST68: NO), the calculation unit 20 determines that a dynamic correction value cannot be acquired (ST63), and the process ends. On the other hand, if it is determined that the predetermined time has elapsed (ST68: YES), the calculation unit 20 turns off the saccade flag (ST62) and determines that the dynamic correction value cannot be acquired (ST63). Thereafter, the process ends.

  If it is determined that the brake pedal has been operated at a predetermined vehicle speed or higher (ST67: YES), the calculation unit 20 stores in the buffer the vehicle speed at that time, the preceding vehicle position (the inter-vehicle distance and the lateral deviation from the host vehicle), the corneal sphere center position Peye, and the line-of-sight direction (Φlv, ξlv) (ST68). Next, the calculation unit 20 determines whether the elapsed time since the brake pedal was turned on is equal to or greater than a specified value, or whether the vehicle speed is zero (ST69). When neither condition is satisfied (ST69: NO), the calculation unit 20 determines that the dynamic correction value cannot be acquired (ST63), and the process ends.

  On the other hand, when it is determined that either condition is satisfied (ST69: YES), the calculation unit 20 turns off the saccade flag (ST70) and determines that the dynamic correction value can be acquired (ST71). Thereafter, the process proceeds to step ST11 in FIG. 3.
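 
  A minimal sketch of this acquisition gating, assuming a fixed-rate control loop, is shown below in Python. The thresholds (minimum vehicle speed, timeout after the saccade flag is turned on, hold time after the brake is pressed) and all identifiers are hypothetical; the buffered sample is assumed to bundle the vehicle speed, preceding vehicle position, corneal sphere center position Peye, and line-of-sight direction as described above.

class SaccadeGate:
    def __init__(self, min_speed_kmh=30.0, flag_timeout_s=5.0, hold_time_s=2.0):
        self.min_speed_kmh = min_speed_kmh    # assumed "predetermined vehicle speed"
        self.flag_timeout_s = flag_timeout_s  # assumed "predetermined time" for ST68
        self.hold_time_s = hold_time_s        # assumed "specified value" for ST69
        self.saccade_flag = False
        self.flag_on_time = None
        self.brake_on_time = None
        self.buffer = []

    def step(self, t, has_preceding_vehicle, saccade_detected, brake_on, speed_kmh, sample):
        """Returns True when the dynamic correction value can be acquired (ST71)."""
        if not has_preceding_vehicle:                         # ST61: NO
            self.saccade_flag = False                         # ST62
            return False                                      # ST63
        if not self.saccade_flag:                             # ST64: NO
            if saccade_detected:                              # ST65: YES
                self.saccade_flag = True                      # ST66
                self.flag_on_time = t
            return False                                      # ST63 (or end after ST66)
        if not (brake_on and speed_kmh >= self.min_speed_kmh):        # ST67: NO
            if t - self.flag_on_time >= self.flag_timeout_s:           # ST68: YES
                self.saccade_flag = False                              # ST62
            return False                                               # ST63
        if self.brake_on_time is None:
            self.brake_on_time = t
        self.buffer.append(sample)            # ST68: store speed, position, Peye, gaze
        if t - self.brake_on_time >= self.hold_time_s or speed_kmh == 0.0:  # ST69: YES
            self.saccade_flag = False                         # ST70
            self.brake_on_time = None
            return True                                       # ST71
        return False                                          # ST63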

  FIG. 12 is a flowchart showing details of step ST11 shown in FIG. 3 according to the second embodiment. In the second embodiment, the dynamic correction value is obtained as shown in FIG. 12. First, the vehicle exterior visual recognition target specifying unit 26 identifies the preceding vehicle as the vehicle exterior visual recognition object that is estimated to be visually recognized by the driver. Then, the dynamic correction value calculation unit 23b executes the same processing as steps ST51 to ST53 in FIG. 9.

  Next, the dynamic correction value calculation unit 23b obtains the TTC (a correlation value indicating the correlation between the inter-vehicle distance from the host vehicle to the preceding vehicle and the relative vehicle speed of the host vehicle with respect to the preceding vehicle) at the time the brake pedal was turned on (ST84). The TTC referred to here is the same as that shown in the above formula (1).

  Next, the dynamic correction value calculation unit 23b performs the same processing as steps ST55 and ST56 in FIG. 9, obtains a line-of-sight correction coefficient, and then obtains the dynamic correction value from the line-of-sight correction coefficient. When obtaining the line-of-sight correction coefficient, the dynamic correction value calculation unit 23b evaluates two relational expressions. After the dynamic correction value is obtained, the processing shown in FIG. 12 ends.
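 
  Formula (1) itself appears earlier in the description and is not reproduced in this part; assuming it is the conventional time-to-collision definition (inter-vehicle distance divided by the closing speed, which is consistent with the correlation described above), a minimal Python sketch looks as follows. The function name and the handling of a non-closing relative speed are assumptions.

def time_to_collision(inter_vehicle_distance_m, relative_speed_mps):
    """TTC used to index the dynamic correction value in the second embodiment."""
    if relative_speed_mps <= 0.0:
        # Host vehicle is not closing in on the preceding vehicle.
        return float("inf")
    return inter_vehicle_distance_m / relative_speed_mps

# Example: 25 m behind the preceding vehicle while closing at 5 m/s gives TTC = 5 s.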

  Thus, according to the vehicle gaze direction detecting device 2 of the second embodiment, the detection accuracy of the visual recognition target of the vehicle driver can be improved as in the first embodiment. In addition, whether the driver is visually recognizing the object can be determined accurately, and a warning can be given based on that highly accurate determination, so attention can be called accurately.

  Furthermore, according to the second embodiment, when there is a preceding vehicle, a fluctuation is detected in the driver's black-eye portion, and the brake pedal is operated at a predetermined vehicle speed or higher within a predetermined time after the fluctuation is detected, the preceding vehicle is specified as the outside-vehicle visual object that the driver is estimated to be viewing. Here, a driver tends to produce fluctuations in the black-eye portion when coming out of an inattentive state. In other words, when the black-eye portion fluctuates, the driver is not gazing vacantly but is viewing a specific target. In particular, when there is a preceding vehicle, the driver can be regarded as viewing the preceding vehicle if fluctuation occurs in the black-eye portion. In addition, since it is confirmed that the brake pedal was operated at or above the predetermined vehicle speed, cases such as looking at a preceding vehicle while waiting at a traffic signal are excluded, and the preceding vehicle can be reliably identified as the outside-vehicle visual object. Therefore, the outside-vehicle visual object that the driver is estimated to be viewing can be specified even more accurately.

  Further, since the dynamic correction value is obtained for each correlation value (TTC) indicating the correlation between the inter-vehicle distance and the relative vehicle speed, the driver's line-of-sight direction can be corrected appropriately according to the inter-vehicle distance and the relative vehicle speed. Here, the shorter the inter-vehicle distance, the more the driver tends to concentrate on the preceding vehicle, and the higher the relative vehicle speed, the stronger that concentration becomes. When the concentration on the preceding vehicle is high, the driver's gaze direction tends to follow the movement of the preceding vehicle quickly. When the driver's line-of-sight direction follows quickly, the detected line-of-sight direction and the direction from the eye position to the object viewed outside the vehicle tend to be close to each other, so the correction amount tends to be small.

  Also, if the degree of concentration on the preceding vehicle is low, the driver's line-of-sight direction cannot quickly follow the movement of the preceding vehicle. For this reason, the difference between the detected line-of-sight direction and the direction from the eye position to the object visually recognized outside the vehicle tends to be large, and the correction amount tends to be large.

  In this way, by obtaining the dynamic correction value for each correlation value indicating the correlation between the inter-vehicle distance and the relative vehicle speed, an appropriate correction can be performed according to the degree of concentration, and the detection accuracy of the vehicle driver's visual target can be further improved.

  Next, a third embodiment of the present invention will be described. The vehicle gaze direction detecting device 3 according to the third embodiment has the same basic configuration as that of the first embodiment, but part of the configuration and the processing contents of steps ST10 and ST11 in FIG. 3 differ from those of the first embodiment. Hereinafter, the differences from the first embodiment will be described.

  FIG. 13 is a block diagram showing details of the vehicle gaze direction detection device according to the third embodiment. As shown in the figure, in the vehicle gaze direction detection device 3, the calculation unit 20 is newly provided with a calculation unit 28. The calculation unit 28 includes a pedestrian average visual recognition time calculation unit (pedestrian average visual recognition time calculation means) 28a and a pedestrian visual recognition reference value calculation unit (pedestrian visual recognition reference value calculation means) 28b.

  The pedestrian average visual recognition time calculation unit 28a calculates the average of the time during which the driver is viewing a pedestrian. Specifically, the pedestrian average visual recognition time calculation unit 28a calculates the average of the time during which the difference between the direction from the driver's eye position to the pedestrian detected by the obstacle detection unit 40 and the line-of-sight direction is equal to or less than a predetermined value.

  The pedestrian visual recognition reference value calculation unit 28b calculates, from the average time St computed by the pedestrian average visual recognition time calculation unit 28a, a pedestrian visual recognition reference time that serves as the reference for whether a pedestrian is being visually recognized. Specifically, the pedestrian visual recognition reference value calculation unit 28b obtains the pedestrian visual recognition reference time by multiplying the average time St by a coefficient k (k is a number greater than 0 and less than 1).
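 
  The relation described here (the reference time is the average viewing time multiplied by a coefficient k with 0 < k < 1) can be sketched as below; the default value of k, the function names, and the way viewing intervals are measured are assumptions for illustration only.

def average_viewing_time(viewing_intervals_s):
    """Average of the periods during which the angular difference between the gaze
    direction and the direction to the pedestrian stayed at or below the
    predetermined value (pedestrian average visual recognition time unit 28a)."""
    if not viewing_intervals_s:
        return 0.0
    return sum(viewing_intervals_s) / len(viewing_intervals_s)

def pedestrian_viewing_reference_time(average_time_s, k=0.6):
    """Reference time for judging whether a pedestrian is viewed (unit 28b);
    k is greater than 0 and less than 1."""
    assert 0.0 < k < 1.0
    return k * average_time_s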

  FIG. 14 is a flowchart showing details of step ST10 shown in FIG. 3 according to the third embodiment. As shown in the figure, the calculation unit 20 first determines whether or not a pedestrian has been detected by the obstacle detection unit 40 (ST91). If it is determined that no pedestrian has been detected (ST91: NO), the calculation unit 20 determines that a dynamic correction value cannot be acquired (ST92), and the process ends.

  On the other hand, when it is determined that a pedestrian has been detected (ST91: YES), the calculation unit 20 determines whether or not a predetermined time has elapsed since the detection of the pedestrian (ST93). When it is determined that the predetermined time has not elapsed since the detection of the pedestrian (ST93: NO), the calculation unit 20 resets the stored contents of the buffer (ST94), determines that a dynamic correction value cannot be acquired (ST92), and the process ends.

  If it is determined that the predetermined time has elapsed since the detection of the pedestrian (ST93: YES), the calculation unit 20 stores in the buffer the vehicle speed, the pedestrian position (the distance to the pedestrian and the lateral deviation), the corneal sphere center position Peye, and the line-of-sight direction (Φlv, ξlv) within the predetermined time, determines that the dynamic correction value can be acquired (ST96), and the process proceeds to step ST11 in FIG. 3.

  FIG. 15 is a flowchart showing details of step ST11 shown in FIG. 3 according to the third embodiment. In the third embodiment, the dynamic correction value is obtained as shown in FIG. 15. First, the vehicle exterior visual recognition target specifying unit 26 identifies a pedestrian as the vehicle exterior visual recognition object that is estimated to be viewed by the driver. At this time, when there are n pedestrians (n is an integer equal to or greater than 2), the vehicle exterior visual recognition target specifying unit 26 specifies all n pedestrians as vehicle exterior visual recognition objects.

  Then, the dynamic correction value calculation unit 23b grasps the position of the pedestrian in the three-dimensional coordinate system, and obtains the direction (Φob, ξob) from the corneal sphere center position Peye to the pedestrian (ST101). At this time, when there are n pedestrians, directions (Φob, ξob) are obtained for all n pedestrians.

  Next, the dynamic correction value calculation unit 23b calculates the difference between the line-of-sight direction (Φlv, ξlv) obtained in step ST7 of FIG. 3 and the direction (Φob, ξob) obtained in step ST101 as the line-of-sight deviation (Φlv(t), ξlv(t))i (ST102). This line-of-sight deviation (Φlv(t), ξlv(t))i is obtained for all n pedestrians. Note that i is a number representing a pedestrian.

  Next, the dynamic correction value calculation unit 23b extracts, from the line-of-sight deviations (Φlv(t), ξlv(t))i, the data whose deviation falls within a predetermined smallest ratio (for example, the smallest 5%) (ST103). In other words, the dynamic correction value calculation unit 23b extracts the relatively small line-of-sight deviations so as to identify the pedestrian that the driver is actually looking at.

  Thereafter, for all the extracted data, the dynamic correction value calculation unit 23b calculates dynamic corrections (βeye_k_p_x, βeye_k_p_y) for matching the line-of-sight direction (Φlv, ξlv) to the pedestrian direction (Φob, ξob), as the regression coefficients obtained when (Φob, ξob) is the objective variable and the line-of-sight direction (Φlv, ξlv) is the explanatory variable (ST104). In this way, the dynamic correction value calculation unit 23b acquires the dynamic correction value.
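 
  The selection of the smallest line-of-sight deviations (ST103) and the per-axis regression (ST104) can be sketched as follows in Python with NumPy. Whether the regression includes an intercept term is not stated in this part, so a simple through-origin least-squares fit is assumed; the identifiers are illustrative stand-ins for (βeye_k_p_x, βeye_k_p_y).

import numpy as np

def select_smallest_deviations(deviations, samples, ratio=0.05):
    """Keep only the samples whose line-of-sight deviation is within the smallest
    predetermined ratio (for example 5%), i.e. the pedestrian actually being watched."""
    order = np.argsort(deviations)
    keep = order[: max(1, int(len(order) * ratio))]
    return [samples[i] for i in keep]

def dynamic_correction_coefficients(phi_lv, xi_lv, phi_ob, xi_ob):
    """Per-axis regression of the eye-to-pedestrian direction (objective variable)
    on the detected line-of-sight direction (explanatory variable)."""
    phi_lv = np.asarray(phi_lv, dtype=float)
    xi_lv = np.asarray(xi_lv, dtype=float)
    phi_ob = np.asarray(phi_ob, dtype=float)
    xi_ob = np.asarray(xi_ob, dtype=float)
    beta_x = float(phi_lv.dot(phi_ob) / phi_lv.dot(phi_lv))  # horizontal coefficient
    beta_y = float(xi_lv.dot(xi_ob) / xi_lv.dot(xi_lv))      # vertical coefficient
    return beta_x, beta_y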

  Next, the pedestrian average visual recognition time calculation unit 28a calculates the average of the time during which the difference between the direction (Φob, ξob) from the corneal sphere center position Peye to the pedestrian and the line-of-sight direction (Φlv, ξlv) is equal to or less than a predetermined value (ST105).

  FIG. 16 is an explanatory diagram showing the line-of-sight direction when a pedestrian is visually recognized. As shown in the figure, when the driver is viewing a pedestrian, a characteristic variation occurs in the horizontal direction Φlv of the line-of-sight directions (Φlv, ξlv). The pedestrian average visual recognition time calculation unit 28a detects each period Sti in which this variation occurs as a pedestrian viewing time, and obtains the average of these times.

  Referring again to FIG. 15, after obtaining the average time St, the pedestrian visual recognition reference value calculation unit 28b multiplies the average time St by the coefficient k to obtain the pedestrian visual recognition reference time (ST106). As a result, when the emergency mode is determined in step ST9 of FIG. 3 (when "YES" is determined in ST9), whether the pedestrian is being viewed is determined based on the pedestrian visual recognition reference time.

  That is, in step ST12, the visual recognition determination unit 27 determines whether the line-of-sight direction corrected using the static correction value and the dynamic correction value coincides, for at least the pedestrian visual recognition reference time, with the direction connecting the driver's eye position to a collision pedestrian who may collide with the host vehicle. When it does, the visual recognition determination unit 27 judges that the driver has visually recognized the collision pedestrian, and the process ends without the warning of step ST13. On the other hand, when it does not coincide for at least the pedestrian visual recognition reference time, the visual recognition determination unit 27 judges that the driver is not visually recognizing the pedestrian, and a warning is given to the vehicle driver in step ST13.
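 
  A minimal sketch of this duration check, under an assumed angular tolerance and sampling interval, follows; if the corrected gaze stays on the collision pedestrian for at least the pedestrian visual recognition reference time, no warning is needed, otherwise step ST13 warns the driver. Names and threshold values are assumptions.

def driver_sees_collision_pedestrian(gaze_dirs, pedestrian_dirs, dt_s,
                                     reference_time_s, tolerance_rad=0.05):
    """gaze_dirs and pedestrian_dirs are equal-length sequences of (phi, xi) angles
    sampled every dt_s seconds; gaze_dirs is already corrected with the static and
    dynamic correction values."""
    held = 0.0
    for (phi_g, xi_g), (phi_p, xi_p) in zip(gaze_dirs, pedestrian_dirs):
        if abs(phi_g - phi_p) <= tolerance_rad and abs(xi_g - xi_p) <= tolerance_rad:
            held += dt_s
            if held >= reference_time_s:
                return True   # viewed for the reference time or longer: no warning
        else:
            held = 0.0        # gaze left the pedestrian: restart the count
    return False              # warn the vehicle driver (step ST13)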

  Thus, according to the gaze direction detecting device 3 for a vehicle according to the third embodiment, the detection accuracy of the visual recognition target of the vehicle driver can be improved as in the first embodiment.

  Furthermore, according to the third embodiment, when a pedestrian exists around the vehicle and the state in which the pedestrian exists continues for a predetermined time, the pedestrian is specified as the outside-vehicle visual object that the driver is estimated to be viewing. A state in which a pedestrian continues to exist in the surroundings for a predetermined time means nothing other than that the driver has recognized the pedestrian. In other words, a pedestrian remains around the vehicle continuously for a predetermined time because the driver has noticed the pedestrian and has reduced the vehicle speed to zero or to a very low speed. For this reason, when a pedestrian exists around the vehicle and that state continues for a predetermined time, it can be estimated that the driver is visually recognizing the pedestrian, and this situation is suitable for identifying the pedestrian as the outside-vehicle visual object. Therefore, the outside-vehicle visual object that the driver is estimated to be viewing can be specified accurately.

  In addition, the average of the time during which the line-of-sight direction corrected using the dynamic correction value matches the direction from the driver's eye position to the pedestrian is obtained, and the visual recognition reference time that serves as the reference for whether the pedestrian is being viewed is calculated from that average time. Thus, a time that serves as the reference for judging whether the driver has visually recognized the pedestrian can be obtained.

  Moreover, by determining whether the line-of-sight direction corrected using the dynamic correction value coincides with the direction connecting the driver's eye position to the collision pedestrian for at least the visual recognition reference time, it is determined whether the driver is visually recognizing the collision pedestrian. When it is determined that the pedestrian is not being visually recognized, a warning is given to the vehicle driver. In this way, a warning is issued only after judging whether the pedestrian is being visually recognized, so attention can be called with high accuracy.

  As described above, the present invention has been described based on the embodiments. However, the present invention is not limited to the above embodiments; modifications may be made without departing from the spirit of the present invention, and the embodiments may be combined.

  For example, the pedestrian viewing reference time described in the third embodiment may be obtained for a stationary object such as a signboard or a sign. As a result, it is possible to determine whether or not the driver has visually recognized important information on the sign (for example, traffic jam information and destination information). Specifically, it may be configured as follows.

  In other words, the vehicle gaze direction detection devices 1 to 3 are provided with vehicle exterior stationary object specifying means for specifying, among the obstacles detected by the obstacle detection unit 40, a vehicle exterior stationary object that is stationary in real space. They are further provided with stationary object average visual recognition time calculation means for calculating the average of the time during which the difference between the line-of-sight direction corrected using only the static correction value, or using the static correction value and the dynamic correction value, and the direction connecting the driver's eye position to the vehicle exterior stationary object specified by the vehicle exterior stationary object specifying means is equal to or less than a predetermined value. They are furthermore provided with stationary object visual recognition reference value calculation means for calculating, from the average time calculated by the stationary object average visual recognition time calculation means, a stationary object visual recognition reference time that serves as the reference for whether the stationary object is being visually recognized. The visual recognition determination unit 27 is then configured to judge whether the driver is visually recognizing the vehicle exterior stationary object based on the stationary object visual recognition reference time calculated by the stationary object visual recognition reference value calculation means. With the above configuration, whether the driver has visually recognized a stationary object such as a signboard or a sign can be determined in substantially the same manner as when the pedestrian visual recognition reference time is obtained.

FIG. 1 is a configuration diagram showing an outline of the vehicle gaze direction detection device according to the first embodiment of the present invention.
FIG. 2 is a block diagram showing details of the vehicle gaze direction detection device according to the first embodiment.
FIG. 3 is a flowchart showing the operation of the vehicle gaze direction detection device according to the first embodiment.
FIG. 4 is a flowchart showing details of step ST5 shown in FIG. 3.
FIG. 5 is an explanatory diagram showing details of step ST21 shown in FIG. 4.
FIG. 6 is an explanatory diagram showing details of obtaining the corneal sphere center position Peye, where (a) shows an image captured by one imaging unit, (b) shows an image captured by the other imaging unit, and (c) shows how the gaze detection unit grasps the corneal sphere center position Peye in three dimensions.
FIG. 7 is a flowchart showing details of step ST7 shown in FIG. 3.
FIG. 8 is a flowchart showing details of step ST10 shown in FIG. 3.
FIG. 9 is a flowchart showing details of step ST11 shown in FIG. 3.
FIG. 10 is an explanatory diagram showing the correlation between the line-of-sight correction coefficient obtained in step ST55 shown in FIG. 9 and the vehicle speed.
FIG. 11 is a flowchart showing details of step ST10 shown in FIG. 3 according to the second embodiment.
FIG. 12 is a flowchart showing details of step ST11 shown in FIG. 3 according to the second embodiment.
FIG. 13 is a block diagram showing details of the vehicle gaze direction detection device according to the third embodiment.
FIG. 14 is a flowchart showing details of step ST10 shown in FIG. 3 according to the third embodiment.
FIG. 15 is a flowchart showing details of step ST11 shown in FIG. 3 according to the third embodiment.
FIG. 16 is an explanatory diagram showing the line-of-sight direction when a pedestrian is visually recognized.

Explanation of symbols

1 to 3 ... Vehicle gaze direction detection device
10 ... Imaging unit
20 ... Calculation unit
21 ... Gaze detection unit (gaze detection means)
22 ... In-vehicle visual target identification unit (in-vehicle visual target identification means)
23 ... Correction value calculation unit
23a ... Static correction value calculation unit (static correction value calculation means)
23b ... Dynamic correction value calculation unit (dynamic correction value calculation means)
24 ... Correction value storage unit
24a ... Static correction value storage unit
24b ... Dynamic correction value storage unit
25 ... Correction unit
25a ... Static correction unit (static correction means)
25b ... Dynamic correction unit (dynamic correction means)
26 ... Vehicle exterior visual target identification unit (vehicle exterior visual target identification means)
27 ... Visual recognition determination unit (visual recognition determination means)
28 ... Calculation unit
28a ... Pedestrian average visual recognition time calculation unit (pedestrian average visual recognition time calculation means)
28b ... Pedestrian visual recognition reference value calculation unit (pedestrian visual recognition reference value calculation means)
30 ... Warning unit (warning means)
40 ... Obstacle detection unit (obstacle detection means)
50 ... Vehicle motion information acquisition unit
60 ... Vehicle surrounding environment information acquisition unit
70 ... Driver motion detection unit

Claims (10)

  1. Gaze detection means for detecting the gaze direction of the driver from the image of the face of the vehicle driver;
    In-vehicle visual recognition object specifying means for specifying an in-vehicle visual recognition object that is estimated to be visually recognized by the driver among the in-vehicle objects that are stationary in the in-vehicle space;
    Static correction value calculating means for obtaining a static correction value corresponding to the difference when the line-of-sight direction detected by the line-of-sight detection means and the direction connecting the driver's eye position to the in-vehicle visual recognition object specified by the in-vehicle visual recognition object specifying means are different;
    Static correction means for correcting the gaze direction detected by the gaze detection means in accordance with the static correction value obtained by the static correction value calculation means;
    Obstacle detection means for detecting obstacles around the vehicle;
    Vehicle exterior visual recognition object specifying means for specifying a vehicle exterior visual recognition object that is moving in real space among the obstacles detected by the obstacle detection means and that is estimated to be visually recognized by the driver;
    Dynamic correction value calculating means for obtaining a dynamic correction value corresponding to the difference when the line-of-sight direction corrected by the static correction means and the direction connecting the driver's eye position to the vehicle exterior visual recognition object specified by the vehicle exterior visual recognition object specifying means are different;
    Dynamic correction means for correcting the line-of-sight direction detected by the line-of-sight detection means according to the static correction value obtained by the static correction value calculation means and the dynamic correction value obtained by the dynamic correction value calculation means,
    A gaze direction detecting device for a vehicle, comprising:
  2. Visual recognition determination means for determining whether or not the driver is visually recognizing a collision object that may collide with the host vehicle, by determining whether the line-of-sight direction corrected by the dynamic correction means and the direction connecting the driver's eye position to the collision object coincide;
    Warning means for giving a warning to the vehicle driver when it is determined by the visual recognition determination means that the driver does not visually recognize the object;
    The line-of-sight direction detecting device for a vehicle according to claim 1, further comprising:
  3.   The vehicular gaze direction detection device according to claim 1, wherein, when there is a preceding vehicle and there is a predetermined correlation between the inter-vehicle distance from the host vehicle to the preceding vehicle and the driver's pedal operation amount, the vehicle exterior visual target specifying means specifies the preceding vehicle as the out-of-vehicle visual recognition object that is estimated to be visually recognized by the driver.
  4.   The gaze direction detecting device for a vehicle according to claim 3, wherein the dynamic correction value calculating means calculates the dynamic correction value for each average vehicle speed of the host vehicle.
  5.   The vehicular gaze direction detection device according to claim 1, wherein, when there is a preceding vehicle, a fluctuation is detected in the driver's black-eye portion, and the brake pedal is operated at a predetermined vehicle speed or higher within a predetermined time after the fluctuation is detected, the vehicle exterior visual target specifying means specifies the preceding vehicle as the out-of-vehicle visual recognition object that is estimated to be visually recognized by the driver.
  6.   The vehicular gaze direction detection device according to claim 5, wherein the dynamic correction value calculating means calculates the dynamic correction value for each correlation value indicating a correlation between the inter-vehicle distance from the host vehicle to the preceding vehicle and the relative vehicle speed of the host vehicle with respect to the preceding vehicle.
  7.   The vehicular gaze direction detection device according to claim 1, wherein, when a pedestrian exists around the vehicle and the state in which the pedestrian exists continues for a predetermined time, the outside-vehicle visual recognition target specifying means specifies the pedestrian as the out-of-vehicle visual recognition object that is estimated to be visually recognized by the driver.
  8. Pedestrian average visual recognition time calculation means for calculating the average of the time during which the difference between the line-of-sight direction corrected by the static correction means or the dynamic correction means and the direction connecting the driver's eye position to the pedestrian is equal to or less than a predetermined value;
    Pedestrian visual recognition reference value calculation means for calculating, from the average time calculated by the pedestrian average visual recognition time calculation means, a pedestrian visual recognition reference time that serves as the reference for whether the pedestrian is being visually recognized,
    Visual recognition determination means for determining whether or not the driver is visually recognizing a collision pedestrian that may collide with the host vehicle, by determining whether the line-of-sight direction corrected by the dynamic correction means and the direction connecting the driver's eye position to the collision pedestrian coincide for at least the pedestrian visual recognition reference time calculated by the pedestrian visual recognition reference value calculation means,
    Warning means for warning the vehicle driver when it is determined by the visual recognition determination means that the driver is not visually recognizing the pedestrian;
    The vehicle gaze direction detecting device according to claim 7, further comprising:
  9. The vehicular gaze direction detection device according to claim 8, further comprising:
    Vehicle exterior stationary object specifying means for specifying, among the obstacles detected by the obstacle detection means, a vehicle exterior stationary object that is stationary in real space;
    Stationary object average visual recognition time calculation means for calculating the average of the time during which the difference between the line-of-sight direction corrected by the static correction means or the dynamic correction means and the direction connecting the driver's eye position to the vehicle exterior stationary object specified by the vehicle exterior stationary object specifying means is equal to or less than a predetermined value; and
    Stationary object visual recognition reference value calculation means for calculating, from the average time calculated by the stationary object average visual recognition time calculation means, a stationary object visual recognition reference time that serves as the reference for whether the stationary object is being visually recognized,
    wherein the visual recognition determination means determines whether or not the driver is visually recognizing the vehicle exterior stationary object based on the stationary object visual recognition reference time calculated by the stationary object visual recognition reference value calculation means.
  10. An image acquisition step of acquiring an image of the face of the vehicle driver;
    From a face image obtained in the image acquisition step, a line-of-sight detection step for detecting a line-of-sight direction of the vehicle driver;
    In-vehicle visual recognition object specifying step for specifying an in-vehicle visual recognition object that is estimated to be visually recognized by the driver among the in-vehicle objects that are stationary in the in-vehicle space;
    A static correction value calculating step for obtaining a static correction value corresponding to the difference when the line-of-sight direction detected in the line-of-sight detection step and the direction connecting the driver's eye position to the in-vehicle visual recognition object specified in the in-vehicle visual recognition object specifying step are different;
    An image reacquisition step for acquiring again an image of the face of the vehicle driver after the image acquisition step;
    From the face image acquired by the image reacquisition step, a gaze direction redetection step of detecting the gaze direction of the vehicle driver;
    A static correction step of correcting the line-of-sight direction detected in the line-of-sight direction redetection step according to the static correction value obtained in the static correction value calculation step;
    An obstacle detection step for detecting obstacles around the vehicle;
    An out-of-vehicle visual recognition object specifying step of specifying an out-of-vehicle visual recognition object that is moving in real space among the obstacles detected in the obstacle detection step and that is estimated to be visually recognized by the driver;
    A dynamic correction value calculating step for obtaining a dynamic correction value corresponding to the difference when the line-of-sight direction corrected in the static correction step and the direction connecting the driver's eye position to the out-of-vehicle visual recognition object specified in the out-of-vehicle visual recognition object specifying step are different;
    A second image reacquisition step for acquiring an image of the vehicle driver's face once more after the image reacquisition step;
    A second line-of-sight direction redetection step of detecting the vehicle driver's line-of-sight direction from the face image acquired in the second image reacquisition step;
    A dynamic correction step for correcting the driver's line-of-sight direction detected in the second line-of-sight direction redetection step according to the static correction value obtained in the static correction value calculation step and the dynamic correction value obtained in the dynamic correction value calculation step,
    A gaze direction detection method for a vehicle, comprising:
JP2005089125A 2005-03-25 2005-03-25 Gaze direction detection apparatus and method for vehicle Expired - Fee Related JP4715262B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005089125A JP4715262B2 (en) 2005-03-25 2005-03-25 Gaze direction detection apparatus and method for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2005089125A JP4715262B2 (en) 2005-03-25 2005-03-25 Gaze direction detection apparatus and method for vehicle

Publications (2)

Publication Number Publication Date
JP2006263334A JP2006263334A (en) 2006-10-05
JP4715262B2 true JP4715262B2 (en) 2011-07-06

Family

ID=37199883

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005089125A Expired - Fee Related JP4715262B2 (en) 2005-03-25 2005-03-25 Gaze direction detection apparatus and method for vehicle

Country Status (1)

Country Link
JP (1) JP4715262B2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4231884B2 (en) * 2006-09-27 2009-03-04 株式会社デンソーアイティーラボラトリ Gaze object detection device and gaze object detection method
JP2009136551A (en) 2007-12-07 2009-06-25 Denso Corp Face feature quantity detector for vehicle and face feature quantity detecting method
JP5561219B2 (en) * 2011-03-25 2014-07-30 株式会社デンソー Gaze direction detection device
KR101281629B1 (en) * 2011-06-30 2013-07-03 재단법인 경북아이티융합 산업기술원 Driving guidance system using sensors

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000005130A (en) * 1998-06-19 2000-01-11 Nippon Hoso Kyokai <Nhk> Method and device for eyeball movement measuring
JP2004210213A (en) * 2003-01-08 2004-07-29 Toyota Central Res & Dev Lab Inc Information providing device
JP2004282471A (en) * 2003-03-17 2004-10-07 Media Glue Corp Evaluation device for video image content
JP2005261728A (en) * 2004-03-19 2005-09-29 Fuji Xerox Co Ltd Line-of-sight direction recognition apparatus and line-of-sight direction recognition program

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07151958A (en) * 1993-11-30 1995-06-16 Canon Inc Optical equipment with line-of-sight detecting function
JP3785669B2 (en) * 1996-03-05 2006-06-14 日産自動車株式会社 Gaze direction measuring device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000005130A (en) * 1998-06-19 2000-01-11 Nippon Hoso Kyokai <Nhk> Method and device for eyeball movement measuring
JP2004210213A (en) * 2003-01-08 2004-07-29 Toyota Central Res & Dev Lab Inc Information providing device
JP2004282471A (en) * 2003-03-17 2004-10-07 Media Glue Corp Evaluation device for video image content
JP2005261728A (en) * 2004-03-19 2005-09-29 Fuji Xerox Co Ltd Line-of-sight direction recognition apparatus and line-of-sight direction recognition program

Also Published As

Publication number Publication date
JP2006263334A (en) 2006-10-05

Similar Documents

Publication Publication Date Title
US6327522B1 (en) Display apparatus for vehicle
US8743202B2 (en) Stereo camera for a motor vehicle
US8593519B2 (en) Field watch apparatus
US8085140B2 (en) Travel information providing device
US6774772B2 (en) Attention control for operators of technical equipment
US7423540B2 (en) Method of detecting vehicle-operator state
EP2523839B1 (en) Combining driver and environment sensing for vehicular safety systems
JP4380541B2 (en) Vehicle agent device
CN101500874B (en) Sight-line end estimation device and driving assist device
US6411898B2 (en) Navigation device
EP1974998B1 (en) Driving support method and driving support apparatus
US8724858B2 (en) Driver imaging apparatus and driver imaging method
US20050134479A1 (en) Vehicle display system
US8538044B2 (en) Line-of-sight direction determination device and line-of-sight direction determination method
EP1405124B1 (en) Head-up display system and method for projecting a marking of a road sign with regard to the viewing direction of the driver
US20050046953A1 (en) Virtual display device for a vehicle instrument panel
DE102004035842B4 (en) Dual disparate sensing object detection and detection system
US20060267747A1 (en) Travel safety apparatus for vehicle
US7362215B2 (en) System and method for monitoring the surroundings of a vehicle
CN104584102B (en) Method and the method for the object in the environment for selecting vehicle for supplementing the object information for distributing to object
JP6036065B2 (en) Gaze position detection device and gaze position detection method
JP2016506572A (en) Infotainment system
EP1968006A1 (en) Image processing apparatus
EP1878604B1 (en) Method of mitigating driver distraction
JP2009205191A (en) Parking space recognition system

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20080227

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110214

A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20110301

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20110314

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140408

Year of fee payment: 3

LAPS Cancellation because of no payment of annual fees