WO2019220572A1 - Driving inability detection device and driving inability detection method - Google Patents

Driving inability detection device and driving inability detection method

Info

Publication number
WO2019220572A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
driver
determination unit
normal
unit
Prior art date
Application number
PCT/JP2018/018937
Other languages
French (fr)
Japanese (ja)
Inventor
翔悟 甫天
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2018/018937
Publication of WO2019220572A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers

Definitions

  • The present invention relates to a driving inability detection device and a driving inability detection method for detecting that a driver is in a state in which driving is impossible (a driving inability state).
  • The driving inability detection device described in Patent Literature 1 sequentially detects the driver's head from images of the driver's seat captured by an imaging device, and detects that the driver is in the driving inability state when the detected head moves out of a predetermined range.
  • In this way, the device can detect that the driver has fallen into the driving inability state.
  • The device described in Patent Literature 1 can detect the driving inability state even for a driver whose driving posture deviates from the standard position, for example by having the driver adjust the predetermined range manually.
  • However, because the device described in Patent Literature 1 is configured as described above, it has the problem that the predetermined range used for the driving inability determination cannot be adjusted automatically in consideration of individual differences in driving posture.
  • The present invention has been made to solve the above problem, and its object is to detect the driving inability state while taking individual differences in driving posture into consideration.
  • The driving inability detection device according to the present invention includes: an image acquisition unit that acquires captured images of a driver of a vehicle; a database in which, among the captured images acquired by the image acquisition unit, captured images taken while the driver is in a normal state are registered as normal images; an image determination unit that calculates a first divergence degree between the normal images registered in the database and a captured image acquired by the image acquisition unit; an eye detection unit that detects the driver's eyes from the captured image acquired by the image acquisition unit; a state determination unit that compares the first divergence degree calculated by the image determination unit with a first threshold and, when the first divergence degree is equal to or greater than the first threshold, determines that the driver is in the driving inability state if the time during which the eye detection unit cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in the normal state if it does not; and a registration unit that, when the state determination unit determines that the driver is in the normal state, registers the captured image acquired by the image acquisition unit in the database as a normal image.
  • According to the present invention, a captured image taken when the driver is determined to be in the normal state is registered in the database as a normal image and used for subsequent determinations. Therefore, the driving inability state can be detected while taking individual differences in driving posture into consideration.
  • FIG. 1 is a block diagram illustrating a configuration example of the driving inability detection device according to Embodiment 1.
  • FIG. 2 is a diagram illustrating an example of a method of calculating the first divergence degree by the image determination unit of Embodiment 1.
  • FIG. 3 is a flowchart illustrating an operation example of the driving inability detection device according to Embodiment 1.
  • FIG. 4 is a block diagram illustrating a configuration example of the driving inability detection device according to Embodiment 2.
  • FIG. 5 is a flowchart illustrating an operation example of the driving inability detection device according to Embodiment 2.
  • FIG. 6 is a block diagram illustrating a configuration example of the driving inability detection device according to Embodiment 3.
  • FIG. 7 is a flowchart illustrating an operation example of the driving inability detection device according to Embodiment 3.
  • FIG. 8 is a flowchart illustrating an operation example of the driving inability detection device according to Embodiment 4.
  • FIGS. 9A and 9B are diagrams illustrating a hardware configuration example of the driving inability detection device according to each embodiment.
  • FIG. 1 is a block diagram illustrating a configuration example of the inoperability detecting apparatus 10 according to the first embodiment.
  • the driving impossibility detection device 10 and the imaging device 1 connected to the driving impossibility detection device 10 are mounted on a vehicle.
  • the imaging device 1 is a visible light camera or an infrared camera installed in the vehicle. Note that the imaging apparatus 1 may be configured by a plurality of cameras installed in the vehicle.
  • the imaging device 1 is connected to the inoperability detecting device 10 via a bus or the like. This imaging device 1 images the inside of the vehicle including the driver, and outputs the captured image to the image acquisition unit 11.
  • the inoperability detection apparatus 10 includes an image acquisition unit 11, a database 12, an image determination unit 13, an eye detection unit 14, a state determination unit 15, and a registration unit 16.
  • the image acquisition unit 11 acquires a captured image captured by the imaging device 1 and outputs the captured image to the image determination unit 13, the eye detection unit 14, and the registration unit 16.
  • The captured image output by the image acquisition unit 11 to the image determination unit 13 and the like includes at least the range where the face is likely to be located when the driver is seated in the driver's seat. By widening this range, various driving postures can be accommodated.
  • In the database 12, among the captured images acquired by the image acquisition unit 11, captured images taken while the driver is in the normal state are registered as normal images.
  • The normal state refers to a state in which the driver can drive normally.
  • The driving inability state refers to a state in which the driver cannot drive normally, for example because the driver has lost consciousness due to a sudden illness or the like.
  • When the driver is in the driving inability state, the driver's face often moves away from its normal driving position and does not return.
  • the image determination unit 13 compares the normal image registered in the database 12 with the captured image acquired by the image acquisition unit 11 and calculates the first degree of divergence between the two images.
  • the image determination unit 13 outputs the calculated first divergence degree to the state determination unit 15.
  • the image determination unit 13 calculates a first divergence degree indicating the degree of divergence between the luminances of both images. When the first divergence degree is large, there is a high possibility that the driver's face has moved from the position during normal driving, and there is a suspicion that the driver is unable to drive.
  • FIG. 2 is a diagram illustrating an example of a first divergence calculation method by the image determination unit 13 according to the first embodiment.
  • The normal images p1, p2, …, pn are the normal images registered in the database 12, that is, captured images taken over a certain period (for example, the latest 60 seconds) at times t1, t2, …, tn in that order.
  • The image determination unit 13 combines the plurality of normal images p1, p2, …, pn one by one in time-series order, adding their luminances pixel by pixel, and thereby finally generates a single reference image bn.
  • Specifically, the image determination unit 13 first uses the normal image p1 as the reference image b1.
  • Next, the image determination unit 13 generates the reference image b2 by weighted addition of the luminance of each pixel of the next normal image p2 and the luminance of each pixel of the reference image b1. Subsequently, the image determination unit 13 generates the reference image b3 by weighted addition of the luminance of each pixel of the next normal image p3 and the luminance of each pixel of the reference image b2. By repeating this process, the image determination unit 13 finally generates a single reference image bn.
  • Here, n is an integer of 2 or more, and the weighted addition uses a weight (denoted w below). The weight w may be a fixed value regardless of time, or may be a value that changes with time. When w changes with time, it is preferable that the luminance of a captured image taken at a newer time be reflected in the reference image bn more strongly than the luminance of a captured image taken at an older time.
  • The image determination unit 13 compares the finally generated reference image bn with the current captured image acquired by the image acquisition unit 11 and calculates the luminance difference for each pixel. The image determination unit 13 then calculates the average luminance difference from the per-pixel luminance differences, and uses this average luminance difference as the first divergence degree. Alternatively, the image determination unit 13 may calculate a variance value from a plurality of average luminance differences calculated in the past together with the average luminance difference calculated this time, and use that variance value as the first divergence degree. Using the variance value as the first divergence degree makes it possible to suppress the influence of disturbances on the captured image.
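  • The reference-image construction and first-divergence calculation described above can be sketched as follows (a minimal illustration in Python; the fixed weight w, the use of mean absolute difference, and the function names are assumptions, since the text leaves the exact weighting and difference formula open):

```python
import numpy as np

def build_reference_image(normal_images, w=0.2):
    """Fold the registered normal images p1..pn into a single reference
    image bn by repeated weighted addition of pixel luminances.
    w is a hypothetical fixed weight; the text also allows a
    time-varying weight that favours newer images."""
    b = normal_images[0].astype(np.float64)  # b1 = p1
    for p in normal_images[1:]:
        b = w * p.astype(np.float64) + (1.0 - w) * b
    return b

def first_divergence(reference, current):
    """First divergence degree: mean absolute per-pixel luminance
    difference between the reference image and the current frame."""
    diff = np.abs(reference - current.astype(np.float64))
    return diff.mean()
```

  • A time-varying weight could be obtained by increasing w for more recently captured images, so that the reference image tracks gradual changes such as lighting.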
  • The eye detection unit 14 detects the driver's eyes from the captured image acquired by the image acquisition unit 11. That is, when the driver's eyes appear in the captured image, the eye detection unit 14 determines that the eyes have been detected, regardless of whether the eyes are open or closed. On the other hand, the eye detection unit 14 determines that the eyes cannot be detected when the driver's face is out of the captured image and the eyes are not visible, or when eye detection fails. The eye detection unit 14 outputs the eye detection result to the state determination unit 15. As described above, when the driver is in the driving inability state, the driver's face often moves away from its normal driving position and does not return. Therefore, when the driver is in the driving inability state, the eyes often cannot be detected from the captured image.
  • Note that the eye detection unit 14 may determine the presence or absence of eye detection using the reliability of the image recognition performed when detecting the eyes from the captured image. For example, the eye detection unit 14 determines that the eyes have been detected when the recognition reliability is equal to or greater than a predetermined value, and determines that the eyes cannot be detected when the reliability is less than the predetermined value.
  • The state determination unit 15 determines whether the driver is in the normal state or in the driving inability state using the first divergence degree calculated by the image determination unit 13 and the eye detection result from the eye detection unit 14. When the state determination unit 15 determines that the driver is in the normal state, it outputs the determination result to the registration unit 16.
  • Specifically, the state determination unit 15 compares the first divergence degree calculated by the image determination unit 13 with a predetermined first threshold.
  • The state determination unit 15 determines that the driver is suspected of being in the driving inability state when the first divergence degree is equal to or greater than the first threshold, and determines that the driver is in the normal state when the first divergence degree is less than the first threshold.
  • When the state determination unit 15 determines that the driver is suspected of being in the driving inability state, it determines whether the time during which the eye detection unit 14 cannot detect the driver's eyes has continued for a time determined in advance (hereinafter referred to as the "predetermined time") or longer.
  • the predetermined time is, for example, 2 seconds.
  • If the time during which the driver's eyes cannot be detected continues for the predetermined time or longer, the state determination unit 15 determines that the driver is in the driving inability state; if it does not, the state determination unit 15 determines that the driver is in the normal state.
  • When the state determination unit 15 determines that the driver is in the normal state, the registration unit 16 registers the captured image acquired by the image acquisition unit 11 in the database 12 as a normal image.
  • FIG. 3 is a flowchart showing an operation example of the inoperability detecting apparatus 10 according to the first embodiment.
  • In step ST1, the image acquisition unit 11 acquires a captured image from the imaging device 1 and outputs it to the image determination unit 13, the eye detection unit 14, and the registration unit 16.
  • In step ST2, the image determination unit 13 determines whether or not a sufficient amount of normal images for calculating the first divergence degree is registered in the database 12.
  • the amount sufficient for calculating the first divergence degree is, for example, a normal image for 60 seconds. If the image determination unit 13 determines that a sufficient amount of normal images for calculating the first divergence degree is registered in the database 12 (step ST2 “YES”), the process proceeds to step ST3. On the other hand, if the image determination unit 13 determines that a sufficient amount of normal images for calculating the first divergence degree is not registered in the database 12 (step ST2 “NO”), the process proceeds to step ST9.
  • In this case, the image determination unit 13 regards the driver as being in the normal state for a certain period (for example, 60 seconds) after the ignition switch is turned on, and instructs the registration unit 16 to register the current captured image acquired in step ST1 in the database 12 as a normal image.
  • In step ST3, the image determination unit 13 acquires from the database 12 an amount of normal images sufficient for calculating the first divergence degree. The image determination unit 13 then calculates the first divergence degree based on the luminance of the normal images acquired from the database 12 and the luminance of the current captured image acquired in step ST1, and outputs it to the state determination unit 15.
  • In step ST4, when the first divergence degree calculated by the image determination unit 13 is equal to or greater than the first threshold (step ST4 "YES"), the state determination unit 15 determines that the driver is suspected of being in the driving inability state, and the process proceeds to step ST5. On the other hand, when the first divergence degree calculated by the image determination unit 13 is less than the first threshold (step ST4 "NO"), the state determination unit 15 proceeds to step ST8.
  • In step ST5, the eye detection unit 14 detects the eyes from the current captured image acquired in step ST1, and outputs a detection result indicating whether the eyes were detected to the state determination unit 15.
  • In step ST6, the state determination unit 15 proceeds to step ST7 when the eye detection unit 14 was able to detect the eyes (step ST6 "YES"), and proceeds to step ST10 when the eyes could not be detected (step ST6 "NO").
  • In step ST7, the state determination unit 15 resets the duration.
  • This duration is the length of time during which the driver is suspected of being in the driving inability state and the eye detection unit 14 cannot detect the eyes.
  • In step ST8, the state determination unit 15 determines that the driver is in the normal state. The state determination unit 15 then instructs the registration unit 16 to register the current captured image acquired in step ST1 in the database 12 as a normal image.
  • In step ST9, when the registration unit 16 has received an instruction from the image determination unit 13 in step ST2 or an instruction from the state determination unit 15 in step ST8, the registration unit 16 registers the current captured image acquired in step ST1 in the database 12 as a normal image.
  • The operations of steps ST1, ST2, and ST9 are repeated until the normal images registered in the database 12 in step ST2 reach an amount sufficient for calculating the first divergence degree.
  • In step ST10, since the driver is suspected of being in the driving inability state and the eye detection unit 14 cannot detect the eyes, the state determination unit 15 counts up the duration.
  • In step ST11, the state determination unit 15 determines whether or not the duration is equal to or longer than the predetermined time. If the state determination unit 15 determines that the duration is equal to or longer than the predetermined time (step ST11 "YES"), it proceeds to step ST12; if it determines that the duration is less than the predetermined time (step ST11 "NO"), the operation shown in the flowchart of FIG. 3 ends. While the operation shown in the flowchart of FIG. 3 is repeated, the state determination unit 15 keeps holding the duration until it determines that the driver is in the normal state.
  • In step ST12, the state determination unit 15 determines that the driver is in the driving inability state.
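  • The per-frame decision logic of steps ST4 to ST12 can be sketched as follows (a minimal illustration; the threshold value is hypothetical, the 2-second predetermined time is the example value given above, and the class and method names are assumptions):

```python
from dataclasses import dataclass

PREDETERMINED_TIME = 2.0   # seconds (example value from the text)
FIRST_THRESHOLD = 30.0     # hypothetical luminance-divergence threshold

@dataclass
class StateDeterminationUnit:
    """Tracks how long the driver has been suspect with no eyes detected."""
    duration: float = 0.0

    def update(self, first_divergence, eyes_detected, dt):
        """One pass of steps ST4-ST12 for a frame captured dt seconds
        after the previous one; returns 'normal', 'suspect', or
        'inability'."""
        if first_divergence < FIRST_THRESHOLD:    # ST4 "NO" -> ST8
            self.duration = 0.0                   # normal: duration released
            return "normal"                       # frame becomes a normal image
        if eyes_detected:                         # ST6 "YES" -> ST7
            self.duration = 0.0
            return "suspect"
        self.duration += dt                       # ST10: count up duration
        if self.duration >= PREDETERMINED_TIME:   # ST11 "YES" -> ST12
            return "inability"
        return "suspect"                          # ST11 "NO": end this cycle
```

  • A caller would invoke `update` once per captured frame, registering the frame in the database whenever the result is "normal".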
  • the inoperability detecting apparatus 10 includes the image acquisition unit 11, the database 12, the image determination unit 13, the eye detection unit 14, the state determination unit 15, and the registration unit 16.
  • the image acquisition unit 11 acquires a captured image of the vehicle driver.
  • the database 12 registers, as a normal image, a captured image when the driver is in a normal state among the captured images acquired by the image acquisition unit 11.
  • the image determination unit 13 calculates a first divergence degree indicating the degree of divergence between the normal image registered in the database 12 and the captured image acquired by the image acquisition unit 11.
  • the eye detection unit 14 detects the driver's eyes from the captured image acquired by the image acquisition unit 11.
  • The state determination unit 15 compares the first divergence degree calculated by the image determination unit 13 with the first threshold. When the first divergence degree is equal to or greater than the first threshold, the state determination unit 15 determines that the driver is in the driving inability state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for the predetermined time or longer, and determines that the driver is in the normal state if it does not. When the state determination unit 15 determines that the driver is in the normal state, the registration unit 16 registers the captured image acquired by the image acquisition unit 11 in the database 12 as a normal image.
  • the inoperability detecting device 10 can register normal images in the database 12 and use them for subsequent determinations, and therefore can detect an inoperable state in consideration of individual differences in driving posture.
  • Further, since the driving inability detection device 10 detects the driving inability state by combining the first divergence degree between the normal images and the current captured image with the presence or absence of eye detection, it can accurately discriminate between the normal state and the driving inability state.
  • The image determination unit 13 calculates the first divergence degree based on the luminance of the normal images registered in the database 12 and the luminance of the captured image acquired by the image acquisition unit 11. Thereby, when the luminance of the current captured image diverges from that of the normal images, the driving inability detection device 10 can determine that the driver's face may have moved from its normal driving position and that the driver is suspected of being in the driving inability state.
  • FIG. 4 is a block diagram illustrating a configuration example of the driving inability detection device 10 according to the second embodiment.
  • The driving inability detection device 10 according to Embodiment 2 has a configuration in which a vehicle device 2 and a vehicle information acquisition unit 21 are added to the driving inability detection device 10 of Embodiment 1 shown in FIG. 1. In FIG. 4, parts that are the same as or correspond to those in FIG. 1 are given the same reference numerals, and descriptions thereof are omitted.
  • the vehicle device 2 is, for example, a steering sensor, a speedometer, a gear position sensor, an accelerator position sensor, a brake pressure sensor, an accelerometer, a thermometer, a hygrometer, a seat position sensor, or a GPS (Global Positioning System) receiver.
  • the vehicle device 2 is connected to the inoperability detecting device 10 via a CAN (Controller Area Network) or the like.
  • the vehicle device 2 outputs vehicle information including information such as a steering angle, a vehicle speed, or a gear position to the vehicle information acquisition unit 21.
  • The vehicle information acquisition unit 21 acquires vehicle information indicating the state of the vehicle from the vehicle device 2, and outputs it to the image determination unit 13a and the registration unit 16a.
  • In the database 12a, among the captured images acquired by the image acquisition unit 11, captured images taken while the driver is in the normal state are registered in association with the vehicle information acquired by the vehicle information acquisition unit 21. That is, in the database 12a, a normal image taken while the driver is in the normal state and vehicle information indicating the state of the vehicle at the time the normal image was captured are registered in association with each other.
  • FIG. 5 is a flowchart showing an operation example of the inoperability detecting apparatus 10 according to the second embodiment.
  • In step ST1, the image acquisition unit 11 acquires a captured image from the imaging device 1 and outputs it to the image determination unit 13a, the eye detection unit 14, and the registration unit 16a.
  • In step ST21, the vehicle information acquisition unit 21 acquires vehicle information from the vehicle device 2 and outputs it to the image determination unit 13a and the registration unit 16a.
  • In step ST2a, the image determination unit 13a determines whether or not an amount of normal images sufficient for calculating the first divergence degree is registered in the database 12a.
  • At this time, the image determination unit 13a selects, as the normal images used for calculating the first divergence degree, normal images associated with vehicle information that matches the current vehicle information acquired in step ST21.
  • For example, the image determination unit 13a selects normal images captured at a steering angle and vehicle speed that match the current steering angle and vehicle speed. Note that the current vehicle information and the vehicle information already registered in the database 12a do not need to match exactly; values similar enough to be regarded as substantially matching are acceptable.
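  • The selection of normal images by matching vehicle information can be sketched as follows (a minimal illustration; the tolerance values, dictionary keys, and function name are assumptions, since the text only requires that the vehicle information be regarded as substantially matching):

```python
def select_matching_normal_images(database, current_info,
                                  steer_tol=5.0, speed_tol=10.0):
    """Pick the normal images whose associated vehicle information is
    close enough to the current state to be regarded as matching.
    database is a list of (image, info) pairs; info holds the steering
    angle (degrees) and vehicle speed (km/h) at capture time."""
    selected = []
    for image, info in database:
        if (abs(info["steering_angle"] - current_info["steering_angle"]) <= steer_tol
                and abs(info["speed"] - current_info["speed"]) <= speed_tol):
            selected.append(image)
    return selected
```

  • Restricting the comparison to images captured under a similar vehicle state keeps the first divergence degree from being inflated by posture changes that are normal for that state, such as leaning into a turn.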
  • If the image determination unit 13a determines that a sufficient amount of normal images for calculating the first divergence degree is registered in the database 12a (step ST2a "YES"), the process proceeds to step ST3.
  • On the other hand, if the image determination unit 13a determines that a sufficient amount of normal images for calculating the first divergence degree is not registered in the database 12a (step ST2a "NO"), the process proceeds to step ST9a.
  • In step ST9a, the image determination unit 13a regards the driver as being in the normal state for a certain period (for example, 60 seconds) after the ignition switch is turned on, and instructs the registration unit 16a to register the current captured image acquired in step ST1 in the database 12a as a normal image in association with the vehicle information acquired in step ST21.
  • In step ST3, the image determination unit 13a acquires from the database 12a the normal images associated with vehicle information that matches the current vehicle information acquired in step ST21, and calculates the first divergence degree.
  • Steps ST4 to ST8 and ST10 to ST12 after step ST3 are the same operations as steps ST4 to ST8 and ST10 to ST12 of FIG. 3.
  • In step ST9a, when the registration unit 16a has received an instruction from the image determination unit 13a in step ST2a or an instruction from the state determination unit 15 in step ST8, the registration unit 16a registers the current captured image acquired in step ST1 in the database 12a as a normal image, in association with the current vehicle information acquired in step ST21.
  • the inoperability detecting apparatus 10 includes the vehicle information acquisition unit 21 that acquires vehicle information indicating the state of the vehicle.
  • the database 12a registers a normal image in association with vehicle information indicating the state of the vehicle when the normal image is captured.
  • the image determination unit 13a calculates a first degree of divergence using a normal image associated with vehicle information that matches the vehicle information acquired by the vehicle information acquisition unit 21 among normal images registered in the database 12a.
  • FIG. 6 is a block diagram illustrating a configuration example of the inoperability detecting apparatus 10 according to the third embodiment.
  • The driving inability detection device 10 according to the third embodiment has a configuration in which a face movement determination unit 31 is added to the driving inability detection device 10 of the first embodiment shown in FIG. 1.
  • In FIG. 6, parts that are the same as or correspond to those in FIG. 1 are given the same reference numerals, and descriptions thereof are omitted.
  • the face movement determination unit 31 compares the normal image registered in the database 12 with the captured image acquired by the image acquisition unit 11, and calculates the second degree of divergence between the two images.
  • the face movement determination unit 31 outputs the calculated second deviation degree to the state determination unit 15b.
  • the face movement determination unit 31 calculates a second divergence degree indicating the degree of divergence between the face positions of both images. When the second divergence degree is large, there is a high possibility that the driver's face is moving from the normal driving position, and there is a suspicion that the driver is in a driving impossible state.
  • the face movement determination unit 31 detects a face position from each of a plurality of captured images captured in the most recent time zone from the present to a certain time before (for example, 10 seconds before). Then, the face movement determination unit 31 creates a determination histogram that represents the distribution of face positions over the last 10 seconds.
  • the captured image for the latest 10 seconds may include an image that has been determined as a normal image by the state determination unit 15b. Also, the captured image captured in the time zone from the present to a certain time ago may be temporarily stored in the database 12 or may be temporarily stored in the face movement determination unit 31.
  • Further, the face movement determination unit 31 acquires, from among the normal images registered in the database 12, a plurality of normal images captured in the time period from when the ignition switch was turned on until a certain time before the present.
  • For example, when 10 minutes have elapsed since the ignition switch was turned on, the face movement determination unit 31 acquires from the database 12 a plurality of normal images covering the preceding 9 minutes and 50 seconds.
  • the face movement determination unit 31 detects a face position from each of a plurality of normal images acquired from the database 12. Then, the face movement determination unit 31 creates a reference histogram representing the distribution of face positions for the past 9 minutes and 50 seconds. Note that since the number of target images differs between the determination histogram and the reference histogram, the face movement determination unit 31 sets the frequencies of both histograms to relative values.
  • the face movement determination unit 31 compares the determination histogram for the latest 10 seconds with the reference histogram for the past 9 minutes and 50 seconds, and calculates the second divergence degree. For example, the face movement determination unit 31 calculates a correlation value between two histograms using a histogram intersection method, and sets the reciprocal of the correlation value as the second divergence degree.
  • In this way, since the face movement determination unit 31 determines the movement of the face position using a plurality of normal images and a plurality of captured images, it can prevent a driver who merely changes posture while driving from being erroneously determined to be in the driving inability state.
  • At the next time (for example, when 10 minutes and 1 second have elapsed since the ignition switch was turned on), the face movement determination unit 31 creates the determination histogram using the captured images captured in the latest 10 seconds. In addition, the face movement determination unit 31 creates the reference histogram using, from among the normal images registered in the database 12, the normal images captured from when the ignition switch was turned on up to the 9-minutes-51-seconds mark. When creating the reference histogram, the face movement determination unit 31 assigns a small weight to the frequencies representing face positions in the normal images of the past 9 minutes and 50 seconds, and a large weight to the frequencies representing the face position in the newly added normal image at 9 minutes and 51 seconds. Accordingly, the face positions of normal images captured at newer times are reflected in the reference histogram more strongly than the face positions of normal images captured at older times.
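  • The second-divergence calculation using the histogram intersection method can be sketched as follows (a minimal illustration; the bin count, position range, one-dimensional face-position representation, and function name are assumptions):

```python
import numpy as np

def second_divergence(recent_positions, reference_positions,
                      bins=10, rng=(0, 100)):
    """Second divergence degree: compare the distribution of face
    positions in the latest captured images with the reference
    distribution from the registered normal images, using the
    histogram intersection method; the reciprocal of the correlation
    value is returned as the divergence degree."""
    h_recent, _ = np.histogram(recent_positions, bins=bins, range=rng)
    h_ref, _ = np.histogram(reference_positions, bins=bins, range=rng)
    # Normalize to relative frequencies, since the number of images
    # behind the two histograms differs.
    h_recent = h_recent / h_recent.sum()
    h_ref = h_ref / h_ref.sum()
    # Histogram intersection: 1.0 when the distributions fully overlap.
    correlation = np.minimum(h_recent, h_ref).sum()
    return 1.0 / correlation if correlation > 0 else float("inf")
```

  • With this formulation, a face that stays near its usual position yields a correlation close to 1 (divergence near 1), while a face that has drifted out of its usual range yields a low correlation and a large divergence degree.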
  • FIG. 7 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 3.
  • The driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 7. The operations in steps ST1 to ST3 and ST5 to ST12 of FIG. 7 are the same as those in the steps with the same numbers described for the preceding embodiments.
  • In step ST31, the face movement determination unit 31 acquires, from the database 12, the normal images captured in the time zone from when the ignition switch was turned on until the predetermined time before the present. Then, based on the face positions in the normal images acquired from the database 12 and the face positions in the captured images captured in the most recent time zone from the predetermined time before until the present, including the current captured image acquired in step ST1, the face movement determination unit 31 calculates the second divergence degree and outputs it to the state determination unit 15b.
  • In step ST32, the state determination unit 15b compares the first divergence calculated by the image determination unit 13 with the first threshold, and compares the second divergence calculated by the face movement determination unit 31 with a predetermined second threshold.
  • When the first divergence is greater than or equal to the first threshold and the second divergence is greater than or equal to the second threshold, the state determination unit 15b determines that the driver may be in an inoperable state and proceeds to step ST5; otherwise, the state determination unit 15b proceeds to step ST8.
  • As described above, the driving inability detection device 10 according to Embodiment 3 calculates a second divergence degree indicating the degree of divergence between the driver's face position in the normal images registered in the database 12 and the driver's face position in the captured image acquired by the image acquisition unit 11.
  • When the first divergence calculated by the image determination unit 13 is greater than or equal to the first threshold and the second divergence calculated by the face movement determination unit 31 is greater than or equal to the second threshold, the state determination unit 15b determines that the driver is in an inoperable state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer.
  • Thus, when the driver's face moves from its position during normal driving, the driving inability detection device 10 can determine that the driver is suspected of being in an inoperable state.
  • By combining the first and second divergence degrees between the normal images and the current captured image with the presence or absence of eye detection, the driving inability detection device 10 can discriminate the normal state from the inoperable state more accurately.
  • Embodiment 4. Since the configuration of the driving inability detection device 10 according to Embodiment 4 is the same as the configuration shown in FIG. 6 for Embodiment 3, FIG. 6 is referred to below.
  • FIG. 8 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 4.
  • The driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 8. The operations in steps ST1 to ST3, ST5 to ST12, ST31, and ST32 of FIG. 8 are the same as those in the steps with the same numbers described above.
  • In step ST41, when the face movement determination unit 31 can detect the driver's face position in the current captured image acquired in step ST1 (step ST41 "YES"), the process proceeds to step ST31; when the face position cannot be detected (step ST41 "NO"), the process proceeds to step ST42.
  • In step ST42, when the face movement determination unit 31 cannot detect the driver's face position in the captured image, the state determination unit 15b decides to perform only the threshold determination using the first divergence calculated by the image determination unit 13 in step ST3, without performing the determination using the second divergence that the face movement determination unit 31 would otherwise have calculated. When it decides to perform only the threshold determination using the first divergence, the state determination unit 15b changes the first threshold to a value larger than the one used when the threshold determination is performed with both the first and second divergences (step ST32).
  • In step ST43, when the first divergence calculated by the image determination unit 13 is greater than or equal to the changed first threshold (step ST43 "YES"), the state determination unit 15b determines that the driver may be in an inoperable state and proceeds to step ST5. On the other hand, when the first divergence is less than the changed first threshold (step ST43 "NO"), the state determination unit 15b proceeds to step ST8.
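Steps ST41 to ST43 can be sketched as the following decision function (the threshold values and the enlargement factor for the first threshold are illustrative assumptions; the patent does not specify numeric values):

```python
def suspicion_from_divergences(first_div, second_div, face_detected,
                               first_threshold=10.0, second_threshold=2.0,
                               fallback_scale=1.5):
    # When the face position is available, require both divergences to
    # reach their thresholds (step ST32). When it is not, fall back to
    # the first divergence alone against an enlarged first threshold
    # (steps ST42 and ST43).
    if face_detected:
        return first_div >= first_threshold and second_div >= second_threshold
    return first_div >= first_threshold * fallback_scale
```

The enlarged fallback threshold keeps a small face movement from raising a suspicion on its own when the corroborating second divergence is unavailable.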
  • As described above, when the face movement determination unit 31 cannot detect the driver's face position, the state determination unit 15b of Embodiment 4 compares only the first divergence calculated by the image determination unit 13 with the first threshold. If the first divergence is greater than or equal to the first threshold, the state determination unit 15b determines that the driver is in an inoperable state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer.
  • Thus, by not performing the threshold determination using a second divergence that would be inaccurate, the driving inability detection device 10 can accurately discriminate the normal state from the inoperable state.
  • In addition, the state determination unit 15b of Embodiment 4 changes the first threshold to a larger value when the face movement determination unit 31 cannot detect the driver's face position.
  • Accordingly, the driving inability detection device 10 does not determine that the driver is suspected of being in an inoperable state from only a slight movement of the driver's face position, but determines so only when the driver moves more greatly. The driving inability detection device 10 can therefore be prevented from determining too readily that the driver is suspected of being unable to drive when only the threshold determination using the first divergence is performed, without the threshold determination using the second divergence.
  • When a plurality of drivers drive the same vehicle, the driving inability detection device 10 may perform personal authentication based on a captured image or the like and build a database 12 for each driver. The driving inability detection device 10 may then determine whether the driver is in a normal state or an inoperable state using the normal images in the database 12 corresponding to that driver.
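The per-driver database selection described above might be organized as in this sketch (the driver-ID key and the in-memory list representation are assumptions for illustration; the authentication step itself is outside the sketch):

```python
class PerDriverDatabases:
    """Keep a separate normal-image database per authenticated driver."""

    def __init__(self):
        self._databases = {}

    def database_for(self, driver_id):
        # Create an empty normal-image list the first time a driver is
        # authenticated, then reuse it on later trips.
        return self._databases.setdefault(driver_id, [])

    def register_normal_image(self, driver_id, image):
        self.database_for(driver_id).append(image)
```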
  • FIGS. 9A and 9B are diagrams illustrating a hardware configuration example of the driving inability detection device 10 according to each embodiment.
  • The databases 12 and 12a in the driving inability detection device 10 are implemented by the memory 102.
  • The functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 in the driving inability detection device 10 are realized by a processing circuit. That is, the driving inability detection device 10 includes a processing circuit for realizing these functions.
  • the processing circuit may be the processing circuit 100 as dedicated hardware, or may be the processor 101 that executes a program stored in the memory 102.
  • the processing circuit 100 when the processing circuit is dedicated hardware, includes, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit). ), FPGA (Field Programmable Gate Array), or a combination thereof.
  • The functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 may be realized by a plurality of processing circuits 100, or the functions of the respective units may be combined and realized by a single processing circuit 100.
  • When the processing circuit is the processor 101, the functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 are realized by software, firmware, or a combination of software and firmware.
  • Software and firmware are described as programs and stored in the memory 102.
  • The processor 101 reads out and executes the programs stored in the memory 102, thereby realizing the function of each unit. That is, the driving inability detection device 10 includes the memory 102 for storing programs that, when executed by the processor 101, result in the steps shown in the above flowcharts being performed.
  • These programs can also be said to cause a computer to execute the procedures or methods of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31.
  • the processor 101 is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, or the like.
  • The memory 102 is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), or a flash memory; a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a CD (Compact Disc) or a DVD (Digital Versatile Disc).
  • As described above, the driving inability detection device according to the present invention detects the inoperable state in consideration of individual differences in driving posture, and is therefore suitable for use in an occupant monitoring system that monitors vehicle occupants.
  • Reference signs: 1 imaging device, 2 vehicle equipment, 10 driving inability detection device, 11 image acquisition unit, 12, 12a database, 13, 13a image determination unit, 14 eye detection unit, 15, 15b state determination unit, 16, 16a registration unit, 21 vehicle information acquisition unit, 31 face movement determination unit, 100 processing circuit, 101 processor, 102 memory.

Abstract

An image acquisition unit (11) acquires a captured image of a driver. An image determination unit (13) calculates a first degree of deviation between a normal image which has been registered in a database (12) and the captured image acquired by the image acquisition unit (11). An eye detection unit (14) detects the driver's eyes from the captured image. When the first degree of deviation is equal to or higher than a first threshold value, a state determination unit (15) determines that the driver is in a driving inability state if a period of time in which the eye detection unit (14) cannot detect the driver's eyes continues for a prescribed period or more, and determines that the driver is in a normal state if the period of time in which the eye detection unit (14) cannot detect the driver's eyes does not continue for the prescribed period or more. A registration unit (16) registers the captured image acquired by the image acquisition unit (11) as a normal image in the database (12) when it is determined that the driver is in a normal state.

Description

Driving inability detection device and driving inability detection method
The present invention relates to a driving inability detection device and a driving inability detection method for detecting that a driver has become unable to drive.
For example, the driving inability detection device described in Patent Literature 1 sequentially detects the driver's head based on images of the driver's seat captured by an imaging device, and detects that the driver is in an inoperable state when the detected head is out of a predetermined range. This allows the device to detect that the driver has fallen into an inoperable state, such as losing consciousness, while driving the vehicle.
JP 2016-27452 A
The head of a driver who habitually drives in a posture deviating from the standard position lies outside the predetermined range, so it cannot be detected for that driver whether or not an inoperable state exists. The driving inability detection device of Patent Literature 1 therefore makes such detection possible for a driver whose driving posture deviates from the standard position by, for example, having the driver adjust the predetermined range manually. However, because the device of Patent Literature 1 is configured in this way, it cannot automatically adjust the predetermined range used for detecting the inoperable state in consideration of individual differences in driving posture.
The present invention has been made to solve the above problem, and an object thereof is to detect the inoperable state in consideration of individual differences in driving posture.
A driving inability detection device according to the present invention includes: an image acquisition unit that acquires captured images of a driver of a vehicle; a database in which captured images acquired by the image acquisition unit while the driver is in a normal state are registered as normal images; an image determination unit that calculates a first divergence degree indicating the degree of divergence between the normal images registered in the database and a captured image acquired by the image acquisition unit; an eye detection unit that detects the driver's eyes from the captured image acquired by the image acquisition unit; a state determination unit that compares the first divergence degree calculated by the image determination unit with a first threshold and, when the first divergence degree is greater than or equal to the first threshold, determines that the driver is in an inoperable state if the time during which the eye detection unit cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer; and a registration unit that registers the captured image acquired by the image acquisition unit in the database as a normal image when the state determination unit determines that the driver is in a normal state.
According to the present invention, a captured image for which the driver is determined to be in a normal state is registered in the database as a normal image and used for subsequent determinations, so the inoperable state can be detected in consideration of individual differences in driving posture.
FIG. 1 is a block diagram showing a configuration example of the driving inability detection device according to Embodiment 1.
FIG. 2 is a diagram showing an example of a method of calculating the first divergence degree by the image determination unit of Embodiment 1.
FIG. 3 is a flowchart showing an operation example of the driving inability detection device according to Embodiment 1.
FIG. 4 is a block diagram showing a configuration example of the driving inability detection device according to Embodiment 2.
FIG. 5 is a flowchart showing an operation example of the driving inability detection device according to Embodiment 2.
FIG. 6 is a block diagram showing a configuration example of the driving inability detection device according to Embodiment 3.
FIG. 7 is a flowchart showing an operation example of the driving inability detection device according to Embodiment 3.
FIG. 8 is a flowchart showing an operation example of the driving inability detection device according to Embodiment 4.
FIGS. 9A and 9B are diagrams showing a hardware configuration example of the driving inability detection device according to each embodiment.
Hereinafter, in order to explain the present invention in more detail, embodiments for carrying out the invention will be described with reference to the accompanying drawings.

Embodiment 1.

FIG. 1 is a block diagram showing a configuration example of the driving inability detection device 10 according to Embodiment 1. The driving inability detection device 10 and the imaging device 1 connected to it are mounted on a vehicle.
The imaging device 1 is a visible light camera, an infrared camera, or the like installed in the vehicle. The imaging device 1 may also be configured from a plurality of cameras installed in the vehicle. The imaging device 1 is connected to the driving inability detection device 10 via a bus or the like. The imaging device 1 captures images of the vehicle interior, including the driver, and outputs the captured images to the image acquisition unit 11.
The driving inability detection device 10 includes an image acquisition unit 11, a database 12, an image determination unit 13, an eye detection unit 14, a state determination unit 15, and a registration unit 16.
The image acquisition unit 11 acquires the captured images taken by the imaging device 1 and outputs them to the image determination unit 13, the eye detection unit 14, and the registration unit 16. The captured images output by the image acquisition unit 11 include at least the range in which the driver's face is likely to be located when the driver is seated in the driver's seat. Widening this range makes it possible to handle a variety of driving postures.
In the database 12, captured images acquired by the image acquisition unit 11 while the driver is in a normal state are registered as normal images.
The normal state refers to a state in which normal driving is possible. The inoperable state refers to a state in which normal driving is not possible, such as when the driver has lost consciousness due to sudden illness. When the driver becomes unable to drive, the driver's face often moves from its position during normal driving and does not return to that position.
The image determination unit 13 compares the normal images registered in the database 12 with the captured image acquired by the image acquisition unit 11, calculates the first divergence degree between the two, and outputs it to the state determination unit 15. The first divergence degree indicates the degree of divergence between the luminance of the two images. When the first divergence degree is large, it is likely that the driver's face has moved from its position during normal driving, and the driver is suspected of being in an inoperable state.
FIG. 2 is a diagram showing an example of how the image determination unit 13 of Embodiment 1 calculates the first divergence degree. In FIG. 2, the normal images p_1, p_2, ..., p_n are normal images registered in the database 12, i.e., captured images captured in the order of times t_1, t_2, ..., t_n over a fixed period (for example, the most recent 60 seconds). The image determination unit 13 compares the plurality of normal images p_1, p_2, ..., p_n one at a time in chronological order and weight-adds the luminance of corresponding pixels, finally generating a single reference image b_n.

First, the image determination unit 13 takes the first normal image p_1 as the reference image b_1. Next, the image determination unit 13 weight-adds the luminance of each pixel of the next normal image p_2 and the luminance of each pixel of the reference image b_1 according to equation (1) to generate the reference image b_2. The image determination unit 13 then weight-adds the luminance of each pixel of the next normal image p_3 and the luminance of each pixel of the reference image b_2 to generate the reference image b_3. By repeating this process, the image determination unit 13 finally generates the single reference image b_n.

  b_n = (1 − α)·p_n + α·b_{n−1}   (1)

Here, n is an integer of 2 or more, and α is a weight. α may be a value fixed regardless of time, or a value that changes with time. If α changes with time, it is preferably chosen so that the luminance of captured images taken at newer times is reflected in the reference image b_n more strongly than the luminance of captured images taken at older times.
The image determination unit 13 compares the finally generated reference image b_n with the current captured image acquired by the image acquisition unit 11 and calculates the per-pixel luminance difference. The image determination unit 13 then calculates the average luminance difference from the per-pixel differences and uses it as the first divergence degree.
Alternatively, the image determination unit 13 may calculate a variance from a plurality of average luminance differences calculated in the past together with the average luminance difference calculated this time, and use that variance as the first divergence degree. Using the variance as the first divergence degree eliminates the influence of disturbances on the captured image.
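Equation (1) and the average-luminance-difference calculation above can be sketched as follows with NumPy (a minimal illustration, not the patent's implementation; α = 0.9 and the use of grayscale float arrays are assumptions):

```python
import numpy as np

def build_reference_image(normal_images, alpha=0.9):
    # Fold the time-ordered normal images into one reference image with
    # equation (1): b_n = (1 - alpha) * p_n + alpha * b_(n-1).
    b = normal_images[0].astype(float)
    for p in normal_images[1:]:
        b = (1.0 - alpha) * p.astype(float) + alpha * b
    return b

def first_divergence(reference_image, current_image):
    # Mean absolute per-pixel luminance difference between the reference
    # image and the current captured image.
    return np.abs(reference_image - current_image.astype(float)).mean()
```

With a fixed α in (0, 1), each older image's contribution decays geometrically, so newer normal images are always reflected more strongly in the reference image.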
The eye detection unit 14 detects the driver's eyes from the captured image acquired by the image acquisition unit 11. That is, when the driver's eyes appear in the captured image, the eye detection unit 14 determines that the eyes have been detected regardless of whether they are open or closed. On the other hand, when the driver's face is outside the captured image and the eyes do not appear, or when eye detection fails, the eye detection unit 14 determines that the eyes cannot be detected. The eye detection unit 14 outputs the eye detection result to the state determination unit 15. As described above, when the driver becomes unable to drive, the driver's face often moves from its position during normal driving and does not return; consequently, the eyes often can no longer be detected in the captured image.
The eye detection unit 14 may also determine the presence or absence of eye detection using the reliability of the image recognition when the eyes are detected in the captured image. For example, the eye detection unit 14 determines that the eyes have been detected when the reliability is greater than or equal to a predetermined value, and that the eyes cannot be detected when the reliability is less than that value.
Using the first divergence degree calculated by the image determination unit 13 and the eye detection result from the eye detection unit 14, the state determination unit 15 determines whether the driver is in a normal state or in an inoperable state. When the state determination unit 15 determines that the driver is in a normal state, it outputs the determination result to the registration unit 16.
Specifically, the state determination unit 15 compares the first divergence degree calculated by the image determination unit 13 with a predetermined first threshold. When the first divergence degree is greater than or equal to the first threshold, the state determination unit 15 determines that the driver is suspected of being in an inoperable state; when it is less than the first threshold, the state determination unit 15 determines that the driver is in a normal state. When the state determination unit 15 determines that the driver is suspected of being in an inoperable state, it determines whether the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time (hereinafter, the "prescribed time"; for example, 2 seconds) or longer. If the time during which the driver's eyes cannot be detected continues for the prescribed time or longer, the state determination unit 15 determines that the driver is in an inoperable state; if it is shorter than the prescribed time, the state determination unit 15 determines that the driver is in a normal state.
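The threshold test plus the eye-detection timer described above can be sketched as a small per-frame state machine (the frame period, the first-threshold value, and mapping the prescribed time to a frame count are illustrative assumptions):

```python
class StateDeterminer:
    """Sketch of the state determination unit 15: a first-divergence
    threshold test combined with a timer over frames in which the
    driver's eyes cannot be detected."""

    def __init__(self, first_threshold=10.0, required_seconds=2.0,
                 frame_period=0.1):
        self.first_threshold = first_threshold
        self.required_frames = int(round(required_seconds / frame_period))
        self.missed_frames = 0

    def update(self, first_divergence, eyes_detected):
        # Returns 'inoperable' or 'normal' for the current frame.
        if first_divergence < self.first_threshold:
            self.missed_frames = 0
            return 'normal'
        if eyes_detected:
            self.missed_frames = 0  # corresponds to resetting the duration
            return 'normal'
        self.missed_frames += 1
        if self.missed_frames >= self.required_frames:
            return 'inoperable'
        return 'normal'
```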
When the state determination unit 15 determines that the driver is in a normal state, the registration unit 16 registers the captured image acquired by the image acquisition unit 11 in the database 12 as a normal image.
FIG. 3 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 1. When the ignition switch of the vehicle is turned on and power supply to the driving inability detection device 10 starts, the driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 3.
 ステップST1において、画像取得部11は、撮像装置1から撮像画像を取得し、画像判定部13、目検出部14、及び登録部16へ出力する。 In step ST1, the image acquisition unit 11 acquires a captured image from the imaging device 1, and outputs the acquired image to the image determination unit 13, the eye detection unit 14, and the registration unit 16.
 ステップST2において、画像判定部13は、第1乖離度の算出に十分な量の正常画像がデータベース12に登録されているか否かを判定する。第1乖離度の算出に十分な量とは、例えば、60秒分の正常画像である。画像判定部13は、第1乖離度の算出に十分な量の正常画像がデータベース12に登録されていると判定した場合(ステップST2“YES”)、ステップST3へ進む。一方、画像判定部13は、第1乖離度の算出に十分な量の正常画像がデータベース12に登録されていないと判定した場合(ステップST2“NO”)、ステップST9へ進む。この際、画像判定部13は、イグニッションスイッチがオンされてから一定期間(例えば、60秒間)は運転者が正常状態であるものとみなし、ステップST1で取得された現在の撮像画像を正常画像としてデータベース12に登録するよう、登録部16に対して指示する。 In step ST2, the image determination unit 13 determines whether or not a sufficient amount of normal images for calculating the first divergence degree is registered in the database 12. The amount sufficient for calculating the first divergence degree is, for example, a normal image for 60 seconds. If the image determination unit 13 determines that a sufficient amount of normal images for calculating the first divergence degree is registered in the database 12 (step ST2 “YES”), the process proceeds to step ST3. On the other hand, if the image determination unit 13 determines that a sufficient amount of normal images for calculating the first divergence degree is not registered in the database 12 (step ST2 “NO”), the process proceeds to step ST9. At this time, the image determination unit 13 considers that the driver is in a normal state for a certain period (for example, 60 seconds) after the ignition switch is turned on, and uses the current captured image acquired in step ST1 as a normal image. The registration unit 16 is instructed to register in the database 12.
 ステップST3において、画像判定部13は、第1乖離度の算出に十分な量の正常画像をデータベース12から取得する。そして、画像判定部13は、データベース12から取得した正常画像の輝度とステップST1で取得された現在の撮像画像の輝度とに基づいて、第1乖離度を算出し、状態判定部15へ出力する。 In step ST3, the image determination unit 13 acquires a sufficient amount of normal images from the database 12 for calculating the first divergence degree. Then, the image determination unit 13 calculates the first divergence based on the luminance of the normal image acquired from the database 12 and the luminance of the current captured image acquired in step ST1, and outputs the first divergence degree to the state determination unit 15. .
 In step ST4, if the first divergence degree calculated by the image determination unit 13 is equal to or greater than a first threshold (step ST4 "YES"), the state determination unit 15 determines that the driver is suspected of being in a driving-incapable state and proceeds to step ST5. On the other hand, if the first divergence degree is less than the first threshold (step ST4 "NO"), the state determination unit 15 proceeds to step ST8.
 In step ST5, the eye detection unit 14 detects the driver's eyes in the current captured image acquired in step ST1, and outputs a detection result indicating whether the eyes were detected to the state determination unit 15.
 In step ST6, the state determination unit 15 proceeds to step ST7 if the eye detection unit 14 was able to detect the eyes (step ST6 "YES"), and proceeds to step ST10 if the eyes could not be detected (step ST6 "NO").
 In step ST7, the state determination unit 15 resets the duration. This duration is the time for which the driver has continuously been suspected of being in a driving-incapable state while the eye detection unit 14 has been unable to detect the eyes.
 In step ST8, the state determination unit 15 determines that the driver is in a normal state. The state determination unit 15 then instructs the registration unit 16 to register the current captured image acquired in step ST1 in the database 12 as a normal image.
 In step ST9, when the registration unit 16 receives an instruction from the image determination unit 13 in step ST2, or from the state determination unit 15 in step ST8, it registers the current captured image acquired in step ST1 in the database 12 as a normal image.
 Note that one captured image is registered in the database 12 each time the operation shown in the flowchart of FIG. 3 is performed. Therefore, in the initial state where no normal images are registered in the database 12, steps ST1, ST2, and ST9 are repeated until the amount of normal images registered in the database 12 becomes sufficient for calculating the first divergence degree in step ST2.
 In step ST10, the state determination unit 15 counts up the duration, because the driver is suspected of being in a driving-incapable state and the eye detection unit 14 cannot detect the eyes.
 In step ST11, the state determination unit 15 determines whether the duration is equal to or longer than a predetermined time. If the duration is equal to or longer than the predetermined time (step ST11 "YES"), the state determination unit 15 proceeds to step ST12; if the duration is shorter than the predetermined time (step ST11 "NO"), the operation shown in the flowchart of FIG. 3 ends.
 Note that while the operation shown in the flowchart of FIG. 3 is being repeated, the state determination unit 15 keeps holding the duration until it determines that the driver is in a normal state.
 In step ST12, the state determination unit 15 determines that the driver is in a driving-incapable state.
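 The duration-based decision of steps ST4 through ST12 can be sketched as follows. This is an illustrative reconstruction: the threshold value, the measurement of the duration in processing cycles, and the class and state names are assumptions chosen for the example, not values from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class StateDeterminationUnit:
    """Sketch of the decision loop of the state determination unit 15."""
    first_threshold: float = 30.0   # first threshold (illustrative value)
    required_duration: int = 3      # predetermined time, in processing cycles
    duration: int = 0               # cycles of sustained suspicion without eyes

    def update(self, first_divergence: float, eyes_detected: bool) -> str:
        # ST4: below the first threshold -> normal state (ST8).
        if first_divergence < self.first_threshold:
            self.duration = 0
            return "normal"
        # ST5/ST6: suspicion raised; check whether the eyes are visible.
        if eyes_detected:
            self.duration = 0           # ST7: reset the duration
            return "suspected"
        self.duration += 1              # ST10: count up the duration
        # ST11/ST12: sustained suspicion with no eyes -> incapable of driving.
        if self.duration >= self.required_duration:
            return "incapable"
        return "suspected"

unit = StateDeterminationUnit()
print(unit.update(10.0, True))    # normal
print(unit.update(50.0, False))   # suspected (duration = 1)
print(unit.update(50.0, False))   # suspected (duration = 2)
print(unit.update(50.0, False))   # incapable (duration = 3)
```

The key design point captured here is that a high first divergence degree alone never triggers the driving-incapable determination; it must persist together with undetectable eyes for the predetermined time.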
 As described above, the driving inability detection device 10 according to Embodiment 1 includes the image acquisition unit 11, the database 12, the image determination unit 13, the eye detection unit 14, the state determination unit 15, and the registration unit 16. The image acquisition unit 11 acquires captured images of the driver of a vehicle. The database 12 registers, as normal images, those captured images acquired by the image acquisition unit 11 that were captured while the driver was in a normal state. The image determination unit 13 calculates a first divergence degree indicating the degree of divergence between the normal images registered in the database 12 and a captured image acquired by the image acquisition unit 11. The eye detection unit 14 detects the driver's eyes in the captured image acquired by the image acquisition unit 11. The state determination unit 15 compares the first divergence degree calculated by the image determination unit 13 with a first threshold; when the first divergence degree is equal to or greater than the first threshold, the state determination unit 15 determines that the driver is in a driving-incapable state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if it does not. When the state determination unit 15 determines that the driver is in a normal state, the registration unit 16 registers the captured image acquired by the image acquisition unit 11 in the database 12 as a normal image. With this configuration, the driving inability detection device 10 can successively register normal images in the database 12 and use them in subsequent determinations, and can therefore detect a driving-incapable state while taking individual differences in driving posture into account. In addition, because the driving inability detection device 10 detects the driving-incapable state by combining the first divergence degree between the normal images and the current captured image with the presence or absence of eye detection, it can accurately discriminate between the normal state and the driving-incapable state.
 In addition, the image determination unit 13 of Embodiment 1 calculates the first divergence degree on the basis of the luminance of the normal images registered in the database 12 and the luminance of the captured image acquired by the image acquisition unit 11. Accordingly, when the luminance of the current captured image diverges from that of the normal images, the driving inability detection device 10 can determine that the driver's face may have moved from its position during normal driving and that the driver is suspected of being in a driving-incapable state.
Embodiment 2.
 FIG. 4 is a block diagram illustrating a configuration example of the driving inability detection device 10 according to Embodiment 2. The driving inability detection device 10 according to Embodiment 2 has a configuration in which a vehicle device 2 and a vehicle information acquisition unit 21 are added to the driving inability detection device 10 of Embodiment 1 shown in FIG. 1. In FIG. 4, parts that are the same as or correspond to those in FIG. 1 are given the same reference numerals, and their description is omitted.
 The vehicle device 2 is, for example, a steering sensor, a speedometer, a gear position sensor, an accelerator position sensor, a brake pressure sensor, an accelerometer, a thermometer, a hygrometer, a seat position sensor, or a GPS (Global Positioning System) receiver. The vehicle device 2 is connected to the driving inability detection device 10 via a CAN (Controller Area Network) or the like, and outputs vehicle information including information such as the steering angle, vehicle speed, or gear position to the vehicle information acquisition unit 21.
 The vehicle information acquisition unit 21 acquires vehicle information indicating the state of the vehicle from the vehicle device 2, and outputs it to the image determination unit 13a and the registration unit 16a.
 In the database 12a, those captured images acquired by the image acquisition unit 11 that were captured while the driver was in a normal state are registered in association with the vehicle information acquired by the vehicle information acquisition unit 21. That is, a normal image and the vehicle information indicating the state of the vehicle when that normal image was captured are registered in the database 12a in association with each other.
 FIG. 5 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 2. When the ignition switch of the vehicle is turned on and power supply to the driving inability detection device 10 starts, the driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 5.
 In step ST1, the image acquisition unit 11 acquires a captured image from the imaging device 1 and outputs it to the image determination unit 13a, the eye detection unit 14, and the registration unit 16a.
 In step ST21, the vehicle information acquisition unit 21 acquires vehicle information from the vehicle device 2 and outputs it to the image determination unit 13a and the registration unit 16a.
 In step ST2a, the image determination unit 13a determines whether a sufficient amount of normal images for calculating the first divergence degree has been registered in the database 12a. At this time, the image determination unit 13a selects, as the normal images used for calculating the first divergence degree, normal images associated with vehicle information that matches the current vehicle information acquired in step ST21. For example, the image determination unit 13a selects normal images captured at a steering angle and vehicle speed matching the current steering angle and vehicle speed. Note that the current vehicle information and the vehicle information registered in the database 12a need not match exactly; they need only be similar enough to be regarded as substantially matching. If the image determination unit 13a determines that a sufficient amount of normal images has been registered in the database 12a (step ST2a "YES"), the process proceeds to step ST3. On the other hand, if the image determination unit 13a determines that a sufficient amount of normal images has not been registered in the database 12a (step ST2a "NO"), the process proceeds to step ST9a. In the latter case, the image determination unit 13a regards the driver as being in a normal state for a certain period (for example, 60 seconds) after the ignition switch is turned on, and instructs the registration unit 16a to register the current captured image acquired in step ST1 in the database 12a as a normal image, in association with the vehicle information acquired in step ST21.
 In step ST3, the image determination unit 13a acquires from the database 12a the normal images associated with vehicle information matching the current vehicle information acquired in step ST21, and calculates the first divergence degree. Steps ST4 to ST8 and ST10 to ST12 following step ST3 are the same as steps ST4 to ST8 and ST10 to ST12 of FIG. 3.
 In step ST9a, when the registration unit 16a receives an instruction from the image determination unit 13a in step ST2a, or from the state determination unit 15 in step ST8, it treats the current captured image acquired in step ST1 as a normal image and registers it in the database 12a in association with the current vehicle information acquired in step ST21.
 As described above, the driving inability detection device 10 according to Embodiment 2 includes the vehicle information acquisition unit 21, which acquires vehicle information indicating the state of the vehicle. The database 12a registers a normal image in association with the vehicle information indicating the state of the vehicle when that normal image was captured. The image determination unit 13a calculates the first divergence degree using those normal images registered in the database 12a that are associated with vehicle information matching the vehicle information acquired by the vehicle information acquisition unit 21. With this configuration, the driving inability detection device 10 can detect a driving-incapable state more accurately by taking differences in the state of the vehicle into account, in addition to individual differences in driving posture.
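 The vehicle-information-keyed selection of normal images can be sketched as follows. The disclosure only requires the stored and current vehicle information to be similar enough to be regarded as substantially matching; the explicit tolerances on steering angle and speed below, and all names, are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class VehicleInfo:
    steering_angle_deg: float
    speed_kmh: float

def substantially_matches(a: VehicleInfo, b: VehicleInfo,
                          angle_tol: float = 5.0, speed_tol: float = 10.0) -> bool:
    """Model 'substantially matching' vehicle information with illustrative
    tolerances on steering angle and vehicle speed."""
    return (abs(a.steering_angle_deg - b.steering_angle_deg) <= angle_tol
            and abs(a.speed_kmh - b.speed_kmh) <= speed_tol)

def select_normal_images(database, current: VehicleInfo):
    """database: list of (normal_image, VehicleInfo) pairs, standing in for
    the database 12a. Returns the normal images whose associated vehicle
    information substantially matches the current vehicle information."""
    return [img for img, info in database if substantially_matches(info, current)]

db = [("img_straight", VehicleInfo(0.0, 60.0)),
      ("img_curve", VehicleInfo(30.0, 40.0)),
      ("img_straight2", VehicleInfo(2.0, 55.0))]
print(select_normal_images(db, VehicleInfo(1.0, 58.0)))
```

Keying the comparison on vehicle state in this way avoids, for example, comparing the current frame taken while cornering against normal images taken while driving straight, where the driver's posture legitimately differs.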
Embodiment 3.
 FIG. 6 is a block diagram illustrating a configuration example of the driving inability detection device 10 according to Embodiment 3. The driving inability detection device 10 according to Embodiment 3 has a configuration in which a face movement determination unit 31 is added to the driving inability detection device 10 of Embodiment 1 shown in FIG. 1. In FIG. 6, parts that are the same as or correspond to those in FIG. 1 are given the same reference numerals, and their description is omitted.
 The face movement determination unit 31 compares the normal images registered in the database 12 with the captured images acquired by the image acquisition unit 11, calculates a second divergence degree between the two sets of images, and outputs the calculated second divergence degree to the state determination unit 15b. The second divergence degree indicates the degree of divergence between the face positions in the two sets of images. When the second divergence degree is large, it is highly likely that the driver's face has moved from its position during normal driving, and the driver is suspected of being in a driving-incapable state.
 An example of how the face movement determination unit 31 calculates the second divergence degree is described below.
 The face movement determination unit 31 detects a face position in each of a plurality of captured images captured in the most recent time slot, extending from a certain time ago (for example, 10 seconds ago) to the present. The face movement determination unit 31 then creates a determination histogram representing the distribution of face positions over the most recent 10 seconds. Note that the captured images of the most recent 10 seconds may include images that the state determination unit 15b has determined to be normal images. The captured images of this most recent time slot may be held temporarily either by the database 12 or by the face movement determination unit 31.
 The face movement determination unit 31 also acquires, from among the normal images registered in the database 12, the plurality of normal images captured in the time slot extending from the moment the ignition switch was turned on to the aforementioned certain time before the present. For example, when the current time is 10 minutes after the ignition switch was turned on, the face movement determination unit 31 acquires from the database 12 the normal images of the preceding 9 minutes and 50 seconds. The face movement determination unit 31 detects a face position in each of the normal images acquired from the database 12, and creates a reference histogram representing the distribution of face positions over those 9 minutes and 50 seconds. Because the determination histogram and the reference histogram cover different numbers of images, the face movement determination unit 31 normalizes the frequencies of both histograms to relative values.
 The face movement determination unit 31 compares the determination histogram of the most recent 10 seconds with the reference histogram of the preceding 9 minutes and 50 seconds, and calculates the second divergence degree. For example, the face movement determination unit 31 calculates the correlation value of the two histograms using the histogram intersection method, and takes the reciprocal of the correlation value as the second divergence degree. Because the face movement determination unit 31 determines movement of the face position using a plurality of normal images and a plurality of captured images, the device avoids immediately misjudging the driver as being in a driving-incapable state when the driver merely changes posture while driving.
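 The histogram intersection step can be sketched as follows. The one-dimensional face-position representation, the bin layout, and the small epsilon guarding the reciprocal are illustrative choices not specified in the disclosure.

```python
import numpy as np

def second_divergence(reference_positions, recent_positions,
                      bins=10, rng=(0.0, 1.0)):
    """Second divergence degree via the histogram intersection method.

    reference_positions: face positions from normal images (reference histogram)
    recent_positions: face positions from the most recent frames
                      (determination histogram)
    Both histograms are normalized to relative frequencies because they
    cover different numbers of images.
    """
    ref_hist, _ = np.histogram(reference_positions, bins=bins, range=rng)
    det_hist, _ = np.histogram(recent_positions, bins=bins, range=rng)
    ref_rel = ref_hist / ref_hist.sum()
    det_rel = det_hist / det_hist.sum()
    # Histogram intersection: sum of bin-wise minima;
    # 1.0 means the two distributions are identical.
    correlation = float(np.minimum(ref_rel, det_rel).sum())
    eps = 1e-6  # guard against division by zero for disjoint histograms
    return 1.0 / (correlation + eps)  # reciprocal of the correlation value

# Identical distributions give a correlation of 1.0, hence a divergence near 1.0;
# disjoint distributions give a very large divergence.
same = [0.1, 0.2, 0.3, 0.4]
print(round(second_divergence(same, same), 3))  # 1.0
```

A face that has slid out of its usual region fills bins the reference histogram leaves empty, so the intersection shrinks and the reciprocal grows, which is the behavior the comparison with the second threshold depends on.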
 The next time (for example, when 10 minutes and 1 second have elapsed since the ignition switch was turned on), the face movement determination unit 31 creates the determination histogram using the captured images of the most recent 10 seconds, and creates the reference histogram using those normal images registered in the database 12 that were captured between the moment the ignition switch was turned on and 9 minutes and 51 seconds later. When creating the reference histogram, the face movement determination unit 31 gives a small weight to the frequencies representing the face positions in the normal images of the past 9 minutes and 50 seconds, and a large weight to the frequency representing the face position in the newly added normal image at the 9-minutes-51-seconds mark. As a result, the face positions of normal images captured at newer times are reflected in the reference histogram more strongly than those captured at older times.
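 One way to realize such recency weighting, swapped in here for illustration, is an exponentially decaying update of the reference histogram each time a new normal image is registered; the decay factor and all names below are assumptions, not values from the disclosure.

```python
import numpy as np

def update_reference_histogram(reference_rel, new_position,
                               bins=10, rng=(0.0, 1.0), alpha=0.1):
    """Blend a newly registered normal image's face position into the
    reference histogram, weighting new observations more heavily than old.

    reference_rel: current reference histogram as relative frequencies
    new_position: face position detected in the newly registered normal image
    alpha: weight of the new observation (older frequencies decay by 1 - alpha)
    """
    new_hist, _ = np.histogram([new_position], bins=bins, range=rng)
    updated = (1.0 - alpha) * np.asarray(reference_rel) + alpha * new_hist
    return updated / updated.sum()  # keep the histogram as relative frequencies

ref = np.zeros(10)
ref[5] = 1.0                                     # all past positions in bin 5
updated = update_reference_histogram(ref, 0.95)  # new position falls in bin 9
print(updated[5], updated[9])
```

With repeated updates the contribution of each old frame decays geometrically, so a driver who gradually settles into a new seating position shifts the reference histogram rather than permanently registering as divergent.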
 FIG. 7 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 3. When the ignition switch of the vehicle is turned on and power supply to the driving inability detection device 10 starts, the driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 7. Steps ST1 to ST3 and ST5 to ST12 in FIG. 7 are the same as steps ST1 to ST3 and ST5 to ST12 in FIG. 3.
 In step ST31, the face movement determination unit 31 acquires from the database 12 the normal images captured in the time slot extending from the moment the ignition switch was turned on to the aforementioned certain time before the present. The face movement determination unit 31 then calculates the second divergence degree on the basis of the face positions in the normal images acquired from the database 12 and the face positions in the captured images of the most recent time slot, which includes the current captured image acquired in step ST1, and outputs it to the state determination unit 15b.
 In step ST32, the state determination unit 15b compares the first divergence degree calculated by the image determination unit 13 with the first threshold, and also compares the second divergence degree calculated by the face movement determination unit 31 with a predetermined second threshold. If the first divergence degree is equal to or greater than the first threshold and the second divergence degree is equal to or greater than the second threshold (step ST32 "YES"), the state determination unit 15b determines that the driver is suspected of being in a driving-incapable state and proceeds to step ST5. On the other hand, if the first divergence degree is less than the first threshold or the second divergence degree is less than the second threshold (step ST32 "NO"), the state determination unit 15b proceeds to step ST8.
 As described above, the driving inability detection device 10 according to Embodiment 3 calculates the second divergence degree, which indicates the degree of divergence between the driver's face position in the normal images registered in the database 12 and the driver's face position in the captured images acquired by the image acquisition unit 11. When the first divergence degree calculated by the image determination unit 13 is equal to or greater than the first threshold and the second divergence degree calculated by the face movement determination unit 31 is equal to or greater than the second threshold, the state determination unit 15b determines that the driver is in a driving-incapable state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if it does not. With this configuration, when the face position in the captured images diverges from that in the normal images, the driving inability detection device 10 can determine that the driver's face may have moved from its position during normal driving and that the driver is suspected of being in a driving-incapable state. In addition, because the driving inability detection device 10 detects the driving-incapable state by combining the first and second divergence degrees between the normal images and the current captured image with the presence or absence of eye detection, it can discriminate between the normal state and the driving-incapable state with even greater accuracy.
 Note that the configuration of the vehicle device 2 and the vehicle information acquisition unit 21 of Embodiment 2 may be combined with the driving inability detection device 10 of Embodiment 3.
Embodiment 4.
 The configuration of the driving inability detection device 10 according to Embodiment 4 is identical in the drawing to the configuration of Embodiment 3 shown in FIG. 6, so FIG. 6 is referred to below.
 FIG. 8 is a flowchart showing an operation example of the driving inability detection device 10 according to Embodiment 4. When the ignition switch of the vehicle is turned on and power supply to the driving inability detection device 10 starts, the driving inability detection device 10 periodically repeats the operation shown in the flowchart of FIG. 8. Steps ST1 to ST3, ST5 to ST12, ST31, and ST32 in FIG. 8 are the same as steps ST1 to ST3, ST5 to ST12, ST31, and ST32 in FIG. 7.
 In step ST41, if the face movement determination unit 31 was able to detect the driver's face position in the current captured image acquired in step ST1 (step ST41 "YES"), the process proceeds to step ST31; if the face position cannot be detected (step ST41 "NO"), the process proceeds to step ST42.
 In step ST42, because the face movement determination unit 31 cannot detect the driver's face position in the captured image, the state determination unit 15b decides to perform only the threshold determination using the first divergence degree calculated by the image determination unit 13 in step ST3, without performing the threshold determination using the second divergence degree that the face movement determination unit 31 would otherwise have calculated. When it decides to perform only the threshold determination using the first divergence degree, the state determination unit 15b changes the first threshold to a larger value than when the threshold determination uses both the first and second divergence degrees (step ST32).
 In step ST43, if the first divergence degree calculated by the image determination unit 13 is equal to or greater than the changed first threshold (step ST43 "YES"), the state determination unit 15b determines that the driver is suspected of being in a driving-incapable state and proceeds to step ST5. On the other hand, if the first divergence degree is less than the changed first threshold (step ST43 "NO"), the state determination unit 15b proceeds to step ST8.
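 The branching of steps ST41 through ST43 can be sketched as follows; the concrete threshold values and the function name are illustrative assumptions chosen for the example.

```python
def suspicion_raised(first_divergence, second_divergence, face_detected,
                     first_threshold=30.0, second_threshold=2.0,
                     raised_first_threshold=50.0):
    """Decide whether the driver is suspected of being in a
    driving-incapable state.

    When the face position cannot be detected (ST41 "NO"), only the first
    divergence degree is used, compared against a larger first threshold
    (ST42/ST43); otherwise both divergence degrees are used (ST32).
    """
    if not face_detected:
        return first_divergence >= raised_first_threshold
    return (first_divergence >= first_threshold
            and second_divergence >= second_threshold)

print(suspicion_raised(40.0, 3.0, True))    # True: both thresholds exceeded
print(suspicion_raised(40.0, None, False))  # False: 40.0 < raised threshold 50.0
```

Raising the first threshold when the second divergence degree is unavailable compensates for losing one of the two checks, so a single moderately high first divergence degree does not by itself raise suspicion.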
 As described above, when the face movement determination unit 31 cannot detect the driver's face position in the captured image, the state determination unit 15b of Embodiment 4 compares the first divergence degree calculated by the image determination unit 13 with the first threshold; when the first divergence degree is equal to or greater than the first threshold, the state determination unit 15b determines that the driver is in a driving-incapable state if the time during which the eye detection unit 14 cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if it does not. By not performing the threshold determination using an inaccurate second divergence degree, the driving inability detection device 10 can accurately discriminate between the normal state and the driving-incapable state.
 In addition, when the face movement determination unit 31 cannot detect the driver's face position in the captured image, the state determination unit 15b of the fourth embodiment changes the first threshold to a larger value. As a result, the inoperability detection device 10 does not determine that the driver is suspected of being in an inoperable state merely because the driver's face position has moved slightly, but only when it has moved more substantially. Therefore, when performing only the threshold determination with the first divergence degree and not the second, the inoperability detection device 10 can avoid excessive determinations that the driver is suspected of being in an inoperable state.
 Note that the configuration of the vehicle equipment 2 and the vehicle information acquisition unit 21 of the second embodiment may be combined with the inoperability detection device 10 of the fourth embodiment.
 In the first to fourth embodiments, when a plurality of drivers drive the same vehicle, the inoperability detection device 10 may perform personal authentication based on the captured image or the like and build a database 12 for each driver. The inoperability detection device 10 may then determine whether a driver is in the normal state or the inoperable state using the normal images in the database 12 corresponding to that driver.
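 A minimal sketch of such a per-driver normal-image store, assuming an external personal-authentication step supplies a driver identifier (all names here are illustrative stand-ins for database 12):

```python
from collections import defaultdict

class PerDriverNormalImageDB:
    """Normal-image store keyed by driver. Assumes some external personal
    authentication step yields driver_id; a stand-in for database 12."""

    def __init__(self):
        self._images = defaultdict(list)  # driver_id -> list of normal images

    def register_normal_image(self, driver_id, image):
        # Called when the state determination unit judges this driver normal.
        self._images[driver_id].append(image)

    def normal_images(self, driver_id):
        # Images used to compute the first divergence degree for this driver.
        return list(self._images[driver_id])
```

 Keying the store by driver keeps one driver's posture habits from skewing the divergence computed for another driver of the same vehicle.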
 Finally, the hardware configuration of the inoperability detection device 10 according to each embodiment will be described.
 FIGS. 9A and 9B are diagrams illustrating hardware configuration examples of the inoperability detection device 10 according to each embodiment. The databases 12 and 12a in the inoperability detection device 10 are implemented by the memory 102. The functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 in the inoperability detection device 10 are realized by a processing circuit. That is, the inoperability detection device 10 includes a processing circuit for realizing these functions. The processing circuit may be the processing circuit 100 as dedicated hardware, or may be the processor 101 that executes a program stored in the memory 102.
 As shown in FIG. 9A, when the processing circuit is dedicated hardware, the processing circuit 100 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof. The functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 may be realized by a plurality of processing circuits 100, or the functions of these units may be collectively realized by a single processing circuit 100.
 As shown in FIG. 9B, when the processing circuit is the processor 101, the functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 are realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 102. The processor 101 realizes the function of each unit by reading out and executing the program stored in the memory 102. That is, the inoperability detection device 10 includes the memory 102 for storing a program that, when executed by the processor 101, results in the execution of the steps shown in the flowchart of FIG. 3 and elsewhere. This program can also be said to cause a computer to execute the procedures or methods of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31.
 Here, the processor 101 is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, or the like.
 The memory 102 may be a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), an EPROM (Erasable Programmable ROM), or a flash memory; a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a CD (Compact Disc) or a DVD (Digital Versatile Disc).
 Note that the functions of the image acquisition unit 11, the image determination units 13 and 13a, the eye detection unit 14, the state determination units 15 and 15b, the registration units 16 and 16a, the vehicle information acquisition unit 21, and the face movement determination unit 31 may be realized partly by dedicated hardware and partly by software or firmware. In this way, the processing circuit in the inoperability detection device 10 can realize the above functions by hardware, software, firmware, or a combination thereof.
 Within the scope of the present invention, the embodiments may be freely combined, any component of any embodiment may be modified, and any component of any embodiment may be omitted.
 Since the inoperability detection device according to the present invention detects the inoperable state in consideration of individual differences in driving posture, it is suitable for use in an occupant monitoring system or the like that monitors occupants.
 1 imaging device, 2 vehicle equipment, 10 inoperability detection device, 11 image acquisition unit, 12, 12a database, 13, 13a image determination unit, 14 eye detection unit, 15, 15b state determination unit, 16, 16a registration unit, 21 vehicle information acquisition unit, 31 face movement determination unit, 100 processing circuit, 101 processor, 102 memory.

Claims (7)

  1.  An inoperability detection device comprising:
     an image acquisition unit that acquires a captured image of a driver of a vehicle;
     a database that registers, as a normal image, a captured image acquired by the image acquisition unit when the driver is in a normal state;
     an image determination unit that calculates a first divergence degree indicating a degree of divergence between a normal image registered in the database and a captured image acquired by the image acquisition unit;
     an eye detection unit that detects the driver's eyes from the captured image acquired by the image acquisition unit;
     a state determination unit that compares the first divergence degree calculated by the image determination unit with a first threshold and, when the first divergence degree is greater than or equal to the first threshold, determines that the driver is in an inoperable state if the time during which the eye detection unit cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer; and
     a registration unit that, when the state determination unit determines that the driver is in a normal state, registers the captured image acquired by the image acquisition unit in the database as a normal image.
  2.  The inoperability detection device according to claim 1, further comprising a vehicle information acquisition unit that acquires vehicle information indicating a state of the vehicle, wherein
     the database registers a normal image in association with vehicle information indicating the state of the vehicle when the normal image was captured, and
     the image determination unit calculates the first divergence degree using, among the normal images registered in the database, a normal image associated with vehicle information that matches the vehicle information acquired by the vehicle information acquisition unit.
  3.  The inoperability detection device according to claim 1, wherein the image determination unit calculates the first divergence degree based on the luminance of a normal image registered in the database and the luminance of the captured image acquired by the image acquisition unit.
  4.  The inoperability detection device according to claim 3, further comprising a face movement determination unit that calculates a second divergence degree indicating a degree of divergence between the driver's face position in a normal image registered in the database and the driver's face position in the captured image acquired by the image acquisition unit, wherein
     when the first divergence degree calculated by the image determination unit is greater than or equal to the first threshold and the second divergence degree calculated by the face movement determination unit is greater than or equal to a second threshold, the state determination unit determines that the driver is in an inoperable state if the time during which the eye detection unit cannot detect the driver's eyes continues for the predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer.
  5.  The inoperability detection device according to claim 4, wherein, when the face movement determination unit cannot detect the driver's face position in the captured image, the state determination unit compares the first divergence degree calculated by the image determination unit with the first threshold and, when the first divergence degree is greater than or equal to the first threshold, determines that the driver is in an inoperable state if the time during which the eye detection unit cannot detect the driver's eyes continues for the predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer.
  6.  The inoperability detection device according to claim 5, wherein the state determination unit changes the first threshold to a larger value when the face movement determination unit cannot detect the driver's face position in the captured image.
  7.  An inoperability detection method comprising:
     a step in which an image acquisition unit acquires a captured image of a driver of a vehicle;
     a step in which an image determination unit refers to a database that registers, as a normal image, a captured image acquired by the image acquisition unit when the driver is in a normal state, and calculates a first divergence degree indicating a degree of divergence between a normal image registered in the database and the captured image acquired by the image acquisition unit;
     a step in which an eye detection unit detects the driver's eyes from the captured image acquired by the image acquisition unit;
     a step in which a state determination unit compares the first divergence degree calculated by the image determination unit with a first threshold and, when the first divergence degree is greater than or equal to the first threshold, determines that the driver is in an inoperable state if the time during which the eye detection unit cannot detect the driver's eyes continues for a predetermined time or longer, and determines that the driver is in a normal state if that time does not continue for the predetermined time or longer; and
     a step in which a registration unit registers, when the state determination unit determines that the driver is in a normal state, the captured image acquired by the image acquisition unit in the database as a normal image.
PCT/JP2018/018937 2018-05-16 2018-05-16 Driving inability detection device and driving inability detection method WO2019220572A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/018937 WO2019220572A1 (en) 2018-05-16 2018-05-16 Driving inability detection device and driving inability detection method

Publications (1)

Publication Number Publication Date
WO2019220572A1 true WO2019220572A1 (en) 2019-11-21

Family

ID=68539993

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/018937 WO2019220572A1 (en) 2018-05-16 2018-05-16 Driving inability detection device and driving inability detection method

Country Status (1)

Country Link
WO (1) WO2019220572A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009163434A (en) * 2007-12-28 2009-07-23 Toyota Motor Corp Emergency evacuation system and method
JP2010097379A (en) * 2008-10-16 2010-04-30 Denso Corp Driver monitoring device and program for driver monitoring device
JP2013120446A (en) * 2011-12-06 2013-06-17 Denso Corp Image processing device
JP2016009255A (en) * 2014-06-23 2016-01-18 株式会社デンソー Driver's undrivable state detector
JP2016027452A (en) * 2014-06-23 2016-02-18 株式会社デンソー Driving disabled state detector of driver


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18919099; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2020518886; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18919099; Country of ref document: EP; Kind code of ref document: A1)