WO2019030855A1 - Drive incapability state determination device and drive incapability state determination method - Google Patents


Info

Publication number
WO2019030855A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
driver
state
state determination
unit
Prior art date
Application number
PCT/JP2017/028931
Other languages
French (fr)
Japanese (ja)
Inventor
慶友樹 小川
敬 平野
政信 大澤
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to JP2019535504A priority Critical patent/JP6594595B2/en
Priority to PCT/JP2017/028931 priority patent/WO2019030855A1/en
Publication of WO2019030855A1 publication Critical patent/WO2019030855A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/08Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to drivers or passengers
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • The present invention relates to a technique for determining an inoperable state of a driver driving a vehicle.
  • One known method of detecting the driver's inoperable state from a captured image is to determine the inoperable state when a state in which the driver's face cannot be detected from the captured image continues for a predetermined time.
  • For example, an inoperable state detection device has been disclosed in which the driver's face is sequentially detected from images captured by a camera mounted on the vehicle, and the driver is determined to be inoperable when the face detected while the vehicle is traveling moves out of a predetermined range of the image.
  • The present invention has been made to solve the above problem, and its purpose is to prevent the driver from being erroneously determined to be inoperable when the driver's face is blocked by a shield.
  • An inoperable state determination device according to the invention includes: an image acquisition unit that acquires a captured image of the vehicle interior; a face detection unit that detects the driver's face from the captured image acquired by the image acquisition unit; a determination area acquisition unit that acquires a determination area image from the acquired captured image; and a state determination unit that, when a state in which the face detection unit cannot detect the driver's face continues for a first threshold time, determines the state of the driver based on the difference between the determination area image in the captured image and the determination area image in a comparison image.
  • With this configuration, when the driver's face is blocked by a shield, the driver can be prevented from being erroneously determined to be inoperable.
  • FIG. 1 is a block diagram showing a configuration of an inoperable state determination device according to Embodiment 1.
  • 2A and 2B are diagrams showing an example of processing of the face detection unit and the determination area acquisition unit of the inoperable state determination device according to the first embodiment.
  • FIGS. 3A and 3B are diagrams showing an example of a hardware configuration of the inoperable state determination device according to the first embodiment.
  • FIGS. 4A, 4B, 4C, and 4D are diagrams showing the determination processing of the first state determination unit of the inoperable state determination device according to Embodiment 1.
  • FIG. 5 is a flowchart showing the operation of the inoperable state determination device according to the first embodiment.
  • FIG. 6 is a flowchart showing the operation of the inoperable state determination device according to the first embodiment in the alert state.
  • FIG. 7 is a diagram showing a display example based on the determination result of the inoperable state determination device according to the first embodiment.
  • FIG. 1 is a block diagram showing the configuration of the inoperable state determination device 100 according to the first embodiment.
  • The inoperable state determination device 100 first performs processing for detecting the driver's face; when the face is not successfully detected, it compares a first captured image of the driver's seat acquired from the imaging device in the vehicle cabin with a second image of the driver's seat, captured in advance by the same imaging device, in which no driver is seated.
  • The inoperable state determination device 100 then determines whether or not the driver is inoperable based on the result of comparing the first captured image with the second image.
  • Specifically, the inoperable state determination device 100 calculates the difference between the first captured image of the driver's seat and the second image of the driver's seat. If the calculated difference is equal to or greater than a reference value, the inoperable state determination device 100 determines that the driver is not inoperable. On the other hand, when the calculated difference is less than the reference value, the inoperable state determination device 100 determines that the driver is inoperable. As a result, when face detection fails because the driver's face is covered by the driver's hand or by a shield such as a towel, the driver is not erroneously determined to be inoperable merely because the face cannot be detected.
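The decision rule just described can be sketched in a few lines of Python. This is an illustrative simplification, not the patent's implementation: images are represented as flat lists of 8-bit luminance values of equal length, and `reference_value` stands in for the predetermined reference value (all names are hypothetical).

```python
def is_possibly_inoperable(first_seat_image, empty_seat_image, reference_value):
    """Return True if the driver may be inoperable.

    A large pixel-wise difference means something (the driver's head,
    a hand, or a towel-like shield) still occupies the seat area, so a
    mere face-detection failure does not trigger the inoperable
    determination; only a seat that looks empty does.
    """
    diff = sum(abs(a - b) for a, b in zip(first_seat_image, empty_seat_image))
    diff /= len(first_seat_image)  # mean absolute difference per pixel
    return diff < reference_value  # small difference: seat looks empty
```

The key point is the direction of the comparison: a difference at or above the reference value is evidence that the driver is present, even when the face itself cannot be found.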
  • The inoperable state determination device 100 includes an image acquisition unit 101, a face detection unit 102, a determination area acquisition unit 103, a first state determination unit (state determination unit) 104, a comparison image storage unit 105, a face information acquisition unit 106, and a second state determination unit 107.
  • a camera 200, a warning device 300 and a vehicle control device 400 are connected to the inoperable state determination device 100.
  • the camera 200 is an imaging device for imaging the interior of a vehicle equipped with the inoperable state determination device 100.
  • the camera 200 is, for example, an infrared camera.
  • the camera 200 is installed at a position where at least the head of the driver sitting on the driver's seat can be photographed. Also, the camera 200 is configured of one or more cameras.
  • The warning device 300 generates information indicating an alert, or information indicating a warning, for the driver of the vehicle by voice, or by voice and display, based on the determination result of the inoperable state determination device 100.
  • The warning device 300 also performs output control of the generated information indicating the alert or the warning.
  • Based on this output control, a speaker, or a speaker and a display, for example, output by voice, or by voice and display, the information indicating the alert or the warning.
  • the vehicle control device 400 controls the traveling of the vehicle based on the determination result of the inoperable state determination device 100.
  • the image acquisition unit 101 acquires a captured image captured by the camera 200.
  • The captured image is an image in which at least the head of the driver appears when the driver is seated in the driver's seat in a normal driving posture.
  • the image acquisition unit 101 outputs the acquired captured image to the face detection unit 102 and the determination area acquisition unit 103.
  • the face detection unit 102 analyzes the captured image acquired by the image acquisition unit 101 to detect a distribution of luminance values.
  • The face detection unit 102 performs processing for detecting the driver's face from the captured image by comparing the detected luminance distribution with a matching luminance distribution obtained in advance by learning.
  • The matching luminance distribution is obtained statistically by learning, from a plurality of captured images in which human faces appear, the distribution of luminance values produced when a human face is imaged, and represents typical luminance distribution patterns of faces.
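The patent does not specify how the luminance distributions are matched. As one illustrative possibility, a normalized luminance histogram of an image region could be compared against a learned reference distribution by histogram intersection; the sketch below is written under that assumption, and both function names are invented.

```python
def luminance_histogram(pixels, bins=8):
    """Normalized distribution of 8-bit luminance values (sums to 1.0)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical distributions, lower
    for distributions that overlap less."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

A face would then be considered detected in a candidate region when its similarity to the learned face distribution exceeds some threshold; the threshold itself would have to be tuned.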
  • When the face detection unit 102 cannot detect the driver's face, it outputs information indicating failure in face detection to the first state determination unit 104 and the determination area acquisition unit 103.
  • When the face detection unit 102 detects the driver's face, it outputs information indicating success in face detection, together with the captured image, to the face information acquisition unit 106.
  • The determination area acquisition unit 103 acquires an image of the determination area (hereinafter referred to as a determination area image) from the captured image acquired by the image acquisition unit 101.
  • the determination area is an area including all the headrests of the driver's seat.
  • the determination area is an area set in advance based on the position of the driver's seat and the position of the camera 200 for each vehicle type. Further, the determination area is an area determined in consideration of the adjustment width of the driver's seat in the front-rear direction, the adjustment width of the driver's seat inclination, and the adjustment width of the driver's seat's headrest in the vertical direction.
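Acquiring the determination area then amounts to cropping a preset rectangle out of the captured image. A minimal sketch, assuming the image is a 2-D list of luminance rows and the rectangle has been calibrated per vehicle type as described above (the coordinate convention and function name are assumptions):

```python
def crop_determination_area(image, top, left, height, width):
    """Extract the preset determination area (the region sized to
    contain the whole headrest across all seat adjustments) from a
    captured image represented as a 2-D list of luminance rows."""
    return [row[left:left + width] for row in image[top:top + height]]
```

Because the rectangle is fixed per vehicle type, this step needs no runtime detection; it is a constant-time slice.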
  • the determination area acquisition unit 103 outputs the acquired determination area image to the first state determination unit 104.
  • FIG. 2 is a diagram showing an example of processing of the face detection unit 102 and the determination area acquisition unit 103 of the inoperable state determination apparatus 100 according to the first embodiment.
  • FIG. 2A illustrates an example of a determination region in a captured image obtained by capturing the driver's seat from the front, and
  • FIG. 2B illustrates an example of a determination region in a captured image obtained by capturing the driver's seat from obliquely in front.
  • 2A and 2B show an example in which the face detection unit 102 succeeds in detecting the face Xa of the driver X.
  • the face detection unit 102 detects the face Xa of the driver X from the captured image A shown in FIGS. 2A and 2B.
  • the determination area acquisition unit 103 acquires an image of the determination area B, which is an area determined in advance, from the captured image A.
  • the determination area B is an area including all the headrests H of the driver's seat.
  • the determination area acquisition unit 103 outputs the acquired image of the determination area B to the first state determination unit 104.
  • the first state determination unit 104 determines the state of the driver.
  • the first state determination unit 104 compares the determination area image in the captured image used in the face detection input from the determination area acquisition unit 103 with the determination area image in the comparison image captured in advance.
  • the first state determination unit 104 determines the state of the driver based on the obtained comparison result.
  • the comparison image is an image captured in advance by the camera 200, and is a captured image obtained by capturing a driver's seat on which the driver is not seated.
  • When information indicating failure in face detection is input from the face detection unit 102 continuously for a time judged to require attention (hereinafter referred to as a first threshold time), the first state determination unit 104 calculates the difference between the determination area image in the captured image used for face detection and the determination area image in the comparison image. Based on the calculated difference and a reference value, the first state determination unit 104 determines whether the driver is in a state requiring a warning (hereinafter referred to as an alert state). Furthermore, the first state determination unit 104 determines that the driver is inoperable when the alert state continues until a second threshold time elapses.
  • The second threshold time is obtained by adding, to the first threshold time, a time for determining that the driver is inoperable.
  • the comparison image storage unit 105 stores a comparison image to be referred to when the first state determination unit 104 compares the determination region images.
  • the comparison image is an image captured in advance by the camera 200, and is a captured image obtained by capturing a driver's seat on which the driver is not seated.
  • When information indicating success in face detection and the captured image used in the detection are input from the face detection unit 102, the face information acquisition unit 106 analyzes the captured image and acquires position information of the elements constituting the driver's face, such as the eyes, nose, and mouth.
  • the face information acquisition unit 106 acquires face information for determining the inoperable state, such as the degree of eye opening of the driver and the face orientation of the driver, from the acquired position information of the components of the face of the driver.
  • the face information acquisition unit 106 outputs the acquired face information to the second state determination unit 107.
  • the second state determination unit 107 detects, for example, the degree of eye opening or the face orientation from the face information acquired by the face information acquisition unit 106. The second state determination unit 107 determines whether or not the driver is incapable of driving based on the acquired degree of eye opening or face orientation. If the second state determination unit 107 determines that the driver is inoperable, the second state determination unit 107 outputs the determination result to the warning device 300 or the vehicle control device 400.
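As an illustration of the kind of rule the second state determination unit 107 could apply, the following sketch flags the driver as inoperable when the eye-opening degree is very low or the face is turned far away. The threshold values, units, and function name are assumptions for illustration, not values from the patent.

```python
def second_state_inoperable(eye_open_degree, face_yaw_deg,
                            eye_threshold=0.2, yaw_threshold=45.0):
    """Judge the driver inoperable when the eyes are nearly closed
    (degree of eye opening below eye_threshold, on a 0..1 scale) or
    the face is turned more than yaw_threshold degrees from straight
    ahead. Both thresholds are illustrative assumptions."""
    return eye_open_degree < eye_threshold or abs(face_yaw_deg) > yaw_threshold
```

In practice such a rule would also be time-filtered (a single blink must not count), in the same spirit as the threshold times used by the first state determination unit 104.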
  • FIG. 3A and FIG. 3B are diagrams showing an example of a hardware configuration of the inoperable state determination device 100.
  • Each function of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 in the inoperable state determination device 100 is realized by a processing circuit. That is, the inoperable state determination device 100 includes a processing circuit for realizing each of the above functions.
  • The processing circuit may be the processing circuit 100a, which is dedicated hardware as shown in FIG. 3A, or may be the processor 100b, which executes programs stored in the memory 100c as shown in FIG. 3B.
  • When the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 are implemented by dedicated hardware, the processing circuit 100a corresponds, for example, to a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination thereof.
  • When the functions of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 are realized by processing circuits, the function of each unit may be realized by an individual processing circuit, or the functions of the units may be integrated and realized by a single processing circuit.
  • When the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 are implemented by the processor 100b, the function of each unit is realized by software, firmware, or a combination of software and firmware.
  • The software or firmware is described as a program and stored in the memory 100c.
  • the processor 100b reads out and executes the program stored in the memory 100c, whereby the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and The respective functions of the second state determination unit 107 are realized.
  • Thus, the memory 100c is provided for storing programs that, when executed by the processor 100b, result in the steps shown in FIG. 5 and FIG. 6 described later being performed. These programs can also be said to cause a computer to execute the procedures or methods of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107.
  • Here, the processor 100b is, for example, a central processing unit (CPU), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a digital signal processor (DSP).
  • The memory 100c may be, for example, a nonvolatile or volatile semiconductor memory such as a random access memory (RAM), a read-only memory (ROM), a flash memory, an erasable programmable ROM (EPROM), or an electrically erasable programmable ROM (EEPROM), or it may be a magnetic disk such as a hard disk or a flexible disk, or an optical disc such as a mini disc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
  • Note that some of the functions of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 may be realized by dedicated hardware, and others by software or firmware. In this way, the processing circuit 100a in the inoperable state determination device 100 can realize each of the functions described above by hardware, software, firmware, or a combination thereof.
  • FIGS. 4A to 4D are diagrams showing the determination processing of the first state determination unit 104 of the inoperable state determination device 100 according to the first embodiment.
  • FIG. 4A shows an example of the determination area image in the comparison image stored in the comparison image storage unit 105, and
  • FIGS. 4B to 4D show examples of captured images at the time of face detection and the determination area images in those captured images.
  • FIG. 4A shows the determination area image in a comparison image obtained by imaging the driver's seat with no driver seated.
  • FIG. 4B shows a state in which the driver's face is not blocked by a shield,
  • FIG. 4C shows a state in which the driver's face is blocked by a shield, and
  • FIG. 4D shows a state in which the driver's face is tilted forward.
  • The image of the determination area C in the comparison image of FIG. 4A and the images of the determination areas Ba, Bb, and Bc in the captured images A at the time of face detection in FIGS. 4B to 4D are images of the same area.
  • First, the first state determination unit 104 compares the image of the determination area Ba in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the two images with respect to the imaged headrest.
  • The first state determination unit 104 calculates the difference between the image of the determination area C in the comparison image and the image of the determination area Ba in the captured image A based on the difference between the area Ha and the area Hb in which the headrest is imaged.
  • In calculating this difference, the first state determination unit 104 uses at least one of the number of pixels in which the headrest is imaged, the length of the visible outline of the headrest, and the luminance of the determination area image.
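The first of those measures, the number of pixels in which the headrest is imaged, might be approximated by thresholding luminance within the determination area image. The sketch below assumes the headrest upholstery occupies a known luminance band in the infrared image; the band limits and function name are invented for illustration.

```python
def count_headrest_pixels(area_image, lo=80, hi=140):
    """Count pixels whose luminance falls in the (assumed) band that
    characterizes the headrest upholstery, given the determination
    area image as a 2-D list of luminance rows."""
    return sum(1 for row in area_image for p in row if lo <= p <= hi)
```

A real system would calibrate the band (or use a learned classifier) per vehicle interior; the count from the comparison image then serves as the 100% baseline.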
  • Similarly, the first state determination unit 104 compares the image of the determination area Bb in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the two images with respect to the imaged headrest.
  • The first state determination unit 104 calculates the difference between the image of the determination area C in the comparison image and the image of the determination area Bb in the captured image A based on the difference between the area Ha and the area Hc in which the headrest is imaged.
  • Likewise, the first state determination unit 104 compares the image of the determination area Bc in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the two images with respect to the imaged headrest.
  • The first state determination unit 104 calculates the difference between the image of the determination area C in the comparison image and the image of the determination area Bc in the captured image A based on the difference between the area Ha and the area Hd in which the headrest is imaged.
  • For example, in comparing the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Ba in the captured image A of FIG. 4B, in which the driver's face is not blocked by a shield, the first state determination unit 104 takes the proportion of the area Ha in which the headrest appears in the determination area C as 100%, and calculates the proportion of the area Hb in which the headrest appears in the determination area Ba as, for example, 20%.
  • The first state determination unit 104 then calculates the difference ratio (80%) between the proportion of the area Ha (100%) and the proportion of the area Hb (20%), and determines that the calculated difference ratio is equal to or greater than a predetermined reference value (for example, 50%).
  • From this determination result, the first state determination unit 104 concludes that the headrest in the captured image A is largely hidden by the driver's head or the like, and that there is a large difference from the comparison image.
  • Likewise, in comparing the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Bb in the captured image A of FIG. 4C, in which the driver's face is blocked by a shield, the first state determination unit 104 takes the proportion of the area Ha in which the headrest appears in the determination area C as 100%, and calculates the proportion of the area Hc in which the headrest appears in the determination area Bb as, for example, 20%.
  • The first state determination unit 104 then calculates the difference ratio (80%) between the proportion of the area Ha (100%) and the proportion of the area Hc (20%), and determines that the calculated difference ratio is equal to or greater than the predetermined reference value (for example, 50%).
  • From this determination result, the first state determination unit 104 concludes that the headrest in the captured image A is largely hidden by the driver's head or the like, and that there is a large difference from the comparison image.
  • In contrast, in comparing the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Bc in the captured image A of FIG. 4D, in which the driver's face is tilted forward, the first state determination unit 104 takes the proportion of the area Ha in which the headrest appears in the determination area C as 100%, and calculates the proportion of the area Hd in which the headrest appears in the determination area Bc as, for example, 80%.
  • The first state determination unit 104 then calculates the difference ratio (20%) between the proportion of the area Ha (100%) and the proportion of the area Hd (80%), and determines that the calculated difference ratio is less than the predetermined reference value (for example, 50%).
  • From this determination result, the first state determination unit 104 concludes that the headrest in the captured image A is not hidden by the driver's head or the like, and that there is no large difference from the comparison image.
  • When the calculated difference ratio is less than the reference value, the first state determination unit 104 determines that the driver requires a warning. For example, in the comparison between the image of the determination area C in the comparison image of FIG. 4A and the image of the determination area Ba in the captured image A of FIG. 4B, the difference ratio between the area Ha and the area Hb is equal to or greater than the reference value, so the first state determination unit 104 determines that the driver X is not in a state requiring a warning. Similarly, in the comparison between the image of the determination area C in FIG. 4A and the image of the determination area Bb in FIG. 4C, the difference ratio between the area Ha and the area Hc is equal to or greater than the reference value, so no warning is required.
  • On the other hand, in the comparison between the image of the determination area C in FIG. 4A and the image of the determination area Bc in the captured image A of FIG. 4D, the difference ratio between the area Ha and the area Hd is less than the reference value, so the first state determination unit 104 determines that the driver X requires a warning.
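The ratio arithmetic of FIGS. 4B to 4D can be condensed into a small helper. The pixel counts are assumed to come from some headrest classifier (the classification itself is outside this sketch); the function names are invented, and the default 50% reference value follows the example in the text.

```python
def headrest_visible_ratio(headrest_pixels_detected, headrest_pixels_reference):
    """Fraction of the headrest still visible relative to the empty-seat
    comparison image (area Ha), e.g. 0.2 when only 20% remains visible."""
    return headrest_pixels_detected / headrest_pixels_reference

def needs_warning(detected, reference, threshold=0.5):
    """True when the difference ratio between area Ha (100% visible)
    and the captured image is below the reference value: the headrest
    is largely unobstructed, so the driver's head has left its normal
    position and a warning is required."""
    difference_ratio = 1.0 - headrest_visible_ratio(detected, reference)
    return difference_ratio < threshold
```

With the figures from the text, FIG. 4B and FIG. 4C (20% visible, 80% difference) yield no warning, while FIG. 4D (80% visible, 20% difference) triggers one.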
  • FIG. 5 is a flowchart showing the operation of the inoperable state determination device 100 according to the first embodiment. As a premise of the processing of the flowchart in FIG. 5, it is assumed that the first state determination unit 104 has not determined that the driver is in the alert state.
  • When the image acquisition unit 101 acquires a captured image (step ST1), the face detection unit 102 performs processing for detecting the driver's face from the acquired captured image (step ST2). The face detection unit 102 determines whether or not the driver's face has been successfully detected from the captured image in the processing of step ST2 (step ST3). If face detection is successful (step ST3; YES), the face detection unit 102 outputs information indicating that face detection has succeeded, together with the captured image, to the face information acquisition unit 106. The face information acquisition unit 106 acquires the driver's face information from the captured image input from the face detection unit 102 (step ST4), and outputs the face information to the second state determination unit 107. The second state determination unit 107 executes processing for determining whether the driver is incapable of driving based on the driver's face information acquired in step ST4 (step ST5). Thereafter, the processing of the flowchart ends.
  • If face detection fails (step ST3; NO), the face detection unit 102 outputs information indicating that face detection has failed to the determination area acquisition unit 103 and the first state determination unit 104.
  • the determination area acquisition unit 103 acquires an image of the determination area from the captured image acquired in step ST1 (step ST6), and outputs the image to the first state determination unit 104.
  • the first state determination unit 104 determines whether a counter (not shown) has been activated (step ST7). If the counter has not been activated (step ST7; NO), the first state determination unit 104 starts counting the counter (step ST8), and counts up the counter (step ST9). On the other hand, when the counter has been started (step ST7; YES), the flowchart proceeds to the process of step ST9.
  • Next, the first state determination unit 104 refers to the count value of the counter and determines whether the first threshold time has elapsed (step ST10). If the first threshold time has not elapsed (step ST10; NO), the processing of the flowchart ends. On the other hand, when the preset first threshold time has elapsed (step ST10; YES), the first state determination unit 104 calculates the difference between the proportion of the area in which the headrest appears in the determination area image acquired in step ST6 and the proportion of the area in which the headrest appears in the comparison image (step ST11).
  • the first state determination unit 104 determines whether the difference calculated in step ST11 is equal to or greater than the reference value (step ST12). If the difference is equal to or greater than the reference value (step ST12; YES), the first state determination unit 104 determines that the driver is not in an inoperable state, and ends the process. On the other hand, when it is determined that the difference is less than the reference value (step ST12; NO), the first state determination unit 104 determines that the driver is in the alert state (step ST13). The first state determination unit 104 outputs the determination result of step ST13 to the warning device 300 or the vehicle control device 400 (step ST14), and ends the process.
  • FIG. 6 is a flowchart showing the operation in the alert state of the inoperable state determination device 100 according to the first embodiment.
  • As a premise, it is assumed that the first state determination unit 104 has determined, in step ST13 of the flowchart of FIG. 5, that the driver is in the alert state.
  • the image acquisition unit 101 acquires a captured image (step ST21)
  • the face detection unit 102 performs processing for detecting the driver's face from the acquired captured image (step ST22).
  • Next, the face detection unit 102 determines whether or not the driver's face has been successfully detected from the captured image in the processing of step ST22 (step ST23). If face detection is successful (step ST23; YES), the face detection unit 102 outputs information indicating that face detection has succeeded to the first state determination unit 104. When this information is input, the first state determination unit 104 cancels the driver's alert state (step ST24).
  • If face detection fails (step ST23; NO), the face detection unit 102 outputs information indicating that face detection has failed to the first state determination unit 104.
  • The first state determination unit 104 counts up the counter (step ST25).
  • The first state determination unit 104 refers to the count value of the counter and determines whether the second threshold time has elapsed (step ST26). If the second threshold time has not elapsed (step ST26; NO), the process ends.
  • When the second threshold time has elapsed (step ST26; YES), the first state determination unit 104 determines that the driver is in an inoperable state (step ST27).
  • The first state determination unit 104 outputs the determination result of step ST27 to the warning device 300 or the vehicle control device 400 (step ST28).
  • The first state determination unit 104 resets the counter (step ST29), and ends the process.
  • The flowcharts of FIG. 5 and FIG. 6 each show the processing of one cycle; while the driver is driving, the processing of the flowcharts is repeatedly performed.
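The repeated one-cycle processing of FIGS. 5 and 6 can be pictured as a small per-frame state machine. The sketch below is an illustrative reconstruction only: the class and method names, the frame-count model of the first and second threshold times, and the default values are assumptions, not taken from the publication.

```python
# Hypothetical sketch of the per-cycle processing of FIGS. 5 and 6.
# Timing is modeled in frames rather than seconds for simplicity.

NORMAL, ALERT, INOPERABLE = "normal", "alert", "inoperable"

class FirstStateDeterminationUnit:
    def __init__(self, first_threshold_frames, second_threshold_frames,
                 reference_value):
        self.first_threshold = first_threshold_frames
        self.second_threshold = second_threshold_frames
        self.reference_value = reference_value
        self.fail_count = 0    # frames with face detection failed (FIG. 5)
        self.alert_count = 0   # counter used in the alert state (FIG. 6)
        self.state = NORMAL

    def step(self, face_detected, difference=None):
        """Process one captured frame; returns the current driver state."""
        if face_detected:
            # ST24: a successful detection cancels the alert and resets counters.
            self.fail_count = 0
            self.alert_count = 0
            self.state = NORMAL
            return self.state

        if self.state == ALERT:
            # ST25-ST27: count up; once the second threshold time has
            # elapsed, determine that the driver is inoperable.
            self.alert_count += 1
            if self.alert_count >= self.second_threshold:
                self.state = INOPERABLE
                self.alert_count = 0  # ST29: reset the counter
            return self.state

        # FIG. 5 path: face not detected while in the normal state.
        self.fail_count += 1
        if self.fail_count >= self.first_threshold:
            # ST11-ST13: compare the determination area images; a difference
            # below the reference value means the headrest is exposed.
            if difference is not None and difference < self.reference_value:
                self.state = ALERT
        return self.state
```

A run through the two flowcharts: three failed detections with a small difference raise the alert state, two more failures escalate to inoperable, and one successful detection cancels everything.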
  • As described above, the inoperable state determination apparatus 100 according to the first embodiment includes the image acquisition unit 101 that acquires a captured image of the interior of a vehicle, the face detection unit 102 that detects the driver's face from the acquired captured image, the determination area acquisition unit 103 that acquires a determination area image from the acquired captured image, and the first state determination unit 104 that, when the state in which the driver's face cannot be detected continues for the first threshold time, determines the state of the driver based on the difference between the determination area image in the acquired captured image and the determination area image in the comparison image.
  • With this configuration, the inoperable state determination apparatus 100 can determine, by comparing the difference between the determination area image in the captured image and the determination area image in the comparison image with the reference value, that the driver's face is located at its normal position. Thus, when the driver's face is blocked by a shield, the driver can be prevented from being erroneously determined to be in an inoperable state.
  • Furthermore, when the difference is less than the reference value, the first state determination unit 104 determines that the driver is in the alert state, and when the state in which the face detection unit 102 cannot detect the driver's face continues for the second threshold time after the driver is determined to be in the alert state, the first state determination unit 104 determines that the driver is in an inoperable state. The driver's inoperable state can therefore be determined more accurately.
  • FIG. 7 is a diagram showing an example of notifying the driver of the determination result of the inoperable state determination device 100 according to the first embodiment.
  • the display 301 displays a determination result 302 of the inoperable state determination device 100 and a button 303 for canceling the determination result.
  • The determination result 302 is displayed as, for example, “Determined to be in an inoperable state”.
  • the button 303 is displayed as, for example, “reset to normal state”.
  • The driver refers to the determination result 302 displayed on the display 301, and, if the driver is not actually in an inoperable state, can press the button 303 to cause the inoperable state determination device 100 to cancel the determination result.
  • The inoperable state determination device according to the present invention is suitable for use in a driver monitoring system or the like for which improved determination accuracy is required.
  • Reference Signs List: 100 inoperable state determination device, 101 image acquisition unit, 102 face detection unit, 103 determination area acquisition unit, 104 first state determination unit, 105 comparison image storage unit, 106 face information acquisition unit, 107 second state determination unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Mathematical Physics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Image Analysis (AREA)

Abstract

This drive incapability state determination device is provided with: an image acquisition unit (101) for acquiring a captured image of the interior of a vehicle; a face detection unit (102) for detecting the face of a driver from the acquired captured image; a determination area acquisition unit (103) for acquiring a determination area image from the acquired captured image; and a first state determination unit (104) by which, if a state in which the driver's face cannot be detected continues for a first threshold time, the state of the driver is determined on the basis of a difference between the determination area image in the acquired captured image and a determination area image in a comparison image.

Description

Inoperable state determination device and inoperable state determination method
The present invention relates to a technology for determining an inoperable state of a driver driving a vehicle.
There is a technique for determining a driver's inoperable state from a captured image of the driver driving a vehicle. One method of detecting the inoperable state from the captured image is to determine that the driver is inoperable when a state in which the driver's face cannot be detected from the captured image continues for a predetermined time.
For example, Patent Document 1 discloses an inoperable state detection device that sequentially detects the driver's face from images captured by a camera mounted on the vehicle, and detects that the driver is inoperable when the driver's face detected while the vehicle is traveling is out of a predetermined range of the image.
Patent Document 1: JP 2016-27452 A
In the inoperable state detection device described in Patent Document 1 above, when the driver's face is blocked by, for example, the driver's hand or a towel, the image of the driver's face cannot be detected within the predetermined range, and the driver may be erroneously detected as being in an inoperable state.
The present invention has been made to solve the above problem, and an object of the invention is to prevent the driver from being erroneously determined to be in an inoperable state when the driver's face is blocked by a shield.
An inoperable state determination device according to the present invention includes: an image acquisition unit that acquires a captured image of the interior of a vehicle; a face detection unit that detects the driver's face from the captured image acquired by the image acquisition unit; a determination area acquisition unit that acquires a determination area image from the captured image acquired by the image acquisition unit; and a state determination unit that, when a state in which the face detection unit cannot detect the driver's face continues for a first threshold time, determines the state of the driver based on the difference between the determination area image acquired by the determination area acquisition unit and the determination area image in a comparison image.
According to the present invention, when the driver's face is blocked by a shield, it is possible to prevent the driver from being erroneously determined to be in an inoperable state.
FIG. 1 is a block diagram showing the configuration of an inoperable state determination device according to Embodiment 1. FIGS. 2A and 2B are diagrams showing an example of processing of the face detection unit and the determination area acquisition unit of the inoperable state determination device according to Embodiment 1. FIGS. 3A and 3B are diagrams showing examples of the hardware configuration of the inoperable state determination device according to Embodiment 1. FIGS. 4A, 4B, 4C, and 4D are diagrams showing the determination processing of the first state determination unit of the inoperable state determination device according to Embodiment 1. FIG. 5 is a flowchart showing the operation of the inoperable state determination device according to Embodiment 1. FIG. 6 is a flowchart showing the operation of the inoperable state determination device according to Embodiment 1 in the alert state. FIG. 7 is a diagram showing a display example based on the determination result of the inoperable state determination device of the invention according to Embodiment 1.
Hereinafter, in order to describe the present invention in more detail, a mode for carrying out the invention will be described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram showing the configuration of the inoperable state determination device 100 according to Embodiment 1.
The inoperable state determination device 100 first performs a process of detecting the driver's face. When the face is not successfully detected, the device compares a first driver's-seat image acquired from an imaging device in the vehicle cabin with a second driver's-seat image, which is an image captured in advance by the imaging device and shows the driver's seat with no driver seated. The inoperable state determination device 100 determines whether the driver is in an inoperable state based on the result of comparing the first driver's-seat image with the second driver's-seat image.
Although details will be described later, the inoperable state determination device 100 calculates the difference between the first driver's-seat image and the second driver's-seat image. If the calculated difference is equal to or greater than a reference value, the inoperable state determination device 100 determines that the driver is not in an inoperable state. On the other hand, if the calculated difference is less than the reference value, the inoperable state determination device 100 determines that the driver is in an inoperable state.
As a result, when the driver's face cannot be detected because it is covered by the driver's hand or by a shield such as a towel, the inoperable state determination device 100 can prevent the driver from being erroneously determined to be in an inoperable state.
As shown in FIG. 1, the inoperable state determination device 100 includes an image acquisition unit 101, a face detection unit 102, a determination area acquisition unit 103, a first state determination unit (state determination unit) 104, a comparison image storage unit 105, a face information acquisition unit 106, and a second state determination unit 107.
A camera 200, a warning device 300, and a vehicle control device 400 are connected to the inoperable state determination device 100.
The camera 200 is an imaging device that images the interior of the vehicle equipped with the inoperable state determination device 100. The camera 200 is, for example, an infrared camera. The camera 200 is installed at a position where at least the head of the driver seated in the driver's seat can be imaged. The camera 200 is configured of one or more cameras.
The warning device 300 generates, based on the determination result of the inoperable state determination device 100, information indicating an alert or information indicating a warning to be presented to the driver of the vehicle by sound, or by sound and display. The warning device 300 controls the output of the generated alert information or warning information. Based on this output control, a speaker, or a speaker and a display, constituting the warning device 300 outputs the alert information or warning information by sound, or by sound and display.
The vehicle control device 400 controls the traveling of the vehicle based on the determination result of the inoperable state determination device 100.
Each component of the inoperable state determination device 100 will now be described.
The image acquisition unit 101 acquires a captured image captured by the camera 200. The captured image is captured such that at least the head of the driver appears when the driver is seated in the driver's seat in a typical driving posture. The image acquisition unit 101 outputs the acquired captured image to the face detection unit 102 and the determination area acquisition unit 103.
The face detection unit 102 analyzes the captured image acquired by the image acquisition unit 101 to detect a distribution of luminance values. The face detection unit 102 detects the driver's face from the captured image by comparing the detected distribution of luminance values with a distribution of luminance values for matching obtained in advance by learning. Here, the distribution of luminance values for matching is a statistical pattern of luminance value distributions obtained by learning, from a plurality of captured images each containing a human face, the distribution of luminance values when a human face is imaged. When the face detection unit 102 cannot detect the driver's face, it outputs information indicating failure in face detection to the first state determination unit 104 and the determination area acquisition unit 103. When the face detection unit 102 detects the driver's face, it outputs information indicating success in face detection and the captured image to the face information acquisition unit 106.
When information indicating failure in face detection is input from the face detection unit 102, the determination area acquisition unit 103 acquires an image of a determination area (hereinafter referred to as a determination area image) from the captured image acquired by the image acquisition unit 101. Here, the determination area is an area that includes the entire headrest of the driver's seat. The determination area is set in advance based on the position of the driver's seat and the position of the camera 200 for each vehicle type. The determination area is also determined in consideration of the adjustment range of the driver's seat in the front-rear direction, the adjustment range of the seat-back inclination, and the adjustment range of the headrest in the vertical direction. The determination area acquisition unit 103 outputs the acquired determination area image to the first state determination unit 104.
FIG. 2 is a diagram showing an example of processing of the face detection unit 102 and the determination area acquisition unit 103 of the inoperable state determination device 100 according to Embodiment 1.
FIG. 2A shows an example of the determination area in a captured image of the driver's seat taken from the front, and FIG. 2B shows an example of the determination area in a captured image of the driver's seat taken diagonally from the front. FIGS. 2A and 2B show a case where the face detection unit 102 succeeds in detecting the face Xa of the driver X.
The face detection unit 102 detects the face Xa of the driver X from the captured image A shown in FIGS. 2A and 2B.
The determination area acquisition unit 103 acquires, from the captured image A, an image of the determination area B, which is a predetermined area. The determination area B includes the entire headrest H of the driver's seat. The determination area acquisition unit 103 outputs the acquired image of the determination area B to the first state determination unit 104.
When information indicating failure in face detection is input from the face detection unit 102, the first state determination unit 104 determines the state of the driver. The first state determination unit 104 compares the determination area image in the captured image used for face detection, input from the determination area acquisition unit 103, with the determination area image in a comparison image captured in advance. The first state determination unit 104 determines the state of the driver based on the obtained comparison result. Here, the comparison image is an image captured in advance by the camera 200, in which the driver's seat is imaged with no driver seated.
Specifically, when face detection fails, the face detection unit 102 repeatedly performs face detection until it succeeds. When information indicating failure in face detection continues to be input from the face detection unit 102 for a time after which an alert is judged necessary (hereinafter referred to as a first threshold time), the first state determination unit 104 calculates the difference between the determination area image in the captured image used for face detection and the determination area image in the comparison image. Based on the calculated difference and a reference value, the first state determination unit 104 determines whether the driver is in a state requiring an alert (hereinafter referred to as an alert state). Furthermore, when the alert state continues for a second threshold time, the first state determination unit 104 determines that the driver is in an inoperable state. Here, the second threshold time is the first threshold time plus a time after which the driver is judged to be in an inoperable state.
When the first state determination unit 104 determines that the driver is in the alert state, or determines that the driver is in an inoperable state, it outputs the determination result to the warning device 300 or the vehicle control device 400.
The comparison image storage unit 105 stores the comparison image that the first state determination unit 104 refers to when comparing determination area images. The comparison image is an image captured in advance by the camera 200, in which the driver's seat is imaged with no driver seated.
When information indicating success in face detection and the captured image used for face detection are input from the face detection unit 102, the face information acquisition unit 106 analyzes the captured image to acquire position information of the components of the driver's face, such as the eyes, nose, and mouth. From the acquired position information of the face components, the face information acquisition unit 106 acquires face information for determining the inoperable state, such as the driver's degree of eye opening and face orientation. The face information acquisition unit 106 outputs the acquired face information to the second state determination unit 107.
The second state determination unit 107 detects, for example, the degree of eye opening or the face orientation from the face information acquired by the face information acquisition unit 106. The second state determination unit 107 determines, from the acquired degree of eye opening, face orientation, or the like, whether the driver is in an inoperable state. When the second state determination unit 107 determines that the driver is in an inoperable state, it outputs the determination result to the warning device 300 or the vehicle control device 400.
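As one way to picture the role of the second state determination unit 107, the hedged sketch below judges the driver inoperable when the degree of eye opening stays below a threshold, or the face orientation stays outside a limit, for a sustained run of frames. The function name, all thresholds, and the frame-count model are illustrative assumptions, not taken from the publication.

```python
# Hypothetical sketch of a second-stage judgment from face information:
# sustained eye closure or a sustained extreme face angle -> inoperable.

def is_inoperable(face_info_sequence, eye_open_min=0.2,
                  face_angle_max=45.0, required_frames=3):
    """face_info_sequence: list of (eye_opening_degree, face_angle_deg)
    pairs, one per frame. Returns True when an abnormal condition
    persists for required_frames consecutive frames."""
    run = 0
    for eye_opening, face_angle in face_info_sequence:
        if eye_opening < eye_open_min or abs(face_angle) > face_angle_max:
            run += 1
            if run >= required_frames:
                return True
        else:
            run = 0  # a normal frame resets the run
    return False
```

Requiring a consecutive run distinguishes a sustained abnormal state from a momentary one such as a blink.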
Next, a hardware configuration example of the inoperable state determination device 100 will be described.
FIGS. 3A and 3B are diagrams showing examples of the hardware configuration of the inoperable state determination device 100.
Each function of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 in the inoperable state determination device 100 is realized by a processing circuit. That is, the inoperable state determination device 100 includes a processing circuit for realizing each of the above functions. The processing circuit may be the processing circuit 100a, which is dedicated hardware as shown in FIG. 3A, or the processor 100b, which executes a program stored in the memory 100c as shown in FIG. 3B.
As shown in FIG. 3A, when the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 are dedicated hardware, the processing circuit 100a corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof. The function of each of these units may be realized by its own processing circuit, or the functions of the units may be collectively realized by a single processing circuit.
As shown in FIG. 3B, when the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 are realized by the processor 100b, the function of each unit is realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 100c. The processor 100b reads out and executes the program stored in the memory 100c, thereby realizing the functions of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107. That is, the inoperable state determination device 100 includes the memory 100c for storing programs that, when executed by the processor 100b, result in the execution of the steps shown in FIG. 5 and FIG. 6 described later.
These programs can also be said to cause a computer to execute the procedures or methods of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107.
Here, the processor 100b is, for example, a CPU (Central Processing Unit), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
The memory 100c may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically EPROM); a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a mini disc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
Some of the functions of the image acquisition unit 101, the face detection unit 102, the determination area acquisition unit 103, the first state determination unit 104, the face information acquisition unit 106, and the second state determination unit 107 may be realized by dedicated hardware, and some by software or firmware. In this way, the processing circuit 100a in the inoperable state determination device 100 can realize each of the functions described above by hardware, software, firmware, or a combination thereof.
Next, details of the first state determination unit 104 will be described with reference to FIG. 4.
FIG. 4 is a diagram showing the determination processing of the first state determination unit 104 of the inoperable state determination device 100 according to Embodiment 1. FIG. 4A shows the image of the determination area in the comparison image stored in the comparison image storage unit 105, and FIGS. 4B to 4D show examples of a captured image at the time of face detection and the image of the determination area in that captured image. FIG. 4A shows the image of the determination area in a comparison image in which the driver's seat is imaged with no driver seated. FIG. 4B shows a state in which the driver's face is not blocked by a shield, FIG. 4C shows a state in which the driver's face is blocked by a shield, and FIG. 4D shows a state in which the driver's face is tilted forward.
 The image of the determination area C in the comparison image of FIG. 4A and the images of the determination areas Ba, Bb, and Bc in the captured images A at the time of face detection in FIGS. 4B to 4D are images of the same area.
 First, the comparison between the comparison image of FIG. 4A and the captured image at the time of face detection of FIG. 4B will be described. The first state determination unit 104 compares the image of the determination area Ba in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the regions in which the headrest appears in the two compared images. Specifically, the first state determination unit 104 calculates the difference between the image of the determination area C in the comparison image and the image of the determination area Ba in the captured image A based on the difference between the headrest regions Ha and Hb.
 The first state determination unit 104 calculates the difference between the headrest regions using at least one of the number of pixels in which the headrest appears, the length of the visible outline of the headrest, and the luminance of the determination area image.
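As a rough illustration, the headrest-region difference described above can be computed from pixel counts alone. The sketch below is only one of the three metrics the embodiment allows (pixel count; the outline length and luminance variants are omitted), and the function names, pixel layout, and luminance threshold are assumptions for illustration, not taken from the embodiment:

```python
def headrest_ratio(determination_area, is_headrest_pixel):
    """Proportion (0.0-1.0) of pixels in a determination area image
    that are classified as belonging to the headrest."""
    pixels = [p for row in determination_area for p in row]
    headrest = sum(1 for p in pixels if is_headrest_pixel(p))
    return headrest / len(pixels)

def headrest_difference(comparison_area, captured_area, is_headrest_pixel):
    """Difference between the headrest proportion of the comparison
    image (region Ha in area C) and that of the captured image
    (region Hb, Hc, or Hd in area Ba, Bb, or Bc)."""
    ha = headrest_ratio(comparison_area, is_headrest_pixel)
    hx = headrest_ratio(captured_area, is_headrest_pixel)
    return abs(ha - hx)

# Example: a bright headrest against a dark cabin, classified by a
# simple luminance threshold (the threshold value is an assumption).
bright = lambda p: p > 128
comparison = [[200, 200], [200, 200]]   # headrest fully visible (Ha = 100%)
captured   = [[200, 10], [10, 10]]      # headrest mostly hidden (25% visible)
print(headrest_difference(comparison, captured, bright))  # 0.75
```

A real implementation would classify headrest pixels with a learned or calibrated segmentation rather than a fixed luminance cut, but the difference calculation itself reduces to comparing two proportions in this way.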
 Next, the comparison between the comparison image of FIG. 4A and the captured image at the time of face detection of FIG. 4C will be described. The first state determination unit 104 compares the image of the determination area Bb in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the regions in which the headrest appears in the two compared images, based on the difference between the headrest regions Ha and Hc.
 Next, the comparison between the comparison image of FIG. 4A and the captured image at the time of face detection of FIG. 4D will be described. The first state determination unit 104 compares the image of the determination area Bc in the captured image A with the image of the determination area C in the comparison image, and calculates the difference between the regions in which the headrest appears in the two compared images, based on the difference between the headrest regions Ha and Hd.
 When the first state determination unit 104 compares the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Ba in the captured image A of FIG. 4B, which shows the state in which the driver's face is not blocked by a shield, it takes the proportion of the headrest region Ha in the image of the determination area C as 100% and calculates the proportion of the headrest region Hb in the image of the determination area Ba as, for example, 20%. The first state determination unit 104 calculates the difference (80%) between the proportion of the region Ha (100%) and the proportion of the region Hb (20%), and determines that the calculated difference is equal to or greater than a predetermined reference value (for example, 50%). From this result, the first state determination unit 104 determines that the headrest in the captured image A is sufficiently hidden by the driver's head or the like, and that there is a large difference from the comparison image.
 Similarly, when the first state determination unit 104 compares the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Bb in the captured image A of FIG. 4C, which shows the state in which the driver's face is blocked by a shield, it takes the proportion of the headrest region Ha in the image of the determination area C as 100% and calculates the proportion of the headrest region Hc in the image of the determination area Bb as, for example, 20%. The first state determination unit 104 calculates the difference (80%) between the proportion of the region Ha (100%) and the proportion of the region Hc (20%), and determines that the calculated difference is equal to or greater than the predetermined reference value (for example, 50%). From this result, the first state determination unit 104 determines that the headrest in the captured image A is sufficiently hidden by the driver's head or the like, and that there is a large difference from the comparison image.
 On the other hand, when the first state determination unit 104 compares the image of the determination area C in the comparison image of FIG. 4A with the image of the determination area Bc in the captured image A of FIG. 4D, which shows the state in which the driver's face is tilted forward, it takes the proportion of the headrest region Ha in the image of the determination area C as 100% and calculates the proportion of the headrest region Hd in the image of the determination area Bc as, for example, 80%. The first state determination unit 104 calculates the difference (20%) between the proportion of the region Ha (100%) and the proportion of the region Hd (80%), and determines that the calculated difference is less than the predetermined reference value (for example, 50%). From this result, the first state determination unit 104 determines that the headrest in the captured image A is not hidden by the driver's head or the like, and that there is no large difference from the comparison image.
 If the calculated difference is less than the predetermined reference value, the first state determination unit 104 determines that the driver is in a state requiring a warning.
 For example, in the comparison between the image of the determination area C in the comparison image of FIG. 4A and the image of the determination area Ba in the captured image A of FIG. 4B, the difference between the regions Ha and Hb is equal to or greater than the reference value, so the first state determination unit 104 determines that the driver X is not in a state requiring a warning.
 Similarly, in the comparison between the image of the determination area C in the comparison image of FIG. 4A and the image of the determination area Bb in the captured image A of FIG. 4C, the difference between the regions Ha and Hc is equal to or greater than the reference value, so the first state determination unit 104 determines that the driver X is not in a state requiring a warning.
 On the other hand, in the comparison between the determination area C in the comparison image of FIG. 4A and the image of the determination area Bc in the captured image A of FIG. 4D, the difference between the regions Ha and Hd is less than the reference value, so the first state determination unit 104 determines that the driver X is in a state requiring a warning.
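The decision rule just described can be condensed into a single comparison against the reference value. The following sketch assumes the 50% reference value of the example above (the function name and ratio representation are illustrative, not from the embodiment):

```python
REFERENCE_VALUE = 0.50  # the 50% reference value used in the example

def needs_alert(difference_ratio, reference_value=REFERENCE_VALUE):
    """First-stage decision: a difference at or above the reference
    value means the headrest is sufficiently hidden by the driver's
    head (normal posture, even if the face itself is shielded), so no
    warning is needed; a smaller difference suggests the driver's head
    has left the normal position (e.g. tilted forward)."""
    return difference_ratio < reference_value

# Values from the FIG. 4 examples:
print(needs_alert(0.80))  # FIG. 4B/4C: 80% >= 50% -> False (no warning)
print(needs_alert(0.20))  # FIG. 4D:    20% <  50% -> True  (warning)
```

Note the inversion relative to intuition: a *large* difference from the empty-seat image is the normal case, because a seated driver's head occludes the headrest.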
 Next, the operation of the drive incapability state determination device 100 will be described.
 The operation is described separately for the case where the first state determination unit 104 has not determined that the driver is in the alert state and the case where it has already determined that the driver is in the alert state.
 First, the operation of the drive incapability state determination device 100 when the first state determination unit 104 has not determined that the driver is in the alert state will be described with reference to the flowchart of FIG. 5.
 FIG. 5 is a flowchart showing the operation of the drive incapability state determination device 100 according to Embodiment 1. The processing of the flowchart of FIG. 5 assumes that the first state determination unit 104 has not determined that the driver is in the alert state.
 When the image acquisition unit 101 acquires a captured image (step ST1), the face detection unit 102 performs processing for detecting the driver's face from the acquired captured image (step ST2). The face detection unit 102 determines whether the driver's face was successfully detected from the captured image in the processing of step ST2 (step ST3). If the face detection succeeds (step ST3; YES), the face detection unit 102 outputs information indicating that the face detection succeeded and the captured image to the face information acquisition unit 106. The face information acquisition unit 106 acquires the driver's face information from the captured image input from the face detection unit 102 (step ST4) and outputs it to the second state determination unit 107. The second state determination unit 107 executes processing for determining whether the driver is in the drive incapability state based on the driver's face information acquired in step ST4 (step ST5). The flowchart then ends.
 On the other hand, if the face detection fails (step ST3; NO), the face detection unit 102 outputs information indicating that the face detection failed to the determination area acquisition unit 103 and the first state determination unit 104. The determination area acquisition unit 103 acquires the image of the determination area from the captured image acquired in step ST1 (step ST6) and outputs it to the first state determination unit 104. When the information indicating that the face detection failed is input from the face detection unit 102, the first state determination unit 104 determines whether a counter (not shown) has already been started (step ST7). If the counter has not been started (step ST7; NO), the first state determination unit 104 starts the counter (step ST8) and counts up the counter (step ST9). If the counter has already been started (step ST7; YES), the flowchart proceeds to the processing of step ST9.
 The first state determination unit 104 refers to the count value of the counter and determines whether a first threshold time has elapsed (step ST10). If the first threshold time has not elapsed (step ST10; NO), the flowchart ends. On the other hand, if the preset first threshold time has elapsed (step ST10; YES), the first state determination unit 104 calculates the difference between the proportion of the headrest region in the determination area image acquired in step ST6 and the proportion of the headrest region in the comparison image (step ST11).
 The first state determination unit 104 determines whether the difference calculated in step ST11 is equal to or greater than the reference value (step ST12). If the difference is equal to or greater than the reference value (step ST12; YES), the first state determination unit 104 determines that the driver is not in the drive incapability state and ends the processing. On the other hand, if it determines that the difference is less than the reference value (step ST12; NO), the first state determination unit 104 determines that the driver is in the alert state (step ST13). The first state determination unit 104 outputs the determination result of step ST13 to the warning device 300 or the vehicle control device 400 (step ST14) and ends the processing.
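The one-cycle processing of FIG. 5 can be sketched as a small stateful object. The class and method names are assumptions, and the first threshold is expressed here in processing cycles, whereas the device itself counts elapsed time:

```python
class FirstStateDeterminer:
    """Minimal sketch of the FIG. 5 cycle (driver not yet in the
    alert state). One call to cycle() corresponds to one pass
    through the flowchart."""

    def __init__(self, first_threshold, reference_value=0.5):
        self.first_threshold = first_threshold  # ST10: first threshold
        self.reference_value = reference_value  # ST12: reference value
        self.count = 0                          # counter of ST7-ST9

    def cycle(self, face_detected, difference_ratio=None):
        """Returns 'alert' when the driver is judged to need a
        warning, otherwise None. difference_ratio (the ST11 result)
        is only consulted after the threshold has elapsed."""
        if face_detected:                # ST3; YES -> face-information path
            self.count = 0
            return None
        self.count += 1                  # ST7-ST9: start / count up
        if self.count < self.first_threshold:         # ST10; NO
            return None
        if difference_ratio >= self.reference_value:  # ST12; YES
            return None                  # headrest hidden: posture normal
        return 'alert'                   # ST13: alert state

d = FirstStateDeterminer(first_threshold=3)
print([d.cycle(False, 0.2) for _ in range(3)])  # [None, None, 'alert']
```

A successful face detection resets the counter, matching the fact that the FIG. 5 flow only accumulates consecutive detection failures.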
 Next, the operation of the drive incapability state determination device 100 when the first state determination unit 104 has already determined that the driver is in the alert state will be described with reference to the flowchart of FIG. 6.
 FIG. 6 is a flowchart showing the operation of the drive incapability state determination device 100 according to Embodiment 1 in the alert state. The processing of the flowchart of FIG. 6 assumes that the first state determination unit 104 has determined, in step ST13 of the flowchart of FIG. 5, that the driver is in the alert state.
 When the image acquisition unit 101 acquires a captured image (step ST21), the face detection unit 102 performs processing for detecting the driver's face from the acquired captured image (step ST22). The face detection unit 102 determines whether the driver's face was successfully detected from the captured image in the processing of step ST22 (step ST23). If the face detection succeeds (step ST23; YES), the face detection unit 102 outputs information indicating that the face detection succeeded to the first state determination unit 104. When this information is input, the first state determination unit 104 cancels the warning to the driver (step ST24).
 On the other hand, if the face detection fails (step ST23; NO), the face detection unit 102 outputs information indicating that the face detection failed to the first state determination unit 104. When this information is input, the first state determination unit 104 counts up the counter (step ST25). The first state determination unit 104 refers to the count value of the counter and determines whether a second threshold time has elapsed (step ST26). If the second threshold time has not elapsed (step ST26; NO), the flowchart ends.
 On the other hand, if the second threshold time has elapsed (step ST26; YES), the first state determination unit 104 determines that the driver is in the drive incapability state (step ST27). The first state determination unit 104 outputs the determination result of step ST27 to the warning device 300 or the vehicle control device 400 (step ST28). The first state determination unit 104 resets the counter (step ST29) and ends the processing.
 The flowcharts of FIG. 5 and FIG. 6 each show the processing of one cycle; while the driver is driving, the processing of each flowchart is repeatedly performed.
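The FIG. 6 cycle, which escalates from the alert state to the drive incapability state, can be sketched in the same style. Again, names are assumptions and the second threshold is counted in cycles rather than elapsed time:

```python
class AlertStateDeterminer:
    """Minimal sketch of the FIG. 6 cycle (driver already judged to
    be in the alert state). One call to cycle() corresponds to one
    pass through the flowchart."""

    def __init__(self, second_threshold):
        self.second_threshold = second_threshold  # ST26: second threshold
        self.count = 0

    def cycle(self, face_detected):
        """Returns 'cancel' when the face reappears (ST24),
        'incapable' once the second threshold elapses (ST27),
        otherwise None."""
        if face_detected:
            self.count = 0
            return 'cancel'                    # ST24: release the warning
        self.count += 1                        # ST25: count up
        if self.count < self.second_threshold: # ST26; NO
            return None
        self.count = 0                         # ST29: reset the counter
        return 'incapable'                     # ST27: drive incapability

a = AlertStateDeterminer(second_threshold=2)
print([a.cycle(False), a.cycle(False), a.cycle(True)])
# [None, 'incapable', 'cancel']
```

Together the two sketches form the two-stage design of the embodiment: a short failure window leads only to an alert, and the drive incapability judgment is issued only if the face remains undetected for the further second threshold time.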
 As described above, the drive incapability state determination device 100 according to Embodiment 1 includes the image acquisition unit 101 that acquires a captured image of the inside of the vehicle, the face detection unit 102 that detects the driver's face from the acquired captured image, the determination area acquisition unit 103 that acquires a determination area image from the acquired captured image, and the first state determination unit 104 that, when the state in which the driver's face cannot be detected has continued for the first threshold time, determines the driver's state based on the difference between the determination area image in the acquired captured image and the determination area image in the comparison image.
 Therefore, by comparing the difference between the determination area image in the captured image and the determination area image in the comparison image with the reference value, the drive incapability state determination device 100 according to Embodiment 1 can determine that the driver's face is in the normal position. This prevents the driver from being erroneously determined to be in the drive incapability state when the driver's face is merely blocked by a shield.
 Further, according to Embodiment 1, the first state determination unit 104 is configured to determine that the driver is in the alert state when the difference is less than the reference value, and to determine that the driver is in the drive incapability state when it has determined that the driver is in the alert state and the state in which the face detection unit 102 cannot detect the driver's face has continued for the second threshold time. The driver's drive incapability state can therefore be determined more accurately.
 In the drive incapability state determination device 100 of Embodiment 1 described above, when it is determined that the driver is in a state requiring a warning, or when it is determined that the driver is in the drive incapability state, the external warning device 300 issues an audible warning.
 The warning device 300 may also display on a display that the drive incapability state has been determined, together with a button that accepts an input for canceling that determination.
 FIG. 7 is a diagram showing an example of notifying the driver of the determination result of the drive incapability state determination device 100 according to Embodiment 1.
 The display 301 shows a determination result 302 of the drive incapability state determination device 100 and a button 303 for canceling the determination result. The determination result 302 reads, for example, "Determined to be in drive incapability state". The button 303 reads, for example, "Reset to normal state".
 The driver refers to the determination result 302 displayed on the display 301 and, when not actually in the drive incapability state, can press the button 303 to cause the drive incapability state determination device 100 to cancel the determination result.
 Within the scope of the present invention, any component of the embodiment may be modified, or any component of the embodiment may be omitted.
 The drive incapability state determination device according to the present invention is suitable for use in a driver monitoring system or the like for which improved determination accuracy is required.
 Reference Signs List: 100 drive incapability state determination device, 101 image acquisition unit, 102 face detection unit, 103 determination area acquisition unit, 104 first state determination unit, 105 comparison image storage unit, 106 face information acquisition unit, 107 second state determination unit.

Claims (6)

  1.  A drive incapability state determination device comprising: an image acquisition unit that acquires a captured image of the inside of a vehicle; a face detection unit that detects a driver's face from the captured image acquired by the image acquisition unit; a determination area acquisition unit that acquires a determination area image from the captured image acquired by the image acquisition unit; and a state determination unit that, when a state in which the face detection unit cannot detect the driver's face has continued for a first threshold time, determines a state of the driver based on a difference between the determination area image in the captured image acquired by the determination area acquisition unit and a determination area image in a comparison image.
  2.  The drive incapability state determination device according to claim 1, wherein the comparison image is a captured image of the driver's seat with the driver not seated.
  3.  The drive incapability state determination device according to claim 1 or 2, wherein the state determination unit determines that the driver is not in an alert state when it detects the difference equal to or greater than a reference value.
  4.  The drive incapability state determination device according to claim 1 or 2, wherein the state determination unit determines that the driver is in an alert state when the difference is less than a reference value.
  5.  The drive incapability state determination device according to claim 4, wherein the state determination unit determines that the driver is in a drive incapability state when it determines that the driver is in the alert state and a state in which the face detection unit cannot detect the driver's face has continued for a second threshold time.
  6.  A drive incapability state determination method comprising: acquiring, by an image acquisition unit, a captured image of the inside of a vehicle; detecting, by a face detection unit, a driver's face from the acquired captured image; acquiring, by a determination area acquisition unit, a determination area image from the acquired captured image; and determining, by a state determination unit, when a state in which the driver's face cannot be detected has continued for a first threshold time, a state of the driver based on a difference between the determination area image in the acquired captured image and a determination area image in a comparison image.
PCT/JP2017/028931 2017-08-09 2017-08-09 Drive incapability state determination device and drive incapability state determination method WO2019030855A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2019535504A JP6594595B2 (en) 2017-08-09 2017-08-09 Inoperable state determination device and inoperable state determination method
PCT/JP2017/028931 WO2019030855A1 (en) 2017-08-09 2017-08-09 Drive incapability state determination device and drive incapability state determination method


Publications (1)

Publication Number Publication Date
WO2019030855A1

Family

ID=65272592


Country Status (2)

Country Link
JP (1) JP6594595B2 (en)
WO (1) WO2019030855A1 (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016027452A (en) * 2014-06-23 2016-02-18 株式会社デンソー Driving disabled state detector of driver
JP2016085563A (en) * 2014-10-24 2016-05-19 株式会社デンソー On-vehicle control device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2021192007A1 (en) * 2020-03-24 2021-09-30
WO2021192007A1 (en) * 2020-03-24 2021-09-30 三菱電機株式会社 Face detection device
JP7345632B2 (en) 2020-03-24 2023-09-15 三菱電機株式会社 face detection device
CN113561982A (en) * 2021-08-06 2021-10-29 上汽通用五菱汽车股份有限公司 Driver coma processing method and device and readable storage medium

Also Published As

Publication number Publication date
JP6594595B2 (en) 2019-10-23
JPWO2019030855A1 (en) 2019-12-19

Similar Documents

Publication Publication Date Title
JP4658899B2 (en) Vehicle occupant detection device
JP6960995B2 (en) State judgment device and state judgment method
JP2018128974A (en) Driver state monitoring device
JP6971582B2 (en) Status detector, status detection method, and program
WO2019030855A1 (en) Drive incapability state determination device and drive incapability state determination method
JPWO2006087812A1 (en) Image processing method, image processing system, image processing apparatus, and computer program
JP4840638B2 (en) Vehicle occupant monitoring device
JP4888382B2 (en) Abnormality detection apparatus and method, and program
WO2022113275A1 (en) Sleep detection device and sleep detection system
JP6945775B2 (en) In-vehicle image processing device and in-vehicle image processing method
JP7501548B2 (en) Driver monitor device, driver monitor method, and computer program for driver monitor
JP2022143854A (en) Occupant state determination device and occupant state determination method
JP7312971B2 (en) vehicle display
JP2004287752A (en) Supervisory device
JP4888707B2 (en) Suspicious person detection device
JP7003332B2 (en) Driver monitoring device and driver monitoring method
JP5040634B2 (en) Warning device, warning method and warning program
WO2022176037A1 (en) Adjustment device, adjustment system, display device, occupant monitoring device, and adjustment method
JP7370456B2 (en) Occupant condition determination device and occupant condition determination method
JP2007148506A (en) Driving support device
JP7183420B2 (en) In-vehicle image processing device and in-vehicle image processing method
WO2023032029A1 (en) Blocking determination device, passenger monitoring device, and blocking determination method
WO2024079779A1 (en) Passenger state determination device, passenger state determination system, passenger state determination method and program
WO2023157720A1 (en) Face registration control device for vehicle and face registration control method for vehicle
JP7483060B2 (en) Hand detection device, gesture recognition device, and hand detection method

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17921128; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2019535504; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 17921128; Country of ref document: EP; Kind code of ref document: A1)