WO2022270205A1 - Facial authentication device for driver and facial authentication program - Google Patents

Facial authentication device for driver and facial authentication program

Info

Publication number
WO2022270205A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
software
state
execution
unsuitable
Prior art date
Application number
PCT/JP2022/021391
Other languages
French (fr)
Japanese (ja)
Inventor
貴洋 石川
祐樹 古市
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー (DENSO CORPORATION)
Priority to JP2023529722A, published as JPWO2022270205A1
Publication of WO2022270205A1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • The present disclosure relates to a driver's face authentication device and a face authentication program.
  • Occupant condition monitoring systems that monitor the condition of occupants in vehicles such as automobiles have been provided.
  • A driver state monitoring system for monitoring the state of the driver includes an imaging unit that captures an image of the area around the headrest of the driver's seat and an illumination unit that illuminates that area.
  • While the driver is seated in the driver's seat, the driver state monitoring system captures an image of the illuminated area around the driver's face with the imaging unit, analyzes the driver's face image included in the captured image, and monitors the driver's state (see, for example, Patent Document 1).
  • The driver state monitoring system determines, for example, inattention, eye closure, and drowsiness from the face image, and based on the result judges whether the driver is in a driving-suitable state or a driving-unsuitable state.
  • When the driver state monitoring system determines that the driver is in a driving-unsuitable state, it activates, for example, a driver emergency response system (hereinafter, EDSS: Emergency Driving Stop System), or disables driving by automated driving or by the advanced driving support system.
  • The driver state monitoring system periodically (for example, 30 times per second) executes unsuitable-driving-state detection software for determining whether the driver is in a driving-suitable state or a driving-unsuitable state.
  • It is also assumed that the driver state monitoring system periodically executes face authentication software for determining driver changes at a predetermined cycle.
  • However, when both programs are executed periodically, the execution frequency of the two is high, the computational load increases steadily, and a higher-specification microcomputer must be installed.
  • An object of the present disclosure is to avoid a steady increase in the amount of computation while appropriately determining whether the driver is in a driving-suitable state or a driving-unsuitable state.
  • According to one aspect of the present disclosure, an image acquisition unit acquires a face image of the driver.
  • A first software execution unit executes, at a predetermined cycle, unsuitable-driving-state detection software for determining whether the driver is in a driving-suitable state or a driving-unsuitable state.
  • A second software execution unit executes face authentication software for determining a driver change.
  • A change-possible-situation determination unit determines whether the situation allows a driver change.
  • A software execution control unit causes the face authentication software to be executed when the change-possible-situation determination unit determines that a driver change is possible.
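As a rough illustration of this division of work, the following C++ sketch runs the unsuitable-driving-state detection every cycle and triggers face authentication only when a driver change is judged possible. All names and the stub bodies are hypothetical; they are not taken from the publication.

```cpp
#include <chrono>
#include <thread>

// Hypothetical stand-ins for units 21-25 of the control device 2.
struct FaceImage {};

FaceImage acquireFaceImage() { return {}; }            // image acquisition unit 21
void runUnsuitableStateDetection(const FaceImage&) {}  // first software execution unit 22
void runFaceAuthentication(const FaceImage&) {}        // second software execution unit 23
bool driverChangePossible() { return false; }          // change-possible-situation determination unit 24

int main() {
    using namespace std::chrono;
    constexpr auto kCycle = milliseconds(33);  // predetermined cycle used in the embodiment

    for (;;) {
        const auto cycleStart = steady_clock::now();
        const FaceImage face = acquireFaceImage();

        // Runs every cycle: suitable / unsuitable driving state determination.
        runUnsuitableStateDetection(face);

        // Runs only when a driver change is judged possible, not periodically,
        // which is what avoids a steady increase in computation.
        if (driverChangePossible()) {
            runFaceAuthentication(face);
        }
        std::this_thread::sleep_until(cycleStart + kCycle);
    }
}
```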
  • FIG. 1 shows one embodiment and is a functional block diagram of a driver state monitoring system
  • FIG. 2 is a diagram showing the relationship between sensor behavior and driver behavior
  • FIG. 3 is a diagram showing the relationship between sensor behavior and driver behavior
  • FIG. 4 is a timing chart
  • FIG. 5 is a timing chart
  • FIG. 6 is a timing chart
  • FIG. 7 is a timing chart
  • FIG. 8 is a timing chart
  • FIG. 9 is a flow chart
  • FIG. 10 is a diagram showing a notification screen when a face image is saved
  • FIG. 11 is a diagram showing a notification screen when sending a face image
  • FIG. 12 is a diagram showing a notification screen when changing drivers.
  • FIG. 13 is a diagram showing a notification screen when the face image is saved after driver change
  • FIG. 14 is a diagram showing a notification screen at the time of face image transmission after driver change.
  • A vehicle is equipped with a driver state monitoring system including a Driver Status Monitor (registered trademark) (hereinafter, DSM) as an occupant state monitoring system for monitoring the state of the occupants.
  • The driver state monitoring system determines inattention, eye closure, drowsiness, and the like from the face image based on the degree of eyelid drooping, the degree of pupil opening, the gaze direction, the gaze movement speed, and so on, and based on the result judges whether the driver is in a driving-suitable state or a driving-unsuitable state.
  • When the driver state monitoring system determines that the driver is in a driving-unsuitable state, it activates, for example, EDSS or disables driving by automated driving or by the advanced driving support system.
  • As shown in FIG. 1, the driver state monitoring system 1 includes a control device 2, a driver state recognition device 3, a vehicle information recognition device 4, a driving environment recognition device 5, and an HMI (Human Machine Interface) 6.
  • The driver's face authentication device 7 is configured as part of the driver state monitoring system 1 and includes the control device 2 and the driver state recognition device 3.
  • The driver state recognition device 3 includes a driver camera 31, an LED 32, a buckle sensor 33, a door open/close sensor 34, a seating sensor 35, an electronic key vehicle-interior detection sensor 36, a steering wheel touch sensor 37, and a steering wheel torque sensor 38.
  • The driver camera 31 is a camera having an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, and is arranged, for example, on the upper surface of the instrument panel, the meter panel, the rearview mirror, or the steering column.
  • The driver camera 31 images the area around the headrest of the driver's seat and outputs the captured image to the control device 2.
  • A plurality of driver cameras 31 may be provided, and the plurality of driver cameras 31 may perform imaging operations in synchronization.
  • The LED 32 emits near-infrared light toward the area around the headrest of the driver's seat. That is, when the driver is seated in the driver's seat, the near-infrared light emitted from the LED 32 illuminates the area around the driver's face, and the driver camera 31 captures an image of that illuminated area.
  • A plurality of LEDs 32 may be provided, and the plurality of LEDs 32 may perform lighting operations in synchronization.
  • The buckle sensor 33 detects whether the seat belt tongue is inserted in the buckle and outputs the detection result to the control device 2.
  • The door open/close sensor 34 detects the open/closed state of the door and outputs the detection result to the control device 2.
  • The seating sensor 35 detects the pressure distribution of the driver's seat and outputs the detected pressure distribution to the control device 2.
  • The electronic key vehicle-interior detection sensor 36 detects whether the electronic key is present in the vehicle interior and outputs the detection result to the control device 2.
  • The steering wheel touch sensor 37 detects the gripping state of the steering wheel and outputs the detection result to the control device 2.
  • The steering wheel torque sensor 38 detects the steering force applied to the steering wheel and outputs the detection result to the control device 2.
  • The vehicle information recognition device 4 includes a vehicle speed sensor 41, a steering angle sensor 42, an accelerator sensor 43, a brake sensor 44, and a shift sensor 45.
  • The vehicle speed sensor 41 detects the vehicle speed and outputs the detected vehicle speed to the control device 2.
  • The steering angle sensor 42 detects the steering angle of the steering wheel and outputs the detected steering angle to the control device 2.
  • The accelerator sensor 43 detects the amount of operation of the accelerator pedal and outputs the detected amount to the control device 2.
  • The brake sensor 44 detects the amount of operation of the brake pedal and outputs the detected amount to the control device 2.
  • The shift sensor 45 detects the shift position and outputs the detected shift position to the control device 2.
  • The driving environment recognition device 5 includes a front camera 51, a rear camera 52, a front sensor 53, a rear sensor 54, a navigation device 55, and a G sensor 56.
  • The front camera 51 images the area ahead of the vehicle, including the lane markings painted on the road surface, and outputs the captured image to the control device 2.
  • The rear camera 52 images the area behind the vehicle and outputs the captured image to the control device 2.
  • The front sensor 53 is a sensor such as a millimeter-wave sensor, radar, or LiDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging); it detects objects such as a preceding vehicle or a pedestrian ahead of the vehicle and outputs the detection result to the control device 2.
  • The rear sensor 54 is likewise a sensor such as a millimeter-wave sensor, radar, or LiDAR; it detects objects such as a following vehicle or a pedestrian behind the vehicle and outputs the detection result to the control device 2.
  • The driving environment recognition device 5 calculates the relative speed to the preceding vehicle based on the distance detected by the front sensor 53, and the relative speed to the following vehicle based on the distance detected by the rear sensor 54.
  • The navigation device 55 determines the current vehicle position by measuring GPS position coordinates based on GPS signals transmitted from GPS (Global Positioning System) satellites, calculates a route from the determined current position to the destination, and performs navigation processing such as guiding the calculated route.
  • The satellite positioning system is not limited to GPS; various GNSS (Global Navigation Satellite System) constellations such as GLONASS, Galileo, BeiDou, and IRNSS can be adopted.
  • The G sensor 56 detects the three-dimensional acceleration of the vehicle in the longitudinal, lateral, and vertical directions and outputs the detection results to the control device 2.
  • The G sensor 56 may be a sensor included in the navigation device 55 or a sensor installed for other purposes.
  • The HMI 6 includes a display 61, a speaker 62, and a cancel switch 63.
  • The display 61 has a flat panel such as a liquid crystal display or an organic EL (Electro-Luminescence) display, with a touch panel formed on its front surface.
  • When the display 61 detects a touch operation on the touch panel by the driver or a passenger, it performs screen control such as switching screens or displaying icons according to the touch operation.
  • As one item of displayed information, the display 61 shows a status indicating the degree of collapse of the driver's posture in five levels. A lower value indicates a lower degree of collapse, meaning the driver's posture is normal and the driver is in a driving-suitable state; a higher value indicates a higher degree of collapse, meaning the driver's posture is abnormal and the driver is in a driving-unsuitable state.
  • The control device 2 displays the status on the display 61 by outputting a driver state signal indicating the driver's state to the display 61. By seeing the status on the display 61, the driver can grasp his or her own driving posture.
  • The speaker 62 is shared with the navigation device 55, the audio device, and the like.
  • The control device 2 outputs a driver state signal indicating the driver's state to the speaker 62, so that the status is output by voice from the speaker 62.
  • The driver can grasp his or her own driving posture from the status output by voice from the speaker 62.
  • The cancel switch 63 is a switch that temporarily suspends detection of the driver's state. If the cancel switch 63 is operated before a trip, detection of the driver's state is suspended for the one trip immediately following the operation. If the cancel switch 63 is operated during a trip, detection of the driver's state is suspended while the switch is operated, or for a certain time (about several seconds) after it is operated. Thus, for example, by operating the cancel switch 63 in advance when the driver is about to reach for an object, the driver is not erroneously detected as being in a driving-unsuitable state even if his or her posture collapses.
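As a minimal sketch of this cancel behaviour (not code from the publication), the class below suppresses driver state detection for one whole trip when the switch was operated before the trip, and, during a trip, while the switch is held plus a short hold-off afterwards. The text gives the hold-off only as "about several seconds", so the 3-second value is an assumption.

```cpp
#include <chrono>

// Hypothetical sketch of the cancel switch 63 timing; the hold-off duration
// ("about several seconds" in the text) is assumed to be 3 s here.
class CancelSwitchLogic {
public:
    using Clock = std::chrono::steady_clock;

    void onPressedBeforeTrip() { pressedBeforeTrip_ = true; }
    void onTripStart()         { suppressWholeTrip_ = pressedBeforeTrip_; pressedBeforeTrip_ = false; }
    void onPressed()           { held_ = true; }
    void onReleased()          { held_ = false; lastRelease_ = Clock::now(); }

    // Driver-state detection should run only while this returns false.
    bool detectionSuppressed() const {
        if (suppressWholeTrip_ || held_) return true;
        return (Clock::now() - lastRelease_) < std::chrono::seconds(3);  // post-release hold-off
    }

private:
    bool pressedBeforeTrip_ = false;
    bool suppressWholeTrip_ = false;
    bool held_ = false;
    Clock::time_point lastRelease_{};  // epoch default => no initial hold-off
};
```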
  • The control device 2 is configured by a microcomputer having a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and I/O (Input/Output).
  • The microcomputer executes a computer program stored in a non-transitory tangible storage medium, thereby executing processing corresponding to the computer program and controlling the overall operation of the driver state monitoring system 1.
  • "Microcomputer" is synonymous with "processor" here.
  • The non-transitory tangible storage medium may share hardware with other computer resources.
  • The control device 2 can exchange data with the driver state recognition device 3, the vehicle information recognition device 4, the driving environment recognition device 5, and the HMI 6 via a wired connection such as CAN (Controller Area Network, registered trademark) or a wireless connection such as a wireless LAN or Bluetooth (registered trademark).
  • The control device 2 includes an image acquisition unit 21, a first software execution unit 22, a second software execution unit 23, a change-possible-situation determination unit 24, a software execution control unit 25, a face image storage unit 26, and a face image transmission unit 27.
  • These units 21 to 27 correspond to functions executed by the face detection program; that is, the control device 2 performs the functions of the units 21 to 27 by executing the face detection program.
  • The image acquisition unit 21 receives the image captured by the driver camera 31 and acquires from it the driver's face image.
  • The first software execution unit 22 executes, at a predetermined cycle, the unsuitable-driving-state detection software for determining whether the driver is in a driving-suitable state or a driving-unsuitable state. That is, by executing this software at a predetermined cycle, the first software execution unit 22 can determine inattention, eye closure, drowsiness, and the like from the driver's face image based on the degree of eyelid drooping, the degree of pupil opening, the gaze direction, the gaze movement speed, and so on, and can thus determine whether the driver is in a driving-suitable state or a driving-unsuitable state.
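The concrete decision logic is not disclosed beyond the cues listed above, so the sketch below only illustrates how such a threshold-based classifier could look; the feature scales and thresholds are assumptions made purely for illustration.

```cpp
#include <cmath>

// Hypothetical facial features extracted from the driver's face image.
struct FacialFeatures {
    double eyelidDroop;       // 0.0 (fully open) .. 1.0 (closed)
    double pupilOpening;      // relative pupil opening
    double gazeYawDeg;        // gaze direction, degrees from straight ahead
    double gazeSpeedDegPerS;  // gaze movement speed
};

enum class DriverState { Suitable, Unsuitable };

// Minimal threshold-based sketch; the publication only names the cues, so the
// thresholds and combination rule here are illustrative assumptions.
DriverState classifyDriverState(const FacialFeatures& f) {
    const bool eyesClosed   = f.eyelidDroop > 0.8;
    const bool drowsy       = f.eyelidDroop > 0.6 && f.pupilOpening < 0.3;
    const bool lookingAside = std::fabs(f.gazeYawDeg) > 30.0 && f.gazeSpeedDegPerS < 5.0;
    return (eyesClosed || drowsy || lookingAside) ? DriverState::Unsuitable
                                                  : DriverState::Suitable;
}
```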
  • The second software execution unit 23 executes the face authentication software for determining a driver change.
  • The second software execution unit 23 registers the driver's face image used for comparison when executing the face authentication software according to the following patterns.
  • As a first pattern, when the face authentication software is executed for the first time after the ignition has been switched from off to on, the second software execution unit 23 registers the current driver's face image.
  • When the face authentication software is executed after the current driver's face image has been registered, the second software execution unit 23 compares the registered face image with the driver's face image captured when a driver change is determined to be possible, and determines whether they are the same person.
  • If the second software execution unit 23 determines that they are the same person, it continues using the registered face image for the next determination; if it determines that they are not the same person, it discards the registered face image and newly registers the face image captured when the driver change was determined to be possible.
  • As a second pattern, when the second software execution unit 23 receives a driver's face image transmitted from the server after the ignition has been switched from off to on, it registers the received face image.
  • When the face authentication software is executed after registering the face image received from the server, the registered face image is likewise compared with the driver's face image captured when a driver change is determined to be possible, and it is determined whether they are the same person. If they are the same person, the registered face image continues to be used for the next determination; if not, the registered face image is discarded and the face image captured when the driver change was determined to be possible is newly registered.
  • If a driver's face image registered during a past trip exists and the elapsed time from its registration date and time to the current date and time is less than a predetermined period, the second software execution unit 23 continues to use that registered face image.
  • When the face authentication software is subsequently executed, the registered face image is again compared with the driver's face image captured when a driver change is determined to be possible, and it is determined whether they are the same person.
  • If the second software execution unit 23 determines that they are the same person, it continues using the registered face image for the next determination; if it determines that they are not the same person, it discards the registered face image and newly registers the face image captured when the driver change was determined to be possible.
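A compact sketch of this register/compare/re-register behaviour is given below. The class name, the matcher, and its trivial comparison stub are hypothetical (the publication does not specify the face authentication algorithm), and the elapsed-time check for reusing a past registration is omitted for brevity.

```cpp
#include <optional>
#include <string>
#include <utility>

struct FaceImage { std::string data; };

// Placeholder matcher; a real system would run the face authentication
// algorithm here, which the publication does not describe.
bool samePerson(const FaceImage& a, const FaceImage& b) { return a.data == b.data; }

class FaceRegistry {
public:
    // Patterns 1 and 2: register the current driver's image, or one received
    // from the server, when no valid registration exists yet.
    void registerImage(FaceImage img) { registered_ = std::move(img); }
    bool hasRegistration() const { return registered_.has_value(); }

    // Called when a driver change is judged possible: compare, then either
    // keep the registered image or replace it. Returns true on driver change.
    bool authenticate(const FaceImage& current) {
        if (!registered_) { registered_ = current; return false; }
        if (samePerson(*registered_, current)) {
            return false;          // same person: keep the registered image
        }
        registered_ = current;     // different person: discard and re-register
        return true;
    }

private:
    std::optional<FaceImage> registered_;
};
```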
  • The change-possible-situation determination unit 24 determines whether the situation allows a driver change.
  • The change-possible-situation determination unit 24 periodically acquires CAN signals from the various sensors, identifies the behavior of those sensors from the acquired signals, determines from that behavior, as shown in FIGS. 2 and 3, driver behavior that can accompany a driver change, and judges whether a driver change is possible.
  • As the various sensors for determining driver behavior that may accompany a driver change, a vehicle speed sensor that detects the vehicle speed, an accelerator sensor that detects the state of the accelerator pedal, a brake sensor that detects the state of the brake pedal, a buckle sensor that detects the state of the seat belt, a door open/close sensor that detects the open/closed state of the door, a seating sensor that detects whether the driver is seated, an electronic key vehicle-interior detection sensor that detects the presence of the electronic key inside the vehicle, and a sensor that detects operation of the parking brake may be used.
  • Sensors that are not referenced in a given sensor-behavior pattern may be in any state. For example, when the vehicle speed sensor is not referenced, it may be detecting 0 km/h (indicating that the vehicle is stopped) or 50 km/h (indicating that the vehicle is running). That is, if the circumstantial evidence flag is set based on the behavior of the various sensors, the second software execution unit 23 executes the face authentication software regardless of whether the vehicle is running or stopped.
  • When the change-possible-situation determination unit 24 determines that a driver change is possible, it sets the circumstantial evidence flag; when it determines that a driver change is not possible, it does not set the flag.
  • The change-possible-situation determination unit 24 determines that a driver change is possible and sets the circumstantial evidence flag in the following cases (a) to (f). FIGS. 2 and 3 show several examples in which driver behavior that can accompany a driver change is determined from the behavior of the various sensors; the present disclosure is not limited to these examples.
  • For example, when the change-possible-situation determination unit 24 sequentially detects a change from the door-closed state to the door-open state and then a change from the door-open state back to the door-closed state while the seating sensor remains ON, it determines that a driver change is not possible and does not set the circumstantial evidence flag.
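The concrete cases (a) to (f) are defined by FIGS. 2 and 3 and are not reproduced in the text above, so the sketch below uses a single illustrative rule (seat released while the door opened, then reoccupied) together with the counter-example from the text; the rule and all names are assumptions for illustration only.

```cpp
#include <optional>

// Hypothetical snapshot of the CAN signals referenced by the
// change-possible-situation determination unit 24.
struct SensorSnapshot {
    double vehicleSpeedKmh;
    bool   doorOpen;
    bool   seatOccupied;
    bool   seatBeltBuckled;
    bool   parkingBrakeOn;
};

class ChangePossibleSituationDetector {
public:
    // Feed one snapshot per acquisition cycle; returns the circumstantial
    // evidence flag. Only one illustrative case is modelled here.
    bool update(const SensorSnapshot& s) {
        if (prev_) {
            if (prev_->seatOccupied && !s.seatOccupied && s.doorOpen) {
                seatWasVacated_ = true;   // driver appears to have left the seat
            }
            if (seatWasVacated_ && !prev_->seatOccupied && s.seatOccupied) {
                flag_ = true;             // someone sat down again: change possible
                seatWasVacated_ = false;
            }
            // Counter-example from the text: the door opening and closing while
            // the seating sensor stays ON does NOT set the flag.
        }
        prev_ = s;
        return flag_;
    }

    void clear() { flag_ = false; }

private:
    std::optional<SensorSnapshot> prev_;
    bool seatWasVacated_ = false;
    bool flag_ = false;
};
```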
  • The software execution control unit 25 causes the first software execution unit 22 to execute the unsuitable-driving-state detection software at a predetermined cycle.
  • Rather than causing the second software execution unit 23 to execute the face authentication software at a predetermined cycle, the software execution control unit 25 causes it to be executed on the condition that the change-possible-situation determination unit 24 determines that a driver change is possible.
  • The software execution control unit 25 arbitrates the execution of the unsuitable-driving-state detection software and the face authentication software in the following patterns so that the face authentication software runs while the unsuitable-driving-state detection software is not running. As shown in FIG. 4, the software execution control unit 25 executes the unsuitable-driving-state detection software at a cycle of 33 [msec], and one execution of that software takes 28 [msec].
  • When an execution request for the unsuitable-driving-state detection software occurs while the face authentication software is running (Part 1): as shown in FIG. 7, when the software execution control unit 25 determines that an execution request for the unsuitable-driving-state detection software has occurred during execution of the face authentication software, it interrupts the face authentication software and starts executing the unsuitable-driving-state detection software. After the unsuitable-driving-state detection software finishes, it resumes execution of the face authentication software.
  • When an execution request for the unsuitable-driving-state detection software occurs while the face authentication software is running (Part 2): as shown in FIG. 8, when the software execution control unit 25 determines that an execution request for the unsuitable-driving-state detection software has occurred during execution of the face authentication software, it continues execution of the face authentication software and cancels the next execution of the unsuitable-driving-state detection software.
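A simplified, single-threaded sketch of the two arbitration policies is shown below. How the face authentication work is actually sliced or preempted is not specified in the publication, so the pause/resume hooks are assumptions; the timing values follow FIG. 4 as described.

```cpp
// Simplified sketch of the two handling patterns of FIGS. 7 and 8.
enum class Policy {
    InterruptAndResume,        // Part 1: interrupt face auth, run detection, resume
    ContinueAndCancelDetection // Part 2: keep face auth, cancel that detection run
};

struct SoftwareExecutionControl {
    Policy policy;
    bool faceAuthRunning = false;

    // Called every 33 ms when the periodic detection request fires.
    void onDetectionRequest() {
        if (faceAuthRunning) {
            if (policy == Policy::InterruptAndResume) {
                pauseFaceAuth();   // Part 1: interrupt face authentication,
                runDetection();    // run the detection slot (~28 ms),
                resumeFaceAuth();  // then resume face authentication.
            }
            // Part 2: keep running face authentication; this detection
            // request is simply cancelled.
            return;
        }
        runDetection();
    }

    void pauseFaceAuth()  { /* save face-authentication progress (assumed) */ }
    void resumeFaceAuth() { /* restore and continue (assumed) */ }
    void runDetection()   { /* unsuitable-driving-state detection, ~28 ms */ }
};
```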
  • The face image storage unit 26 stores the driver's face image when the circumstantial evidence flag is set by the change-possible-situation determination unit 24.
  • The face image transmission unit 27 transmits the driver's face image to the server when the circumstantial evidence flag is set by the change-possible-situation determination unit 24.
  • The control device 2 starts the driver state monitoring process when the conditions for starting the process are met, for example when the ignition is switched from off to on.
  • The condition for starting the driver state monitoring process is not limited to the ignition being switched from off to on, and may be that the driver has performed a predetermined operation.
  • When the control device 2 starts the driver state monitoring process, it periodically acquires CAN signals from the various sensors (S1), determines the driver's behavior from the behavior of those sensors, and determines whether a driver change is possible (S2, corresponding to the change-possible-situation determination procedure).
  • When the control device 2 determines that a driver change is not possible (S2: NO), it does not set the circumstantial evidence flag and determines whether the conditions for ending the driver state monitoring process are satisfied (S3).
  • When the control device 2 determines that the conditions for ending the driver state monitoring process are satisfied, for example because the ignition has been switched from on to off (S3: YES), it ends the driver state monitoring process.
  • The condition for ending the driver state monitoring process is not limited to the ignition being switched from on to off, and may be that the driver performs a predetermined operation. If the control device 2 determines that the ignition is still on and the ending conditions are not satisfied (S3: NO), it returns to step S1 and repeats step S1 and the subsequent steps.
  • When the control device 2 determines that a driver change is possible (S2: YES), it sets the circumstantial evidence flag (S4), saves the driver's face image (S5), and notifies that the driver's face image has been saved (S6).
  • The control device 2 causes the display 61 to display a notification screen indicating that the face image has been saved, and causes the speaker 62 to output a notification sound indicating that the face image has been saved.
  • The driver can recognize that the face image has been saved from the notification screen displayed on the display 61 and the notification sound output from the speaker 62.
  • The control device 2 then causes the driver's face image to be transmitted to the server (S7) and notifies that the driver's face image has been transmitted to the server (S8).
  • The control device 2 causes the display 61 to display a notification screen indicating that the face image has been transmitted, and causes the speaker 62 to output a notification sound indicating that the face image has been transmitted.
  • From this notification screen and notification sound, the driver can grasp that the face image has been transmitted.
  • The control device 2 then determines whether a condition for starting execution of the face authentication software is satisfied (S9). If the condition is satisfied (S9: YES), it starts execution of the face authentication software (S10, corresponding to the software execution control procedure); if the condition is not satisfied (S9: NO), it waits until the condition is satisfied. Here, the control device 2 distinguishes between (1) the case where the circumstantial evidence flag is set while the unsuitable-driving-state detection software is not being executed and (2) the case where the flag is set while the unsuitable-driving-state detection software is being executed.
  • The control device 2 then evaluates the face authentication result, determines whether the current face authentication result and the previous face authentication result indicate the same person, and thereby determines whether the driver has changed (S11). When the control device 2 determines that the current and previous face authentication results indicate the same person and that the driver has not changed (S11: NO), it checks whether the conditions for ending the driver state monitoring process are satisfied (S3).
  • When the control device 2 determines that the current and previous face authentication results do not indicate the same person and that the driver has changed (S11: YES), it notifies of the driver change (S12). As shown in FIG. 12, the control device 2 causes the display 61 to display a notification screen for the driver change and causes the speaker 62 to output a notification sound for the driver change.
  • From the notification screen displayed on the display 61 and the notification sound output from the speaker 62, the driver can grasp that the system has correctly recognized the driver change.
  • The control device 2 saves the face image of the driver after the change (S13) and notifies that the post-change driver's face image has been saved (S14).
  • The driver's face image saved in this manner is used as the face image registered in the first pattern described above.
  • The control device 2 causes the display 61 to display a notification screen indicating that the face image has been saved after the driver change, and causes the speaker 62 to output a notification sound indicating that the face image has been saved after the driver change.
  • From this notification screen and notification sound, the driver can grasp that the post-change face image has been saved.
  • The control device 2 then causes the post-change driver's face image to be transmitted to the server (S15) and notifies that the post-change driver's face image has been transmitted to the server (S16).
  • The driver's face image transmitted to the server in this way is used as the face image registered in the second pattern described above.
  • The control device 2 causes the display 61 to display a notification screen indicating that the face image has been transmitted after the driver change, and causes the speaker 62 to output a notification sound indicating that the face image has been transmitted after the driver change.
  • From this notification screen and notification sound, the driver can grasp that the post-change face image has been transmitted.
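The overall flow of FIG. 9 (steps S1 to S16) can be compressed into the sketch below; every helper is a hypothetical stand-in for the operation described in the corresponding step, with trivial stub bodies so the snippet is self-contained.

```cpp
// Compressed sketch of the driver state monitoring process of FIG. 9 (S1-S16).
// Helper names and stub bodies are assumptions, not code from the publication.
bool acquireCanSignalsAndJudgeChangePossible() { return true; }  // S1, S2
bool endConditionMet()                         { return false; } // S3 (e.g. ignition off)
void saveFaceImageAndNotify()                  {}                // S5, S6 (FIG. 10)
void sendFaceImageAndNotify()                  {}                // S7, S8 (FIG. 11)
bool faceAuthStartConditionMet()               { return true; }  // S9
bool runFaceAuthAndDetectDriverChange()        { return false; } // S10, S11
void notifyDriverChange()                      {}                // S12 (FIG. 12)
void savePostChangeImageAndNotify()            {}                // S13, S14 (FIG. 13)
void sendPostChangeImageAndNotify()            {}                // S15, S16 (FIG. 14)

void driverStateMonitoringProcess() {
    while (!endConditionMet()) {                          // loop until S3: YES
        if (!acquireCanSignalsAndJudgeChangePossible()) { // S2: NO
            continue;                                     // flag not set, back to S1
        }
        // S4: circumstantial evidence flag set.
        saveFaceImageAndNotify();                         // S5, S6
        sendFaceImageAndNotify();                         // S7, S8
        while (!faceAuthStartConditionMet()) { }          // S9: wait until satisfied
        if (runFaceAuthAndDetectDriverChange()) {         // S10, S11: driver changed
            notifyDriverChange();                         // S12
            savePostChangeImageAndNotify();               // S13, S14
            sendPostChangeImageAndNotify();               // S15, S16
        }
    }
}
```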
  • As described above, the face authentication software is executed on the condition that a driver change is determined to be possible, rather than being executed periodically.
  • By executing the unsuitable-driving-state detection software at a predetermined cycle, it is possible to appropriately determine whether the driver is in a driving-suitable state or a driving-unsuitable state, and by executing the face authentication software only when a driver change is possible, the frequency of running the face authentication software can be reduced.
  • As a result, it is possible to appropriately determine whether the driver is in a driving-suitable state or a driving-unsuitable state while avoiding a steady increase in the amount of computation. That is, by not executing the face authentication software in situations where there is no possibility of a driver change, a constant increase in the amount of computation is avoided.
  • In addition, the face authentication software is run while the unsuitable-driving-state detection software is not running. By executing the face authentication software without affecting the periodic execution of the unsuitable-driving-state detection software, the ability to appropriately determine whether the driver is in a driving-suitable state or a driving-unsuitable state is maintained.
  • The controller and techniques described in this disclosure may be implemented by a dedicated computer provided by configuring a processor and memory programmed to perform one or more functions embodied by a computer program.
  • Alternatively, the controller and techniques described in this disclosure may be implemented by a dedicated computer provided by configuring the processor with one or more dedicated hardware logic circuits.
  • Alternatively, the controller and techniques described in this disclosure may be implemented by one or more dedicated computers configured as a combination of a processor and memory programmed to perform one or more functions and a processor configured with one or more hardware logic circuits.
  • The computer program may also be stored, as instructions executed by a computer, in a computer-readable non-transitory tangible storage medium.

Abstract

This facial authentication device (7) for a driver comprises: an image acquisition unit (21) that acquires a facial image of a driver; a first software execution unit (22) that executes, in a predetermined cycle, unsuitable driving state detection software for determining whether a driver is in a suitable driving state or an unsuitable driving state; a second software execution unit (23) that executes facial authentication software for determining substitution of the driver; a substitutability state determination unit (24) for determining whether it is possible to substitute the driver; and a software execution control unit (25) that, when it is determined by the determination result of the substitutability state determination unit that it is possible to substitute the driver, causes the facial authentication software to be executed.

Description

Driver's face authentication device and face authentication program

Cross-reference to related applications

This application is based on Japanese Patent Application No. 2021-105740 filed on June 25, 2021, the contents of which are incorporated herein by reference.

The present disclosure relates to a driver's face authentication device and a face authentication program.
For example, occupant condition monitoring systems that monitor the condition of occupants in vehicles such as automobiles have been provided. A driver state monitoring system for monitoring the state of the driver includes an imaging unit that captures an image of the area around the headrest of the driver's seat and an illumination unit that illuminates that area. While the driver is seated in the driver's seat, the driver state monitoring system captures an image of the illuminated area around the driver's face with the imaging unit, analyzes the driver's face image included in the captured image, and monitors the driver's state (see, for example, Patent Document 1).

Japanese Patent No. 6372388

The driver state monitoring system determines, for example, inattention, eye closure, and drowsiness from the face image, and based on the result judges whether the driver is in a driving-suitable state or a driving-unsuitable state. When the driver state monitoring system determines that the driver is in a driving-unsuitable state, it activates, for example, a driver emergency response system (hereinafter, EDSS: Emergency Driving Stop System), or disables driving by automated driving or by the advanced driving support system.

The driver state monitoring system periodically (for example, 30 times per second) executes unsuitable-driving-state detection software for determining whether the driver is in a driving-suitable state or a driving-unsuitable state. It is also assumed that the driver state monitoring system periodically executes face authentication software for determining driver changes at a predetermined cycle. However, in a configuration in which both programs are executed periodically, the execution frequency of the two is high, the computational load increases steadily, and a higher-specification microcomputer must be installed.

An object of the present disclosure is to avoid a steady increase in the amount of computation while appropriately determining whether the driver is in a driving-suitable state or a driving-unsuitable state.
According to one aspect of the present disclosure, an image acquisition unit acquires a face image of the driver. A first software execution unit executes, at a predetermined cycle, unsuitable-driving-state detection software for determining whether the driver is in a driving-suitable state or a driving-unsuitable state. A second software execution unit executes face authentication software for determining a driver change. A change-possible-situation determination unit determines whether the situation allows a driver change. A software execution control unit causes the face authentication software to be executed when the change-possible-situation determination unit determines that a driver change is possible.

Rather than executing the face authentication software periodically, it is executed on the condition that a driver change is determined to be possible. By executing the unsuitable-driving-state detection software at a predetermined cycle, it can be appropriately determined whether the driver is in a driving-suitable state or a driving-unsuitable state, and by executing the face authentication software only when a driver change is possible, the frequency of running the face authentication software can be reduced. As a result, a steady increase in the amount of computation is avoided while the driver's state is still appropriately determined. That is, by not executing the face authentication software in situations where there is no possibility of a driver change, a constant increase in the amount of computation is avoided.
The above and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description with reference to the accompanying drawings, in which:

FIG. 1 shows one embodiment and is a functional block diagram of a driver state monitoring system; FIG. 2 is a diagram showing the relationship between sensor behavior and driver behavior; FIG. 3 is a diagram showing the relationship between sensor behavior and driver behavior; FIG. 4 is a timing chart; FIG. 5 is a timing chart; FIG. 6 is a timing chart; FIG. 7 is a timing chart; FIG. 8 is a timing chart; FIG. 9 is a flow chart; FIG. 10 is a diagram showing a notification screen when a face image is saved; FIG. 11 is a diagram showing a notification screen when a face image is transmitted; FIG. 12 is a diagram showing a notification screen when the driver is changed; FIG. 13 is a diagram showing a notification screen when the face image is saved after a driver change; and FIG. 14 is a diagram showing a notification screen when the face image is transmitted after a driver change.
An embodiment will be described below with reference to the drawings.

A vehicle is equipped with a driver state monitoring system including a Driver Status Monitor (registered trademark) (hereinafter, DSM) as an occupant state monitoring system for monitoring the state of the occupants. The driver state monitoring system determines inattention, eye closure, drowsiness, and the like from the face image based on the degree of eyelid drooping, the degree of pupil opening, the gaze direction, the gaze movement speed, and so on, and based on the result judges whether the driver is in a driving-suitable state or a driving-unsuitable state. When the driver state monitoring system determines that the driver is in a driving-unsuitable state, it activates, for example, EDSS or disables driving by automated driving or by the advanced driving support system.
As shown in FIG. 1, the driver state monitoring system 1 includes a control device 2, a driver state recognition device 3, a vehicle information recognition device 4, a driving environment recognition device 5, and an HMI (Human Machine Interface) 6. The driver's face authentication device 7 is configured as part of the driver state monitoring system 1 and includes the control device 2 and the driver state recognition device 3.
The driver state recognition device 3 includes a driver camera 31, an LED 32, a buckle sensor 33, a door open/close sensor 34, a seating sensor 35, an electronic key vehicle-interior detection sensor 36, a steering wheel touch sensor 37, and a steering wheel torque sensor 38. The driver camera 31 is a camera having an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor, and is arranged, for example, on the upper surface of the instrument panel, the meter panel, the rearview mirror, or the steering column. The driver camera 31 images the area around the headrest of the driver's seat and outputs the captured image to the control device 2. A plurality of driver cameras 31 may be provided, and the plurality of driver cameras 31 may perform imaging operations in synchronization.
The LED 32 emits near-infrared light toward the area around the headrest of the driver's seat. That is, when the driver is seated in the driver's seat, the near-infrared light emitted from the LED 32 illuminates the area around the driver's face, and the driver camera 31 captures an image of that illuminated area. A plurality of LEDs 32 may be provided, and the plurality of LEDs 32 may perform lighting operations in synchronization.
The buckle sensor 33 detects whether the seat belt tongue is inserted in the buckle and outputs the detection result to the control device 2. The door open/close sensor 34 detects the open/closed state of the door and outputs the detection result to the control device 2. The seating sensor 35 detects the pressure distribution of the driver's seat and outputs the detected pressure distribution to the control device 2. The electronic key vehicle-interior detection sensor 36 detects whether the electronic key is present in the vehicle interior and outputs the detection result to the control device 2. The steering wheel touch sensor 37 detects the gripping state of the steering wheel and outputs the detection result to the control device 2. The steering wheel torque sensor 38 detects the steering force applied to the steering wheel and outputs the detection result to the control device 2.
The vehicle information recognition device 4 includes a vehicle speed sensor 41, a steering angle sensor 42, an accelerator sensor 43, a brake sensor 44, and a shift sensor 45. The vehicle speed sensor 41 detects the vehicle speed and outputs it to the control device 2. The steering angle sensor 42 detects the steering angle of the steering wheel and outputs it to the control device 2. The accelerator sensor 43 detects the amount of operation of the accelerator pedal and outputs it to the control device 2. The brake sensor 44 detects the amount of operation of the brake pedal and outputs it to the control device 2. The shift sensor 45 detects the shift position and outputs it to the control device 2.
The driving environment recognition device 5 includes a front camera 51, a rear camera 52, a front sensor 53, a rear sensor 54, a navigation device 55, and a G sensor 56. The front camera 51 images the area ahead of the vehicle, including the lane markings painted on the road surface, and outputs the captured image to the control device 2. The rear camera 52 images the area behind the vehicle and outputs the captured image to the control device 2. The front sensor 53 is a sensor such as a millimeter-wave sensor, radar, or LiDAR (Light Detection and Ranging / Laser Imaging Detection and Ranging); it detects objects such as a preceding vehicle or a pedestrian ahead of the vehicle and outputs the detection result to the control device 2. The rear sensor 54 is likewise a sensor such as a millimeter-wave sensor, radar, or LiDAR; it detects objects such as a following vehicle or a pedestrian behind the vehicle and outputs the detection result to the control device 2. The driving environment recognition device 5 calculates the relative speed to the preceding vehicle based on the distance detected by the front sensor 53, and the relative speed to the following vehicle based on the distance detected by the rear sensor 54.
The navigation device 55 determines the current vehicle position by measuring GPS position coordinates based on GPS signals transmitted from GPS (Global Positioning System) satellites, calculates a route from the determined current position to the destination, and performs navigation processing such as guiding the calculated route. The satellite positioning system is not limited to GPS; various GNSS (Global Navigation Satellite System) constellations such as GLONASS, Galileo, BeiDou, and IRNSS can be adopted. The G sensor 56 detects the three-dimensional acceleration of the vehicle in the longitudinal, lateral, and vertical directions and outputs the detection results to the control device 2. The G sensor 56 may be a sensor included in the navigation device 55 or a sensor installed for other purposes.
The HMI 6 includes a display 61, a speaker 62, and a cancel switch 63. The display 61 has a flat panel such as a liquid crystal display or an organic EL (Electro-Luminescence) display, with a touch panel formed on its front surface. When the display 61 detects a touch operation on the touch panel by the driver or a passenger, it performs screen control such as switching screens or displaying icons according to the touch operation.

As one item of displayed information, the display 61 shows a status indicating the degree of collapse of the driver's posture in five levels. A lower value indicates a lower degree of collapse, meaning the driver's posture is normal and the driver is in a driving-suitable state; a higher value indicates a higher degree of collapse, meaning the driver's posture is abnormal and the driver is in a driving-unsuitable state. The control device 2 displays the status on the display 61 by outputting a driver state signal indicating the driver's state to the display 61. By seeing the status on the display 61, the driver can grasp his or her own driving posture.

The speaker 62 is shared with the navigation device 55, the audio device, and the like. The control device 2 outputs a driver state signal indicating the driver's state to the speaker 62, so that the status is output by voice from the speaker 62, allowing the driver to grasp his or her own driving posture.

The cancel switch 63 is a switch that temporarily suspends detection of the driver's state. If the cancel switch 63 is operated before a trip, detection of the driver's state is suspended for the one trip immediately following the operation. If the cancel switch 63 is operated during a trip, detection of the driver's state is suspended while the switch is operated, or for a certain time (about several seconds) after it is operated. Thus, for example, by operating the cancel switch 63 in advance when the driver is about to reach for an object, the driver is not erroneously detected as being in a driving-unsuitable state even if his or her posture collapses.
The control device 2 is configured by a microcomputer having a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and I/O (Input/Output). The microcomputer executes a computer program stored in a non-transitory tangible storage medium, thereby executing processing corresponding to the computer program and controlling the overall operation of the driver state monitoring system 1. "Microcomputer" is synonymous with "processor" here. In the driver state monitoring system 1, the non-transitory tangible storage medium may share hardware with other computer resources.

The control device 2 can exchange data with the driver state recognition device 3, the vehicle information recognition device 4, the driving environment recognition device 5, and the HMI 6 via a wired connection such as CAN (Controller Area Network, registered trademark) or a wireless connection such as a wireless LAN or Bluetooth (registered trademark).
The control device 2 includes an image acquisition unit 21, a first software execution unit 22, a second software execution unit 23, a change-possible situation determination unit 24, a software execution control unit 25, a face image storage unit 26, and a face image transmission unit 27. These units 21 to 27 correspond to functions executed by the face detection program; that is, the control device 2 performs the functions of the units 21 to 27 by executing the face detection program.
The image acquisition unit 21 receives images captured by the driver camera 31 from the driver camera 31 and thereby acquires the face image of the driver captured by the driver camera 31.
The first software execution unit 22 executes, at a predetermined cycle, the software for detecting the unsuitable driving state, which is used to determine whether the driver is in a suitable driving state or an unsuitable driving state. That is, by executing the software for detecting the unsuitable driving state at the predetermined cycle, the first software execution unit 22 becomes able to determine, from the driver's face image, looking aside, eye closure, drowsiness, and the like based on the degree of eyelid drooping, the degree of pupil opening, the gaze direction, the gaze movement speed, and so on, and is thus able to determine whether the driver is in a suitable driving state or an unsuitable driving state.
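To make concrete what one periodic run of the detection software decides, the following Python sketch shows a purely illustrative, threshold-based check over the facial features listed above; the feature names, the thresholds, and the data structure are assumptions and do not reflect the actual detection algorithm.

```python
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    eyelid_droop: float      # 0.0 (fully open) .. 1.0 (fully closed), assumed scale
    pupil_opening: float     # relative pupil diameter, assumed scale
    gaze_yaw_deg: float      # gaze direction, 0 = straight ahead
    gaze_speed_deg_s: float  # gaze movement speed

def is_unsuitable(f: FaceFeatures) -> bool:
    """Hypothetical per-frame check combining eye closure, drowsiness cues,
    and looking aside into a single suitable/unsuitable decision."""
    eyes_closed = f.eyelid_droop > 0.8
    drowsy = f.eyelid_droop > 0.6 and f.pupil_opening < 0.3
    looking_aside = abs(f.gaze_yaw_deg) > 30.0 and f.gaze_speed_deg_s < 5.0
    return eyes_closed or drowsy or looking_aside

# Example: a frame with mostly closed eyes is flagged as unsuitable.
print(is_unsuitable(FaceFeatures(0.85, 0.5, 2.0, 1.0)))  # True
```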
The second software execution unit 23 executes the face authentication software for determining a driver change. The second software execution unit 23 registers the driver's face image that serves as the comparison reference when the face authentication software is executed, according to the following patterns. As a first pattern, when the face authentication software for determining a driver change is executed for the first time after the ignition has been switched from off to on, the second software execution unit 23 registers the face image of the current driver. When the face authentication software is executed after the current driver's face image has been registered, the second software execution unit 23 compares the registered driver's face image with the driver's face image captured when it is determined that a driver change is possible, and determines whether the two show the same person. If it determines that they show the same person, the second software execution unit 23 keeps the registered face image as it is and uses it for the next determination; if it determines that they do not show the same person, it discards the registered face image and newly registers the face image of the driver captured when it was determined that a driver change was possible.
As a second pattern, when the second software execution unit 23 receives a driver's face image transmitted from the server after the ignition has been switched from off to on, it registers the received face image. When the face authentication software is executed after the face image received from the server has been registered, the second software execution unit 23 again compares the registered driver's face image with the driver's face image captured when it is determined that a driver change is possible, and determines whether the two show the same person. If it determines that they show the same person, it keeps the registered face image as it is and uses it for the next determination; if it determines that they do not show the same person, it discards the registered face image and newly registers the face image of the driver captured when it was determined that a driver change was possible.
As a third pattern, if a driver's face image registered during a past ride exists and the period elapsed from its registration date and time to the current date and time is less than a predetermined period, the second software execution unit 23 continues to use that registered face image. When the face authentication software is subsequently executed, the second software execution unit 23 again compares the registered driver's face image with the driver's face image captured when it is determined that a driver change is possible, and determines whether the two show the same person. If it determines that they show the same person, it keeps the registered face image as it is and uses it for the next determination; if it determines that they do not show the same person, it discards the registered face image and newly registers the face image of the driver captured when it was determined that a driver change was possible.
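All three registration patterns share the same compare-then-replace step. The following Python sketch illustrates that step under stated assumptions: the similarity callback, the retention period for the third pattern, and the class interface are placeholders, not the matching method of the disclosure.

```python
import time
from typing import Callable, Optional

class ReferenceFaceRegistry:
    """Illustrative holder for the registered reference face image used by
    the face authentication software (hypothetical API)."""

    RETENTION_SEC = 7 * 24 * 3600  # assumed "predetermined period" for pattern 3

    def __init__(self, same_person: Callable[[bytes, bytes], bool]):
        self.same_person = same_person       # comparison supplied by the caller
        self.reference: Optional[bytes] = None
        self.registered_at: Optional[float] = None

    def register(self, face_image: bytes) -> None:
        # Pattern 1 (current driver) or pattern 2 (image received from the server).
        self.reference = face_image
        self.registered_at = time.time()

    def load_previous(self, face_image: bytes, registered_at: float) -> None:
        """Pattern 3: reuse a past registration only if it is recent enough."""
        if time.time() - registered_at < self.RETENTION_SEC:
            self.reference, self.registered_at = face_image, registered_at

    def check_driver_change(self, current_face: bytes) -> bool:
        """Compare the current face with the reference; on mismatch, discard the
        old reference and register the current face. Returns True on a change."""
        if self.reference is None:           # nothing registered yet
            self.register(current_face)
            return False
        if self.same_person(self.reference, current_face):
            return False                     # keep the existing reference
        self.register(current_face)          # discard and re-register
        return True
```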
The change-possible situation determination unit 24 determines whether a driver change is possible. The change-possible situation determination unit 24 periodically acquires CAN signals from various sensors, identifies the behavior of those sensors based on the acquired CAN signals, determines, as shown in FIGS. 2 and 3, driver behavior that could lead to a driver change from the sensor behavior, and judges whether a driver change is possible. The sensors used to determine driver behavior that could lead to a driver change include a vehicle speed sensor that detects the vehicle speed, an accelerator sensor that detects the state of the accelerator pedal, a brake sensor that detects the state of the brake pedal, a buckle sensor that detects the state of the seat belt, a door open/close sensor that detects the open/closed state of the doors, a seating sensor that detects whether the driver is seated, an in-cabin electronic key detection sensor that detects whether the electronic key is present inside the cabin, a shift sensor that detects the state of the shift position, and the like; driver behavior that could lead to a driver change is determined from one detection result or a combination of several detection results. Other sensors, for example a sensor that detects operation of the parking brake, may also be used. Sensors that are not referenced in a given sensor-behavior pattern may be in any state; for example, when the vehicle speed sensor is not referenced, the vehicle speed sensor may be detecting 0 km/h (indicating that the vehicle is stopped) or 50 km/h (indicating that the vehicle is traveling). That is, if the circumstantial evidence flag is established from the behavior of the sensors, the second software execution unit 23 executes the face authentication software regardless of whether the vehicle is traveling or stopped.
When the change-possible situation determination unit 24 determines that a driver change is possible, it establishes the circumstantial evidence flag; when it determines that a driver change is not possible, it does not establish the circumstantial evidence flag. Specifically, as shown in FIGS. 2 and 3, the change-possible situation determination unit 24 determines that a driver change is possible and establishes the circumstantial evidence flag in the following cases (a) to (f). Note that FIGS. 2 and 3 show only examples of how driver behavior that could lead to a driver change is determined from sensor behavior, and the cases are not limited to these.
(a) When "a change from the door closed state to the door open state", "a change from the seating sensor on state to the seating sensor off state", "a change from the seating sensor off state to the seating sensor on state", and "a change from the door closed state to the door open state" are detected in sequence
(b) When, with the door kept closed (constraint condition), "a change from the seating sensor on state to the seating sensor off state" and "a change from the seating sensor off state to the seating sensor on state" are detected in sequence
(c) When no seating sensor is installed and, with the door kept closed and the buckle sensor kept off (constraint condition), "a change from the shift position P state to the shift position D state" and "no subsequent change from the shift position D state" are detected in sequence
(d) When no seating sensor is installed and "a change from the buckle sensor on state to the buckle sensor off state", "a change from the door closed state to the door open state", "a change from the door open state to the door closed state", and "a change from the buckle sensor off state to the buckle sensor on state" are detected in sequence
(e) When no seating sensor is installed and, with the door kept closed (constraint condition), "a change from the buckle sensor on state to the buckle sensor off state" and "a change from the buckle sensor off state to the buckle sensor on state" are detected in sequence
(f) When neither a seating sensor nor a buckle sensor is installed and "a change from the door closed state to the door open state", "a change from the electronic key being present inside the cabin to being present outside the cabin", "a change from the electronic key being present outside the cabin to being present inside the cabin", and "a change from the door open state to the door closed state" are detected in sequence
On the other hand, when, for example, the change-possible situation determination unit 24 sequentially detects "a change from the door closed state to the door open state" and "a change from the door open state to the door closed state" while the seating sensor remains on, it determines that a driver change is not possible and does not establish the circumstantial evidence flag.
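One possible way to realize the sequence checks illustrated by cases (a) to (f) is a small matcher over sensor-change events, as in the Python sketch below; the event names, the constraint callback, and the matcher class are assumptions used only to illustrate how the circumstantial evidence flag could be established from sensor behavior.

```python
from typing import Callable, Dict, Sequence

# Hypothetical event identifiers derived from CAN-signal changes; further events
# (door, buckle, shift position, electronic key) would be defined the same way.
SEAT_ON_TO_OFF = "seat_on_to_off"
SEAT_OFF_TO_ON = "seat_off_to_on"

class DriverChangePattern:
    """Matches one ordered event sequence while a constraint holds, e.g.
    case (b): seat sensor off then on, with the door kept closed."""

    def __init__(self, sequence: Sequence[str],
                 constraint: Callable[[Dict[str, bool]], bool] = lambda s: True):
        self.sequence = list(sequence)
        self.constraint = constraint
        self.pos = 0

    def feed(self, event: str, sensor_state: Dict[str, bool]) -> bool:
        """Returns True (flag established) when the full sequence has been seen."""
        if not self.constraint(sensor_state):
            self.pos = 0                     # constraint violated: start over
            return False
        if event == self.sequence[self.pos]:
            self.pos += 1
            if self.pos == len(self.sequence):
                self.pos = 0
                return True
        return False

# Case (b): door kept closed while the driver leaves the seat and another sits down.
case_b = DriverChangePattern(
    [SEAT_ON_TO_OFF, SEAT_OFF_TO_ON],
    constraint=lambda s: s.get("door_closed", False),
)
state = {"door_closed": True}
assert case_b.feed(SEAT_ON_TO_OFF, state) is False
assert case_b.feed(SEAT_OFF_TO_ON, state) is True   # circumstantial evidence flag
```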
The software execution control unit 25 causes the first software execution unit 22 to execute the software for detecting the unsuitable driving state at the predetermined cycle. The software execution control unit 25 does not cause the second software execution unit 23 to execute the face authentication software at a predetermined cycle; instead, it causes the face authentication software to be executed on the condition that the change-possible situation determination unit 24 has determined that a driver change is possible.
In this case, the software execution control unit 25 coordinates the execution of the software for detecting the unsuitable driving state and the execution of the face authentication software according to the following patterns so that the face authentication software is executed while the software for detecting the unsuitable driving state is not being executed. As shown in FIG. 4, the software execution control unit 25 executes the software for detecting the unsuitable driving state at a cycle of 33 msec, with one execution of that software taking 28 msec.
(1) When the circumstantial evidence flag is established while the software for detecting the unsuitable driving state is not being executed
As shown in FIG. 5, when the software execution control unit 25 determines that the circumstantial evidence flag has been established while the software for detecting the unsuitable driving state is not being executed, it starts execution of the face authentication software immediately after that determination.
(2) When the circumstantial evidence flag is established while the software for detecting the unsuitable driving state is being executed
As shown in FIG. 6, when the software execution control unit 25 determines that the circumstantial evidence flag has been established while the software for detecting the unsuitable driving state is being executed, it starts execution of the face authentication software after the execution of the software for detecting the unsuitable driving state has finished.
(3) When an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed (part 1)
As shown in FIG. 7, when the software execution control unit 25 determines that an execution request for the software for detecting the unsuitable driving state has occurred while the face authentication software is being executed, it suspends execution of the face authentication software and starts execution of the software for detecting the unsuitable driving state. After the execution of the software for detecting the unsuitable driving state has finished, the software execution control unit 25 resumes execution of the face authentication software.
(4) When an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed (part 2)
As shown in FIG. 8, when the software execution control unit 25 determines that an execution request for the software for detecting the unsuitable driving state has occurred while the face authentication software is being executed, it continues execution of the face authentication software and cancels the next execution of the software for detecting the unsuitable driving state.
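The four coordination cases above can be summarized as a small arbitration routine. The Python sketch below is a simplified model under stated assumptions: the policy enum that selects between cases (3) and (4) and the event-style interface are illustrative choices, not details given by the disclosure.

```python
from enum import Enum, auto

class Policy(Enum):
    INTERRUPT_FACE_AUTH = auto()   # case (3): detection preempts face authentication
    SKIP_NEXT_DETECTION = auto()   # case (4): face auth continues, one cycle is skipped

class SoftwareExecutionControl:
    """Illustrative arbitration between the periodic unsuitable-state detection
    software and the event-driven face authentication software."""

    def __init__(self, policy: Policy):
        self.policy = policy
        self.detection_running = False
        self.face_auth_running = False
        self.face_auth_pending = False

    def on_flag_established(self):
        # Cases (1) and (2): start face auth now, or after detection finishes.
        if self.detection_running:
            self.face_auth_pending = True
        else:
            self.face_auth_running = True

    def on_detection_request(self) -> str:
        # Periodic request (every 33 msec) arriving while face auth may be running.
        if not self.face_auth_running:
            self.detection_running = True
            return "run detection"
        if self.policy is Policy.INTERRUPT_FACE_AUTH:
            self.face_auth_running = False
            self.face_auth_pending = True    # resume after detection ends (case 3)
            self.detection_running = True
            return "suspend face auth, run detection"
        return "continue face auth, skip this detection cycle"   # case 4

    def on_detection_finished(self):
        self.detection_running = False
        if self.face_auth_pending:
            self.face_auth_pending = False
            self.face_auth_running = True    # cases (2) and (3): (re)start face auth
```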
When the circumstantial evidence flag is established by the change-possible situation determination unit 24, the face image storage unit 26 stores the driver's face image. When the circumstantial evidence flag is established by the change-possible situation determination unit 24, the face image transmission unit 27 causes the driver's face image to be transmitted to the server.
Next, the operation of the above configuration will be described with reference to FIGS. 9 to 14.
The control device 2 starts the driver state monitoring process when a start condition of the driver state monitoring process is satisfied, for example when the ignition is switched from off to on. The start condition of the driver state monitoring process is not limited to the ignition being switched from off to on; it may instead be, for example, that the driver has performed a predetermined operation. When the control device 2 starts the driver state monitoring process, it periodically acquires CAN signals from the various sensors (S1), determines the driver's behavior from the behavior of those sensors, and judges whether a driver change is possible (S2, corresponding to a change-possible situation determination procedure).
When the control device 2 determines that a driver change is not possible (S2: NO), it does not establish the circumstantial evidence flag and determines whether an end condition of the driver state monitoring process is satisfied (S3). When the control device 2 determines that the end condition of the driver state monitoring process is satisfied, for example because the ignition has been switched from on to off (S3: YES), it ends the driver state monitoring process. The end condition of the driver state monitoring process is not limited to the ignition being switched from on to off; it may instead be, for example, that the driver has performed a predetermined operation. When the control device 2 determines that the end condition of the driver state monitoring process is not satisfied, for example because the ignition remains on (S3: NO), it returns to step S1 and repeats step S1 and the subsequent steps.
On the other hand, when the control device 2 determines that a driver change is possible (S2: YES), it establishes the circumstantial evidence flag (S4), stores the driver's face image (S5), and notifies the driver that the face image has been stored (S6). As shown in FIG. 10, the control device 2 causes the display 61 to show a notification screen for the storage of the face image and causes the speaker 62 to output a notification sound for the storage of the face image. The driver can grasp that the face image has been stored because the notification screen is shown on the display 61 and the notification sound is output from the speaker 62.
The control device 2 causes the driver's face image to be transmitted to the server (S7) and notifies the driver that the face image has been transmitted to the server (S8). As shown in FIG. 11, the control device 2 causes the display 61 to show a notification screen for the transmission of the face image and causes the speaker 62 to output a notification sound for the transmission of the face image. The driver can grasp that the face image has been transmitted because the notification screen is shown on the display 61 and the notification sound is output from the speaker 62.
The control device 2 determines whether a condition for starting execution of the face authentication software is satisfied (S9). When it determines that the condition is satisfied (S9: YES), it starts execution of the face authentication software (S10, corresponding to a software execution control procedure). When it determines that the condition for starting execution of the face authentication software is not satisfied (S9: NO), it waits until the condition is satisfied. In this case, the control device 2 executes the face authentication software according to the situation, that is, according to case (1) in which the circumstantial evidence flag is established while the software for detecting the unsuitable driving state is not being executed, case (2) in which the circumstantial evidence flag is established while the software for detecting the unsuitable driving state is being executed, case (3) in which an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed (part 1), or case (4) in which an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed (part 2), as described above.
When the execution of the face authentication software has finished, the control device 2 evaluates the face authentication result, determines whether the current face authentication result and the previous face authentication result show the same person, and thereby determines whether the driver has changed (S11). When the control device 2 determines that the current and previous face authentication results show the same person and that the driver has not changed (S11: NO), it determines whether the end condition of the driver state monitoring process is satisfied (S3).
When the control device 2 determines that the current and previous face authentication results do not show the same person and that the driver has changed (S11: YES), it notifies the driver change (S12). As shown in FIG. 12, the control device 2 causes the display 61 to show a notification screen for the driver change and causes the speaker 62 to output a notification sound for the driver change. The driver can grasp that the system has correctly recognized the driver change because the notification screen is shown on the display 61 and the notification sound is output from the speaker 62.
The control device 2 stores the face image of the driver after the change (S13) and notifies the driver that the face image of the driver after the change has been stored (S14). The driver's face image stored in this way is used as the face image registered in the first pattern described above. As shown in FIG. 13, the control device 2 causes the display 61 to show a notification screen for the storage of the face image after the driver change and causes the speaker 62 to output a notification sound for the storage of the face image after the driver change. The driver can grasp that the face image after the change has been stored because the notification screen is shown on the display 61 and the notification sound is output from the speaker 62.
The control device 2 causes the face image of the driver after the change to be transmitted to the server (S15) and notifies the driver that the face image of the driver after the change has been transmitted to the server (S16). The driver's face image transmitted to the server in this way is used as the face image registered in the second pattern described above. As shown in FIG. 14, the control device 2 causes the display 61 to show a notification screen for the transmission of the face image after the driver change and causes the speaker 62 to output a notification sound for the transmission of the face image after the driver change. The driver can grasp that the face image after the change has been transmitted because the notification screen is shown on the display 61 and the notification sound is output from the speaker 62.
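Putting steps S1 to S16 together, the overall flow can be outlined as below; every helper name in this Python sketch (get_can_signals, run_face_auth, and so on) is a placeholder standing in for the corresponding flowchart step and is not an actual interface of the system.

```python
def driver_state_monitoring_loop(ctrl):
    """Illustrative outline of the flow of FIGS. 9 to 14 (S1 to S16); `ctrl` is
    a hypothetical object exposing one method per step of the flowchart."""
    while not ctrl.end_condition():                      # S3
        signals = ctrl.get_can_signals()                 # S1
        if not ctrl.driver_change_possible(signals):     # S2: NO
            continue
        ctrl.set_circumstantial_evidence_flag()          # S4
        ctrl.save_face_image()                           # S5
        ctrl.notify("face image saved")                  # S6
        ctrl.send_face_image()                           # S7
        ctrl.notify("face image sent")                   # S8
        ctrl.wait_until_face_auth_can_start()            # S9
        result = ctrl.run_face_auth()                    # S10
        if ctrl.driver_changed(result):                  # S11: YES
            ctrl.notify("driver change")                 # S12
            ctrl.save_face_image()                       # S13
            ctrl.notify("face image saved")              # S14
            ctrl.send_face_image()                       # S15
            ctrl.notify("face image sent")               # S16
```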
The above description illustrates the case where the various screens are shown on the display 61 and the various sounds are output from the speaker 62 as notifications to the driver; however, only the screens may be shown on the display 61, or only the sounds may be output from the speaker 62.
As described above, according to the present embodiment, the following effects can be obtained.
In the driver's face detection device 7, the face authentication software is not executed periodically; instead, it is executed on the condition that it has been determined that a driver change is possible. By executing the software for detecting the unsuitable driving state at the predetermined cycle, whether the driver is in a suitable driving state or an unsuitable driving state can be determined appropriately, and by executing the face authentication software only on the condition that a driver change is possible, the frequency at which the face authentication software is executed can be reduced. As a result, an increase in the steady amount of computation can be avoided while appropriately determining whether the driver is in a suitable driving state or an unsuitable driving state. That is, by not executing the face authentication software in situations where there is no possibility of a driver change, an increase in the steady amount of computation can be avoided.
The face authentication software is executed while the software for detecting the unsuitable driving state is not being executed. By executing the face authentication software without affecting the periodic execution of the software for detecting the unsuitable driving state, a state in which it can be appropriately determined whether the driver is in a suitable driving state or an unsuitable driving state is maintained.
Although the present disclosure has been described with reference to embodiments, it should be understood that the disclosure is not limited to those embodiments or structures. The present disclosure also encompasses various modifications and variations within an equivalent scope. In addition, various combinations and forms, as well as other combinations and forms including only one element, more elements, or fewer elements, are also within the scope and spirit of the present disclosure.
The control unit and the technique described in the present disclosure may be realized by a dedicated computer provided by configuring a processor and a memory programmed to execute one or more functions embodied by a computer program. Alternatively, the control unit and the technique described in the present disclosure may be realized by a dedicated computer provided by configuring a processor with one or more dedicated hardware logic circuits. Alternatively, the control unit and the technique described in the present disclosure may be realized by one or more dedicated computers configured by a combination of a processor and a memory programmed to execute one or more functions and a processor configured with one or more hardware logic circuits. The computer program may also be stored, as instructions executed by a computer, in a computer-readable non-transitory tangible storage medium.

Claims (10)

1. A driver's face authentication device comprising:
    an image acquisition unit (21) that acquires a face image of a driver;
    a first software execution unit (22) that executes, at a predetermined cycle, software for detecting an unsuitable driving state for determining whether the driver is in a suitable driving state or an unsuitable driving state;
    a second software execution unit (23) that executes face authentication software for determining a driver change;
    a change-possible situation determination unit (24) that determines whether a driver change is possible; and
    a software execution control unit (25) that causes the face authentication software to be executed when it is determined, from the determination result of the change-possible situation determination unit, that a driver change is possible.
2. The driver's face authentication device according to claim 1, wherein the software execution control unit causes the face authentication software to be executed while the software for detecting the unsuitable driving state is not being executed.
3. The driver's face authentication device according to claim 2, wherein the software execution control unit starts execution of the face authentication software when it is identified, while the software for detecting the unsuitable driving state is not being executed, that a driver change is possible.
4. The driver's face authentication device according to claim 2, wherein, when it is identified, while the software for detecting the unsuitable driving state is being executed, that a driver change is possible, the software execution control unit starts execution of the face authentication software after the execution of the software for detecting the unsuitable driving state has finished.
5. The driver's face authentication device according to claim 2, wherein, when an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed, the software execution control unit suspends execution of the face authentication software, starts execution of the software for detecting the unsuitable driving state, and resumes execution of the face authentication software after the execution of the software for detecting the unsuitable driving state has finished.
6. The driver's face authentication device according to claim 2, wherein, when an execution request for the software for detecting the unsuitable driving state occurs while the face authentication software is being executed, the software execution control unit continues execution of the face authentication software and cancels the next execution of the software for detecting the unsuitable driving state.
7. The driver's face authentication device according to any one of claims 1 to 6, wherein the change-possible situation determination unit determines whether a driver change is possible based on a detection result of at least one of a buckle sensor that detects a state of a seat belt, a door open/close sensor that detects an open/closed state of a door, a seating sensor that detects a seated state of the driver, an in-cabin electronic key detection sensor that detects a state in which an electronic key is present inside a vehicle cabin, a steering wheel touch sensor that detects a hand-grip state of a steering wheel, a steering wheel torque sensor that detects a steering force of the steering wheel, a vehicle speed sensor that detects a vehicle speed, a steering angle sensor that detects a steering angle of the steering wheel, an accelerator sensor that detects a state of an accelerator pedal, a brake sensor that detects a state of a brake pedal, and a shift sensor that detects a state of a shift position.
8. The driver's face authentication device according to any one of claims 1 to 7, further comprising a face image storage unit (26) that stores the face image of the driver,
    wherein the face image storage unit stores the face image of the driver when it is determined, from the determination result of the change-possible situation determination unit, that a driver change is possible.
9. The driver's face authentication device according to any one of claims 1 to 8, further comprising a face image transmission unit (27) that transmits the face image of the driver to a server,
    wherein the face image transmission unit transmits the face image of the driver to the server when it is determined, from the determination result of the change-possible situation determination unit, that a driver change is possible.
10. A driver's face authentication program that causes a driver's face authentication device (7), which includes a first software execution unit (22) that executes, at a predetermined cycle, software for detecting an unsuitable driving state for determining whether a driver is in a suitable driving state or an unsuitable driving state, and a second software execution unit (23) that executes face authentication software for determining a driver change, to execute:
    a change-possible situation determination procedure of determining whether a driver change is possible; and
    a software execution control procedure of causing the face authentication software to be executed when it is determined, from the determination result of the change-possible situation determination procedure, that a driver change is possible.
PCT/JP2022/021391 2021-06-25 2022-05-25 Facial authentication device for driver and facial authentication program WO2022270205A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023529722A JPWO2022270205A1 (en) 2021-06-25 2022-05-25

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-105740 2021-06-25
JP2021105740 2021-06-25

Publications (1)

Publication Number Publication Date
WO2022270205A1 true WO2022270205A1 (en) 2022-12-29

Family

ID=84544518

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/021391 WO2022270205A1 (en) 2021-06-25 2022-05-25 Facial authentication device for driver and facial authentication program

Country Status (2)

Country Link
JP (1) JPWO2022270205A1 (en)
WO (1) WO2022270205A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008213585A (en) * 2007-03-01 2008-09-18 Toyota Motor Corp Drunk-driving prevention device
JP2016147617A (en) * 2015-02-13 2016-08-18 矢崎総業株式会社 Vehicular illumination control device
JP2017077817A (en) * 2015-10-21 2017-04-27 いすゞ自動車株式会社 State determination device
JP2017215650A (en) * 2016-05-30 2017-12-07 アイシン精機株式会社 Alarm device


Also Published As

Publication number Publication date
JPWO2022270205A1 (en) 2022-12-29

Similar Documents

Publication Publication Date Title
JP7024806B2 (en) Information presentation device
US11008016B2 (en) Display system, display method, and storage medium
WO2018186127A1 (en) Travel support device
JP6617534B2 (en) Driving assistance device
US10569649B2 (en) In-vehicle control apparatus
CN106463065B (en) Driving incapability state detection device for driver
CN111361552B (en) Automatic driving system
JP6341055B2 (en) In-vehicle control device
US8566017B2 (en) Driving support apparatus for vehicle
WO2016157883A1 (en) Travel control device and travel control method
WO2015198540A1 (en) Device for detecting driving incapacity state of driver
WO2016027412A1 (en) In-vehicle control apparatus
JPWO2018220827A1 (en) Vehicle control system, vehicle control method, and vehicle control program
WO2014148025A1 (en) Travel control device
JP6379720B2 (en) Driver inoperability detection device
JP6662080B2 (en) Driver status judgment device
WO2018056103A1 (en) Vehicle control device, vehicle control method, and moving body
US10906550B2 (en) Vehicle control apparatus
WO2020100585A1 (en) Information processing device, information processing method, and program
US20210039638A1 (en) Driving support apparatus, control method of vehicle, and non-transitory computer-readable storage medium
WO2021039779A1 (en) Vehicle control device
US20210300403A1 (en) Vehicle controller and vehicle control method
WO2022270205A1 (en) Facial authentication device for driver and facial authentication program
CN112406886B (en) Vehicle control device and control method, vehicle, and storage medium
JP2019109659A (en) Travel support system and on-vehicle device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22828124

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023529722

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE