WO2018225176A1 - State determination device and state determination method - Google Patents

State determination device and state determination method

Info

Publication number
WO2018225176A1
WO2018225176A1 (PCT/JP2017/021121)
Authority
WO
WIPO (PCT)
Prior art keywords: driver, image, posture, determination, unit
Prior art date
Application number
PCT/JP2017/021121
Other languages
French (fr)
Japanese (ja)
Inventor
翔悟 甫天
友美 保科
和樹 國廣
洸暉 安部
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to PCT/JP2017/021121 (WO2018225176A1)
Priority to JP2019523262A (JP6960995B2)
Publication of WO2018225176A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/04 Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems

Definitions

  • the present invention relates to a technique for determining the state of a driver who drives a vehicle.
  • Patent Document 1 discloses a driver incapacity detection device that sequentially detects the driver's head, the region above the neck, from captured images of the driver's seat taken by a vehicle-mounted camera while the vehicle is running, and detects that the driver is in an inoperable state, because the driving posture has collapsed, when the head moves out of a predetermined range of the captured image.
  • However, if something other than the driver's head, such as the head of another occupant, the vehicle interior, or the scenery reflected in the captured image, is erroneously detected as the driver's head, there is a problem that a driver who is not in an inoperable state is erroneously detected as being in an inoperable state.
  • The present invention has been made to solve the above problem, and its object is to suppress erroneous detection of the driver's inoperable state and to improve the accuracy of determining the driver's state.
  • The state determination device includes: an image acquisition unit that acquires a captured image of the vehicle interior including the driver; a face detection unit that detects the driver's face position from the captured image acquired by the image acquisition unit; a posture determination unit that compares the driver's face position detected by the face detection unit with a reference point of the driver calculated in advance, and determines that the driver's posture has collapsed when movement of the face position equal to or greater than a first posture-collapse determination amount is detected in a set movement direction; an image comparison unit that, when the posture determination unit determines that the driver's posture has collapsed, compares the image of a first region in the captured image acquired by the image acquisition unit with the image of the first region in a comparison image, and re-determines whether the driver's posture has collapsed; and a state determination unit that determines whether the driver is in an inoperable state based on the result of the re-determination by the image comparison unit.
  • According to the present invention, it is possible to suppress erroneous detection of the driver's inoperable state and to improve the accuracy of determining the driver's state.
  • FIG. 1 is a block diagram illustrating a configuration of a state determination device according to Embodiment 1.
  • FIGS. 2A and 2B are diagrams illustrating an example of the first region set by the image comparison unit of the state determination device according to Embodiment 1.
  • FIGS. 3A and 3B are diagrams illustrating a hardware configuration example of the state determination device according to Embodiment 1.
  • FIG. 4 is a diagram illustrating processing of the face detection unit and the reference point calculation unit of the state determination device according to Embodiment 1.
  • FIG. 5 is a diagram illustrating an example of setting the movement direction of the driver's head by the state determination device according to Embodiment 1.
  • FIGS. 6A, 6B, and 6C are diagrams illustrating processing of the image comparison unit of the state determination device according to Embodiment 1.
  • FIGS. 7A, 7B, and 7C are diagrams illustrating processing of the image comparison unit of the state determination device according to Embodiment 1.
  • FIG. 8 is a flowchart showing the operation of the reference point calculation process of the state determination device according to Embodiment 1.
  • FIG. 9 is a flowchart illustrating the operation of the state determination process of the state determination device according to Embodiment 1.
  • FIGS. 10A, 10B, and 10C are diagrams illustrating determination of the displacement amount of the driver's face by the posture determination unit of the state determination device according to Embodiment 1.
  • FIG. 11 is a block diagram illustrating a configuration of a state determination device according to Embodiment 2.
  • FIGS. 13A and 13B are diagrams illustrating detection of the head center axis by the axis detection unit of the state determination device according to Embodiment 2.
  • FIGS. 14A, 14B, and 14C are diagrams illustrating determination of the inclination of the head center axis by the axis detection unit of the state determination device according to Embodiment 2.
  • FIG. 15 is a flowchart illustrating the operation of the state determination process of the state determination device according to Embodiment 2.
  • FIG. 16 is a block diagram illustrating a configuration of a state determination device according to Embodiment 3.
  • FIGS. 17A and 17B are diagrams illustrating a setting example of the second region set by the contour detection unit of the state determination device according to Embodiment 3.
  • FIG. 18 is a flowchart illustrating the operation of the state determination process of the state determination device according to Embodiment 3.
  • FIG. 19 is a diagram showing a display example based on the determination result of the state determination device according to Embodiments 1 to 3.
  • FIG. 1 is a block diagram illustrating a configuration of a state determination device 100 according to the first embodiment.
  • The state determination device 100 first detects the driver's face and determines whether the driver's posture has collapsed based on the movement amount and movement direction of the detected face.
  • When it is determined that the posture has collapsed, the state determination device 100 compares a first driver's-seat captured image, acquired from the imaging device in the vehicle interior, with a second driver's-seat image captured in advance by the same imaging device while the driver in a normal state was seated in the driver's seat.
  • The state determination device 100 then determines, based on the comparison result between the first driver's-seat captured image and the second driver's-seat image, whether the driver's posture has collapsed and whether the driver is in an inoperable state.
  • Alternatively, when it is determined that the posture has collapsed, the state determination device 100 compares the first driver's-seat captured image with a third driver's-seat image captured in advance by the imaging device in the vehicle interior, showing the driver's seat with no driver seated.
  • The state determination device 100 then determines, based on the comparison result between the first driver's-seat captured image and the third driver's-seat image, whether the driver's posture has collapsed and whether the driver is in an inoperable state.
  • Specifically, the state determination device 100 calculates the degree of coincidence between the first driver's-seat captured image and the second driver's-seat image, or the degree of deviation between the first driver's-seat captured image and the third driver's-seat image. If the calculated degree of coincidence or deviation is small, the state determination device 100 determines that the driver's posture differs from the normal posture, that is, the posture has collapsed, and determines that the driver is incapable of driving. On the other hand, if the calculated degree of coincidence or deviation is large, the state determination device 100 determines that the driver's posture is equivalent to the normal posture, that is, the posture has not collapsed, and that the driver is not in an inoperable state.
  • Suppose the state determination device 100 erroneously detects the head of another occupant as the driver's face and, based on that detection result, determines that the driver's posture has collapsed.
  • Even in that case, the determination is rechecked by comparing the first driver's-seat captured image with the second driver's-seat image, or with the third driver's-seat image, so erroneous detection of the inoperable state can be suppressed.
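As a rough sketch of this cross-check (an illustration only, not the patent's implementation; the pixel-wise scoring functions, tolerance, and threshold are assumptions), the decision could look like:

```python
def similarity(a, b):
    """Fraction of pixels that match between two equal-size grayscale
    images (given as flat lists of values), within a small tolerance."""
    matches = sum(1 for x, y in zip(a, b) if abs(x - y) <= 10)
    return matches / len(a)

def dissimilarity(a, b):
    """Fraction of pixels that differ between two equal-size images."""
    return 1.0 - similarity(a, b)

def recheck_posture_collapse(first_img, reference_img, mode, threshold=0.5):
    """Re-check a suspected posture collapse against a reference image.

    mode == "coincidence": reference shows the driver seated normally;
        a LOW score means the scene no longer matches the normal posture.
    mode == "deviation": reference shows the empty driver's seat;
        a LOW score means the scene resembles an empty seat, i.e. the
        head has left the region.
    """
    if mode == "coincidence":
        score = similarity(first_img, reference_img)
    else:
        score = dissimilarity(first_img, reference_img)
    # A small degree of coincidence or deviation confirms the collapse.
    return score < threshold
```

A real implementation would compare only the first region of each image; the flat-list form here just keeps the decision rule visible.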
  • the state determination device 100 includes an image acquisition unit 101, a face detection unit 102, a reference point calculation unit 103, a posture determination unit 104, an image comparison unit 105, a comparative image storage unit 106, and a state determination unit 107.
  • the camera 200, the vehicle information recognition device 300, the warning device 400, and the vehicle control device 500 are connected to the state determination device 100.
  • the camera 200 is an imaging unit that captures an image of the inside of the vehicle on which the state determination device 100 is mounted.
  • the camera 200 is an infrared camera, for example.
  • the camera 200 is installed at a position where at least the head of the driver seated in the driver's seat can be photographed.
  • The camera 200 may be composed of a single camera or of a plurality of cameras.
  • the vehicle information recognition device 300 is a device that recognizes vehicle information of a vehicle on which the state determination device 100 is mounted.
  • the vehicle information recognition device 300 includes, for example, a vehicle speed sensor and a brake sensor.
  • the vehicle speed sensor is a sensor that acquires the traveling speed of the vehicle.
  • the brake sensor is a sensor that detects an operation amount of a brake pedal of the vehicle.
  • Based on the determination result of the state determination device 100, the warning device 400 generates information for alerting the driver of the vehicle, or information indicating a warning, to be output by voice or by voice and display.
  • The warning device 400 controls the output of the generated alert information or warning information.
  • A speaker, or a speaker and a display, constituting the warning device 400 outputs, or outputs and displays, the alert information or warning information based on this output control.
  • the vehicle control device 500 controls traveling of the vehicle based on the determination result of the state determination device 100.
  • the image acquisition unit 101 acquires a captured image captured by the camera 200.
  • the captured image is an image captured so that at least the head of the driver in the vehicle is reflected.
  • the image acquisition unit 101 outputs the acquired captured image to the face detection unit 102 and the image comparison unit 105.
  • the face detection unit 102 analyzes the captured image acquired by the image acquisition unit 101 to detect the driver's face position, and outputs the detected position to a reference point calculation unit 103 and a posture determination unit 104 described later.
  • The face position is represented by two-dimensional coordinates on the captured image. For example, the face detection unit 102 sets a frame in contact with the contour of the driver's head in the captured image and takes the coordinates of the center point of the set frame as the driver's face position. Details of the processing of the face detection unit 102 will be described later.
  • The method is not limited to this; the face detection unit 102 may, for example, detect both eyes by analyzing the captured image and take the midpoint between the eyes as the face position.
  • the reference point calculation unit 103 calculates the face position in the posture when the driver is driving in a normal state (hereinafter referred to as a reference point) based on the face position acquired from the face detection unit 102.
  • the reference point calculation unit 103 outputs the calculated reference point to the posture determination unit 104. Details of the processing of the reference point calculation unit 103 will be described later.
  • The posture determination unit 104 calculates the movement amount and movement direction of the driver's head from the reference point, based on the reference point and the driver's face position acquired from the face detection unit 102.
  • When the calculated movement amount in the calculated movement direction is equal to or greater than a preset first posture-collapse determination amount, the posture determination unit 104 determines that the driver's posture has collapsed.
  • When the calculated movement amount in the calculated movement direction is less than the preset first posture-collapse determination amount, the posture determination unit 104 determines that the driver's posture has not collapsed.
  • The posture determination unit 104 outputs the determination result to the image comparison unit 105.
  • The posture determination unit 104 calculates the distance between the reference point calculated by the reference point calculation unit 103 and the driver's face position acquired from the face detection unit 102. In Embodiment 1, the calculated distance is used as the movement amount of the driver's head.
  • The posture determination unit 104 divides the captured image into a plurality of regions around the reference point calculated by the reference point calculation unit 103, and determines to which region the driver's face position acquired from the face detection unit 102 belongs. In Embodiment 1, the region obtained as the determination result is used as the movement direction of the head.
  • Alternatively, the posture determination unit 104 may use, as the movement direction of the head, the angle formed between the X axis with the reference point as the origin and the straight line connecting the reference point and the driver's face position acquired from the face detection unit 102. Details of the processing of the posture determination unit 104 will be described later.
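The distance and angle computations described above can be sketched as follows; the split into four 90-degree quadrants and the area names are illustrative assumptions, not taken from the patent:

```python
import math

def movement_amount(face, ref):
    """Euclidean distance between the face position and the reference point,
    used as the movement amount of the driver's head."""
    return math.hypot(face[0] - ref[0], face[1] - ref[1])

def movement_direction(face, ref):
    """Classify the movement direction as one of four areas around the
    reference point (Area1..Area4), using the angle between the X axis
    through the reference point and the displacement vector.
    The quadrant boundaries here are illustrative."""
    angle = math.degrees(math.atan2(face[1] - ref[1], face[0] - ref[0])) % 360
    return "Area%d" % (int(angle // 90) + 1)
```

The angle-based variant and the region-based variant coincide here; a real device could use finer or unequal regions per direction.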
  • When the posture determination unit 104 determines that the driver's posture has collapsed, the image comparison unit 105 compares the image of a first region in the captured image used for that determination (hereinafter referred to as the captured image at the time of posture determination) with the image of the first region in a comparison image captured in advance, and re-determines whether the driver's posture has collapsed.
  • the comparative image is a captured image captured when the driver is seated in the driver's seat in a normal state, or a captured image captured of the seat of the driver's seat where the driver is not seated.
  • The first region is a region determined in consideration of the front-rear adjustment range of the driver's seat, the adjustment range of the seat inclination, and the vertical adjustment range of the headrest; for example, it is a region that includes the entire headrest of the driver's seat.
  • the first region may be determined in consideration of an average seat position of the driver's seat, an average seat height of the driver, or a range imaged when a general driver performs a normal driving operation.
  • the first area is an area set in advance based on the position of the driver's seat and the position of the imaging device for each vehicle type.
  • When the comparison image is a captured image of the driver seated in a normal state, the image comparison unit 105 calculates the degree of coincidence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image.
  • When the comparison image is a captured image of the driver's seat with no driver seated, the image comparison unit 105 calculates the degree of deviation between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image. The image comparison unit 105 re-determines whether the driver's posture has collapsed based on the calculated degree of coincidence or deviation.
  • When the degree of coincidence or the degree of deviation is equal to or greater than a threshold, the image comparison unit 105 determines that the driver's posture has not collapsed; that is, it determines that the face detection unit 102 erroneously detected something other than the driver's face as the face, and that the posture determination unit 104 misjudged the posture based on the erroneous face position and the reference point. On the other hand, when the degree of coincidence or deviation is less than the threshold, the image comparison unit 105 determines that the driver's posture has collapsed, that is, that the determination result of the posture determination unit 104 is correct. The image comparison unit 105 outputs the determination result to the state determination unit 107. Details of the degree of coincidence, the degree of deviation, and their calculation by the image comparison unit 105 will be described later.
  • FIG. 2 is a diagram illustrating an example of the first region used by the image comparison unit 105 of the state determination device 100 according to the first embodiment.
  • FIG. 2A is a diagram showing an example of the first region in a captured image obtained by imaging the driver's seat from the front, and FIG. 2B is a diagram showing an example of the first region in a captured image obtained by imaging the driver's seat from diagonally forward.
  • the first areas Ea and Eb shown in FIGS. 2A and 2B are areas including all the headrests H of the driver's seat.
  • the first areas Ea and Eb are areas in which the seat position of the driver's seat, the seat height of the driver X, and normal driving operation are taken into consideration.
  • the first regions Ea and Eb are regions that include all the headrests H of the driver's seat regardless of whether the seat position of the driver's seat is the foremost position or the last position.
  • the comparison image storage unit 106 stores a comparison image to be referred to when the image comparison unit 105 compares captured images.
  • the comparative image is a captured image obtained by capturing in advance the same region as the first region in the captured image at the time of posture determination.
  • the comparative image is a captured image captured when the driver is seated in the driver's seat in a normal state, or a captured image captured of a driver's seat where the driver is not seated in the driver's seat.
  • the captured image in which the driver is seated in the driver's seat in a normal state is, for example, an image captured when calculating a reference point described later.
  • the comparative image storage unit 106 stores a comparative image for each driver.
  • The state determination unit 107 determines whether the driver is in an inoperable state based on the determination result of the image comparison unit 105. The state determination unit 107 determines that the driver is incapable of driving when the posture-collapse state continues for a certain time or more. More specifically, when the posture-collapse state continues for a time beyond which attention is deemed required (hereinafter referred to as the warning determination time), the state determination unit 107 determines that the driver needs to be alerted (hereinafter, the alert state). When the alert state further continues for a time beyond which the driver is deemed incapable of driving (hereinafter referred to as the inoperable determination time), the state determination unit 107 determines that the driver is in an inoperable state. When it determines that the driver is in the alert state or in the inoperable state, the state determination unit 107 outputs the determination result to the external warning device 400 or the vehicle control device 500.
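A minimal sketch of this escalation logic, assuming the two durations are configurable and a timestamp is supplied with each re-determination result (the class structure, names, and default values are placeholders, not from the patent):

```python
NORMAL, ALERT, INOPERABLE = "normal", "alert", "inoperable"

class StateDeterminer:
    """Escalates normal -> alert -> inoperable as a posture-collapse
    determination persists over time. Times are in seconds."""

    def __init__(self, warning_time=3.0, inoperable_time=10.0):
        self.warning_time = warning_time        # warning determination time
        self.inoperable_time = inoperable_time  # inoperable determination time
        self.collapse_started = None            # when collapse was first seen

    def update(self, collapsed, now):
        """Feed the re-determination result for the frame at time `now`
        and return the current driver state."""
        if not collapsed:
            self.collapse_started = None        # collapse ended: reset
            return NORMAL
        if self.collapse_started is None:
            self.collapse_started = now
        elapsed = now - self.collapse_started
        # Alert after the warning time; inoperable once the alert state
        # has itself persisted for the inoperable determination time.
        if elapsed >= self.warning_time + self.inoperable_time:
            return INOPERABLE
        if elapsed >= self.warning_time:
            return ALERT
        return NORMAL
```

In the device, the ALERT and INOPERABLE results would be forwarded to the warning device 400 or the vehicle control device 500.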
  • FIGS. 3A and 3B are diagrams illustrating a hardware configuration example of the state determination device 100.
  • the functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 in the state determination device 100 are realized by a processing circuit. That is, the state determination apparatus 100 includes a processing circuit for realizing the above functions.
  • The processing circuit may be a processing circuit 100a that is dedicated hardware as shown in FIG. 3A, or a processor 100b that executes a program stored in a memory 100c as shown in FIG. 3B.
  • When the processing circuit is dedicated hardware, the processing circuit 100a corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
  • The functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 may each be realized by a separate processing circuit, or may be realized collectively by a single processing circuit.
  • When the processing circuit is the processor 100b, the function of each unit is realized by software, firmware, or a combination of software and firmware.
  • Software or firmware is described as a program and stored in the memory 100c.
  • The processor 100b reads out and executes the program stored in the memory 100c, thereby realizing the functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107.
  • That is, the state determination device 100 includes the memory 100c for storing programs that, when executed by the processor 100b, result in the execution of each step of the processing described later.
  • It can also be said that these programs cause a computer to execute the procedures or methods of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107.
  • the processor 100b is, for example, a CPU (Central Processing Unit), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
  • the memory 100c may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically EPROM). Further, it may be a magnetic disk such as a hard disk or a flexible disk, or an optical disk such as a mini disk, CD (Compact Disc), or DVD (Digital Versatile Disc).
  • the processing circuit 100a in the state determination device 100 can realize the above-described functions by hardware, software, firmware, or a combination thereof.
  • FIG. 4 is a diagram illustrating processing of the face detection unit 102 and the reference point calculation unit 103 of the state determination device 100 according to the first embodiment.
  • The face detection unit 102 acquires from the vehicle information recognition device 300 the vehicle speed at the time the captured image A was captured, and determines whether the acquired vehicle speed is equal to or higher than a preset threshold, for example 30 km/h.
  • When the face detection unit 102 determines that the vehicle speed is, for example, 30 km/h or higher, it detects the face area B of the driver X from the captured image A.
  • The face detection unit 102 sets a frame C in contact with the contour of the head above the neck of the driver X in the face area B detected from the captured image A, and acquires the center point P of the set frame C.
  • the face detection unit 102 calculates the two-dimensional coordinates (px, py) on the captured image A of the acquired center point P, and acquires it as the face position of the driver X.
  • the face detection unit 102 outputs the calculated two-dimensional coordinates (px, py) of the center point P to the reference point calculation unit 103.
  • the reference point calculation unit 103 records the two-dimensional coordinates (px, py) of the center point P calculated by the face detection unit 102 in a recording area (not shown).
  • When the number of recorded coordinates reaches a preset number of samples, the reference point calculation unit 103 calculates the reference point as the average of the recorded two-dimensional coordinates.
  • the number of samples set in advance is set based on the number of coordinates for an arbitrary fixed time, for example. Specifically, when the imaging speed of the camera 200 is 10 fps, 30 samples corresponding to a fixed time of 3 seconds are set as the number of samples.
  • the reference point calculation unit 103 outputs the calculated reference point to the posture determination unit 104 as the reference point of the driver X.
  • the reference point of the driver X is held in the posture determination unit 104 until the driving by the driver X ends.
  • When the reference point calculation unit 103 determines, by referring to information input from the vehicle information recognition device 300 such as vehicle speed information and brake information, that driving by the driver X has ended, it resets the information recorded in the recording area and the reference point set in the posture determination unit 104.
  • the face detection unit 102 does not calculate the two-dimensional coordinates of the center point when the vehicle speed is less than a preset threshold value or when the driver's face area is not detected from the captured image. Further, the reference point calculation unit 103 does not calculate the reference point when the number of recorded two-dimensional coordinates of the center point is less than a preset number of samples.
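The gated averaging described above can be sketched as follows; the 30 km/h threshold and the 30-sample count follow the example values in the text (3 s of frames at 10 fps), while the class structure and names are assumptions:

```python
SPEED_THRESHOLD_KMH = 30   # example gating threshold from the text
NUM_SAMPLES = 30           # e.g. 3 seconds of frames at 10 fps

class ReferencePointCalculator:
    """Averages recorded face positions into a reference point once enough
    samples are collected; positions are recorded only while the vehicle
    is moving fast enough. A simplified sketch of the described process."""

    def __init__(self):
        self.samples = []

    def record(self, face_pos, vehicle_speed_kmh):
        """Record one detected face position (x, y). Returns the reference
        point once enough samples exist, otherwise None."""
        if vehicle_speed_kmh < SPEED_THRESHOLD_KMH:
            return None   # below threshold: face position is not recorded
        self.samples.append(face_pos)
        if len(self.samples) < NUM_SAMPLES:
            return None   # reference point is not yet calculated
        xs = [p[0] for p in self.samples]
        ys = [p[1] for p in self.samples]
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def reset(self):
        """Called when driving by the current driver ends."""
        self.samples = []
```

The returned point would be held by the posture determination unit until `reset()` clears it at the end of the drive.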
  • the face detection unit 102 monitors the posture determination unit 104 and acquires a face position for posture determination processing when a reference point has already been set.
  • the face detection unit 102 acquires the face position for the posture determination process regardless of the vehicle speed.
  • The face detection unit 102 detects the face region B of the driver X from the captured image A acquired by the image acquisition unit 101 as shown in FIG. 4, sets the frame C in contact with the contour of the head included in the detected face region B, and acquires the center point P of the frame C.
  • the face detection unit 102 acquires the two-dimensional coordinates (px, py) of the acquired center point P on the captured image A as the face position Pa of the driver X.
  • the face detection unit 102 outputs the acquired driver's face position Pa to the posture determination unit 104.
  • The pre-calculated reference point Q is set in the posture determination unit 104, which calculates the movement direction and movement amount of the face position Pa of the driver X with respect to the reference point Q.
  • The posture determination unit 104 determines whether the posture of the driver X has collapsed by determining whether the calculated movement amount of the face position Pa is equal to or greater than the first posture-collapse determination amount preset for the calculated movement direction of the face position Pa.
  • FIG. 5 is a diagram illustrating an example of setting of the movement direction of the driver's head by the state determination device 100 according to Embodiment 1 and a first posture collapse determination amount set in each movement direction.
  • In the example of FIG. 5, four areas, Area1, Area2, Area3, and Area4, are set around the reference point Q. Each area indicates a movement direction of the face position Pa.
  • the posture determination unit 104 specifies that the moving direction of the face position Pa is Area2.
  • For each area, a first posture-collapse determination amount is set: m_thr1 for Area1, m_thr2 for Area2, m_thr3 for Area3, and m_thr4 for Area4. Since the face position Pa is located in Area2, the first posture-collapse determination amount applied to the face position Pa is m_thr2.
  • the posture determination unit 104 calculates the distance m between the two points of the face position Pa and the reference point Q and sets it as the movement amount m of the face position Pa.
  • The posture determination unit 104 compares the calculated movement amount m of the face position Pa with the first posture-collapse determination amount m_thr2 for the face position Pa, and determines that the posture of the driver X has collapsed when the movement amount m is equal to or greater than m_thr2. On the other hand, when the movement amount m is less than m_thr2, the posture determination unit 104 determines that the posture of the driver X has not collapsed.
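The per-area threshold comparison can be sketched as follows; the concrete determination amounts are placeholder pixel values, since the patent gives no numbers:

```python
import math

# Illustrative first posture-collapse determination amounts per movement
# direction (placeholder pixel distances, not values from the patent).
FIRST_COLLAPSE_AMOUNTS = {"Area1": 40, "Area2": 60, "Area3": 40, "Area4": 50}

def posture_collapsed(face, ref, area):
    """Compare the movement amount m = |face - ref| against the first
    posture-collapse determination amount set for the movement direction
    `area` to which the face position belongs."""
    m = math.hypot(face[0] - ref[0], face[1] - ref[1])
    return m >= FIRST_COLLAPSE_AMOUNTS[area]
```

Per-direction thresholds allow, for example, a larger tolerated displacement toward the center console than toward the window.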
  • FIGS. 6 and 7 are diagrams illustrating processing of the image comparison unit 105 of the state determination device 100 according to the first embodiment.
  • FIGS. 6A to 6C show a case where the comparison is based on the degree of coincidence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image.
  • FIG. 6A shows the first area Da of a comparison image captured while the driver X is seated in the driver's seat in a normal state, and FIGS. 6B and 6C show examples of the captured image at the time of posture determination and the first region in that captured image.
  • the image of the first region Da of the comparison image in FIG. 6A and the images of the first regions Ea and Eb of the captured image A at the time of posture determination in FIG. 6B or 6C are images obtained by capturing the same region.
  • the image comparison unit 105 compares the image of the first region Ea of the captured image A with the image of the first region Da of the comparison image, or the image of the first region Eb of the captured image A with the image of the first region Da of the comparison image, and calculates the degree of coincidence between the two images.
  • the image comparison unit 105 calculates the degree of coincidence between the image of the first area Da of the comparison image and the images of the first areas Ea and Eb of the captured image A based on the degree of coincidence of the areas in which the headrest is imaged, that is, the degree of coincidence between the areas Ha and Hb, or between the areas Ha and Hc.
  • the image comparison unit 105 calculates the degree of coincidence of the areas in which the headrest is imaged using at least one of the number of pixels in which the headrest is imaged, the length of the visible contour of the headrest, and the luminance of the image in the first area.
  • when the image comparison unit 105 compares the image of the first area Da of the comparison image of FIG. 6A with the image of the first area Ea of the captured image A of FIG. 6B, it calculates the degree of coincidence between the area Ha in which the headrest is imaged in the image of the first area Da of the comparison image and the area Hb in which the headrest is imaged in the image of the first area Ea of the captured image A.
  • the image comparison unit 105 calculates a high degree of coincidence between the area Ha and the area Hb.
  • when the image comparison unit 105 compares the image of the first area Da of the comparison image of FIG. 6A with the image of the first area Eb of FIG. 6C, it calculates the degree of coincidence between the area Ha in which the headrest is imaged in the image of the first area Da of the comparison image and the area Hc in which the headrest is imaged in the image of the first area Eb of the captured image A.
  • the image comparison unit 105 calculates a low degree of coincidence between the area Ha and the area Hc.
  • the image comparison unit 105 determines that the driver's posture has not collapsed when the calculated degree of coincidence is equal to or greater than a preset threshold value. For example, in the comparison between the image of the first area Da of the comparison image of FIG. 6A and the image of the first area Ea of the captured image A of FIG. 6B, the image comparison unit 105 determines that the posture of the driver X has not collapsed because the degree of coincidence between the area Ha and the area Hb is high. On the other hand, the image comparison unit 105 determines that the driver's posture has collapsed when the calculated degree of coincidence is less than the preset threshold value. For example, in the comparison between the image of the first area Da of the comparison image of FIG. 6A and the image of the first area Eb of the captured image A of FIG. 6C, the image comparison unit 105 determines that the posture of the driver X has collapsed because the degree of coincidence between the area Ha and the area Hc is low.
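  • The coincidence-based re-determination can be sketched as follows, using the headrest pixel count mentioned above as the metric. This is only an illustration under assumptions: the headrest regions are given as already-segmented sets of pixel coordinates (segmentation itself is outside this sketch), the overlap ratio is one plausible definition of the degree of coincidence, and the threshold value is a placeholder.

```python
def coincidence_degree(headrest_ref, headrest_cur):
    """Degree of coincidence between the headrest area of the comparison image
    (Ha) and of the captured image (Hb or Hc), computed here as the overlap
    ratio of the two pixel sets (intersection over union)."""
    ref, cur = set(headrest_ref), set(headrest_cur)
    if not ref | cur:
        return 0.0
    return len(ref & cur) / len(ref | cur)

def redetermine_by_coincidence(headrest_ref, headrest_cur, threshold=0.5):
    """Return True when the posture is re-determined as NOT collapsed:
    a high coincidence means the headrest is hidden the same way as in the
    normal-seating comparison image, so the earlier judgment is cancelled."""
    return coincidence_degree(headrest_ref, headrest_cur) >= threshold
```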
  • FIG. 7 shows a case where determination is made based on the degree of divergence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image.
  • FIG. 7A shows a comparison image stored in the comparison image storage unit 106.
  • FIG. 7B and FIG. 7C show an example of a captured image at the time of posture determination and an image of the first region in the captured image.
  • FIG. 7A shows a first region Db of a comparative image obtained by imaging the seat of the driver's seat where the driver is not seated.
  • the image of the first region Db of the comparison image in FIG. 7A and the images of the first regions Fa and Fb of the captured image A at the time of posture determination in FIG. 7B or 7C are images obtained by capturing the same region.
  • the image comparison unit 105 compares the image of the first region Fa of the captured image A with the image of the first region Db of the comparison image, or the image of the first region Fb of the captured image A with the image of the first region Db of the comparison image, and calculates the degree of divergence between the two images.
  • the image comparison unit 105 calculates the degree of divergence between the image of the first area Db of the comparison image and the images of the first areas Fa and Fb of the captured image A based on the degree of divergence of the areas in which the headrest is imaged, that is, the degree of divergence between the areas Hc and Hd, or between the areas Hc and He.
  • the image comparison unit 105 calculates the degree of divergence of the areas in which the headrest is imaged using at least one of the number of pixels in which the headrest is imaged, the length of the visible contour of the headrest, and the luminance of the image in the first area.
  • when the image comparison unit 105 compares the image of the first area Db of the comparison image of FIG. 7A with the image of the first area Fa of the captured image A of FIG. 7B, it calculates the degree of divergence between the area Hc in which the headrest is imaged in the image of the first area Db of the comparison image and the area Hd in which the headrest is imaged in the image of the first area Fa of the captured image A.
  • the image comparison unit 105 calculates a high degree of divergence between the area Hc and the area Hd.
  • when the image comparison unit 105 compares the image of the first region Db of the comparison image of FIG. 7A with the image of the first region Fb of the captured image A of FIG. 7C, it calculates the degree of divergence between the area Hc in which the headrest is imaged in the image of the first region Db of the comparison image and the area He in which the headrest is imaged in the image of the first region Fb of the captured image A.
  • the image comparison unit 105 calculates a low degree of divergence between the region Hc and the region He.
  • the image comparison unit 105 determines that the driver's posture has not collapsed when the calculated degree of divergence is equal to or greater than a preset threshold value. For example, in the comparison between the image of the first region Db of the comparison image of FIG. 7A and the image of the first region Fa of the captured image A of FIG. 7B, the image comparison unit 105 determines that the posture of the driver X has not collapsed because the degree of divergence between the region Hc and the region Hd is high. On the other hand, the image comparison unit 105 determines that the driver's posture has collapsed when the calculated degree of divergence is less than the preset threshold value. For example, in the comparison between the image of the first area Db of the comparison image of FIG. 7A and the image of the first area Fb of the captured image A of FIG. 7C, the image comparison unit 105 determines that the posture of the driver X has collapsed because the degree of divergence between the area Hc and the area He is low.
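  • Note that the divergence-based decision of FIG. 7 inverts the sense of the threshold test: because the comparison image shows the unoccupied seat, a large divergence (the driver's head occluding the headrest) indicates a normal posture. A minimal sketch, assuming headrest pixel counts as the metric and a placeholder threshold (neither is fixed by the specification):

```python
def divergence_degree(pixels_ref, pixels_cur):
    """Degree of divergence between the headrest area of the empty-seat
    comparison image (Hc) and of the captured image (Hd or He), computed
    here as the normalized difference in headrest pixel counts."""
    if pixels_ref == 0:
        return 0.0
    return abs(pixels_ref - pixels_cur) / pixels_ref

def redetermine_by_divergence(pixels_ref, pixels_cur, threshold=0.5):
    """Return True when the posture is re-determined as NOT collapsed:
    a HIGH divergence from the empty seat means the driver occludes the
    headrest as in normal seating, so the collapse judgment is cancelled."""
    return divergence_degree(pixels_ref, pixels_cur) >= threshold
```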
  • FIG. 8 is a flowchart showing the operation of the reference point calculation process of the state determination apparatus 100 according to the first embodiment.
  • when a captured image is input from the image acquisition unit 101 (step ST1), the face detection unit 102 determines whether the vehicle speed at the time the captured image was captured is equal to or higher than a preset threshold (step ST2). If the vehicle speed is less than the preset threshold value (step ST2; NO), the process ends.
  • in step ST2, when the vehicle speed is equal to or higher than the preset threshold (step ST2; YES), the face detection unit 102 determines whether or not a face region can be detected from the captured image input in step ST1 (step ST3). If the face region cannot be detected (step ST3; NO), the process ends. On the other hand, when the face region can be detected (step ST3; YES), the face detection unit 102 detects the face region from the captured image input in step ST1, acquires the face position from the detected face region, and outputs it to the reference point calculation unit 103 (step ST4).
  • the reference point calculation unit 103 records the face position acquired in step ST4 in a buffer or the like (step ST5).
  • the reference point calculation unit 103 determines whether or not the number of recorded face positions is equal to or greater than a preset threshold value (step ST6). If the number of recorded face positions is less than the preset threshold value (step ST6; NO), the process ends. On the other hand, if the number of recorded face positions is equal to or greater than the preset threshold value (step ST6; YES), the reference point calculation unit 103 calculates a reference point from the recorded face positions (step ST7).
  • the reference point calculation unit 103 outputs the reference point calculated in step ST7 to the posture determination unit 104 (step ST8), and ends the process.
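  • The flow of FIG. 8 (steps ST1 to ST8) can be sketched as one processing cycle of the reference point calculation. The sketch makes assumptions the specification leaves open: the reference point Q is taken as the mean of the buffered face positions (one plausible choice), and the speed threshold and sample count are placeholder values.

```python
def update_reference_point(face_positions, new_face_pos, vehicle_speed,
                           speed_threshold=30.0, min_samples=100):
    """One cycle of FIG. 8: buffer the face position only while the vehicle
    runs at or above the speed threshold, and return the reference point
    (here the mean of the buffered positions) once enough samples exist;
    otherwise return None. face_positions is the persistent buffer (a list,
    mutated in place)."""
    if vehicle_speed < speed_threshold:
        return None                      # step ST2; NO: end
    face_positions.append(new_face_pos)  # step ST5: record the face position
    if len(face_positions) < min_samples:
        return None                      # step ST6; NO: end
    n = len(face_positions)
    qx = sum(p[0] for p in face_positions) / n
    qy = sum(p[1] for p in face_positions) / n
    return (qx, qy)                      # step ST7: reference point Q
```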
  • FIG. 9 is a flowchart showing the operation of the state determination process of the state determination device 100 according to the first embodiment.
  • the image acquisition unit 101 always acquires a captured image and outputs it to the face detection unit 102 and the image comparison unit 105.
  • when a captured image is input (step ST21), the face detection unit 102 determines whether the posture determination unit 104 has determined that the driver's posture has collapsed (step ST22).
  • when it is determined that the driver's posture has collapsed (step ST22; YES), the flowchart proceeds to the process of step ST27.
  • on the other hand, when it is not determined that the driver's posture has collapsed (step ST22; NO), it is determined whether or not a reference point is set in the posture determination unit 104 (step ST23).
  • the determination processing in step ST23 is performed by the face detection unit 102 monitoring the posture determination unit 104.
  • when a reference point is set (step ST23; YES), the face detection unit 102 acquires the driver's face position from the captured image input in step ST21 and outputs it to the posture determination unit 104 (step ST24).
  • the posture determination unit 104 compares the reference point set in advance by the reference point calculation unit 103 with the driver's face position acquired in step ST24, and calculates the movement direction and movement amount of the driver's face position (step ST25). The posture determination unit 104 determines whether or not the driver's posture has collapsed based on the movement direction and movement amount of the driver's face position calculated in step ST25 (step ST26). If the driver's posture has not collapsed (step ST26; NO), the process ends.
  • the image comparison unit 105 acquires an image of the first region in the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST27).
  • the image comparison unit 105 compares the image of the first region of the captured image at the time of posture determination acquired in step ST27 with the image of the first region of the comparison image accumulated in the comparison image accumulation unit 106, and determines whether or not the degree of coincidence or the degree of divergence is equal to or greater than a preset threshold value (step ST28).
  • when the degree of coincidence or the degree of divergence is equal to or greater than the preset threshold value (step ST28; YES), the image comparison unit 105 determines that the posture determination unit 104 has made an erroneous determination. Based on the determination result of the image comparison unit 105, the state determination unit 107 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed, resets the counter (step ST29), and ends the process.
  • on the other hand, when it is determined in step ST28 that the degree of coincidence or the degree of divergence is less than the preset threshold value (step ST28; NO), the flowchart proceeds to the process of step ST30.
  • the state determination unit 107 determines whether or not a counter (not shown) has been activated (step ST30). If the counter has been activated (step ST30; YES), the process proceeds to step ST32. On the other hand, when the counter is not activated (step ST30; NO), the state determination unit 107 starts counting the counter (step ST31), and counts up the counter (step ST32).
  • the state determination unit 107 refers to the count value of the counter and determines whether or not an alert determination time (for example, 3 seconds) has elapsed (step ST33). If the alert determination time has not elapsed (step ST33; NO), the process ends.
  • when the alert determination time has elapsed (step ST33; YES), the state determination unit 107 further refers to the count value of the counter and determines whether or not a driving impossibility determination time (for example, 10 seconds) has elapsed (step ST34).
  • when the driving impossibility determination time has not elapsed (step ST34; NO), the state determination unit 107 determines that the driver needs to be alerted (step ST35). The state determination unit 107 outputs the determination result to the external warning device 400 or the vehicle control device 500, and ends the process.
  • in step ST34, when the driving impossibility determination time has elapsed (step ST34; YES), the state determination unit 107 determines that the driver is in a state of being unable to drive (step ST36). The state determination unit 107 outputs the determination result to the external warning device 400 or the vehicle control device 500, and ends the process.
  • in step ST35 of the flowchart of FIG. 9, if the state determination unit 107 determines that the driver needs to be alerted, the subsequent processing frequency of the image comparison unit 105 may be reduced. If the state determination unit 107 determines that the driver needs to be alerted, it outputs the determination result to the image comparison unit 105. Based on the output determination result, the image comparison unit 105 reduces the frequency of the process of comparing the image of the first region with the comparison image. Further, when the state determination unit 107 outputs a determination result indicating that the driver is incapable of driving to the image comparison unit 105, the image comparison unit 105 may set the frequency of the process of comparing the image of the first region with the comparison image even lower. Thereby, the processing load of the image comparison process can be reduced.
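  • The counter-based escalation of steps ST30 to ST36 can be sketched as a small state machine. This is an illustration only: the 3-second and 10-second values come from the examples in the text, while the per-cycle time step and the string state labels are assumptions.

```python
class StateDeterminationTimer:
    """Sketch of steps ST30-ST36: while posture collapse keeps being
    confirmed, accumulate elapsed time and escalate from 'alert' (3 s) to
    'inoperable' (10 s); a cancelled judgment resets the counter (ST29)."""

    ALERT_TIME = 3.0        # alert determination time (example in the text)
    INOPERABLE_TIME = 10.0  # driving impossibility determination time

    def __init__(self):
        self.elapsed = None  # None means the counter is not activated (ST30)

    def reset(self):
        """Step ST29: erroneous determination detected, reset the counter."""
        self.elapsed = None

    def update(self, dt):
        """Call once per processing cycle while collapse is confirmed;
        dt is the cycle period in seconds. Returns the judged state."""
        self.elapsed = dt if self.elapsed is None else self.elapsed + dt
        if self.elapsed >= self.INOPERABLE_TIME:
            return "inoperable"   # step ST36: driver unable to drive
        if self.elapsed >= self.ALERT_TIME:
            return "alert"        # step ST35: driver needs to be alerted
        return "counting"         # step ST33; NO: keep counting
```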
  • in the above description, a configuration was shown in which the posture determination unit 104 calculates the movement direction of the driver's face position and the movement amount of the face position relative to the reference point, and determines whether or not the driver's posture has collapsed by comparing the calculated movement amount of the face position with the first posture collapse determination amount set for the movement direction of the face position.
  • the posture determination unit 104 may be configured to determine whether or not the driver's posture is broken in consideration of the driver's face orientation in addition to the movement direction and movement amount of the driver's face position.
  • the driver's face orientation is calculated based on the amount of displacement of the driver's face in the left-right direction within a certain time and the amount of displacement of the driver's face in the up-down direction within a certain time.
  • FIG. 10 is a diagram illustrating calculation of the amount of displacement of the driver's face by the posture determination unit 104 of the state determination device 100 according to the first embodiment.
  • FIG. 10A is a diagram defining left and right directions and up and down directions of the driver's face.
  • the Yaw direction is the left-right direction facing the driver X
  • the Pitch direction is the up-down direction facing the driver X.
  • the posture determination unit 104 calculates a displacement amount in the Yaw direction and the Pitch direction with respect to the driver's front view from the captured image acquired by the image acquisition unit 101.
  • FIG. 10B shows a case where the posture determination unit 104 calculates the displacement “Y d ” in the Yaw direction as time elapses.
  • FIG. 10C shows a case where the posture determination unit 104 calculates the displacement “P d ” in the pitch direction with the passage of time.
  • in addition to the comparison between the movement amount of the face position and the first posture collapse determination amount, the posture determination unit 104 compares the displacement amount “Y d ” calculated in FIG. 10B or the displacement amount “P d ” calculated in FIG. 10C with a second posture collapse determination amount, which is a preset threshold value for the displacement amount of the face direction, to determine whether or not the driver's posture has collapsed.
  • the second posture collapse determination amount is configured by at least one of a face direction yaw direction determination amount and a face direction pitch direction determination amount.
  • FIG. 11 is a diagram illustrating an example of setting of the movement direction of the driver's head by the state determination device 100 according to the first embodiment and a second posture collapse determination amount set in each movement direction.
  • a second posture collapse determination amount is set in addition to the first posture collapse determination amount.
  • the first posture collapse determination amount in Area1 is m thr1
  • the face direction Yaw direction determination amount (second posture collapse determination amount) is Y thr1 , and the face direction Pitch direction determination amount (second posture collapse determination amount) is set to P thr1 .
  • the first posture collapse determination amount of the movement amount of the face position in Area3 is set to m thr3
  • the Pitch direction determination amount of the face direction is set to P thr3 .
  • the posture determination unit 104 calculates a distance m between the two points of the face position Pa and the reference point Q and sets it as the movement amount m of the face position Pa. Further, the posture determining unit 104 calculates the displacement amount Y d and the displacement amount P d of the driver's face direction at the face position Pa.
  • when the calculated movement amount m of the face position Pa is equal to or greater than the first posture collapse determination amount m thr2 , the calculated face direction displacement amount Y d is equal to or greater than the Yaw direction determination amount Y thr2 , and the displacement amount P d is equal to or greater than the Pitch direction determination amount P thr2 , the posture determination unit 104 determines that the driver's posture has collapsed.
  • on the other hand, when the movement amount m is less than the first posture collapse determination amount m thr2 , the face direction displacement amount Y d is less than the Yaw direction determination amount Y thr2 , or the displacement amount P d is less than the Pitch direction determination amount P thr2 , the posture determination unit 104 determines that the driver's posture has not collapsed.
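  • The combined first and second determination for a face position in Area 2 reduces to a conjunction of three threshold tests, as sketched below. The numeric values of m thr2 , Y thr2 , and P thr2 are placeholders for illustration; the logic (AND of the three "greater than or equal" conditions, equivalently OR of the three "less than" conditions for the non-collapse case) follows the text.

```python
# Placeholder determination amounts for Area 2 (not values from the patent).
M_THR2 = 60.0   # first posture collapse determination amount m_thr2
Y_THR2 = 20.0   # Yaw direction determination amount Y_thr2 (degrees)
P_THR2 = 15.0   # Pitch direction determination amount P_thr2 (degrees)

def posture_collapsed_area2(m, y_d, p_d):
    """Collapse is reported only when the face-position movement amount m AND
    the face-direction displacements Y_d (Yaw) and P_d (Pitch) all reach
    their determination amounts; if any one is below its threshold, the
    posture is judged not collapsed."""
    return m >= M_THR2 and y_d >= Y_THR2 and p_d >= P_THR2
```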
  • the posture determination unit 104 determines whether or not the driver's posture has collapsed in consideration of the displacement amount of the driver's face direction within a certain time, so that the determination accuracy of the driver's driving state improves.
  • as described above, the state determination device according to the first embodiment includes the image acquisition unit 101 that acquires a captured image of the inside of the vehicle including the driver, and the face detection unit 102 that detects the face position of the driver from the acquired captured image.
  • the state determination device further includes the posture determination unit 104 that compares the detected driver's face position with the driver's reference point calculated in advance, and determines that the driver's posture has collapsed when the driver's face position has moved in a set movement direction by an amount equal to or greater than the first posture collapse determination amount.
  • the state determination device further includes the image comparison unit 105 that, when it is determined that the driver's posture has collapsed, compares the image of the first region in the acquired captured image with the image of the first region in the comparison image to re-determine whether or not the driver's posture has collapsed, and the state determination unit 107 that determines whether or not the driver is incapable of driving based on the result of the re-determination. With this configuration, even if something other than the driver's head, such as the head of another occupant reflected in the captured image, the vehicle interior, or the scenery, is mistakenly detected as the driver's head, the driver can be prevented from being erroneously determined to be incapable of driving. As a result, it is possible to accurately determine whether or not the driver is unable to drive.
  • further, when the posture determination unit 104 detects a displacement of the driver's face direction that is equal to or greater than the second posture collapse determination amount, it determines that the driver's posture has collapsed. With this configuration, the determination accuracy of the driver's state can be improved.
  • further, when the state determination unit 107 determines that the driver needs to be alerted, the image comparison unit 105 reduces the processing frequency for comparing the image of the first region in the captured image with the image of the first region in the comparison image, so that the processing load of the state determination device can be reduced.
  • FIG. 12 is a block diagram showing a configuration of state determination apparatus 100A according to the second embodiment.
  • the state determination device 100A according to the second embodiment is configured by adding an axis detection unit 108 to the state determination device 100 according to the first embodiment shown in FIG.
  • the same or corresponding parts as those of the state determination apparatus 100 according to the first embodiment are denoted by the same reference numerals as those used in the first embodiment, and the description thereof is omitted or simplified.
  • the axis detection unit 108 refers to the determination result of the posture determination unit 104, and if the determination result indicates that the driver's posture has collapsed, detects the center axis of the driver's head based on the captured image at the time of posture determination acquired by the image acquisition unit 101.
  • the axis detector 108 calculates the inclination of the detected head central axis with respect to a preset axis.
  • the axis detection unit 108 determines whether or not the calculated inclination is within a preset threshold range.
  • the axis detection unit 108 cancels the determination that the posture of the driver is broken by the posture determination unit 104 when the calculated inclination is within a preset threshold range.
  • on the other hand, when the calculated inclination is outside the preset threshold range, the axis detection unit 108 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and outputs the result to the image comparison unit 105.
  • FIG. 13 is a diagram illustrating detection of the head center axis by the axis detection unit 108 of the state determination device 100A according to the second embodiment.
  • FIG. 13A shows a case where the driver X is seated in the driver's seat in a normal state.
  • the lateral direction of the vehicle (not shown) in which the driver X is riding, horizontal to the road surface, is the x axis,
  • the longitudinal direction of the vehicle, horizontal to the road surface, is the y axis,
  • and the vertical direction of the vehicle, orthogonal to the x axis and the y axis, is the z axis.
  • FIG. 13B is a view of the head of the driver X as viewed from above.
  • the head center axis R is an axis passing through the center of a circle Xa obtained when the driver X's head is viewed from above.
  • the axis detection unit 108 detects the head center axis R based on the positions of both eyes, the base of the nose, and the position of the top of the nose obtained from the captured image at the time of posture determination acquired by the image acquisition unit 101.
  • the axis detection unit 108 detects the inclination of the head center axis R with respect to the x axis based on the center position of both eyes and the position of the top of the nose.
  • the axis detection unit 108 detects the inclination of the head center axis R with respect to the y axis based on the distance between the top of the nose and the base of the nose.
  • FIG. 14 is a diagram illustrating determination of the inclination of the head center axis by the axis detection unit 108 of the state determination apparatus 100A according to the second embodiment.
  • FIG. 14A is a diagram illustrating the inclination of the head center axis R with respect to the x axis
  • FIG. 14B is a diagram illustrating the inclination of the head center axis R with respect to the y axis.
  • the axis detection unit 108 calculates an angle θ f-x formed by the head center axis R and the x axis.
  • similarly, the axis detection unit 108 calculates an angle θ f-y formed by the head center axis R and the y axis.
  • the axis detection unit 108 determines whether or not the calculated angle θ f-x and angle θ f-y are within the threshold range shown in FIG. 14C (the range from the angle θ thr1 to the angle θ thr2 ). When the angle θ f-x and the angle θ f-y are within the range from the angle θ thr1 to the angle θ thr2 , the axis detection unit 108 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed.
  • on the other hand, when the angle θ f-x or the angle θ f-y is outside the threshold range, the determination by the posture determination unit 104 is maintained.
  • the above-described threshold range is an example and can be set as appropriate. The threshold range of the angle θ f-x and the threshold range of the angle θ f-y may also be set separately.
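  • The angle check of steps ST43 to ST45 can be sketched as follows. The estimate of θ f-x from the eye-center and nose-top points is one plausible reading of FIG. 13 (the specification gives no explicit formula), and the threshold range values are placeholders.

```python
import math

# Placeholder threshold range [theta_thr1, theta_thr2] in degrees.
THETA_THR1, THETA_THR2 = 70.0, 110.0

def head_axis_angle_x(eye_center, nose_top):
    """Estimate theta_f-x, the tilt of the head center axis R against the
    x axis, from the line through the center of both eyes and the top of
    the nose (an assumed construction; see FIG. 13)."""
    dx = nose_top[0] - eye_center[0]
    dy = nose_top[1] - eye_center[1]
    return math.degrees(math.atan2(dy, dx)) % 180.0

def cancel_collapse_judgment(theta_f_x, theta_f_y):
    """Steps ST43-ST45: cancel the posture-collapse judgment when both
    angles lie inside the threshold range; otherwise maintain it."""
    def in_range(t):
        return THETA_THR1 <= t <= THETA_THR2
    return in_range(theta_f_x) and in_range(theta_f_y)
```

  • An upright head gives an angle near 90 degrees against the x axis, which falls inside the assumed range, so the collapse judgment would be cancelled.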
  • the axis detection unit 108 in the state determination device 100A is implemented by the processing circuit 100a illustrated in FIG. 3A, or by the processor 100b that executes a program stored in the memory 100c illustrated in FIG. 3B.
  • FIG. 15 is a flowchart showing the operation of the state determination process of the state determination device 100A according to the second embodiment.
  • the same steps as those of state determination apparatus 100 according to Embodiment 1 are denoted by the same reference numerals as those used in FIG. 9, and the description thereof is omitted or simplified.
  • the image acquisition unit 101 always acquires a captured image and outputs it to the face detection unit 102, the axis detection unit 108, and the image comparison unit 105.
  • the axis detection unit 108 detects the driver's head center axis from the captured image at the time of posture determination acquired by the image acquisition unit 101 (step ST41).
  • the axis detection unit 108 calculates the angle formed between the detected head center axis and the x axis, and the angle formed between the head center axis and the y axis (step ST42).
  • the axis detection unit 108 determines whether or not the calculated angle with the x axis and the angle with the y axis are within a preset threshold range (step ST43). If the calculated angle with the x axis and the angle with the y axis are within the preset threshold range (step ST43; YES), the axis detection unit 108 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed (step ST44), and ends the process.
  • on the other hand, when the calculated angles are outside the preset threshold range (step ST43; NO), the axis detection unit 108 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and notifies the image comparison unit 105 of the determination (step ST45).
  • next, the image comparison unit 105 acquires the image of the first region in the captured image at the time of posture determination acquired by the image acquisition unit 101 (step ST27). Thereafter, the processing from step ST28 onward is performed.
  • as described above, according to the second embodiment, the state determination device includes the axis detection unit 108 that, when the posture determination unit 104 determines that the driver's posture has collapsed, detects the inclination of the center axis of the driver's head from the acquired captured image, and cancels the determination that the driver's posture has collapsed when the detected inclination of the head center axis is within a preset threshold range.
  • this makes it possible to prevent a driver who is driving in a posture different from the posture in a normal state from being erroneously determined to be in a state requiring caution or a state of being unable to drive. Thereby, the accuracy of the driver state determination can be improved.
  • for example, erroneous determination of driving with the driver's head moved toward the front of the vehicle, or driving while the driver peers toward the front of the vehicle, as a state requiring caution or a state of being unable to drive can be suppressed.
  • FIG. 16 is a block diagram showing a configuration of state determination apparatus 100B according to the third embodiment.
  • the state determination device 100B according to Embodiment 3 is configured by adding a contour detection unit 109 to the state determination device 100 according to Embodiment 1 shown in FIG.
  • the same or corresponding parts as those of the state determination apparatus 100 according to the first embodiment are denoted by the same reference numerals as those used in the first embodiment, and the description thereof is omitted or simplified.
  • the contour detection unit 109 refers to the determination result of the posture determination unit 104, and when the determination result indicates that the driver's posture has collapsed, acquires the image of the second region in the captured image at the time of posture determination acquired by the image acquisition unit 101.
  • the second region is a region that includes the whole headrest of the driver's seat. Further, the second region is a region that includes the bases of the driver's arms when the driver is seated in the driver's seat in a normal state, and within which both of the driver's elbows fall when the driver spreads both arms.
  • FIG. 17 is a diagram illustrating an example of the second region acquired by the contour detection unit 109 of the state determination device 100B according to the third embodiment.
  • FIG. 17A shows an example of the second area set in the captured image obtained by imaging the driver's seat from the front
  • FIG. 17B shows an example of the second area acquired from the captured image obtained by imaging the driver's seat from diagonally forward.
  • the second areas Ec and Ed shown in FIGS. 17A and 17B are areas including all the headrests H of the driver's seat.
  • the second regions Ec and Ed are regions in consideration of the position of the base Xb of the arm of the driver X and the positions of both elbows Xc when the driver X spreads both arms.
  • the second areas Ec and Ed are areas in which the seat position of the driver's seat, the seat height of the driver X, and normal driving operation are taken into consideration.
  • the second areas Ec and Ed are areas that include the whole headrest H of the driver's seat regardless of whether the seat position of the driver's seat is at the frontmost position or the rearmost position.
  • the contour detection unit 109 performs edge detection on the acquired image of the second region and detects the contour of the driver.
  • when a predefined triangular shape exists in the contour around the driver's neck among the detected contours of the driver, the contour detection unit 109 determines that the driver is performing armrest driving.
  • the contour detection unit 109 cancels the determination that the posture of the driver is broken by the posture determination unit 104 when it is determined that the driver is performing the armrest driving.
  • the contour detecting unit 109 maintains the determination that the posture of the driver is broken by the posture determining unit 104 when the contour around the neck of the driver does not exist, and the image comparing unit 105. Output to.
  • For example, the contour detection unit 109 performs edge detection on the images of the second regions Ec and Ed shown in FIGS. 17A and 17B and, upon detecting the contour of the driver X, determines that the triangular shapes Sa and Sb exist in the contour around the neck. The contour detection unit 109 therefore determines that the driver X is performing armrest driving in the captured images shown in FIGS. 17A and 17B.
  • The contour detection unit 109 of the state determination device 100B is implemented by the processing circuit 100a illustrated in FIG. 3A, or by the processor 100b executing a program stored in the memory 100c illustrated in FIG. 3B.
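The triangular-contour test around the driver's neck, used above to recognize armrest driving, can be sketched as follows. This is only an illustration: the patent does not specify a shape-matching algorithm, so a simple Ramer-Douglas-Peucker polyline simplification is used here to count the corners of a closed contour; the function names and the `epsilon` value are assumptions.

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    start, end = points[0], points[-1]

    def dist(p):
        # Perpendicular distance from p to the chord start-end.
        (x1, y1), (x2, y2), (x0, y0) = start, end, p
        den = math.hypot(x2 - x1, y2 - y1)
        if den == 0.0:  # degenerate chord (start == end on a closed loop)
            return math.hypot(x0 - x1, y0 - y1)
        return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / den

    idx, far = max(enumerate(points[1:-1], start=1), key=lambda t: dist(t[1]))
    if dist(far) > epsilon:
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [start, end]

def looks_triangular(closed_contour, epsilon=3.0):
    """True if the closed contour simplifies to roughly three corners."""
    simplified = rdp(list(closed_contour) + [closed_contour[0]], epsilon)
    # On a closed loop the first and last simplified points coincide.
    return len(simplified) - 1 == 3
```

In a real implementation the contour points would come from edge detection on the second-region image (e.g., an OpenCV pipeline), and the tolerance would be tuned to the camera geometry.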
  • FIG. 18 is a flowchart showing the operation of the state determination process of the state determination device 100B according to the third embodiment.
  • In FIG. 18, the same steps as those of the state determination device 100 according to Embodiment 1 are denoted by the same reference numerals as those used in FIG. 9, and their description is omitted or simplified.
  • the image acquisition unit 101 always acquires a captured image and outputs it to the face detection unit 102, the contour detection unit 109, and the image comparison unit 105.
  • When the posture determination unit 104 determines in step ST26 that the driver's posture has collapsed (step ST26; YES), the contour detection unit 109 acquires the image of the second region in the captured image at the time of posture determination acquired by the image acquisition unit 101 (step ST51). The contour detection unit 109 detects the contour of the driver from the image of the second region acquired in step ST51 (step ST52).
  • The contour detection unit 109 determines whether or not, among the contours of the driver detected in step ST52, the contour around the driver's neck contains a shape that matches or resembles the preset triangular shape (step ST53). If a shape matching or resembling the preset triangular shape exists (step ST53; YES), the contour detection unit 109 determines that the driver is performing armrest driving (step ST54). The contour detection unit 109 then cancels the determination by the posture determination unit 104 that the driver's posture has collapsed (step ST55), and the process ends.
  • If no shape matching or resembling the preset triangular shape exists in step ST53 (step ST53; NO), the contour detection unit 109 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and notifies the image comparison unit 105 of that determination (step ST56). Based on the determination notified in step ST56, the image comparison unit 105 acquires the image of the first region in the captured image at the time of posture determination acquired by the image acquisition unit 101 (step ST27). Thereafter, the processing from step ST28 onward is performed.
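The branch structure of steps ST51 through ST56 can be restated compactly as below. The function and its string labels are illustrative restatements of the flowchart, not part of the patent:

```python
def recheck_posture_collapse(collapse_detected, neck_contour_is_triangular):
    """Sketch of steps ST51-ST56: if posture collapse was detected (ST26; YES)
    but the contour around the neck contains the preset triangular shape
    (ST53; YES), treat it as armrest driving and cancel the determination
    (ST54, ST55); otherwise maintain the determination and hand off to the
    image comparison unit (ST56 -> ST27)."""
    if not collapse_detected:                    # ST26; NO
        return "no_collapse"
    if neck_contour_is_triangular:               # ST53; YES
        return "canceled_armrest_driving"        # ST54, ST55
    return "maintained_notify_image_comparison"  # ST56 -> ST27
```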
  • As described above, according to Embodiment 3, when the posture determination unit 104 determines that the driver's posture has collapsed, the contour detection unit 109 detects the contour of the driver from the acquired image of the second region in the captured image at the time of posture determination, and cancels the determination that the driver's posture has collapsed when the detected contour includes the triangular shape. This prevents a driver who is performing armrest driving with the face position shifted from being erroneously determined to be in a state requiring alerting or in an inoperable state. The accuracy of determining the driver's state can thereby be improved.
  • In Embodiment 3, the configuration in which the contour detection unit 109 is added to the state determination device 100 described in Embodiment 1 has been described; however, the contour detection unit 109 may similarly be added to the state determination device 100A described in Embodiment 2.
  • the contour detection unit 109 may perform processing in parallel with the axis detection unit 108 or may execute processing at a stage subsequent to the processing of the axis detection unit 108.
  • FIG. 19 is a diagram illustrating an example of notifying the driver of the determination results of the state determination devices 100, 100A, and 100B according to the first to third embodiments.
  • The display 401 shows the determination result 402 of the state determination devices 100, 100A, and 100B, and a button 403 for canceling the determination result.
  • The determination result 402 reads, for example, "It has been determined that you are unable to drive."
  • The button 403 reads, for example, "Reset to normal state."
  • The driver refers to the determination result 402 displayed on the display 401. If the driver is not actually in an inoperable state, the driver can make the state determination devices 100, 100A, and 100B cancel the determination result by pressing the button 403.
  • Note that the present invention allows free combination of the embodiments, modification of any component of each embodiment, or omission of any component of each embodiment within the scope of the invention.
  • As described above, the state determination device according to the present invention is suitable for a driver monitoring system or the like that requires improved determination accuracy, and for determining the driving state based on changes in the driver's posture.
  • 100, 100A, 100B: state determination device; 101: image acquisition unit; 102: face detection unit; 103: reference point calculation unit; 104: posture determination unit; 105: image comparison unit; 106: comparison image storage unit; 107: state determination unit; 108: axis detection unit; 109: contour detection unit.

Abstract

This state determination device is provided with: a face detection unit (102) which detects the position of a driver's face from a captured image of the interior of a vehicle; a posture determination unit (104) which compares the detected position of the driver's face with a reference point calculated in advance for the driver, and determines that the driver's posture has collapsed if the movement of the driver's face position in a preset movement direction is equal to or greater than a first posture collapse determination amount; an image comparison unit (105) which, if it is determined that the driver's posture has collapsed, compares the image of a first region in the captured image with the image of the corresponding first region in a comparison image to re-determine whether or not the driver's posture has collapsed; and a state determination unit (107) which determines, on the basis of the re-determination result, whether or not the driver is in a state in which the driver is incapable of driving.

Description

State determination device and state determination method
 The present invention relates to a technique for determining the state of a driver who drives a vehicle.
 Conventionally, there are techniques that detect a collapse of the driver's driving posture from a captured image of the driver driving a vehicle and determine whether the driver is in a state in which he or she cannot drive (hereinafter referred to as an inoperable state). For example, Patent Document 1 discloses a driver inoperability detection device that sequentially detects the driver's head above the neck from a captured image of the driver's seat captured by a camera mounted on the vehicle, and detects that the driver is in an inoperable state, on the grounds that the driving posture has collapsed, when the head detected while the vehicle is traveling is outside a predetermined range of the captured image.
Patent Document 1: JP 2016-27452 A
 In the driver inoperability detection device described in Patent Document 1, if something other than the driver's head reflected in the captured image, such as the head of another occupant, the vehicle interior, or scenery, is erroneously detected as the driver's head, there is a problem that a driver who is not in an inoperable state is erroneously detected as being in an inoperable state.
 The present invention has been made to solve the above problem, and an object thereof is to suppress erroneous detection of the driver's inoperable state and to improve the accuracy of determining the driver's state.
 The state determination device according to the present invention includes: an image acquisition unit that acquires a captured image of the vehicle interior including the driver; a face detection unit that detects the driver's face position from the captured image acquired by the image acquisition unit; a posture determination unit that compares the driver's face position detected by the face detection unit with a reference point calculated in advance for the driver, and determines that the driver's posture has collapsed when it detects a movement of the driver's face position, in a preset movement direction, equal to or greater than a first posture collapse determination amount; an image comparison unit that, when the posture determination unit determines that the driver's posture has collapsed, compares the image of a first region in the captured image acquired by the image acquisition unit with the image of the first region in a comparison image and re-determines whether or not the driver's posture has collapsed; and a state determination unit that determines whether or not the driver is in an inoperable state based on the result of the re-determination by the image comparison unit.
 According to the present invention, it is possible to suppress erroneous detection of the driver's inoperable state and to improve the accuracy of determining the driver's state.
FIG. 1 is a block diagram illustrating the configuration of the state determination device according to Embodiment 1.
FIGS. 2A and 2B are diagrams illustrating an example of the first region set by the image comparison unit of the state determination device according to Embodiment 1.
FIGS. 3A and 3B are diagrams illustrating hardware configuration examples of the state determination device according to Embodiment 1.
FIG. 4 is a diagram illustrating the processing of the face detection unit and the reference point calculation unit of the state determination device according to Embodiment 1.
FIG. 5 is a diagram illustrating an example of the setting of the movement directions of the driver's head by the state determination device of Embodiment 1 and the first posture collapse determination amount set for each movement direction.
FIGS. 6A, 6B, and 6C are diagrams illustrating the processing of the image comparison unit of the state determination device according to Embodiment 1.
FIGS. 7A, 7B, and 7C are diagrams illustrating the processing of the image comparison unit of the state determination device according to Embodiment 1.
FIG. 8 is a flowchart showing the operation of the reference point calculation process of the state determination device according to Embodiment 1.
FIG. 9 is a flowchart showing the operation of the state determination process of the state determination device according to Embodiment 1.
FIGS. 10A, 10B, and 10C are diagrams illustrating the determination of the displacement amount of the driver's face direction by the posture determination unit of the state determination device according to Embodiment 1.
FIG. 11 is a diagram illustrating an example of the setting of the movement directions of the driver's head by the state determination device of Embodiment 1 and the second posture collapse determination amount set for each movement direction.
FIG. 12 is a block diagram illustrating the configuration of the state determination device according to Embodiment 2.
FIGS. 13A and 13B are diagrams illustrating the detection of the head center axis by the axis detection unit of the state determination device according to Embodiment 2.
FIGS. 14A, 14B, and 14C are diagrams illustrating the determination of the inclination of the head center axis by the axis detection unit of the state determination device according to Embodiment 2.
FIG. 15 is a flowchart showing the operation of the state determination process of the state determination device according to Embodiment 2.
FIG. 16 is a block diagram illustrating the configuration of the state determination device according to Embodiment 3.
FIGS. 17A and 17B are diagrams illustrating a setting example of the second region set by the contour detection unit of the state determination device according to Embodiment 3.
FIG. 18 is a flowchart showing the operation of the state determination process of the state determination device according to Embodiment 3.
FIG. 19 is a diagram showing a display example based on the determination results of the state determination devices according to Embodiments 1 to 3.
Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram illustrating a configuration of a state determination device 100 according to the first embodiment.
The state determination device 100 first detects the driver's face and determines, based on the amount and direction of movement of the detected face, whether or not the driver's posture has collapsed. When it determines that the posture has collapsed, the state determination device 100 compares a first captured image of the driver's seat, acquired from the imaging device in the vehicle interior at that time, with a second image of the driver's seat captured in advance by the imaging device in the vehicle interior, showing the driver seated in the driver's seat in a normal state. Based on the comparison result between the first captured image of the driver's seat and the second image of the driver's seat, the state determination device 100 determines whether the driver's posture has collapsed and whether or not the driver is in an inoperable state.
Alternatively, when it determines that the posture has collapsed, the state determination device 100 compares the first captured image of the driver's seat, acquired from the imaging device in the vehicle interior at that time, with a third image of the driver's seat captured in advance by the imaging device in the vehicle interior, showing the driver's seat with no driver seated. Based on the comparison result between the first captured image of the driver's seat and the third image of the driver's seat, the state determination device 100 determines whether the driver's posture has collapsed and whether or not the driver is in an inoperable state.
Although the details will be described later, the state determination device 100 calculates the degree of coincidence between the first captured image of the driver's seat and the second image of the driver's seat. Alternatively, the state determination device 100 calculates the degree of divergence between the first captured image of the driver's seat and the third image of the driver's seat.
If the calculated degree of coincidence or degree of divergence is small, the state determination device 100 determines that the driver's posture differs from the normal posture, that is, that the posture has collapsed, and determines that the driver is in an inoperable state.
On the other hand, if the calculated degree of coincidence or degree of divergence is large, the state determination device 100 determines that the driver's posture is equivalent to the normal posture, that is, that the posture has not collapsed, and determines that the driver is not in an inoperable state.
Thus, even when the state determination device 100 erroneously detects, for example, the head of another occupant as the driver's face and determines based on that detection result that the driver's posture has collapsed, the determination can be checked by comparing the first captured image of the driver's seat with the second image of the driver's seat, or by comparing the first captured image of the driver's seat with the third image of the driver's seat. Erroneous detection of an inoperable state can thereby be suppressed.
The state determination device 100 includes an image acquisition unit 101, a face detection unit 102, a reference point calculation unit 103, a posture determination unit 104, an image comparison unit 105, a comparison image storage unit 106, and a state determination unit 107.
A camera 200, a vehicle information recognition device 300, a warning device 400, and a vehicle control device 500 are connected to the state determination device 100.
The camera 200 is imaging means that captures the interior of the vehicle in which the state determination device 100 is mounted. The camera 200 is, for example, an infrared camera. The camera 200 is installed at a position from which at least the head of the driver seated in the driver's seat can be photographed. The camera 200 consists of one camera or a plurality of cameras.
The vehicle information recognition device 300 is a device that recognizes vehicle information of a vehicle on which the state determination device 100 is mounted. The vehicle information recognition device 300 includes, for example, a vehicle speed sensor and a brake sensor. The vehicle speed sensor is a sensor that acquires the traveling speed of the vehicle. The brake sensor is a sensor that detects an operation amount of a brake pedal of the vehicle.
Based on the determination result of the state determination device 100, the warning device 400 generates information indicating an alert or information indicating a warning to be presented to the driver of the vehicle by sound, or by sound and display. The warning device 400 controls the output of the generated alert or warning information. Based on this output control, the speaker, or the speaker and display, constituting the warning device 400 output the alert or warning information by sound, or by sound and display.
The vehicle control device 500 controls traveling of the vehicle based on the determination result of the state determination device 100.
Each configuration of the state determination device 100 will be described.
The image acquisition unit 101 acquires a captured image captured by the camera 200. The captured image is an image captured so that at least the head of the driver in the vehicle is reflected. The image acquisition unit 101 outputs the acquired captured image to the face detection unit 102 and the image comparison unit 105.
The face detection unit 102 analyzes the captured image acquired by the image acquisition unit 101 to detect the driver's face position, and outputs it to the reference point calculation unit 103 and the posture determination unit 104 described later. The face position is represented by two-dimensional coordinates on the captured image. For example, the face detection unit 102 sets a frame circumscribing the contour of the driver's head based on the captured image, and takes the coordinates of the center point of the set frame as the driver's face position. Details of the processing of the face detection unit 102 will be described later. The method is not limited to this; for example, the face detection unit 102 may analyze the captured image to detect both eyes and take the midpoint between the eyes as the face position.
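The two face-position computations just described (center of the frame circumscribing the head, or alternatively the midpoint between the two eyes) amount to simple coordinate arithmetic. The sketch below assumes image coordinates and hypothetical function names; it is not the patented implementation:

```python
def face_position_from_head_frame(x_min, y_min, x_max, y_max):
    """Face position = center of the frame circumscribing the head contour,
    as two-dimensional coordinates on the captured image."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def face_position_from_eyes(left_eye, right_eye):
    """Alternative mentioned in the text: midpoint between the two eyes."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)
```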
 The reference point calculation unit 103 calculates, based on the face positions acquired from the face detection unit 102, the face position in the posture in which the driver is driving in a normal state (hereinafter referred to as the reference point). The reference point calculation unit 103 outputs the calculated reference point to the posture determination unit 104. Details of the processing of the reference point calculation unit 103 will be described later.
 The posture determination unit 104 calculates the amount and direction of movement of the driver's head from the reference point, based on the reference point and the driver's face position acquired from the face detection unit 102. When the calculated movement amount is equal to or greater than the first posture collapse determination amount, which is a threshold for the movement amount of the face position set in advance for the calculated movement direction, the posture determination unit 104 determines that the driver's posture has collapsed. On the other hand, when the calculated movement amount is less than the first posture collapse determination amount set in advance for the calculated movement direction, the posture determination unit 104 determines that the driver's posture has not collapsed. The posture determination unit 104 outputs the determination result to the image comparison unit 105.
The posture determination unit 104 calculates the distance between the reference point calculated by the reference point calculation unit 103 and the driver's face position acquired from the face detection unit 102. In Embodiment 1, the calculated distance is used as the movement amount of the driver's head. The posture determination unit 104 also divides the captured image into a plurality of regions centered on the reference point calculated by the reference point calculation unit 103, and determines to which region the driver's face position acquired from the face detection unit 102 belongs. In Embodiment 1, the region given by this determination is used as the movement direction of the head.
Note that the posture determination unit 104 may instead use, as the movement direction of the head, the angle formed between the X axis with the reference point as its origin and the straight line connecting the reference point and the driver's face position acquired from the face detection unit 102. Details of the processing of the posture determination unit 104 will be described later.
When the determination result of the posture determination unit 104 indicates posture collapse, the image comparison unit 105 compares the image of the first region in the captured image used for that determination (hereinafter referred to as the captured image at the time of posture determination) with the image of the first region in a comparison image captured in advance, and determines whether the driver's posture has collapsed.
Here, the comparison image is a captured image taken when the driver is seated in the driver's seat in a normal state, or a captured image of the driver's seat with no driver seated.
The first region is a region determined in consideration of the front-rear adjustment range of the driver's seat, the adjustment range of the seat-back inclination, and the vertical adjustment range of the headrest, and includes the entire headrest of the driver's seat. The first region may also be determined in consideration of the average seat position of the driver's seat, the average seated height of drivers, or the range captured when a typical driver performs normal driving operations. The first region is set in advance based on the position of the driver's seat and the position of the imaging device for each vehicle model.
 When a captured image taken while the driver is seated is used as the comparison image, the image comparison unit 105 calculates the degree of coincidence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image. On the other hand, when a captured image of the driver's seat with no driver seated is used as the comparison image, the image comparison unit 105 calculates the degree of divergence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image. Based on the calculated degree of coincidence or divergence, the image comparison unit 105 re-determines whether or not the driver's posture has collapsed.
When the degree of coincidence or the degree of divergence is equal to or greater than a threshold, the image comparison unit 105 determines that the driver's posture has not collapsed. That is, the image comparison unit 105 determines that the face detection unit 102 erroneously detected something other than the driver's face as the driver's face, and that the posture determination unit 104 erroneously determined posture collapse based on that erroneous face position and the reference point. On the other hand, when the degree of coincidence or divergence is less than the threshold, the image comparison unit 105 determines that the driver's posture has collapsed, that is, that the determination result of the posture determination unit 104 is correct. The image comparison unit 105 outputs the determination result to the state determination unit 107.
Details of the degree of coincidence, the degree of divergence, and the method by which the image comparison unit 105 calculates them will be described later.
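One possible reading of the coincidence/divergence re-determination is sketched below. The per-pixel agreement ratio, the tolerance, and the threshold are assumptions of this sketch; the patent leaves the concrete measures to a later description:

```python
def coincidence(img_a, img_b, tol=10):
    """Degree of coincidence between two equally sized grayscale first-region
    images: the fraction of pixel pairs whose values differ by at most `tol`.
    (Illustrative measure only.)"""
    pixels_a = [p for row in img_a for p in row]
    pixels_b = [p for row in img_b for p in row]
    same = sum(1 for a, b in zip(pixels_a, pixels_b) if abs(a - b) <= tol)
    return same / len(pixels_a)

def redetermine_collapse(region_now, comparison, seated_comparison, threshold=0.5):
    """Re-determination by the image comparison unit 105: with a seated
    comparison image, low coincidence means the posture has collapsed; with
    an empty-seat comparison image, low divergence (i.e., the first region
    now resembles the empty seat) means the posture has collapsed."""
    c = coincidence(region_now, comparison)
    score = c if seated_comparison else 1.0 - c  # coincidence vs. divergence
    return score < threshold  # True -> posture collapse is maintained
```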
FIG. 2 is a diagram illustrating an example of the first region used by the image comparison unit 105 of the state determination device 100 according to the first embodiment. FIG. 2A shows an example of a first region in a captured image obtained by imaging the driver's seat from the front, and FIG. 2B is a diagram showing an example of the first region in a captured image obtained by imaging the driver's seat from diagonally forward.
The first regions Ea and Eb shown in FIGS. 2A and 2B are regions that include the entire headrest H of the driver's seat. The first regions Ea and Eb also take into account the seat position of the driver's seat, the seated height of the driver X, and normal driving operations. The first regions Ea and Eb include the entire headrest H of the driver's seat regardless of whether the seat position of the driver's seat is at the frontmost position or the rearmost position.
The comparison image storage unit 106 stores comparison images to be referred to when the image comparison unit 105 compares captured images. A comparison image is an image, captured in advance, of the same region as the first region in the captured image used for posture determination. The comparison image is either an image captured while the driver is seated in the driver's seat in a normal state, or an image of the driver's seat captured while no driver is seated. The image of the driver seated in a normal state is, for example, an image captured when calculating the reference point described later. When the comparison image is an image of the driver, the comparison image storage unit 106 stores a comparison image for each driver.
The state determination unit 107 determines whether or not the driver is incapable of driving based on the determination result of the image comparison unit 105. Based on that result, the state determination unit 107 determines that the driver is incapable of driving when the posture collapse state continues for a certain time or longer.
More specifically, when the posture collapse state continues for the length of time at which a warning is judged necessary (hereinafter referred to as the warning determination time), the state determination unit 107 determines that the driver is in a state requiring a warning (hereinafter, the warning state). When the warning state continues for the length of time at which it is judged to indicate an inability to drive (hereinafter referred to as the driving-incapable determination time), the state determination unit 107 determines that the driver is incapable of driving.
When the state determination unit 107 determines that the driver is in the warning state, or that the driver is incapable of driving, it outputs the determination result to the external warning device 400 or the vehicle control device 500.
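The two-stage timing logic above can be sketched as follows. This is a minimal illustration only: the function and constant names are assumptions, and the patent does not specify concrete values for the warning determination time or the driving-incapable determination time.

```python
# Minimal sketch of the state determination logic described above.
# The concrete durations are illustrative assumptions, not values
# taken from the patent.

WARNING_DETERMINATION_TIME = 3.0    # seconds of continuous posture collapse -> warning state
INCAPABLE_DETERMINATION_TIME = 5.0  # further seconds in the warning state -> incapable of driving


def determine_state(collapse_duration: float) -> str:
    """Map how long the posture collapse state has continued to a driver state."""
    if collapse_duration >= WARNING_DETERMINATION_TIME + INCAPABLE_DETERMINATION_TIME:
        return "driving_incapable"  # result would go to the vehicle control device 500
    if collapse_duration >= WARNING_DETERMINATION_TIME:
        return "warning"            # result would go to the warning device 400
    return "normal"
```

With these assumed durations, a collapse lasting 3.5 seconds yields the warning state, and one lasting 9 seconds yields the driving-incapable state.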
Next, a hardware configuration example of the state determination device 100 will be described.
FIGS. 3A and 3B are diagrams illustrating a hardware configuration example of the state determination device 100.
The functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 in the state determination device 100 are realized by a processing circuit. That is, the state determination device 100 includes a processing circuit for realizing these functions. The processing circuit may be a processing circuit 100a of dedicated hardware as shown in FIG. 3A, or a processor 100b that executes a program stored in a memory 100c as shown in FIG. 3B.
As shown in FIG. 3A, when the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 are implemented as dedicated hardware, the processing circuit 100a corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The function of each unit may be realized by its own processing circuit, or the functions of the units may be realized collectively by a single processing circuit.
As shown in FIG. 3B, when the functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 are implemented by the processor 100b, each function is realized by software, firmware, or a combination of software and firmware. The software or firmware is described as a program and stored in the memory 100c. The processor 100b realizes the functions of the units by reading out and executing the programs stored in the memory 100c. That is, the state determination device 100 includes the memory 100c for storing programs that, when executed by the processor 100b, result in the execution of the steps shown in FIGS. 8 and 9 described later. These programs can also be said to cause a computer to execute the procedures or methods of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107.
Here, the processor 100b is, for example, a CPU (Central Processing Unit), a processing device, an arithmetic device, a processor, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
The memory 100c may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (Electrically Erasable Programmable ROM); a magnetic disk such as a hard disk or a flexible disk; or an optical disc such as a mini disc, a CD (Compact Disc), or a DVD (Digital Versatile Disc).
Note that some of the functions of the image acquisition unit 101, the face detection unit 102, the reference point calculation unit 103, the posture determination unit 104, the image comparison unit 105, and the state determination unit 107 may be realized by dedicated hardware and the rest by software or firmware. In this way, the processing circuit 100a in the state determination device 100 can realize the functions described above by hardware, software, firmware, or a combination of these.
Next, details of the reference point calculation processing by the face detection unit 102 and the reference point calculation unit 103 will be described with reference to FIG.
FIG. 4 is a diagram illustrating processing of the face detection unit 102 and the reference point calculation unit 103 of the state determination device 100 according to the first embodiment.
The face detection unit 102 acquires, from the vehicle information recognition device 300, the vehicle speed at the time the captured image A was taken, and determines whether the acquired vehicle speed is equal to or higher than a preset threshold, for example 30 km/h. When the face detection unit 102 determines that the vehicle speed is, for example, 30 km/h or higher, it detects the face region B of the driver X from the captured image A. The vehicle speed threshold is set to an arbitrary value, for example 30 km/h or higher, in consideration of the fact that the driver's head position is more fixed at higher speeds than at lower speeds. The face detection unit 102 sets, within the face region B detected from the captured image A, a frame C in contact with the contour of the head of the driver X above the neck, and obtains the center point P of the set frame C. The face detection unit 102 calculates the two-dimensional coordinates (px, py) of the obtained center point P on the captured image A and takes them as the face position of the driver X. The face detection unit 102 outputs the calculated two-dimensional coordinates (px, py) of the center point P to the reference point calculation unit 103.
The reference point calculation unit 103 records the two-dimensional coordinates (px, py) of the center point P calculated by the face detection unit 102 in a recording area (not shown). When the number of recorded two-dimensional coordinates (px, py) of the driver X reaches a preset number of samples or more, the reference point calculation unit 103 calculates the reference point as the average of the recorded two-dimensional coordinates. The preset number of samples is set based on, for example, the number of coordinates for an arbitrary fixed time. Specifically, when the imaging speed of the camera 200 is 10 fps, 30 samples, corresponding to a fixed time of 3 seconds, are set as the number of samples. The reference point calculation unit 103 outputs the calculated reference point to the posture determination unit 104 as the reference point of the driver X. The reference point of the driver X is held in the posture determination unit 104 until driving by the driver X ends.
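The face position acquisition and reference point calculation described above can be sketched as follows. This is an illustrative outline under assumed names: the bounding frame C is represented as a simple rectangle, and the center point P and the averaged reference point are computed exactly as the text describes (the patent does not prescribe a data layout).

```python
# Sketch of face-position acquisition and reference-point calculation:
# the center point P of the head bounding frame C is recorded each frame,
# and once a preset number of samples has accumulated, the reference point
# is the coordinate-wise average of the recorded positions.

FPS = 10                            # example imaging speed of the camera 200
FIXED_TIME_S = 3                    # example fixed time
SAMPLE_COUNT = FPS * FIXED_TIME_S   # 30 samples, as in the example above


def center_point(frame):
    """Center point P of a bounding frame C given as (left, top, right, bottom)."""
    left, top, right, bottom = frame
    return ((left + right) / 2, (top + bottom) / 2)


def reference_point(recorded_points):
    """Average of the recorded face positions once enough samples exist, else None."""
    if len(recorded_points) < SAMPLE_COUNT:
        return None
    xs = [p[0] for p in recorded_points]
    ys = [p[1] for p in recorded_points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

As in the text, `reference_point` produces no result until the preset number of samples (here 30) has been recorded.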
Here, "until driving by the driver X ends" means until the vehicle reaches its destination, until the vehicle's engine is stopped, or until driving is handed over to another driver. The reference point calculation unit 103 refers to information input from the vehicle information recognition device 300, for example vehicle speed information and brake information, and when it determines that driving by the driver X has ended, it resets the information recorded in its storage area and the reference point set in the posture determination unit 104.
The face detection unit 102 does not calculate the two-dimensional coordinates of the center point when the vehicle speed is less than a preset threshold value or when the driver's face area is not detected from the captured image. Further, the reference point calculation unit 103 does not calculate the reference point when the number of recorded two-dimensional coordinates of the center point is less than a preset number of samples.
Next, details of the posture determination processing by the face detection unit 102 and the posture determination unit 104 will be described with reference to FIGS. 4 and 5.
The face detection unit 102 monitors the posture determination unit 104, and when a reference point has already been set, acquires the face position for posture determination processing. The face detection unit 102 acquires the face position for posture determination regardless of the vehicle speed. As shown in FIG. 4, the face detection unit 102 detects the face region B of the driver X from the captured image A acquired by the image acquisition unit 101, and obtains the center point P of the frame C in contact with the contour of the head included in the detected face region B. The face detection unit 102 takes the two-dimensional coordinates (px, py) of the obtained center point P on the captured image A as the face position Pa of the driver X, and outputs the acquired face position Pa to the posture determination unit 104.
The posture determination unit 104 holds the reference point Q calculated in advance, and calculates the movement direction and the movement amount of the face position Pa of the driver X relative to the reference point Q. The posture determination unit 104 determines whether the driver's posture has collapsed by determining whether the calculated movement amount of the face position Pa is equal to or greater than a first posture collapse determination amount preset for the calculated movement direction.
FIG. 5 is a diagram illustrating an example of the movement directions of the driver's head set by the state determination device 100 according to Embodiment 1 and the first posture collapse determination amount set for each movement direction.
In FIG. 5, four areas divided around the reference point Q, Area1, Area2, Area3, and Area4, are set. Each area indicates a movement direction of the face position Pa. In the example of FIG. 5, since the face position Pa is located in Area2, the posture determination unit 104 identifies the movement direction of the face position Pa as Area2.
A first posture collapse determination amount is also set for each area. In the example of FIG. 5, the first posture collapse determination amount is mthr1 in Area1, mthr2 in Area2, mthr3 in Area3, and mthr4 in Area4. Since the face position Pa is located in Area2, the first posture collapse determination amount for the face position Pa is mthr2.
The posture determination unit 104 calculates the distance m between the face position Pa and the reference point Q and takes it as the movement amount m of the face position Pa. The posture determination unit 104 compares the calculated movement amount m with the first posture collapse determination amount mthr2 for the face position Pa; when the movement amount m is equal to or greater than mthr2, it determines that the posture of the driver X has collapsed. On the other hand, when the movement amount m is less than mthr2, the posture determination unit 104 determines that the posture of the driver X has not collapsed.
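The posture determination step can be sketched as follows. This is an illustrative outline only: the quadrant-style split used to pick the direction area and the per-area threshold values are assumptions, since the patent does not specify how the four areas are delimited or what the determination amounts are.

```python
# Sketch of the posture determination: classify the movement direction of the
# face position Pa relative to the reference point Q into one of four areas,
# then compare the distance m against that area's first posture collapse
# determination amount. Area boundaries and thresholds are assumed values.
import math

# first posture collapse determination amount per direction area (pixels, assumed)
M_THR = {"Area1": 40.0, "Area2": 60.0, "Area3": 40.0, "Area4": 60.0}


def movement_area(q, pa):
    """Pick the direction area of Pa relative to Q (assumed quadrant-style split)."""
    dx, dy = pa[0] - q[0], pa[1] - q[1]
    if abs(dx) >= abs(dy):
        return "Area2" if dx > 0 else "Area4"  # sideways movement
    return "Area1" if dy < 0 else "Area3"      # vertical movement (image y grows downward)


def posture_collapsed(q, pa):
    m = math.hypot(pa[0] - q[0], pa[1] - q[1])  # movement amount m
    return m >= M_THR[movement_area(q, pa)]
```

For instance, with the assumed threshold of 60 pixels in Area2, a sideways movement of 70 pixels is judged a posture collapse while one of 30 pixels is not.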
Next, details of the image comparison unit 105 will be described with reference to FIGS. 6 and 7. The description is divided into the case where the image comparison unit 105 compares the image of the first region in the captured image at the time of posture determination with the image of the first region in the comparison image based on their degree of coincidence, and the case where it compares them based on their degree of divergence.
FIGS. 6 and 7 are diagrams illustrating processing of the image comparison unit 105 of the state determination device 100 according to the first embodiment.
First, the case where the image comparison unit 105 performs comparison based on the degree of coincidence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image will be described with reference to FIG. 6. FIG. 6 illustrates this coincidence-based comparison: FIG. 6A shows the image of the first region in the comparison image stored in the comparison image storage unit 106, and FIGS. 6B and 6C show examples of captured images at the time of posture determination and the images of the first region in those captured images. FIG. 6A shows the first region Da of a comparison image captured while the driver X was seated in the driver's seat in a normal state.
The image of the first region Da of the comparison image in FIG. 6A and the images of the first regions Ea and Eb of the captured images A at the time of posture determination in FIGS. 6B and 6C are images of the same region. The image comparison unit 105 compares the image of the first region Ea of the captured image A with the image of the first region Da of the comparison image, or the image of the first region Eb of the captured image A with the image of the first region Da of the comparison image, and calculates the degree of coincidence between the two images. The image comparison unit 105 calculates the degree of coincidence between the image of the first region Da of the comparison image and the images of the first regions Ea and Eb of the captured image A based on the degree of coincidence of the regions in which the headrest is imaged, that is, between region Ha and region Hb, or between region Ha and region Hc. The image comparison unit 105 calculates the degree of coincidence of the headrest regions using at least one of the number of pixels in which the headrest is imaged, the length of the visible contour of the headrest, and the luminance of the image of the first region.
When the image comparison unit 105 compares the image of the first region Da of the comparison image in FIG. 6A with the image of the first region Ea of the captured image A in FIG. 6B, it calculates the degree of coincidence between region Ha, in which the headrest is imaged in the first region Da of the comparison image, and region Hb, in which the headrest is imaged in the first region Ea of the captured image A. In this case, the image comparison unit 105 calculates a high degree of coincidence between region Ha and region Hb.
On the other hand, when the image comparison unit 105 compares the image of the first region Da of the comparison image in FIG. 6A with the image of the first region Eb in FIG. 6C, it calculates the degree of coincidence between region Ha, in which the headrest is imaged in the first region Da of the comparison image, and region Hc, in which the headrest is imaged in the first region Eb of the captured image A. In this case, the image comparison unit 105 calculates a low degree of coincidence between region Ha and region Hc.
When the calculated degree of coincidence is equal to or greater than a preset threshold, the image comparison unit 105 determines that the driver's posture has not collapsed. For example, in the comparison between the image of the first region Da of the comparison image in FIG. 6A and the image of the first region Ea of the captured image A in FIG. 6B, the degree of coincidence between region Ha and region Hb is high, so the image comparison unit 105 determines that the posture of the driver X has not collapsed.
On the other hand, when the calculated degree of coincidence is less than the preset threshold, the image comparison unit 105 determines that the driver's posture has collapsed. For example, in the comparison between the first region Da of the comparison image in FIG. 6A and the image of the first region Eb of the captured image A in FIG. 6C, the degree of coincidence between region Ha and region Hc is low, so the image comparison unit 105 determines that the posture of the driver X has collapsed.
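The coincidence-based check can be sketched as follows. This is an illustrative outline under stated assumptions: the headrest regions are represented as 2D masks of 0/1 values, the coincidence measure is an assumed pixel-overlap fraction standing in for the pixel-count, contour-length, or luminance measures named in the text, and the threshold value is made up.

```python
# Sketch of the coincidence-based check: the headrest pixels in the first
# region of the posture-determination image are compared with those in the
# comparison image (driver seated normally). A pixel-overlap fraction is an
# assumed stand-in for the patent's coincidence measure.

COINCIDENCE_THRESHOLD = 0.5  # assumed value


def coincidence(comparison_mask, captured_mask):
    """Fraction of headrest pixels marked in both first-region masks."""
    both = sum(
        c and x
        for row_c, row_x in zip(comparison_mask, captured_mask)
        for c, x in zip(row_c, row_x)
    )
    ref = sum(c for row in comparison_mask for c in row)
    return both / ref if ref else 0.0


def posture_not_collapsed(comparison_mask, captured_mask):
    """High coincidence with the seated-driver comparison image -> posture intact."""
    return coincidence(comparison_mask, captured_mask) >= COINCIDENCE_THRESHOLD
```

With the assumed threshold, a captured headrest region that overlaps most of the comparison image's headrest region is judged "posture not collapsed", while a largely non-overlapping one is judged "posture collapsed".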
Next, the case where the image comparison unit 105 performs determination based on the degree of divergence between the image of the first region in the captured image at the time of posture determination and the image of the first region in the comparison image will be described with reference to FIG. 7. FIG. 7 illustrates this divergence-based determination: FIG. 7A shows the image of the first region in the comparison image stored in the comparison image storage unit 106, and FIGS. 7B and 7C show examples of captured images at the time of posture determination and the images of the first region in those captured images. FIG. 7A shows the first region Db of a comparison image of the driver's seat captured while no driver was seated.
The image of the first region Db of the comparison image in FIG. 7A and the images of the first regions Fa and Fb of the captured images A at the time of posture determination in FIGS. 7B and 7C are images of the same region. The image comparison unit 105 compares the image of the first region Fa of the captured image A with the image of the first region Db of the comparison image, or the image of the first region Fb of the captured image A with the image of the first region Db of the comparison image, and calculates the degree of divergence between the two images. The image comparison unit 105 calculates the degree of divergence between the image of the first region Db of the comparison image and the images of the first regions Fa and Fb of the captured image A based on the degree of divergence of the regions in which the headrest is imaged, that is, between region Hc and region Hd, or between region Hc and region He. The image comparison unit 105 calculates the degree of divergence of the headrest regions using at least one of the number of pixels in which the headrest is imaged, the length of the visible contour of the headrest, and the luminance of the image of the first region.
When the image comparison unit 105 compares the image of the first region Db of the comparison image in FIG. 7A with the image of the first region Fa of the captured image A in FIG. 7B, it calculates the degree of divergence between region Hc, in which the headrest is imaged in the first region Db of the comparison image, and region Hd, in which the headrest is imaged in the first region Fa of the captured image A. In this case, the image comparison unit 105 calculates a high degree of divergence between region Hc and region Hd.
On the other hand, when the image comparison unit 105 compares the image of the first region Db of the comparison image in FIG. 7A with the image of the first region Fb of the captured image A in FIG. 7C, it calculates the degree of divergence between region Hc, in which the headrest is imaged in the first region Db of the comparison image, and region He, in which the headrest is imaged in the first region Fb of the captured image A. In this case, the image comparison unit 105 calculates a low degree of divergence between region Hc and region He.
When the calculated degree of divergence is equal to or greater than a preset threshold, the image comparison unit 105 determines that the driver's posture has not collapsed. For example, in the comparison between the image of the first region Db of the comparison image in FIG. 7A and the image of the first region Fa of the captured image A in FIG. 7B, the degree of divergence between region Hc and region Hd is high, so the image comparison unit 105 determines that the posture of the driver X has not collapsed.
On the other hand, when the calculated degree of divergence is less than the preset threshold, the image comparison unit 105 determines that the driver's posture has collapsed. For example, in the comparison between the first region Db of the comparison image in FIG. 7A and the image of the first region Fb of the captured image A in FIG. 7C, the degree of divergence between region Hc and region He is low, so the image comparison unit 105 determines that the posture of the driver X has collapsed.
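The divergence-based check can be sketched as follows. This is an illustrative outline under stated assumptions: the comparison image is of the empty seat, so a large difference in visible headrest pixels means the driver's head occludes the headrest (posture intact), while a small difference means the headrest is fully visible again (posture collapsed). The normalized pixel-count difference and the threshold are assumptions standing in for the measures named in the text.

```python
# Sketch of the divergence-based check against the empty-seat comparison
# image. The divergence measure (normalized difference of headrest pixel
# counts) and the threshold are assumed values.

DIVERGENCE_THRESHOLD = 0.5  # assumed value


def divergence(empty_seat_pixels: int, captured_pixels: int) -> float:
    """Normalized difference in headrest pixel counts between the two first regions."""
    if empty_seat_pixels == 0:
        return 0.0
    return abs(empty_seat_pixels - captured_pixels) / empty_seat_pixels


def posture_not_collapsed(empty_seat_pixels: int, captured_pixels: int) -> bool:
    """High divergence from the empty-seat image -> the head is where it should be."""
    return divergence(empty_seat_pixels, captured_pixels) >= DIVERGENCE_THRESHOLD
```

Note the symmetry with the coincidence case: there, a high score relative to the seated-driver image indicates an intact posture; here, a high score relative to the empty-seat image does.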
Next, the operation of the state determination device 100 will be described.
The operation of the state determination device 100 will be described separately for the operation of the reference point calculation process and the operation of the state determination process. First, the operation of the reference point calculation process will be described with reference to the flowchart of FIG.
FIG. 8 is a flowchart showing the operation of the reference point calculation process of the state determination apparatus 100 according to the first embodiment.
When the captured image is input from the image acquisition unit 101 (step ST1), the face detection unit 102 determines whether the vehicle speed at the time the captured image was taken is equal to or higher than a preset threshold (step ST2). If the vehicle speed is less than the preset threshold (step ST2; NO), the process ends. On the other hand, if the vehicle speed is equal to or higher than the preset threshold (step ST2; YES), the face detection unit 102 determines whether a face region can be detected from the captured image input in step ST1 (step ST3). If the face region cannot be detected (step ST3; NO), the process ends. On the other hand, if the face region can be detected (step ST3; YES), the face detection unit 102 detects the face region from the captured image input in step ST1, acquires the face position from the detected face region, and outputs it to the reference point calculation unit 103 (step ST4).
The reference point calculation unit 103 records the face position acquired in step ST4 in a buffer or similar storage (step ST5), and then determines whether the number of recorded face positions has reached a preset threshold (step ST6). If the number of recorded face positions is below the threshold (step ST6; NO), the process ends. If it is equal to or greater than the threshold (step ST6; YES), the reference point calculation unit 103 calculates the reference point from the recorded face positions (step ST7), outputs it to the posture determination unit 104 (step ST8), and ends the process.
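The buffering-and-averaging flow of steps ST1 to ST8 can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the class and function names, the specific threshold values, and the use of a simple mean of the recorded positions as the reference point are all assumptions.

```python
# Minimal sketch of the reference point calculation (steps ST1-ST8).
# All names, threshold values, and the mean-based reference point
# are illustrative assumptions.

SPEED_THRESHOLD_KMH = 30.0   # hypothetical vehicle-speed threshold (ST2)
RECORD_THRESHOLD = 100       # hypothetical number of face positions to record (ST6)

class ReferencePointCalculator:
    def __init__(self):
        self.face_positions = []   # buffer of (x, y) face positions (ST5)
        self.reference_point = None

    def process_frame(self, face_position, vehicle_speed):
        """Record one detected face position; return the reference point
        once enough positions have been accumulated, else None."""
        if vehicle_speed < SPEED_THRESHOLD_KMH:          # ST2; NO
            return None
        if face_position is None:                        # ST3; NO (no face detected)
            return None
        self.face_positions.append(face_position)        # ST5
        if len(self.face_positions) < RECORD_THRESHOLD:  # ST6; NO
            return None
        xs = [p[0] for p in self.face_positions]
        ys = [p[1] for p in self.face_positions]
        self.reference_point = (sum(xs) / len(xs), sum(ys) / len(ys))  # ST7
        return self.reference_point                      # ST8
```

In this sketch a frame captured below the speed threshold, or one in which no face is detected, simply ends the process for that frame, mirroring the early exits in the flowchart.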
Next, the operation of the state determination process is described with reference to the flowchart of FIG. 9.
FIG. 9 is a flowchart showing the state determination process of the state determination device 100 according to the first embodiment. It is assumed that the image acquisition unit 101 continuously acquires captured images and outputs them to the face detection unit 102 and the image comparison unit 105.
When a captured image is input from the image acquisition unit 101 (step ST21), the face detection unit 102 checks whether the posture determination unit 104 has already determined that the driver's posture has collapsed (step ST22). If so (step ST22; YES), the flowchart proceeds to step ST27. Otherwise (step ST22; NO), the face detection unit 102 determines whether a reference point has been set in the posture determination unit 104 (step ST23); this determination is made by the face detection unit 102 monitoring the posture determination unit 104. If no reference point has been set (step ST23; NO), the process ends. If a reference point has been set (step ST23; YES), the face detection unit 102 acquires the driver's face position from the captured image input in step ST21 and outputs it to the posture determination unit 104 (step ST24).
The posture determination unit 104 compares the reference point set in advance by the reference point calculation unit 103 with the driver's face position acquired in step ST24, and calculates the movement direction and movement amount of the driver's face position (step ST25). Based on the calculated movement direction and movement amount, the posture determination unit 104 determines whether the driver's posture has collapsed (step ST26). If the driver's posture has not collapsed (step ST26; NO), the process ends.
If the driver's posture has collapsed (step ST26; YES), the image comparison unit 105 acquires the image of the first region from the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST27). The image comparison unit 105 then compares this first-region image with the first-region image of the comparison image stored in the comparison image storage unit 106, and determines whether the degree of coincidence or the degree of divergence is equal to or greater than a preset threshold (step ST28). If it is (step ST28; YES), the image comparison unit 105 concludes that the posture determination unit 104 has made an erroneous determination. Based on this result, the state determination unit 107 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed, resets the counter (step ST29), and ends the process.
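The coincidence test of steps ST27 to ST28 can be sketched as follows. The patent specifies only that a degree of coincidence or divergence is compared with a preset threshold; the pixel-matching metric, the threshold value, and the function names below are assumptions for illustration.

```python
# Illustrative sketch of the first-region comparison (ST27-ST29).
# The metric and threshold are assumptions; any image-similarity
# measure with a preset threshold fits the patent's description.

MATCH_THRESHOLD = 0.9  # hypothetical degree-of-coincidence threshold (ST28)

def degree_of_coincidence(region_a, region_b):
    """Fraction of matching pixels between two equally sized first-region
    images, given as flat lists of grayscale values."""
    matches = sum(1 for a, b in zip(region_a, region_b) if a == b)
    return matches / len(region_a)

def is_false_determination(current_region, stored_region):
    """True when the first region still matches the stored comparison
    image, i.e. the posture-collapse determination was erroneous (ST28; YES)."""
    return degree_of_coincidence(current_region, stored_region) >= MATCH_THRESHOLD
```

When `is_false_determination` returns True, the state determination unit would cancel the posture-collapse determination and reset the counter, as in step ST29.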
If the degree of coincidence or the degree of divergence is below the threshold (step ST28; NO), the image comparison unit 105 concludes that the determination result of the posture determination unit 104 is correct. Based on this result, the state determination unit 107 determines whether a counter (not shown) has already been started (step ST30). If it has (step ST30; YES), the process proceeds to step ST32. If it has not (step ST30; NO), the state determination unit 107 starts the counter (step ST31) and then increments it (step ST32). The state determination unit 107 refers to the count value and determines whether the alert determination time (for example, 3 seconds) has elapsed (step ST33). If it has not elapsed (step ST33; NO), the process ends.
If the alert determination time has elapsed (step ST33; YES), the state determination unit 107 further refers to the count value and determines whether the driving-impossibility determination time (for example, 10 seconds) has elapsed (step ST34). If it has not (step ST34; NO), the state determination unit 107 determines that the driver is in a state requiring an alert (step ST35), outputs the determination result to the external warning device 400 or the vehicle control device 500, and ends the process. If it has elapsed (step ST34; YES), the state determination unit 107 determines that the driver is in a driving-impossible state (step ST36), outputs the determination result to the external warning device 400 or the vehicle control device 500, and ends the process.
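The counter-based decision of steps ST30 to ST36 can be sketched as follows. This is a minimal illustration under stated assumptions: the class name, the seconds-based counter, and the state labels are invented for the sketch; only the two example times (3 s and 10 s) come from the description above.

```python
# Sketch of the counter-based state decision (ST30-ST36). The class,
# the tick interface, and the state labels are illustrative assumptions.

ALERT_TIME_S = 3.0        # alert determination time (example from the text)
INOPERABLE_TIME_S = 10.0  # driving-impossibility determination time (example)

class DriverStateTimer:
    def __init__(self):
        self.elapsed = None  # None until the counter is started (ST30; NO)

    def tick(self, dt):
        """Advance the counter by dt seconds and return the current state."""
        if self.elapsed is None:                 # ST30; NO -> start counting (ST31)
            self.elapsed = 0.0
        self.elapsed += dt                       # ST32
        if self.elapsed >= INOPERABLE_TIME_S:    # ST34; YES
            return "inoperable"                  # ST36
        if self.elapsed >= ALERT_TIME_S:         # ST33; YES, ST34; NO
            return "alert"                       # ST35
        return "normal"                          # ST33; NO

    def reset(self):
        """Cancel the determination and reset the counter (ST29)."""
        self.elapsed = None
```

Because the alert time is shorter than the driving-impossibility time, a continuously collapsed posture passes through the alert state before being judged a driving-impossible state.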
In step ST35 of the flowchart of FIG. 9, when the state determination unit 107 determines that the driver requires an alert, the device may be configured to subsequently reduce the processing frequency of the image comparison unit 105. In this configuration, the state determination unit 107 outputs the determination result to the image comparison unit 105, which then lowers the frequency at which it compares the first-region image with the comparison image. Furthermore, when the state determination unit 107 outputs a determination result indicating that the driver is in a driving-impossible state, the image comparison unit 105 may set the comparison frequency even lower. This reduces the processing load of the image comparison process.
In the above description, the posture determination unit 104 calculates the movement direction and movement amount of the driver's face position relative to the reference point, and determines whether the driver's posture has collapsed by comparing the calculated movement amount with the first posture-collapse determination amount set for that movement direction.
The posture determination unit 104 may also be configured to take the driver's face orientation into account, in addition to the movement direction and movement amount of the face position, when determining whether the driver's posture has collapsed. The driver's face orientation is calculated from the displacement of the face orientation in the left-right direction over a given time and the displacement of the face orientation in the up-down direction over a given time.
FIG. 10 is a diagram illustrating the calculation of the displacement of the driver's face orientation by the posture determination unit 104 of the state determination device 100 according to the first embodiment.
FIG. 10A defines the left-right and up-down directions of the driver's face orientation. In the front view of the driver X, the Yaw direction is the left-right direction of the face orientation, and the Pitch direction is the up-down direction.
The posture determination unit 104 calculates, from the captured image acquired by the image acquisition unit 101, the displacement in the Yaw and Pitch directions relative to the driver's front view. FIG. 10B shows a case where the posture determination unit 104 calculates a displacement Y_d in the Yaw direction over time, and FIG. 10C shows a case where it calculates a displacement P_d in the Pitch direction over time.
In addition to comparing the movement amount of the face position with the first posture-collapse determination amount, the posture determination unit 104 compares the displacement Y_d calculated in FIG. 10B or the displacement P_d calculated in FIG. 10C with a second posture-collapse determination amount, a preset threshold for the face-orientation displacement, to determine whether the driver's posture has collapsed. The second posture-collapse determination amount consists of at least one of a Yaw-direction determination amount and a Pitch-direction determination amount for the face orientation.
FIG. 11 is a diagram showing an example of the movement directions of the driver's head set by the state determination device 100 according to the first embodiment and the second posture-collapse determination amount set for each movement direction.
In FIG. 11, as in FIG. 5, four regions, Area1, Area2, Area3 and Area4, are set around the reference point Q. In each region, a second posture-collapse determination amount is set in addition to the first posture-collapse determination amount. In the example of FIG. 11, for Area1, the first posture-collapse determination amount is m_thr1, the Yaw-direction determination amount for the face orientation (second posture-collapse determination amount) is y_thr1, and the Pitch-direction determination amount for the face orientation (second posture-collapse determination amount) is P_thr1. For Area3, the first posture-collapse determination amount for the movement amount of the face position is m_thr3 and the Pitch-direction determination amount for the face orientation is P_thr3.
Next, the posture determination unit 104 calculates the distance m between the face position Pa and the reference point Q, and uses it as the movement amount m of the face position Pa. The posture determination unit 104 also calculates the displacements Y_d and P_d of the driver's face orientation at the face position Pa. When the calculated movement amount m is equal to or greater than the first posture-collapse determination amount m_thr2, the calculated face-orientation displacement Y_d is equal to or greater than the Yaw-direction determination amount Y_thr2, and the displacement P_d is equal to or greater than the Pitch-direction determination amount P_thr2, the posture determination unit 104 determines that the driver's posture has collapsed. Conversely, when the movement amount m is less than m_thr2, the displacement Y_d is less than Y_thr2, or the displacement P_d is less than P_thr2, the posture determination unit 104 determines that the driver's posture has not collapsed.
In this way, by also taking into account the displacement of the driver's face orientation over a given time when determining whether the driver's posture has collapsed, the posture determination unit 104 improves the accuracy of the driver's driving state determination.
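The combined test described above can be sketched as follows. The function signature and the idea of passing the per-area thresholds as parameters are assumptions for illustration; in the patent each movement direction (Area1 to Area4) carries its own first and second posture-collapse determination amounts.

```python
# Sketch of the combined posture-collapse test: the face-position movement
# amount m and both face-orientation displacements must all reach their
# determination amounts. Names and the parameter-passing style are
# illustrative assumptions.
import math

def posture_collapsed(face_pos, reference_point, yaw_disp, pitch_disp,
                      m_thr, y_thr, p_thr):
    """Return True when the movement amount m is at least m_thr AND the
    Yaw displacement Y_d is at least y_thr AND the Pitch displacement
    P_d is at least p_thr."""
    dx = face_pos[0] - reference_point[0]
    dy = face_pos[1] - reference_point[1]
    m = math.hypot(dx, dy)  # distance between face position Pa and reference point Q
    return m >= m_thr and yaw_disp >= y_thr and pitch_disp >= p_thr
```

If any one of the three quantities falls short of its determination amount, the posture is judged not to have collapsed, matching the conjunctive condition in the text above.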
As described above, according to the first embodiment, the device comprises: the image acquisition unit 101, which acquires a captured image of the vehicle interior including the driver; the face detection unit 102, which detects the driver's face position from the acquired image; the posture determination unit 104, which compares the detected face position with the driver's reference point calculated in advance and determines that the driver's posture has collapsed when it detects a movement of the face position equal to or greater than the first posture-collapse determination amount in a set movement direction; the image comparison unit 105, which, when the posture is determined to have collapsed, compares the first-region image in the acquired captured image with the first-region image in the comparison image to re-determine whether the posture has actually collapsed; and the state determination unit 107, which determines whether the driver is in a driving-impossible state based on the result of the re-determination. With this configuration, even when something other than the driver's head, such as another occupant's head, the vehicle interior, or scenery reflected in the captured image, is erroneously detected as the driver's head, false detection of a driver who is not in a driving-impossible state as being in one is suppressed. As a result, the driver's driving-impossible state can be determined with high accuracy.
Further, according to the first embodiment, the posture determination unit 104 determines that the driver's posture has collapsed when it detects a face-orientation displacement equal to or greater than the second posture-collapse determination amount, so the driver's driving state can be determined more accurately.
Further, according to the first embodiment, when the state determination unit 107 determines that the driver requires an alert or is in a driving-impossible state, the image comparison unit 105 reduces the frequency of comparing the first-region image in the captured image with the first-region image in the comparison image, which reduces the processing load of the state determination device.
Embodiment 2.
In the second embodiment, a configuration is described in which the driver's driving state is determined in consideration of the inclination of the central axis of the driver's head.
FIG. 12 is a block diagram showing the configuration of the state determination device 100A according to the second embodiment.
The state determination device 100A according to the second embodiment is configured by adding an axis detection unit 108 to the state determination device 100 of the first embodiment shown in FIG. 1. In the following, components identical or corresponding to those of the state determination device 100 according to the first embodiment are given the same reference numerals as in the first embodiment, and their description is omitted or simplified.
The axis detection unit 108 refers to the determination result of the posture determination unit 104, and when that result indicates that the driver's posture has collapsed, detects the central axis of the driver's head from the captured image at the time of the posture determination acquired by the image acquisition unit 101. The axis detection unit 108 calculates the inclination of the detected head central axis with respect to a preset axis, and determines whether the calculated inclination is within a preset threshold range. If it is, the axis detection unit 108 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed. If it is not, the axis detection unit 108 maintains that determination and outputs it to the image comparison unit 105.
The head central axis detected by the axis detection unit 108 and the determination of its inclination are described with reference to FIGS. 13 and 14.
FIG. 13 is a diagram illustrating the detection of the head central axis by the axis detection unit 108 of the state determination device 100A according to the second embodiment. FIG. 13A shows the driver X seated normally in the driver's seat. The lateral direction of the vehicle (not shown) in which the driver X rides, horizontal to the road surface, is the x-axis; the front-rear direction of the vehicle, horizontal to the road surface, is the y-axis; and the vertical direction of the vehicle, perpendicular to the x-axis and the y-axis, is the z-axis.
FIG. 13B is a view of the head of the driver X from above. The head central axis R is the axis passing through the center of the circle Xa obtained when the head of the driver X is viewed from above. The axis detection unit 108 detects the head central axis R based on the positions of both eyes, the base of the nose, and the top of the nose obtained from the captured image at the time of the posture determination acquired by the image acquisition unit 101. For example, the axis detection unit 108 detects the inclination of the head central axis R with respect to the x-axis based on the center position of both eyes and the position of the top of the nose, and detects its inclination with respect to the y-axis based on the distance between the top of the nose and the base of the nose.
FIG. 14 is a diagram illustrating the determination of the inclination of the head central axis by the axis detection unit 108 of the state determination device 100A according to the second embodiment. FIG. 14A shows the inclination of the head central axis R with respect to the x-axis, and FIG. 14B shows its inclination with respect to the y-axis.
The axis detection unit 108 calculates the angle θ_f-x between the head central axis R and the x-axis, and similarly the angle θ_f-y between the head central axis R and the y-axis. The axis detection unit 108 then determines whether the calculated angles θ_f-x and θ_f-y are within the threshold range shown in FIG. 14C (the range between the angles θ_thr1 and θ_thr2). If both angles are within this range, the axis detection unit 108 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed. If either θ_f-x or θ_f-y falls between 0° and θ_thr1 or between θ_thr2 and 180°, the axis detection unit 108 maintains that determination.
The threshold range described above is only an example and can be set as appropriate; the threshold range for θ_f-x and that for θ_f-y may also be set separately.
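The range check above can be sketched as follows. The specific bound values are illustrative assumptions; as noted, the patent leaves the threshold range configurable and allows separate ranges per axis, which the optional parameters below hint at.

```python
# Sketch of the head-central-axis tilt check (embodiment 2). The bound
# values are hypothetical; the patent only requires a configurable
# threshold range between two angles theta_thr1 and theta_thr2.

THETA_THR1 = 60.0   # hypothetical lower bound, degrees
THETA_THR2 = 120.0  # hypothetical upper bound, degrees

def keep_collapse_determination(theta_fx, theta_fy,
                                thr1=THETA_THR1, thr2=THETA_THR2):
    """Return True when the posture-collapse determination is maintained,
    i.e. when either angle falls outside the range [thr1, thr2]."""
    within = thr1 <= theta_fx <= thr2 and thr1 <= theta_fy <= thr2
    return not within  # within the range -> cancel the determination
```

A head tilted far enough that either angle leaves the range keeps the collapse determination alive, which is then passed on to the image comparison unit.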
Next, a hardware configuration example of the state determination device 100A is described. Description of the same configuration as in the first embodiment is omitted.
The axis detection unit 108 in the state determination device 100A is the processing circuit 100a shown in FIG. 3A, or the processor 100b that executes a program stored in the memory 100c shown in FIG. 3B.
Next, the operation of the state determination process performed by the state determination device 100A is described.
FIG. 15 is a flowchart showing the state determination process of the state determination device 100A according to the second embodiment.
In the following, steps identical to those of the state determination device 100 according to the first embodiment are given the same reference signs as in FIG. 9, and their description is omitted or simplified. As in the first embodiment, it is assumed that the image acquisition unit 101 continuously acquires captured images and outputs them to the face detection unit 102, the axis detection unit 108 and the image comparison unit 105.
When the posture determination unit 104 determines in step ST26 that the driver's posture has collapsed (step ST26; YES), the axis detection unit 108 detects the central axis of the driver's head from the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST41). The axis detection unit 108 then calculates the angle between the detected head central axis and the x-axis, and the angle between the head central axis and the y-axis (step ST42).
The axis detection unit 108 determines whether the calculated angle with the x-axis and the calculated angle with the y-axis are within the preset threshold range (step ST43). If both are (step ST43; YES), the axis detection unit 108 cancels the determination by the posture determination unit 104 that the driver's posture has collapsed (step ST44), and the process ends.
If either angle is not within the preset threshold range (step ST43; NO), the axis detection unit 108 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and notifies the image comparison unit 105 of that determination (step ST45). Based on the determination notified in step ST45, the image comparison unit 105 acquires the image of the first region from the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST27). Thereafter, the flowchart performs the processing from step ST28 onward.
As described above, according to the second embodiment, the device comprises the axis detection unit 108, which, when the posture determination unit 104 determines that the driver's posture has collapsed, detects the inclination of the central axis of the driver's head from the acquired captured image and cancels that determination when the detected inclination is within a preset threshold range. This suppresses misjudging a driver who is driving in a posture different from the normal seated posture as requiring an alert or being in a driving-impossible state, improving the accuracy of the driver's driving state determination. It also suppresses misjudging driving with the driver's head moved toward the front of the vehicle, or the driver's action of leaning forward to peer ahead of the vehicle, as a state requiring an alert or a driving-impossible state.
Embodiment 3.
In the third embodiment, a configuration is described that suppresses misjudging a driver who is driving with an elbow resting on a structural part of the vehicle (hereinafter referred to as armrest driving) as requiring an alert or being in a driving-impossible state. Here, armrest driving is, for example, driving in a state where the driver's head has moved and the elbow rests on the window frame of the driver's door or on the armrest of the driver's door.
FIG. 16 is a block diagram showing the configuration of the state determination device 100B according to the third embodiment.
The state determination device 100B according to the third embodiment is configured by adding a contour detection unit 109 to the state determination device 100 of the first embodiment shown in FIG. 1. In the following, components identical or corresponding to those of the state determination device 100 according to the first embodiment are given the same reference numerals as in the first embodiment, and their description is omitted or simplified.
 The contour detection unit 109 refers to the determination result of the posture determination unit 104 and, when the result indicates that the driver's posture has collapsed, acquires the image of a second region in the captured image at the time of the posture determination acquired by the image acquisition unit 101. The second region is a region that entirely contains the headrest of the driver's seat. Furthermore, the second region is a region that covers the bases of the driver's arms when the driver is seated normally in the driver's seat, and that covers both of the driver's elbows when the driver spreads both arms.
 FIG. 17 is a diagram showing an example of the second region acquired by the contour detection unit 109 of the state determination device 100B according to the third embodiment. FIG. 17A shows an example of the second region set in a captured image of the driver's seat taken from the front, and FIG. 17B shows an example of the second region acquired from a captured image of the driver's seat taken diagonally from the front.
 The second regions Ec and Ed shown in FIGS. 17A and 17B entirely contain the headrest H of the driver's seat. The second regions Ec and Ed are also set in consideration of the position of the base Xb of the driver X's arms and the positions of both elbows Xc when the driver X spreads both arms, as well as the seat position of the driver's seat, the sitting height of the driver X, and normal driving motions. Note that the second regions Ec and Ed entirely contain the headrest H of the driver's seat regardless of whether the seat position of the driver's seat is at its frontmost or rearmost position.
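As a rough illustration, cropping a second region that is fixed relative to the camera frame might look like the following (the frame size and pixel bounds are hypothetical; in practice the bounds must be chosen so that the region always contains the whole headrest, the bases of the driver's arms, and both elbows for any seat position):

```python
import numpy as np

# Hypothetical bounds of the second region in a 240x320 frame: (rows, cols).
SECOND_REGION = (slice(40, 200), slice(80, 280))

def crop_second_region(frame: np.ndarray) -> np.ndarray:
    """Return the second-region sub-image of a captured frame."""
    rows, cols = SECOND_REGION
    return frame[rows, cols]
```

The region is deliberately generous: it must hold for the frontmost and rearmost seat positions alike, so it is sized once from the seat's travel range rather than per driver.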
 The contour detection unit 109 performs edge detection on the acquired image of the second region to detect the contour of the driver. When a predefined triangular shape is present in the contour around the driver's neck among the detected contours, the contour detection unit 109 judges that the driver is performing armrest driving and cancels the determination by the posture determination unit 104 that the driver's posture has collapsed. On the other hand, when no predefined triangular shape is present in the contour around the driver's neck, the contour detection unit 109 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and outputs it to the image comparison unit 105.
 For example, when the contour detection unit 109 performs edge detection on the images of the second regions Ec and Ed shown in FIGS. 17A and 17B and detects the contour of the driver X, it judges that the triangular shapes Sa and Sb are present in the contour around the neck. The contour detection unit 109 therefore judges, for the captured images shown in FIGS. 17A and 17B, that the driver X is performing armrest driving.
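The specification does not give the triangle test itself. One common way to decide whether a detected contour is "triangular" is to simplify it to its dominant vertices (as OpenCV's `approxPolyDP` does) and count them. Below is a self-contained sketch using a pure-Python Ramer-Douglas-Peucker simplification, assuming the contour is available as a closed list of (x, y) points whose traversal starts at a corner:

```python
import math

def _dist_to_line(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    den = math.hypot(bx - ax, by - ay)
    if den == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax) / den

def simplify(points, eps):
    """Ramer-Douglas-Peucker polyline simplification."""
    if len(points) < 3:
        return list(points)
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _dist_to_line(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], eps)
    right = simplify(points[idx:], eps)
    return left[:-1] + right

def looks_triangular(closed_contour, eps=1.0):
    """True when the closed contour reduces to exactly three vertices,
    i.e. the triangular shape formed around the neck in armrest driving."""
    pts = list(closed_contour)
    if pts[0] != pts[-1]:
        pts.append(pts[0])
    # The start vertex appears twice in the closed polyline, so a
    # triangle yields 4 points after simplification.
    return len(simplify(pts, eps)) == 4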
 Next, a hardware configuration example of the state determination device 100B will be described. Description of the same configuration as in the first embodiment is omitted.
 The contour detection unit 109 in the state determination device 100B is implemented by the processing circuit 100a shown in FIG. 3A, or by the processor 100b shown in FIG. 3B executing a program stored in the memory 100c.
 Next, the operation of the state determination process performed by the state determination device 100B will be described.
 FIG. 18 is a flowchart showing the operation of the state determination process of the state determination device 100B according to the third embodiment.
 In the following, the same steps as those of the state determination device 100 according to the first embodiment are given the same reference numerals as in FIG. 9, and their description is omitted or simplified. As in the first embodiment, it is assumed that the image acquisition unit 101 constantly acquires captured images and outputs them to the face detection unit 102, the contour detection unit 109, and the image comparison unit 105.
 When the posture determination unit 104 determines in step ST26 that the driver's posture has collapsed (step ST26; YES), the contour detection unit 109 acquires the image of the second region in the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST51). The contour detection unit 109 then detects the contour of the driver from the image of the second region acquired in step ST51 (step ST52).
 The contour detection unit 109 determines whether a shape matching or similar to the preset triangular shape is present in the contour around the driver's neck among the driver's contours detected in step ST52 (step ST53). If such a shape is present (step ST53; YES), the contour detection unit 109 judges that the driver is performing armrest driving (step ST54). The contour detection unit 109 then cancels the determination by the posture determination unit 104 that the driver's posture has collapsed (step ST55), and the process ends.
 On the other hand, if no shape matching or similar to the preset triangular shape is present (step ST53; NO), the contour detection unit 109 maintains the determination by the posture determination unit 104 that the driver's posture has collapsed, and notifies the image comparison unit 105 of that determination (step ST56). Based on the determination notified in step ST56, the image comparison unit 105 acquires the image of the first region in the captured image at the time of the posture determination acquired by the image acquisition unit 101 (step ST27). Thereafter, the flowchart proceeds to the processing from step ST28 onward.
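Steps ST51 through ST56 amount to a small branch on the triangle test. A hedged sketch of that control flow (function and return-value names are illustrative, not from the specification):

```python
def armrest_check(posture_collapsed: bool,
                  neck_contour,
                  has_triangle) -> str:
    """Sketch of steps ST51-ST56: decide whether the posture-collapse
    determination is cancelled (armrest driving) or maintained."""
    if not posture_collapsed:                # ST26; NO
        return "normal"
    if has_triangle(neck_contour):           # ST53; YES -> ST54, ST55
        return "cancelled"
    return "maintained"                      # ST53; NO -> ST56, notify unit 105
```

Here `has_triangle` stands in for the contour test described above; only the "maintained" outcome is forwarded to the image comparison unit 105 for re-determination.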
 As described above, according to the third embodiment, when the posture determination unit 104 determines that the driver's posture has collapsed, the contour detection unit 109 detects the contour of the driver from the image of the second region in the captured image at the time of the posture determination and, if the detected contour includes a triangular shape, cancels the determination that the driver's posture has collapsed. This configuration suppresses erroneous determination of a driver performing armrest driving with the head moved as being in a state requiring alerting or in an inoperable state, thereby improving the accuracy of the driver state determination.
 In the third embodiment described above, the contour detection unit 109 is added to the state determination device 100 of the first embodiment, but it may instead be added to the state determination device 100A of the second embodiment.
 In that case, the contour detection unit 109 may perform its processing in parallel with the axis detection unit 108, or may execute its processing after the processing of the axis detection unit 108.
 In the state determination devices 100, 100A, and 100B described in the first to third embodiments, when it is determined that the driver is in a state requiring alerting, or that the driver is in an inoperable state, the external warning device 400 issues an audible warning.
 The warning device 400 may also show on a display that an inoperable state has been determined, and may display a button that accepts an input for canceling that determination.
 FIG. 19 is a diagram showing an example of notifying the driver of the determination results of the state determination devices 100, 100A, and 100B according to the first to third embodiments.
 The display 401 shows the determination result 402 of the state determination device 100, 100A, or 100B and a button 403 for canceling that determination result. The determination result 402 reads, for example, "Inoperable state being determined", and the button 403 reads, for example, "Reset to normal state".
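The display/reset interaction of FIG. 19 could be modelled as a small state holder (the class, attribute, and method names are assumptions made for illustration):

```python
class DriverStateNotifier:
    """Holds the determination shown on the display 401 and clears it
    when the 'Reset to normal state' button 403 is pressed."""

    def __init__(self):
        self.determination = None          # None means normal state

    def set_inoperable(self):
        self.determination = "inoperable"  # shown as determination result 402

    def press_reset_button(self):
        # Driver asserts they are not inoperable: cancel the determination.
        self.determination = None
```

The reset path gives the driver a manual override for the rare false positive that survives the axis- and contour-based checks.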
 The driver refers to the determination result 402 shown on the display 401 and, if the driver is not actually in an inoperable state, can cause the state determination device 100, 100A, or 100B to cancel the determination result by pressing the button 403.
 In addition to the above, within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component of each embodiment may be omitted.
 The state determination device according to the present invention is suitable for determining the driving state based on the collapse of the driver's posture, and is applicable to driver monitoring systems and the like that require improved determination accuracy.
 100, 100A, 100B state determination device, 101 image acquisition unit, 102 face detection unit, 103 reference point calculation unit, 104 posture determination unit, 105 image comparison unit, 106 comparison image storage unit, 107 state determination unit, 108 axis detection unit, 109 contour detection unit.

Claims (9)

  1.  A state determination device comprising:
     an image acquisition unit that acquires a captured image of the interior of a vehicle including a driver;
     a face detection unit that detects the face position of the driver from the captured image acquired by the image acquisition unit;
     a posture determination unit that compares the face position of the driver detected by the face detection unit with a reference point of the driver calculated in advance, and determines that the posture of the driver has collapsed when a movement of the face position of the driver equal to or greater than a first posture-collapse determination amount is detected in a set movement direction;
     an image comparison unit that, when the posture determination unit determines that the posture of the driver has collapsed, compares the image of a first region in the captured image acquired by the image acquisition unit with the image of the first region in a comparison image and re-determines whether the posture of the driver has collapsed; and
     a state determination unit that determines whether the driver is in an inoperable state based on the result of the re-determination by the image comparison unit.
  2.  The state determination device according to claim 1, wherein the posture determination unit determines that the posture of the driver has collapsed when a displacement of the face orientation of the driver equal to or greater than a second posture-collapse determination amount is detected.
  3.  The state determination device according to claim 1, wherein, when the comparison image is a captured image of the driver's seat taken while the driver is seated in the driver's seat, the image comparison unit calculates a degree of coincidence between the image of the first region in the captured image and the image of the first region in the comparison image, and re-determines whether the posture of the driver has collapsed according to the calculated degree of coincidence.
  4.  The state determination device according to claim 1, wherein, when the comparison image is a captured image of the driver's seat taken while the driver is not seated, the image comparison unit calculates a degree of divergence between the image of the first region in the captured image and the image of the first region in the comparison image, and re-determines whether the posture of the driver has collapsed according to the calculated degree of divergence.
  5.  The state determination device according to claim 1, wherein the state determination unit determines that the driver is in a state requiring alerting when the state re-determined by the image comparison unit as the posture of the driver having collapsed continues for an alerting determination time or longer, and determines that the driver is in an inoperable state when the state re-determined as the posture of the driver having collapsed continues for an inoperability determination time, which is longer than the alerting determination time, or longer.
  6.  The state determination device according to claim 5, wherein the image comparison unit reduces the frequency of the processing of comparing the image of the first region in the captured image with the image of the first region in the comparison image when the state determination unit determines that the driver is in the state requiring alerting or in the inoperable state.
  7.  The state determination device according to claim 1, further comprising an axis detection unit that, when the posture determination unit determines that the posture of the driver has collapsed, detects the inclination of the central axis of the driver's head from the captured image acquired by the image acquisition unit and cancels the determination that the posture of the driver has collapsed when the detected inclination of the central axis of the head is within a preset threshold range.
  8.  The state determination device according to claim 1, further comprising a contour detection unit that, when the posture determination unit determines that the posture of the driver has collapsed, detects the contour of the driver from the image of a second region in the captured image acquired by the image acquisition unit and cancels the determination that the posture of the driver has collapsed when the detected contour includes a triangular shape.
  9.  A state determination method comprising:
     a step in which an image acquisition unit acquires a captured image of the interior of a vehicle including a driver;
     a step in which a face detection unit detects the face position of the driver from the acquired captured image;
     a step in which a posture determination unit compares the detected face position of the driver with a reference point of the driver calculated in advance, and determines that the posture of the driver has collapsed when a movement of the face position of the driver equal to or greater than a first posture-collapse determination amount is detected in a set movement direction;
     a step in which an image comparison unit, when it is determined that the posture of the driver has collapsed, compares the image of a first region in the acquired captured image with the image of the first region in a comparison image and re-determines whether the posture of the driver has collapsed; and
     a step in which a state determination unit determines whether the driver is in an inoperable state based on the result of the re-determination.
PCT/JP2017/021121 2017-06-07 2017-06-07 State determination device and state determination method WO2018225176A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/021121 WO2018225176A1 (en) 2017-06-07 2017-06-07 State determination device and state determination method
JP2019523262A JP6960995B2 (en) 2017-06-07 2017-06-07 State judgment device and state judgment method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/021121 WO2018225176A1 (en) 2017-06-07 2017-06-07 State determination device and state determination method

Publications (1)

Publication Number Publication Date
WO2018225176A1 true WO2018225176A1 (en) 2018-12-13

Family

ID=64566648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/021121 WO2018225176A1 (en) 2017-06-07 2017-06-07 State determination device and state determination method

Country Status (2)

Country Link
JP (1) JP6960995B2 (en)
WO (1) WO2018225176A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110103816B (en) * 2019-03-15 2022-04-19 河南理工大学 Driving state detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008137639A (en) * 2006-11-06 2008-06-19 Quality Kk Vehicle control device and vehicle control program
JP2016009255A (en) * 2014-06-23 2016-01-18 株式会社デンソー Driver's undrivable state detector


Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020179656A1 (en) * 2019-03-06 2020-09-10 オムロン株式会社 Driver monitoring device
WO2020258719A1 (en) * 2019-06-28 2020-12-30 深圳市商汤科技有限公司 Method, apparatus and device for detecting on-duty state of driver, and computer storage medium
US11423676B2 (en) 2019-06-28 2022-08-23 Shenzhen Sensetime Technology Co., Ltd. Method and apparatus for detecting on-duty state of driver, device, and computer storage medium
JPWO2021001944A1 (en) * 2019-07-02 2021-11-04 三菱電機株式会社 In-vehicle image processing device and in-vehicle image processing method
WO2021001943A1 (en) * 2019-07-02 2021-01-07 三菱電機株式会社 In-vehicle image processing device and in-vehicle image processing method
JPWO2021001943A1 (en) * 2019-07-02 2021-11-25 三菱電機株式会社 In-vehicle image processing device and in-vehicle image processing method
WO2021001944A1 (en) * 2019-07-02 2021-01-07 三菱電機株式会社 On-board image processing device and on-board image processing method
JP7183420B2 (en) 2019-07-02 2022-12-05 三菱電機株式会社 In-vehicle image processing device and in-vehicle image processing method
WO2021200341A1 (en) * 2020-03-31 2021-10-07 いすゞ自動車株式会社 Permit/prohibit determination device
JP2021163124A (en) * 2020-03-31 2021-10-11 いすゞ自動車株式会社 Permission/prohibition determination device
CN115398508A (en) * 2020-03-31 2022-11-25 五十铃自动车株式会社 Permission/non-permission determination device
JP7351253B2 (en) 2020-03-31 2023-09-27 いすゞ自動車株式会社 Approval/refusal decision device
CN115398508B (en) * 2020-03-31 2024-01-05 五十铃自动车株式会社 Permission determination device
WO2024069785A1 (en) * 2022-09-28 2024-04-04 三菱電機株式会社 Occupant state determination device, occupant state determination system, occupant state determination method, program, and vehicle control system

Also Published As

Publication number Publication date
JPWO2018225176A1 (en) 2019-12-12
JP6960995B2 (en) 2021-11-05

Similar Documents

Publication Publication Date Title
WO2018225176A1 (en) State determination device and state determination method
US10796171B2 (en) Object recognition apparatus, object recognition method, and object recognition program
CN104573623B (en) Face detection device and method
JP5867273B2 (en) Approaching object detection device, approaching object detection method, and computer program for approaching object detection
JP6573193B2 (en) Determination device, determination method, and determination program
CN109997148B (en) Information processing apparatus, imaging apparatus, device control system, moving object, information processing method, and computer-readable recording medium
JP6775197B2 (en) Display device and display method
JP7290930B2 (en) Occupant modeling device, occupant modeling method and occupant modeling program
JP2004334786A (en) State detection device and state detection system
JP2020056717A (en) Position detection device
JP2005066023A (en) Apparatus for detecting driver's condition
JP6594595B2 (en) Inoperable state determination device and inoperable state determination method
JP2017030578A (en) Automatic drive control apparatus, and automatic drive control method
JP2005018655A (en) Driver's action estimation device
US11915495B2 (en) Information processing apparatus, and recording medium
JP7312971B2 (en) vehicle display
WO2022113275A1 (en) Sleep detection device and sleep detection system
JP6711128B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and program
JP2009278185A (en) Image recognition apparatus
WO2018097269A1 (en) Information processing device, imaging device, equipment control system, mobile object, information processing method, and computer-readable recording medium
JP4847303B2 (en) Obstacle detection method, obstacle detection program, and obstacle detection apparatus
JP7257623B2 (en) Display control device and display control method
WO2024079779A1 (en) Passenger state determination device, passenger state determination system, passenger state determination method and program
WO2022176037A1 (en) Adjustment device, adjustment system, display device, occupant monitoring device, and adjustment method
US20210407299A1 (en) Display device for vehicle, display method for vehicle, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17912476

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019523262

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17912476

Country of ref document: EP

Kind code of ref document: A1