WO2020179656A1 - Driver monitoring device - Google Patents


Info

Publication number
WO2020179656A1
Authority
WO
WIPO (PCT)
Prior art keywords
driver
difference
posture
image
total
Prior art date
Application number
PCT/JP2020/008262
Other languages
French (fr)
Japanese (ja)
Inventor
一輝 三浦
Original Assignee
OMRON Corporation
Priority date
Filing date
Publication date
Application filed by OMRON Corporation
Publication of WO2020179656A1 publication Critical patent/WO2020179656A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/16 Anti-collision systems

Definitions

  • The present invention relates to a device for monitoring the state of a driver of a vehicle, and particularly to a technique for determining whether or not the driver's posture is appropriate.
  • Driver monitors are known that capture and monitor a driver with a camera to prevent traffic accidents caused by drowsy driving, looking aside, and the like.
  • The driver monitor is installed facing the driver's seat of the vehicle, and when a driver abnormality is detected from the image captured by the camera, it outputs an alarm to alert the driver or performs predetermined control on the vehicle.
  • Examples of driver abnormality include drowsiness, looking aside, and an improper driving posture. For example, when the driver is in a reclining posture, leaning far back, the face turns upward and visibility ahead of the vehicle deteriorates; if the vehicle then approaches the vehicle ahead abnormally, or another vehicle suddenly cuts in, the driver cannot immediately shift to danger-avoiding driving, which is a safety problem.
  • In Patent Documents 1 to 3, devices have been proposed that monitor the driver's posture with a camera or a sensor and take measures such as outputting an alarm when the posture is not appropriate.
  • In the device of Patent Document 1, a posture-collapse amount is calculated based on the image captured by the camera. For example, the center of the headrest or of the steering wheel is used as a reference point, and the distance between the center of the driver's face and the reference point is calculated as the posture-collapse amount. When the calculated posture-collapse amount is larger than a threshold value, the driver is determined to be in a posture-collapse state.
  • In the device of Patent Document 2, the driver's posture information (elbow angle, knee angle, upper-body angle, etc.) is acquired from an image captured by a camera, and the driver's confidence in driving is calculated from the degree of dispersion of this posture information. If the calculated confidence is less than a predetermined value, the driver's current driving posture and an appropriate driving posture are displayed to encourage the driver to improve the driving posture.
  • In the device of Patent Document 3, a body-pressure distribution sensor that detects the driver's body-pressure distribution is provided in the seat of the driver's seat as a means for detecting a body signal corresponding to the driving posture, and the driver's driving posture is determined based on the body-pressure distribution detected by this sensor. The driver's driving posture and a proper driving posture are then displayed so that they can be compared, and the driver is notified of the body-part movements needed to correct the driving posture to the proper one.
  • However, in Patent Document 1, since the distance between the center of the driver's face and the reference point must be measured to detect posture collapse, the driver monitor must be equipped with a distance-measuring function.
  • In Patent Document 2, since the captured image is analyzed and posture information is obtained by quantifying the angles of the elbows, knees, upper body, and the like, the calculation process is complicated.
  • In Patent Document 3, a body-signal detecting means such as a body-pressure distribution sensor is required in addition to the driver monitor, so the configuration becomes complicated.
  • An object of the present invention is to provide a driver monitoring device that can easily detect an improper posture of a driver without a distance measuring function.
  • A driver monitoring device according to the present invention includes a camera that captures an image of a driver, a posture determination unit that determines the driver's posture based on the image captured by the camera, an alarm output unit that outputs an alarm when the driver's posture is determined to be inappropriate, a difference calculation unit that calculates the difference between two images captured by the camera, and a total difference calculation unit that binarizes the difference image obtained by the difference calculation unit using a threshold value and calculates, as a total difference, the total number of pixels in the binarized difference image that are equal to or greater than the threshold value.
  • The camera generates a first reference image captured while the driver is not in the vehicle, a second reference image captured when the driver is in the vehicle and the driver's posture is proper, and a driving image captured while the driver is driving.
  • The difference calculation unit calculates the difference between the first reference image and the second reference image to generate a reference difference image, which is the difference between those two images, and calculates the difference between the first reference image and the driving image to generate a driving difference image, which is the difference between those two images.
  • The total difference calculation unit calculates the total number of pixels equal to or greater than the threshold value in the binarized reference difference image as a first total difference X, and the total number of pixels equal to or greater than the threshold value in the binarized driving difference image as a second total difference Y.
  • The posture determination unit determines whether or not the driver's posture is appropriate based on a comparison between the first total difference X and the second total difference Y.
  • According to the present invention, a reference difference image and a driving difference image are generated from three images: the first reference image, the second reference image, and the driving image. By comparing the total difference obtained by binarizing the reference difference image with the total difference obtained by binarizing the driving difference image, it is determined whether or not the driver's posture is appropriate. Therefore, even a driver monitoring device without a distance-measuring function can determine the appropriateness of the driver's posture by simple arithmetic processing.
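The determination described above can be sketched in a few lines of Python, assuming grayscale frames as NumPy arrays. The function names, the binarization threshold, and the reference value P below are illustrative toy values, not values from this publication:

```python
import numpy as np

def total_difference(img1, img2, thresh=30):
    """Binarize the absolute difference of two grayscale frames with a
    threshold and count the pixels at or above it (the 'total difference')."""
    diff = np.abs(img1.astype(np.int16) - img2.astype(np.int16))
    return int(np.count_nonzero(diff >= thresh))

def posture_is_proper(ref_a, ref_b, driving, thresh=30, p=5):
    # X: first total difference (empty-cabin image A vs. proper-posture image B)
    x = total_difference(ref_a, ref_b, thresh)
    # Y: second total difference (empty-cabin image A vs. current driving image C)
    y = total_difference(ref_a, driving, thresh)
    # Proper posture if the two total differences are close
    return abs(x - y) <= p
```

With real camera frames, the threshold and the reference value p would have to be tuned to the image resolution and lighting conditions.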
  • In the present invention, the posture determination unit may determine that the driver's posture is proper when the difference |X - Y| between the first total difference X and the second total difference Y is equal to or less than a reference value P, and that the driver's posture is inappropriate when |X - Y| > P.
  • In the present invention, a center-of-gravity calculation unit may be provided that calculates the barycentric position of the pixel region at or above the threshold value in the binarized reference difference image as a first barycentric position M, and the barycentric position of the pixel region at or above the threshold value in the binarized driving difference image as a second barycentric position N. The posture determination unit may then determine whether or not the driver's posture is proper based on both the comparison between the first total difference X and the second total difference Y and the comparison between the first barycentric position M and the second barycentric position N.
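A minimal sketch of the center-of-gravity comparison, assuming binarized images as NumPy 0/1 arrays. The function names, and the use of Euclidean distance for comparing M and N, are illustrative choices not specified in this publication:

```python
import numpy as np

def centroid(binary_img):
    """Center of gravity (row, col) of the '1' pixels in a binarized
    difference image; returns None if no pixel is set."""
    ys, xs = np.nonzero(binary_img)
    if ys.size == 0:
        return None
    return (float(ys.mean()), float(xs.mean()))

def posture_ok(x, y, m, n, p, q):
    """Posture is proper only if both the total differences (x, y) and the
    barycentric positions (m, n) are close; p and q play the roles of the
    two reference values."""
    shift = float(np.hypot(m[0] - n[0], m[1] - n[1]))
    return abs(x - y) <= p and shift <= q
```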
  • In this case, the posture determination unit may determine that the driver's posture is proper when |X - Y| is equal to or less than the reference value P and the difference |M - N| between the barycentric positions is equal to or less than a reference value Q, and that the driver's posture is inappropriate when |X - Y| > P or |M - N| > Q.
  • In the present invention, when the posture determination unit determines that the driver's posture is inappropriate, it may measure the period during which the inappropriate posture continues as a duration, and the alarm output unit may output an alarm when the duration exceeds a predetermined reference time.
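The duration-based alarm described above can be sketched as a small state holder. The class name, method names, and the 3-second default are illustrative assumptions, not taken from this publication:

```python
class PostureAlarm:
    """Raises an alarm only after an improper posture has persisted
    longer than `reference_time` seconds."""

    def __init__(self, reference_time=3.0):
        self.reference_time = reference_time
        self._since = None  # time at which the improper posture began

    def update(self, posture_proper, now):
        """Call once per frame; returns True when an alarm should sound."""
        if posture_proper:
            self._since = None       # posture recovered: reset the timer
            return False
        if self._since is None:
            self._since = now        # improper posture just started
        return (now - self._since) > self.reference_time
```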
  • In the present invention, the posture determination unit may determine whether or not the posture of the driver who has boarded the vehicle is appropriate based on face information of the driver and vehicle information of the vehicle.
  • The posture determination unit may also determine whether or not the posture of the driver who has boarded the vehicle is appropriate by taking into consideration personal learning data regarding the driver's posture in addition to the face information and the vehicle information.
  • In the present invention, the camera may capture the first reference image when it is confirmed that the driver carrying the electronic key for opening and closing the doors of the vehicle has approached the vehicle.
  • According to the present invention, it is possible to provide a driver monitoring device that can easily detect an inappropriate posture of a driver without a distance-measuring function.
  • FIG. 1 is a block diagram of a driver monitoring device according to the first embodiment of the present invention.
  • FIG. 2 is a diagram showing a state in which a driver is imaged by a camera.
  • FIG. 3 is a diagram illustrating an installation example of a camera.
  • FIG. 4 is a diagram showing an example of an improper posture of the driver.
  • FIG. 5 is a diagram showing another example of the driver's improper posture.
  • FIG. 6 is a diagram showing a reference image A captured in the absence of a driver.
  • FIG. 7 is a diagram showing a reference image B captured while the driver is on board.
  • FIG. 8 is a diagram showing a reference difference image G which is a difference between the reference images A and B.
  • FIG. 9 is a diagram schematically showing binarization of the reference difference image G.
  • FIG. 10 is a diagram showing a driving image C1 captured while the vehicle is driving.
  • FIG. 11 is a diagram showing a driving difference image H1 which is a difference between the reference image A and the driving image C1.
  • FIG. 12 is a diagram schematically showing the binarization of the difference image H1 during operation.
  • FIG. 13 is a diagram showing another driving image C2 taken while driving the vehicle.
  • FIG. 14 is a diagram showing a driving difference image H2 which is a difference between the reference image A and the driving image C2.
  • FIG. 15 is a diagram schematically showing the binarization of the difference image H2 during operation.
  • FIG. 16 is a flowchart showing in detail the posture determination procedure in the first embodiment.
  • FIG. 17 is a diagram illustrating the driver's movement in the left-right direction.
  • FIG. 18 is a diagram schematically showing the principle of the second embodiment of the present invention.
  • FIG. 19 is a block diagram of the driver monitoring device according to the second embodiment.
  • FIG. 20 is
  • FIG. 1 shows the configuration of the driver monitoring device (hereinafter referred to as “driver monitor”) according to the first embodiment.
  • The driver monitor 100 is a device mounted on a vehicle to monitor the driver's state, and includes a camera 1, an image processing unit 2, a control unit 3, a storage unit 4, a communication unit 5, and a power supply circuit 6.
  • the camera 1 is provided, for example, as shown in FIG. 2, in the vehicle interior 51 of the vehicle 50 so as to face the driver DR, and images the driver DR seated on the seat 54.
  • the broken line indicates the imaging range of the camera 1, and the upper body including the face, neck, and shoulders of the driver DR is imaged by the camera 1.
  • Vehicle 50 is, for example, an automobile.
  • Reference numeral 55 in FIG. 2 indicates a headrest provided on the upper portion of the seat 54.
  • FIG. 3 is a view of the windshield W and the instrument panel 52 as viewed from the inside of the passenger compartment 51 toward the traveling direction of the vehicle 50.
  • The camera 1 is installed at a location on the instrument panel 52 facing the driver DR, and images the driver DR through the steering wheel 53. The installation location of the camera 1 is not limited to the example of FIG. 3; it may be installed near the meter 56 or the display 57 arranged on the instrument panel 52, or near the rear-view mirror 58 located on the windshield W, for example.
  • the camera 1 includes an imaging unit 11 that captures the driver DR as a subject, and a light emitting unit 12 that irradiates the driver DR with light.
  • The imaging unit 11 has an image pickup element such as a CMOS image sensor, and the light emitting unit 12 has a light emitting element such as a light emitting diode that emits infrared light.
  • the camera 1 is also equipped with an optical system such as a lens (not shown).
  • The image processing unit 2 processes the image captured by the imaging unit 11 of the camera 1 to extract the face of the driver DR and facial feature points such as the eyes, nose, and mouth. These are then analyzed to detect the driver's face direction, gaze direction, eyelid open/closed state, and the like.
  • The control unit 3 includes a CPU and the like, and comprises a difference calculation unit 31, a total difference calculation unit 32, a vehicle state determination unit 33, a posture determination unit 34, and an alarm output unit 35.
  • The functions of these units are actually realized by software processing, but they are shown here as hardware blocks for convenience.
  • the difference calculation unit 31 calculates the difference between the two images (described later) captured by the camera 1 to generate a difference image.
  • The total difference calculation unit 32 binarizes the difference image into "0" and "1" using a threshold value, and calculates the total number of "1" pixels, i.e., pixels equal to or greater than the threshold value, as the total difference.
  • the vehicle state determination unit 33 determines the state of the vehicle 50 (stopped state, running state, turning state, etc.) based on vehicle information such as vehicle speed and yaw rate input to the driver monitor 100 from the outside.
  • the posture determination unit 34 determines whether or not the posture of the driver DR is appropriate, based on the total difference calculated by the total difference calculation unit 32.
  • The alarm output unit 35 outputs an alarm when the posture determination unit 34 determines that the posture of the driver DR is inappropriate. The processing in each of these blocks 31 to 35 will be described later in more detail.
  • The control unit 3 also includes a drowsiness determination unit that determines whether or not the driver DR is drowsy and an inattention determination unit that determines whether or not the driver DR is looking aside, but these are not shown because they are not directly related to the present invention.
  • the storage unit 4 is composed of a semiconductor memory or the like, and stores various parameters required for processing by the control unit 3.
  • The parameters related to the present invention include the binarization threshold used when the total difference calculation unit 32 binarizes a difference image, and the reference value P and reference time To used by the posture determination unit 34 to determine the posture of the driver DR.
  • The storage unit 4 also has an area storing the program used for software processing in the control unit 3 and an area for temporarily storing calculation results, determination results, and the like from the control unit 3 (neither shown).
  • the communication unit 5 sends and receives signals and data to and from the ECU (Electronic Control Unit) 200 mounted on the vehicle 50, and also sends and receives signals and data to and from the control unit 3.
  • the communication unit 5 also receives signals from various sensors provided in the vehicle 50 as the vehicle information described above.
  • the communication unit 5 and the ECU 200 are connected via a CAN (Controller Area Network).
  • the ECU 200 is composed of a plurality of ECUs provided for each control target, but in FIG. 1, they are collectively shown as one block.
  • the electronic key 300 is a mobile terminal for remotely controlling the opening and closing of the door of the vehicle 50, and performs wireless communication with the ECU 200.
  • the power supply circuit 6 supplies power to each part of the driver monitor 100 under the control of the control unit 3. Power is supplied to the power supply circuit 6 from a battery (not shown) mounted on the vehicle 50.
  • FIG. 4 shows an example of an improper posture of the driver DR boarding the vehicle 50.
  • In FIG. 4, the driver DR has reclined the seat 54 and is leaning far back (a reclining posture).
  • In this state, the face of the driver DR faces upward and visibility ahead of the vehicle 50 deteriorates. Therefore, in the case of an abnormal approach to the vehicle ahead or a sudden cut-in by another vehicle, the driver cannot immediately shift to danger-avoiding driving, which is a safety problem.
  • FIG. 5 shows another example of the incorrect posture of the driver DR.
  • the driver DR is in a posture of being greatly tilted forward (a leaning forward posture).
  • In this state, the face of the driver DR faces downward and visibility ahead of the vehicle 50 likewise deteriorates. The driver therefore cannot immediately shift to danger-avoiding driving against an abnormal approach or a sudden cut-in, and, as with looking aside, there is a risk of an accident due to inattention to the road ahead, which is a safety problem.
  • As a method of detecting such an improper posture, one could measure the distance from the camera 1 to the face of the driver DR based on the captured image and determine whether the posture is proper. With this method, when the distance from the camera 1 to the driver's face is too long, the driver DR is determined to be in the reclining posture of FIG. 4, and when the distance is too short, the driver DR is determined to be in the forward-leaning posture of FIG. 5. However, this method cannot be adopted unless the driver monitor 100 has a distance-measuring function.
  • the present invention makes it possible to determine whether or not the posture of the driver DR is appropriate, even if the driver monitor 100 does not have a distance measuring function.
  • the principle of the present invention will be described.
  • the camera 1 captures the reference image A (first reference image) shown in FIG. 6 and the reference image B (second reference image) shown in FIG. 7.
  • The reference image A of FIG. 6 is an image captured while the driver DR is not in the vehicle 50, and the reference image B of FIG. 7 is an image captured with the driver DR in the vehicle 50 and in a proper posture.
  • the reference image B is captured after the driver DR starts driving the vehicle 50.
  • the difference calculation unit 31 calculates the difference between the reference image A and the reference image B to generate a reference difference image G as shown in FIG. 8 which is the difference between the two images A and B.
  • the reference difference image G the background portion other than the driver DR is darkened by the difference calculation, and the driver DR portion is brightened.
  • The actual reference difference image G is not such a simple light-and-dark image, but in FIG. 8 only the driver DR portion is drawn brightly for simplicity (the same applies to FIGS. 11 and 14).
  • the posture of the driver DR in the reference difference image G is the same as the posture of the driver DR in the reference image B of FIG. 7, and is an appropriate posture.
  • the total difference calculation unit 32 binarizes the reference difference image G as shown in FIG. 9 using the binarization threshold value stored in the storage unit 4 (FIG. 1).
  • FIG. 9 schematically shows pixels forming the reference difference image G.
  • White pixels are "0" pixels whose luminance is less than the threshold value, and black pixels are "1" pixels whose luminance is equal to or greater than the threshold value (the same applies to FIGS. 12, 15, and 18). The white and black coloring is merely for distinguishing "0" from "1" and has no relation to the actual luminance.
  • the total difference calculation unit 32 calculates the total number of pixels of “1” in the binarized reference difference image G of FIG. 9 as the total difference X (first total difference).
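As a toy numeric illustration of this binarization-and-counting step (the 4x4 array and the threshold of 100 are invented values, not from this publication):

```python
import numpy as np

# Toy 4x4 stand-in for the reference difference image G (luminance values).
g = np.array([[10, 200, 210, 12],
              [ 5, 220, 215,  8],
              [ 7, 190, 205,  9],
              [ 6,   4,   3,  2]], dtype=np.uint8)

threshold = 100                               # binarization threshold
binary_g = (g >= threshold).astype(np.uint8)  # "1" at/above threshold, else "0"
x = int(binary_g.sum())                       # total difference X = number of "1" pixels
```

Here the six bright pixels (the "driver" region) survive the binarization, so X = 6.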
  • the camera 1 constantly captures the driver DR and generates a driving image captured while the driver DR is driving. If the posture of the driver DR is appropriate, the captured driving image is the same as the reference image B shown in FIG. 7. On the other hand, if the posture of the driver DR is inappropriate, the captured driving image will be an image as shown in FIGS. 10 and 13.
  • FIG. 10 shows a driving image C1 when the driver DR is in the reclining posture shown in FIG.
  • In this driving image C1, since the driver DR has moved rearward as viewed from the camera 1, the driver DR appears smaller than in the reference image B of FIG. 7.
  • The difference calculation unit 31 calculates the difference between the reference image A of FIG. 6 and the driving image C1 of FIG. 10 to generate a driving difference image H1, the difference between the two images A and C1, as shown in FIG. 11.
  • The total difference calculation unit 32 binarizes the driving difference image H1 using the binarization threshold, as shown in FIG. 12, and calculates the total number of "1" pixels in the binarized driving difference image H1 as the total difference Y (second total difference).
  • The posture determination unit 34 determines whether or not the posture of the driver DR is appropriate based on the total difference X calculated for the reference difference image G of FIG. 9, the total difference Y calculated for the driving difference image H1 of FIG. 12, and the reference value P (first reference value) stored in the storage unit 4. Specifically, the posture determination unit 34 calculates the difference |X - Y| between the total difference X and the total difference Y and compares it with the reference value P. When the posture of the driver DR is appropriate, the face of the driver DR is neither too far from nor too close to the camera 1, so the difference |X - Y| is equal to or less than the reference value P and the posture is determined to be proper. When the driver DR is in the reclining posture (FIG. 4), the face of the driver DR is too far from the camera 1, the driver DR is imaged smaller, and the total difference Y becomes smaller accordingly. Therefore, the difference |X - Y| exceeds the reference value P, and the posture is determined to be inappropriate.
  • FIG. 13 shows a driving image C2 when the driver DR is in the forward leaning posture shown in FIG.
  • In this driving image C2, since the driver DR has moved closer as viewed from the camera 1, the driver DR appears larger than in the reference image B of FIG. 7.
  • The difference calculation unit 31 calculates the difference between the reference image A of FIG. 6 and the driving image C2 of FIG. 13 to generate a driving difference image H2, the difference between the two images A and C2, as shown in FIG. 14.
  • The total difference calculation unit 32 binarizes this driving difference image H2 using the binarization threshold, as shown in FIG. 15, and calculates the total number of "1" pixels in the binarized driving difference image H2 as the total difference Y (second total difference).
  • The posture determination unit 34 determines whether or not the posture of the driver DR is appropriate based on the total difference X calculated for the reference difference image G of FIG. 9, the total difference Y calculated for the driving difference image H2 of FIG. 15, and the reference value P (first reference value) stored in the storage unit 4. Specifically, as above, the difference |X - Y| is calculated and compared with the reference value P. In the forward-leaning posture the driver DR is imaged larger and the total difference Y becomes larger, so the difference |X - Y| exceeds the reference value P and the posture is determined to be inappropriate.
  • When the posture determination unit 34 determines that the posture of the driver DR is inappropriate, the alarm output unit 35 outputs an alarm. This alarm is output from a speaker (not shown) provided in the vehicle 50, for example as voice guidance such as "Your driving posture is not proper. Please return to the correct posture."
  • In this way, whether or not the posture of the driver DR is appropriate is determined by comparing the total differences X and Y obtained from the binarized difference images. Therefore, even though the driver monitor 100 has no distance-measuring function, it can easily determine by simple arithmetic processing that the driver DR is in a reclining or forward-leaning posture.
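A worked numeric example of this decision rule, with invented values for X, Y, and P (the publication does not give concrete numbers):

```python
# Assumed values: X from the reference difference image, reference value P.
X, P = 5000, 1000

def improper(Y):
    # Y is the total difference computed from the current driving image
    return abs(X - Y) > P

# Y near X -> proper; smaller Y (reclining, driver imaged smaller) or
# larger Y (leaning forward, driver imaged larger) -> improper.
results = [improper(5100),   # False: |5000 - 5100| = 100 <= P
           improper(3500),   # True:  |5000 - 3500| = 1500 > P
           improper(6800)]   # True:  |5000 - 6800| = 1800 > P
```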
  • FIG. 16 is a flowchart showing the posture determination procedure in the driver monitor 100 of the first embodiment in more detail.
  • In step S1, the system waits for the driver DR carrying the electronic key 300 (FIG. 1) to approach the vehicle 50.
  • The ECU 200 communicates with the electronic key 300 and detects the approach of the electronic key 300 to the vehicle 50 upon receiving a response signal transmitted from the electronic key 300.
  • Upon this detection, a door opening/closing unit (not shown) unlocks the doors of the vehicle 50.
  • In step S2, the control unit 3 drives the power supply circuit 6 to turn on the power supply of the driver monitor 100.
  • In step S3, the camera 1 captures the reference image A (FIG. 6) while the driver DR is absent.
  • In step S4, it is determined whether or not the driver DR has boarded the vehicle 50. This determination is made based on the output of, for example, a door sensor (not shown) that detects that the driver's door has been opened, or a seating sensor (not shown) that detects that the driver DR is seated on the seat 54.
  • In step S5, the posture determination unit 34 determines whether or not the posture of the driver DR is appropriate. This determination is made based on, for example, face information of the driver DR (face direction, gaze direction, etc.) obtained by image processing in the image processing unit 2, and vehicle information of the vehicle 50 input via the communication unit 5. As described above, the vehicle speed, yaw rate, and the like are input as vehicle information, and the vehicle state determination unit 33 determines the state of the vehicle 50 (running, stopped, turning, etc.) based on them.
  • For example, the posture determination unit 34 determines that the posture of the driver DR is not appropriate when the driver's face is turned sideways, and also while the vehicle 50 is turning, because the driver's body may be swaying. On the other hand, when the face of the driver DR is facing forward and the vehicle 50 is traveling straight, the posture determination unit 34 determines that the posture of the driver DR is appropriate.
  • In making this determination, the posture determination unit 34 may also take into consideration personal learning data regarding the posture of the driver DR stored in the storage unit 4.
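The step-S5 decision described above might be sketched as follows. The angle thresholds and argument names are illustrative assumptions; the publication gives no concrete values:

```python
def posture_precheck(face_dir_deg, gaze_dir_deg, turning):
    """Treat the posture as proper for capturing reference image B only
    when the face and gaze are roughly frontal and the vehicle is not
    turning. The 15/20 degree limits are invented for illustration."""
    return abs(face_dir_deg) < 15 and abs(gaze_dir_deg) < 20 and not turning
```

In a real system, the face and gaze directions would come from the image processing unit and the turning flag from the vehicle state determination unit.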
  • If it is determined in step S5 that the posture of the driver DR is not proper, the process waits until the posture becomes proper. When the posture of the driver DR is determined to be appropriate, the process proceeds to step S6, and the camera 1 captures the reference image B (FIG. 7) including the driver DR.
  • In step S7, the difference calculation unit 31 calculates the difference between the reference image A and the reference image B to generate the reference difference image G (FIG. 8). In step S8, the total difference calculation unit 32 binarizes the reference difference image G (FIG. 9), and in step S9 it totals the "1" pixels of the binarized reference difference image G to calculate the total difference X.
  • In step S10, the camera 1 continuously images the driver DR who is driving, capturing the driving image C (C1 in FIG. 10, C2 in FIG. 13).
  • In step S11, the difference calculation unit 31 calculates the difference between the reference image A and the driving image C to generate the driving difference image H (H1 in FIG. 11, H2 in FIG. 14). In step S12, the total difference calculation unit 32 binarizes the driving difference image H (FIGS. 12 and 15), and in step S13 it totals the "1" pixels of the binarized driving difference image H to calculate the total difference Y.
  • In step S14, the posture determination unit 34 compares the difference |X - Y| between the total differences X and Y with the reference value P. If |X - Y| is equal to or less than P, the posture of the driver DR is determined to be proper and the process returns to step S10; if |X - Y| exceeds P, the posture is determined to be inappropriate and the process proceeds to step S15.
  • In step S15, the posture determination unit 34 measures the time during which the improper posture of the driver DR continues as a duration Tc. This is because, even if the driver DR temporarily assumes an improper posture, there is substantially no safety problem if the proper posture is restored within a very short time, and in such a case an unnecessary alarm should be avoided. Vehicle information is also taken into consideration in measuring the duration Tc. For example, when the gear is in the R (reverse) range to move the vehicle 50 backward, the driver frequently checks the rear, so the posture of the driver DR collapses, but there is no safety problem. In such a case, therefore, the time during which the driver DR is checking the rear is excluded from the duration Tc of the improper posture.
  • In the subsequent step S16, the posture determination unit 34 compares the measured duration Tc with the reference time To stored in the storage unit 4. When the duration Tc is equal to or less than the reference time To (determination in step S16: NO), the improper posture is judged to be temporary, and the process returns to the beginning without executing steps S17 and S18; the operations from step S1 onward are then performed. On the other hand, when the duration Tc exceeds the reference time To (determination in step S16: YES), the improper posture is judged to have continued for a long time, and the process proceeds to step S17.
  • In step S17, the alarm output unit 35 outputs an alarm, as a voice message as described above, notifying the driver DR of the improper posture and prompting correction of the posture.
  • In step S18, the control unit 3 determines whether or not the engine of the vehicle 50 has been turned off (stopped). This determination is made based on whether or not an ignition switch (not shown) has been turned off. While the engine is not turned off (determination in step S18: NO), the process returns to step S10, imaging of the driver DR by the camera 1 continues, and the subsequent steps S11 to S18 are executed repeatedly. When the engine is turned off (determination in step S18: YES), the process returns to the beginning, and the operations from step S1 onward are performed.
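The duration handling of steps S15 and S16 — accumulate the improper-posture time, exclude the rear-check time while the gear is in reverse, and reset once the posture recovers — can be sketched as a per-frame update (a hypothetical helper; the one-second time step, the reference time of 3 s, and the reset-on-recovery behavior are assumptions for illustration):

```python
def update_duration(duration, frame_dt, posture_improper, gear_reverse):
    """Accumulate how long the improper posture has continued (step S15).

    Time spent with the gear in reverse (rear check) is excluded from the
    accumulation, and the timer resets when the posture is appropriate again.
    """
    if not posture_improper:
        return 0.0            # posture recovered: temporary lapse, no alarm
    if gear_reverse:          # backward check: posture collapse is expected
        return duration       # do not accumulate, but do not reset either
    return duration + frame_dt

REFERENCE_TIME_TO = 3.0  # seconds; illustrative value for the reference time To

tc = 0.0
frames = [  # (posture_improper, gear_reverse) sampled once per second
    (True, False), (True, True), (True, False), (True, False), (True, False),
]
alarmed = False
for improper, reverse in frames:
    tc = update_duration(tc, 1.0, improper, reverse)
    if tc > REFERENCE_TIME_TO:  # step S16: Tc > To, so proceed to step S17
        alarmed = True
print(tc, alarmed)  # 4.0 True
```

The second frame (reverse gear) is skipped by the timer, so the alarm fires only after the improper posture has genuinely persisted beyond To.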
  • In the first embodiment described above, a reclining posture or a forward leaning posture of the driver DR, that is, an inappropriate posture in the front-rear direction as viewed from the camera 1, is detected. With the first embodiment, however, it is difficult to detect an inappropriate posture in the left-right direction, the up-down direction, or a diagonal direction as viewed from the camera 1. For example, as shown in FIG. 17, when the driver DR in an appropriate posture moves in the left-right direction as viewed from the camera 1 (to the left in the figure) and comes to the position indicated by the broken line, the driver DR′ is out of the normal seating position and is therefore in an inappropriate posture in the left-right direction.
  • In this case, however, the size on the image of the driver DR′ in the improper posture is almost the same as the size on the image of the driver DR in the proper posture, so there is no large difference between the total differences. For this reason, when the suitability of the posture is determined based only on the difference between the total difference X and the total difference Y as in the first embodiment, there is a risk that both the driver DR and the driver DR′ will be determined to have proper postures. The second embodiment solves this problem.
  • FIG. 18 is a diagram schematically showing the principle of the second embodiment.
  • FIG. 18A shows the binarized reference difference image corresponding to the driver DR of FIG. 17, and FIG. 18B shows the binarized driving difference image corresponding to the driver DR′ of FIG. 17. Since the total differences (the total numbers of black pixels) in these binarized images are almost the same, the suitability of the posture cannot be determined from the total difference alone, as described above.
  • In the second embodiment, therefore, the position of the center of gravity of the "1" pixel region (the black region in FIG. 18) is calculated for each of the reference difference image and the driving difference image.
  • In FIG. 18A, the barycentric position M (first barycentric position) of the reference difference image G is indicated by an X mark, and in FIG. 18B, the barycentric position N (second barycentric position) of the driving difference image H is indicated by a ◯ mark.
  • The difference between the X coordinate of the center-of-gravity position M and the X coordinate of the center-of-gravity position N, and the difference between the Y coordinate of the center-of-gravity position M and the Y coordinate of the center-of-gravity position N, represent the movement of the driver on the image. For simplification, FIG. 18 shows an example in which the center-of-gravity position N is moved horizontally to the left of the center-of-gravity position M by a distance δ. In this example, the X coordinate of the center-of-gravity position N has changed from the X coordinate of the center-of-gravity position M, but the Y coordinate of the center-of-gravity position N is the same as the Y coordinate of the center-of-gravity position M. That is, the difference between the X coordinates is δ, and the difference between the Y coordinates is 0.
  • If the movement distance δ is equal to or less than a reference value Q (δ ≤ Q), the posture of the driver DR is determined to be appropriate; if the movement distance δ exceeds the reference value Q (δ > Q), the posture of the driver DR is determined to be inappropriate.
  • In the determination, the difference |M−N| between the center-of-gravity position M and the center-of-gravity position N (here, the distance δ) is compared with the reference value Q to judge whether it is a value equal to or less than the reference value Q.
  • Note that individual reference values Q1, Q2, and Q3 may be set for the left-right, up-down, and diagonal directions, respectively.
  • In the second embodiment, as described above, the center-of-gravity positions M and N of the driver on the image are also taken into consideration in determining the suitability of the posture. This makes it possible to detect not only an unsuitable posture of the driver DR in the front-rear direction but also an unsuitable posture of the driver DR in the left-right and up-down directions.
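The combined second-embodiment check can be sketched as follows (illustrative pure-Python code; the patent specifies only coordinate differences between the centroids, so the Euclidean distance between M and N is used here as one plausible reading of |M−N|, and the grids and reference values are invented for the example):

```python
import math

def centroid(binary_img):
    """Center of gravity (row, col) of the "1" pixel region."""
    pts = [(y, x) for y, row in enumerate(binary_img)
                  for x, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def posture_ok(binary_g, binary_h, p_ref, q_ref):
    """Posture is appropriate only if both the total differences
    and the centroid positions agree within their reference values."""
    x_total = sum(map(sum, binary_g))             # first total difference X
    y_total = sum(map(sum, binary_h))             # second total difference Y
    m = centroid(binary_g)                        # first centroid position M
    n = centroid(binary_h)                        # second centroid position N
    moved = math.hypot(m[0] - n[0], m[1] - n[1])  # distance |M - N|
    return abs(x_total - y_total) <= p_ref and moved <= q_ref

# Same-sized driver shifted two columns to the left (the FIG. 17/18 case):
g = [[0] * 6 for _ in range(6)]
h = [[0] * 6 for _ in range(6)]
for r in (2, 3):
    g[r][3] = g[r][4] = 1   # binarized reference difference image
    h[r][1] = h[r][2] = 1   # binarized driving difference image
print(posture_ok(g, h, p_ref=2, q_ref=1))  # False: totals match, centroid moved
```

The total differences alone (4 pixels in each image) would pass the first-embodiment check; only the centroid comparison catches the left-right shift.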
  • FIG. 19 shows the configuration of the driver monitor 100 according to the second embodiment.
  • In FIG. 19, the same parts as those in FIG. 1 are designated by the same reference numerals.
  • The differences from FIG. 1 are that the control unit 3 is provided with a center-of-gravity calculation unit 36 and that the storage unit 4 stores the reference value Q.
  • The configurations other than these are the same as those in FIG. 1, and description thereof is omitted.
  • FIG. 20 is a flowchart showing the posture determination procedure in the driver monitor 100 of the second embodiment in more detail.
  • In FIG. 20, steps that perform the same processing as in FIG. 16 are assigned the same reference numerals.
  • Steps S1 to S9, S10 to S13, and S15 to S18 in FIG. 20 are the same processes as in FIG. 16, so description thereof will be omitted.
  • In step S9a, the center-of-gravity calculation unit 36 calculates the center-of-gravity position M (FIG. 18A) of the "1" pixel region in the binarized reference difference image G.
  • In step S13a, the center-of-gravity calculation unit 36 calculates the center-of-gravity position N (FIG. 18B) of the "1" pixel region in the binarized driving difference image H.
  • In step S14a, the posture determination unit 34 compares the difference |X−Y| between the total difference X and the total difference Y with the reference value P, and also compares the difference |M−N| between the center-of-gravity position M and the center-of-gravity position N with the reference value Q. If |X−Y| ≤ P and |M−N| ≤ Q, the posture of the driver DR is determined to be appropriate; if |X−Y| > P or |M−N| > Q, the posture of the driver DR is determined to be inappropriate, and the process proceeds to step S15.
  • In the embodiments described above, the difference |X−Y| between the total difference X and the total difference Y is compared with the reference value P, but the ratio Y/X (or X/Y) of the total difference X and the total difference Y may instead be calculated and compared with a predetermined reference value. In short, the suitability of the posture of the driver DR may be determined based on any comparison result between the total difference X and the total difference Y.
  • Similarly, the difference |M−N| between the center-of-gravity position M and the center-of-gravity position N is compared with the reference value Q, but the ratio M/N (or N/M) of the center-of-gravity position M and the center-of-gravity position N may instead be calculated and compared with a predetermined reference value. In short, the suitability of the posture of the driver DR may be determined based on any comparison result between the center-of-gravity position M and the center-of-gravity position N.
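The ratio-based variant for the total differences mentioned above can be pictured with a short sketch (the reference range of 0.8 to 1.25 is purely illustrative; the patent says only "a predetermined reference value"):

```python
def posture_ok_ratio(x_total, y_total, r_low=0.8, r_high=1.25):
    """Alternative check from the text: compare the ratio Y/X with a
    predetermined reference range instead of the difference |X - Y|."""
    if x_total == 0:            # guard: no pixels in the reference difference
        return y_total == 0
    ratio = y_total / x_total
    return r_low <= ratio <= r_high

print(posture_ok_ratio(1000, 980))  # True: ratio near 1, posture appropriate
print(posture_ok_ratio(1000, 400))  # False: driver's area shrank on the image
```

A ratio has the minor practical advantage of being scale-free, so the same reference range works regardless of image resolution, whereas the reference value P for |X−Y| is in absolute pixel counts.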
  • In the embodiments described above, the camera 1 captures the reference image B after the driver DR starts driving the vehicle 50; however, the camera 1 may instead capture the reference image B between the time the driver DR boards the vehicle 50 and the time driving starts.
  • The present invention can also be applied to a driver monitor mounted on another type of vehicle, such as a truck or a bus.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Emergency Alarm Devices (AREA)
  • Image Processing (AREA)

Abstract

In the present invention, a camera (1) generates: a first reference image in which a driver is imaged while not riding in a vehicle; a second reference image in which the driver is imaged while riding with proper posture; and a driving image in which the driver is imaged while driving. A difference calculation unit (31) generates: a reference difference image which is the difference between the first reference image and the second reference image; and a driving difference image which is the difference between the first reference image and the driving image. A difference total calculation unit (32): calculates, as a first difference total, the total number of pixels of a threshold value or above in the binarized reference difference image; and calculates, as a second difference total, the total number of pixels of the threshold value or above in the binarized driving difference image. A posture determination unit (34) determines whether the posture of the driver is proper, on the basis of a comparison result between the first difference total and the second difference total. An alarm output unit (35) outputs an alarm if the posture of the driver is determined to be improper.

Description

Driver monitoring device
The present invention relates to a device for monitoring the condition of a driver of a vehicle, and particularly to a technique for determining whether or not the posture of the driver is appropriate.
Driver monitors that image and monitor the driver with a camera are known as a means of preventing traffic accidents caused by drowsy driving, inattentive driving, and the like. A driver monitor is installed facing the driver's seat of a vehicle; when an abnormality of the driver is detected from the image captured by the camera, it outputs an alarm to alert the driver or performs predetermined control on the vehicle. Abnormalities of the driver include not only drowsiness and inattention but also an improper driving posture. For example, when the driver is in a reclining posture tilted far backward, the face turns upward and forward visibility deteriorates, so that when a dangerous situation occurs, such as an abnormal approach to the vehicle ahead or another vehicle suddenly cutting in, the driver cannot immediately shift to evasive driving, which is a safety problem. Similarly, when the driver is in a forward leaning posture tilted far forward, the face turns downward and the same problem arises. Devices have therefore been proposed that monitor the driver's posture with a camera or a sensor and take measures such as outputting an alarm when the posture is not appropriate (Patent Documents 1 to 3).
In Patent Document 1, to detect an improper posture (posture collapse) of the driver, the amount of posture collapse is calculated based on the image captured by the camera. For example, with the center of the headrest or the steering wheel as a reference point, the distance between the center of the driver's face and the reference point is calculated as the amount of posture collapse. When the calculated amount of posture collapse is larger than a threshold value, the driver is determined to be in a posture-collapsed state.
In Patent Document 2, the driver's posture information (elbow angle, knee angle, upper-body angle, and the like) is acquired from the image captured by a camera, and the driver's degree of confidence in driving is calculated from the degree of dispersion of this posture information. If the calculated degree of confidence is less than a predetermined value, the driver's driving posture and an appropriate driving posture are displayed to prompt the driver to improve the driving posture.
In Patent Document 3, as a means of detecting a body signal of the driver corresponding to the driving posture, a body-pressure distribution sensor that detects the driver's body-pressure distribution is provided in the seat of the driver's seat, and the driving posture of the driver is determined based on the body-pressure distribution detected by this sensor. The driver's driving posture and an appropriate driving posture are then displayed in a form that allows comparison, and the driver is notified of the body-part motions necessary to correct the driving posture to the appropriate one.
JP 2016-38793 A; JP 2015-193274 A; JP 2015-20681 A
In Patent Document 1, since the distance between the center of the driver's face and the reference point must be measured to detect posture collapse, the driver monitor must be equipped with a distance-measuring function. In Patent Document 2, posture information is obtained by analyzing the captured image and quantifying the angles of the elbow, knee, upper body, and the like, so the arithmetic processing is complicated. In Patent Document 3, a body-signal detecting means such as a body-pressure distribution sensor is required in addition to the driver monitor, which complicates the configuration.
An object of the present invention is to provide a driver monitoring device that can easily detect an improper posture of a driver without a distance-measuring function.
A driver monitoring device according to the present invention includes: a camera that images the driver; a posture determination unit that determines the posture of the driver based on the images captured by the camera; an alarm output unit that outputs an alarm when the posture of the driver is determined to be inappropriate; a difference calculation unit that calculates the difference between two images captured by the camera; and a total difference calculation unit that binarizes the difference image obtained by the difference calculation unit using a threshold value and calculates, as a total difference, the total number of pixels equal to or larger than the threshold value in the binarized difference image. The camera generates a first reference image captured while the driver is not in the vehicle, a second reference image captured while the driver is in the vehicle and in an appropriate posture, and a driving image captured while the driver is driving. The difference calculation unit calculates the difference between the first reference image and the second reference image to generate a reference difference image, and calculates the difference between the first reference image and the driving image to generate a driving difference image. The total difference calculation unit calculates the total number of pixels equal to or larger than the threshold value in the binarized reference difference image as a first total difference X, and calculates the total number of pixels equal to or larger than the threshold value in the binarized driving difference image as a second total difference Y. The posture determination unit determines whether or not the posture of the driver is appropriate based on the result of comparing the first total difference X with the second total difference Y.
According to such a driver monitoring device, a reference difference image and a driving difference image are generated from three images: the first reference image, the second reference image, and the driving image. The suitability of the driver's posture is then determined by comparing the total difference obtained by binarizing the reference difference image with the total difference obtained by binarizing the driving difference image. Therefore, even a driver monitoring device without a distance-measuring function can easily determine whether or not the posture of the driver is appropriate by simple arithmetic processing.
In the present invention, when the difference between the first total difference X and the second total difference Y is |X−Y| and the first reference value is P, the posture determination unit may determine that the posture of the driver is appropriate when |X−Y| ≤ P, and that the posture of the driver is inappropriate when |X−Y| > P.
In the present invention, a center-of-gravity calculation unit may be provided that calculates, as a first center-of-gravity position M, the center of gravity of the pixel region equal to or larger than the threshold value in the binarized reference difference image, and calculates, as a second center-of-gravity position N, the center of gravity of the pixel region equal to or larger than the threshold value in the binarized driving difference image. The posture determination unit may then determine whether or not the posture of the driver is appropriate based on the result of comparing the first total difference X with the second total difference Y and the result of comparing the first center-of-gravity position M with the second center-of-gravity position N.
In this case, when the difference between the first total difference X and the second total difference Y is |X−Y|, the first reference value is P, the difference between the first center-of-gravity position M and the second center-of-gravity position N is |M−N|, and the second reference value is Q, the posture determination unit may determine that the posture of the driver is appropriate when |X−Y| ≤ P and |M−N| ≤ Q, and that the posture of the driver is inappropriate when |X−Y| > P or |M−N| > Q.
In the present invention, when the posture determination unit determines that the posture of the driver is inappropriate, it may measure the time during which the inappropriate posture continues as a duration, and the alarm output unit may output an alarm when the duration exceeds a predetermined reference time.
In the present invention, when the camera generates the second reference image, the posture determination unit may determine whether or not the posture of the driver who has boarded the vehicle is appropriate based on face information of the driver and vehicle information of the vehicle.
In this case, the posture determination unit may determine whether or not the posture of the driver who has boarded the vehicle is appropriate by taking into consideration personal learning data on the posture of the driver in addition to the face information and the vehicle information.
In the present invention, the camera may capture the first reference image when it is confirmed that a driver carrying an electronic key for opening and closing the doors of the vehicle has approached the vehicle.
According to the present invention, it is possible to provide a driver monitoring device that can easily detect an inappropriate posture of a driver without a distance-measuring function.
FIG. 1 is a block diagram of a driver monitoring device according to a first embodiment of the present invention.
FIG. 2 is a diagram showing how the driver is imaged by a camera.
FIG. 3 is a diagram showing an installation example of the camera.
FIG. 4 is a diagram showing an example of an improper posture of the driver.
FIG. 5 is a diagram showing another example of an improper posture of the driver.
FIG. 6 is a diagram showing a reference image A captured in the absence of the driver.
FIG. 7 is a diagram showing a reference image B captured with the driver on board.
FIG. 8 is a diagram showing a reference difference image G, which is the difference between the reference images A and B.
FIG. 9 is a diagram schematically showing binarization of the reference difference image G.
FIG. 10 is a diagram showing a driving image C1 captured while the vehicle is being driven.
FIG. 11 is a diagram showing a driving difference image H1, which is the difference between the reference image A and the driving image C1.
FIG. 12 is a diagram schematically showing binarization of the driving difference image H1.
FIG. 13 is a diagram showing another driving image C2 captured while the vehicle is being driven.
FIG. 14 is a diagram showing a driving difference image H2, which is the difference between the reference image A and the driving image C2.
FIG. 15 is a diagram schematically showing binarization of the driving difference image H2.
FIG. 16 is a flowchart showing in detail the posture determination procedure in the first embodiment.
FIG. 17 is a diagram illustrating movement of the driver in the left-right direction.
FIG. 18 is a diagram schematically showing the principle of a second embodiment of the present invention.
FIG. 19 is a block diagram of the driver monitoring device according to the second embodiment.
FIG. 20 is a flowchart showing in detail the posture determination procedure in the second embodiment.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, identical or corresponding parts are designated by the same reference numerals.
FIG. 1 shows the configuration of a driver monitoring device (hereinafter referred to as a "driver monitor") according to the first embodiment. In FIG. 1, the driver monitor 100 is a device mounted on a vehicle to monitor the driver's state, and includes a camera 1, an image processing unit 2, a control unit 3, a storage unit 4, a communication unit 5, and a power supply circuit 6.
As shown in FIG. 2, for example, the camera 1 is provided in the vehicle interior 51 of the vehicle 50 so as to face the driver DR, and images the driver DR seated on the seat 54. The broken line indicates the imaging range of the camera 1; the upper body of the driver DR, including the face, neck, and shoulders, is imaged by the camera 1. The vehicle 50 is, for example, an automobile. Reference numeral 55 in FIG. 2 denotes a headrest provided on the upper portion of the seat 54.
FIG. 3 is a view of the windshield W and the instrument panel 52 as seen from inside the passenger compartment 51 toward the traveling direction of the vehicle 50. The camera 1 is installed at a location on the instrument panel 52 facing the driver DR and images the driver DR through the steering wheel 53. The installation location of the camera 1 is not limited to the example of FIG. 3; for example, it may be installed near the meter 56 or the display 57 arranged on the instrument panel 52, or near the rearview mirror 58 located at the upper portion of the windshield W.
As shown in FIG. 1, the camera 1 includes an imaging unit 11 that images the driver DR as a subject and a light emitting unit 12 that irradiates the driver DR with light. The imaging unit 11 has an imaging element such as a CMOS image sensor, and the light emitting unit 12 has a light emitting element such as a light emitting diode that emits infrared light. The camera 1 also includes an optical system such as a lens (not shown).
The image processing unit 2 processes the image captured by the imaging unit 11 of the camera 1 to extract the face of the driver DR and feature points of the face such as the eyes, nose, and mouth. It then analyzes these to detect the orientation of the driver DR's face, the direction of the line of sight, the open/closed state of the eyelids, and the like.
The control unit 3 is composed of a CPU and the like, and includes a difference calculation unit 31, a total difference calculation unit 32, a vehicle state determination unit 33, a posture determination unit 34, and an alarm output unit 35. The functions of these units are actually realized by software processing, but they are shown here as hardware blocks for convenience.
The difference calculation unit 31 calculates the difference between two images (described later) captured by the camera 1 to generate a difference image. The total difference calculation unit 32 binarizes the difference image into "0" and "1" using a threshold value, and calculates the total number of "1" pixels, that is, pixels equal to or larger than the threshold value (the total difference). The vehicle state determination unit 33 determines the state of the vehicle 50 (stopped, traveling, turning, etc.) based on vehicle information such as vehicle speed and yaw rate input to the driver monitor 100 from the outside. The posture determination unit 34 determines whether or not the posture of the driver DR is appropriate based on the total difference calculated by the total difference calculation unit 32. The alarm output unit 35 outputs an alarm when the posture determination unit 34 determines that the posture of the driver DR is inappropriate. The processing in each of these blocks 31 to 35 will be described later in more detail.
In addition to the blocks 31 to 35 described above, the control unit 3 also includes a drowsiness determination unit that determines whether or not the driver DR is dozing at the wheel and an inattention determination unit that determines whether or not the driver DR is looking aside; these are not directly related to the present invention and are therefore omitted from the figure.
The storage unit 4 is composed of a semiconductor memory or the like and stores various parameters required for the processing in the control unit 3. Parameters relevant to the present invention include the binarization threshold used when the total difference calculation unit 32 binarizes a difference image, the reference value P and the reference time To used by the posture determination unit 34 to determine the posture of the driver DR, and personal learning data obtained from learning results such as the posture habits of the driver DR. The storage unit 4 is also provided with an area that stores the programs used for software processing in the control unit 3 and an area that temporarily stores the calculation results, determination results, and the like of the control unit 3 (not shown).
The communication unit 5 exchanges signals and data with an ECU (Electronic Control Unit) 200 mounted on the vehicle 50, and also exchanges signals and data with the control unit 3. The communication unit 5 also receives signals from various sensors provided in the vehicle 50 as the vehicle information described above. The communication unit 5 and the ECU 200 are connected via a CAN (Controller Area Network). The ECU 200 is composed of a plurality of ECUs provided for respective control targets, but in FIG. 1 they are collectively shown as one block. The electronic key 300 is a portable terminal for remotely opening and closing the doors of the vehicle 50 and communicates wirelessly with the ECU 200.
 The power supply circuit 6 supplies power to each part of the driver monitor 100 under the control of the control unit 3. Power is supplied to the power supply circuit 6 from a battery (not shown) mounted on the vehicle 50.
 FIG. 4 shows an example of an improper posture of the driver DR seated in the vehicle 50. Here, the driver DR has reclined the seat 54 and is leaning far backward (reclining posture). In this case, the face of the driver DR is turned upward and the view ahead of the vehicle 50 is impaired, so if the vehicle approaches the vehicle ahead abnormally closely or another vehicle cuts in suddenly, the driver cannot immediately shift to evasive driving. As with inattentive driving, there is a risk of an accident caused by not watching the road ahead, which is a safety problem.
 FIG. 5 shows another example of an improper posture of the driver DR. Here, the driver DR is leaning far forward (forward-leaning posture). In this case, the face of the driver DR is turned downward and the view ahead of the vehicle 50 is again impaired, so the driver cannot immediately shift to evasive driving in response to an abnormal approach or sudden cut-in as described above. As with inattentive driving, there is a risk of an accident caused by not watching the road ahead, which is a safety problem.
 Therefore, when the driver DR takes such an improper posture, it is necessary to ensure safety by detecting it and warning the driver DR. One conceivable way to detect an improper posture is to measure the distance from the camera 1 to the face of the driver DR on the basis of the image captured by the camera 1 and to judge the suitability of the posture from that distance. With this method, when the distance from the camera 1 to the face of the driver DR is too long, the driver DR is judged to be in the reclining posture of FIG. 4, and when the distance is too short, the driver DR is judged to be in the forward-leaning posture of FIG. 5. However, this method cannot be adopted unless the driver monitor 100 has a distance-measuring function.
 The present invention makes it possible to judge the suitability of the posture of the driver DR even with a driver monitor 100 that has no distance-measuring function. The principle of the present invention is described below.
 In the present invention, the camera 1 first captures a reference image A (first reference image) shown in FIG. 6 and a reference image B (second reference image) shown in FIG. 7. The reference image A of FIG. 6 is captured while the driver DR is not in the vehicle 50, and the reference image B of FIG. 7 is captured while the driver DR is in the vehicle 50 and the posture of the driver DR is proper. The reference image B is captured after the driver DR starts driving the vehicle 50.
 Next, the difference calculation unit 31 calculates the difference between the reference image A and the reference image B to generate a reference difference image G, shown in FIG. 8, which is the difference between the two images A and B. In the reference difference image G, the background other than the driver DR becomes dark as a result of the difference calculation, while the driver DR appears bright. An actual reference difference image G is not such a simple light-and-dark image, but for simplicity only the driver DR is drawn bright in FIG. 8 (the same applies to FIGS. 11 and 14). The posture of the driver DR in the reference difference image G is the same as that in the reference image B of FIG. 7, that is, a proper posture.
 The total difference calculation unit 32 binarizes the reference difference image G as shown in FIG. 9, using the binarization threshold stored in the storage unit 4 (FIG. 1). FIG. 9 schematically represents the pixels of the reference difference image G: white pixels are "0" pixels whose luminance is below the threshold, and black pixels are "1" pixels whose luminance is at or above the threshold (the same applies to FIGS. 12, 15, and 18). The white and black coloring merely distinguishes "0" from "1" and has no relation to the actual luminance. The total difference calculation unit 32 calculates the total number of "1" pixels in the binarized reference difference image G of FIG. 9 as a difference total X (first difference total).
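 The differencing, binarization, and pixel-counting steps described above can be sketched as follows. This is a minimal illustration in Python with NumPy; the function name, the toy image size, and the threshold value are hypothetical, since the patent does not specify an implementation.

```python
import numpy as np

def difference_total(img_a, img_b, threshold):
    """Absolute per-pixel difference of two grayscale frames, binarized
    at `threshold`; returns the count of '1' pixels (the difference total)."""
    diff = np.abs(img_a.astype(np.int16) - img_b.astype(np.int16))
    binary = diff >= threshold        # "1" where the luminance change is large
    return int(binary.sum())          # difference total (X or Y)

# Toy frames: a 2x2 "driver" patch is present only in image B.
a = np.zeros((4, 4), dtype=np.uint8)              # reference image A (empty cabin)
b = a.copy()
b[1:3, 1:3] = 200                                 # bright driver region in image B
x = difference_total(a, b, threshold=128)
print(x)  # -> 4
```

In this toy case the difference total X is simply the area, in pixels, of the region that changed between the two frames, which is the quantity the determination in the following paragraphs relies on.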
 Thereafter, while the vehicle 50 continues to travel, the camera 1 constantly photographs the driver DR and generates driving images captured while the driver DR is driving. If the posture of the driver DR is proper, the captured driving image is similar to the reference image B shown in FIG. 7. If the posture of the driver DR is improper, the captured driving image is an image such as those in FIGS. 10 and 13.
 FIG. 10 shows a driving image C1 obtained when the driver DR is in the reclining posture shown in FIG. 4. In the driving image C1, the driver DR has moved back away from the camera 1, so the driver DR appears smaller than in the reference image B of FIG. 7.
 The difference calculation unit 31 calculates the difference between the reference image A of FIG. 6 and the driving image C1 of FIG. 10 to generate a driving difference image H1, shown in FIG. 11, which is the difference between the two images A and C1. The total difference calculation unit 32 binarizes the driving difference image H1 using the binarization threshold, as shown in FIG. 12, and then calculates the total number of "1" pixels in the binarized driving difference image H1 as a difference total Y (second difference total).
 The posture determination unit 34 judges whether the posture of the driver DR is proper on the basis of the difference total X calculated for the reference difference image G of FIG. 9, the difference total Y calculated for the driving difference image H1 of FIG. 12, and the reference value P (first reference value) stored in the storage unit 4. Specifically, the posture determination unit 34 calculates the difference between the difference totals X and Y as the absolute value |X-Y| and compares |X-Y| with the reference value P. If |X-Y| ≤ P, the posture of the driver DR is judged proper; if |X-Y| > P, the posture of the driver DR is judged improper.
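 The comparison just described reduces to a one-line rule. The sketch below uses hypothetical function and parameter names and illustrative numbers; only the rule |X-Y| ≤ P itself comes from the text.

```python
def posture_ok(total_x, total_y, p):
    """First-embodiment rule: the posture is proper iff |X - Y| <= P."""
    return abs(total_x - total_y) <= p

# Illustrative values: a reclining driver appears smaller, so Y drops well below X.
print(posture_ok(1000, 980, p=50))   # -> True  (proper posture)
print(posture_ok(1000, 700, p=50))   # -> False (improper, e.g. reclining)
```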
 When the posture of the driver DR is proper, the face of the driver DR is neither too far from nor too close to the camera 1, so the difference |X-Y| between the difference totals X and Y is at most the reference value P. When the driver DR is in the reclining posture (FIG. 4), on the other hand, the face of the driver DR is too far from the camera 1, and the difference total Y becomes smaller to the extent that the image of the driver DR becomes smaller. The difference |X-Y| between the difference totals X and Y therefore exceeds the reference value P.
 FIG. 13 shows a driving image C2 obtained when the driver DR is in the forward-leaning posture shown in FIG. 5. In the driving image C2, the driver DR has moved forward toward the camera 1, so the driver DR appears larger than in the reference image B of FIG. 7.
 The difference calculation unit 31 calculates the difference between the reference image A of FIG. 6 and the driving image C2 of FIG. 13 to generate a driving difference image H2, shown in FIG. 14, which is the difference between the two images A and C2. The total difference calculation unit 32 binarizes the driving difference image H2 using the binarization threshold, as shown in FIG. 15, and then calculates the total number of "1" pixels in the binarized driving difference image H2 as the difference total Y (second difference total).
 The posture determination unit 34 judges whether the posture of the driver DR is proper on the basis of the difference total X calculated for the reference difference image G of FIG. 9, the difference total Y calculated for the driving difference image H2 of FIG. 15, and the reference value P (first reference value) stored in the storage unit 4. Specifically, as before, the difference between the difference totals X and Y is calculated as the absolute value |X-Y|; if |X-Y| ≤ P, the posture of the driver DR is judged proper, and if |X-Y| > P, the posture of the driver DR is judged improper.
 When the driver DR is in the forward-leaning posture (FIG. 5), the face of the driver DR is too close to the camera 1, and the difference total Y becomes larger to the extent that the image of the driver DR becomes larger. In this case as well, therefore, the difference |X-Y| between the difference totals X and Y exceeds the reference value P.
 When the posture determination unit 34 judges that the posture of the driver DR is improper, the alarm output unit 35 outputs an alarm. This alarm is output from a speaker (not shown) provided in the vehicle 50, for example as voice guidance such as "Your driving posture is not proper. Please return to the correct posture."
 As described above, in the first embodiment of the present invention, three images are captured: the reference image A with no driver DR present, the reference image B with the driver DR aboard and in a proper posture, and a driving image C (C1, C2) of the driver DR while driving. The reference difference image G, which is the difference between images A and B, and the driving difference image H (H1, H2), which is the difference between images A and C, are then generated. Whether the posture of the driver DR is proper is judged by whether the difference |X-Y| between the difference total X obtained by binarizing the reference difference image G and the difference total Y obtained by binarizing the driving difference image H is at most the reference value P. Even without a distance-measuring function, the driver monitor 100 can therefore easily determine, by simple arithmetic processing, that the driver DR is in a reclining or forward-leaning posture.
 FIG. 16 is a flowchart showing the posture determination procedure in the driver monitor 100 of the first embodiment in more detail.
 In step S1, the system waits for the driver DR carrying the electronic key 300 (FIG. 1) to approach the vehicle 50. The ECU 200 communicates with the electronic key 300 and detects that the electronic key 300 has approached the vehicle 50 on the basis of receiving a response signal transmitted from the electronic key 300. A door opening/closing unit (not shown) then unlocks the doors of the vehicle 50.
 When the approach of the electronic key 300 is detected, the control unit 3 drives the power supply circuit 6 in step S2 to turn on the power of the driver monitor 100. Subsequently, in step S3, the camera 1 captures the reference image A (FIG. 6) with no driver DR present.
 Next, in step S4, it is determined whether the driver DR has boarded the vehicle 50. This determination is made, for example, on the basis of the output of a door sensor (not shown) that detects that the driver's door has been opened, or a seating sensor (not shown) that detects that the driver DR is seated on the driver's seat 54.
 When the driver DR boards the vehicle 50 and starts driving, the procedure proceeds to step S5, where it is determined whether the posture of the driver DR is proper. This determination is made, for example, on the basis of face information of the driver DR (face orientation, gaze direction, and the like) obtained by image processing in the image processing unit 2, and vehicle information of the vehicle 50 input via the communication unit 5. As described above, vehicle speed, yaw rate, and the like are input as vehicle information, and the vehicle state determination unit 33 determines the state of the vehicle 50 (traveling, stopped, turning, and so on) on the basis of these.
 The posture determination unit 34 judges that the posture of the driver DR is not proper when, for example, the face of the driver DR is turned sideways, and also when the vehicle 50 is turning, since the driver DR may then be shaken. When the face of the driver DR is turned forward and the vehicle 50 is traveling straight, the posture determination unit 34 judges that the posture of the driver DR is proper.
 In addition to the face information and vehicle information described above, the posture determination unit 34 also takes into account the personal learning data about the posture of the driver DR stored in the storage unit 4 when judging whether the posture of the driver DR is proper.
 If it is determined in step S5 that the posture of the driver DR is not proper, the system waits until the posture becomes proper. When the posture of the driver DR is determined to be proper, the procedure proceeds to step S6, where the camera 1 captures the reference image B (FIG. 7) including the driver DR.
 Next, in step S7, the difference calculation unit 31 calculates the difference between the reference image A and the reference image B to generate the reference difference image G (FIG. 8). The total difference calculation unit 32 then binarizes the reference difference image G in step S8 (FIG. 9) and, in step S9, totals the "1" pixels of the binarized reference difference image G to calculate the difference total X.
 Thereafter, the procedure proceeds to step S10, where the camera 1 constantly photographs the driver DR while driving and captures the driving image C (C1 in FIG. 10, C2 in FIG. 13).
 Next, in step S11, the difference calculation unit 31 calculates the difference between the reference image A and the driving image C to generate the driving difference image H (H1 in FIG. 11, H2 in FIG. 14). The total difference calculation unit 32 then binarizes the driving difference image H in step S12 (FIGS. 12 and 15) and, in step S13, totals the "1" pixels of the binarized driving difference image H to calculate the difference total Y.
 Next, in step S14, the posture determination unit 34 compares the difference |X-Y| between the difference total X calculated in step S9 and the difference total Y calculated in step S13 with the reference value P. If |X-Y| ≤ P (NO in step S14), the posture of the driver DR is judged proper, and the procedure returns to the start without executing steps S15 to S18; the operations from step S1 onward are then performed again. If |X-Y| > P (YES in step S14), the posture of the driver DR is judged improper, and the procedure proceeds to step S15.
 In step S15, the posture determination unit 34 measures the time for which the improper posture of the driver DR continues, as a duration Tc. Even if the posture of the driver DR temporarily becomes improper, there is substantially no safety problem if it returns to a proper posture within a very short time, and this measurement prevents alarms from being issued excessively in such cases. Vehicle information is also taken into account when measuring the duration Tc. For example, when the gear is in the R (Reverse) range to back up the vehicle 50, the driver DR frequently turns to check behind, so the posture of the driver DR deviates, but there is no safety problem. In such a case, the time during which the driver DR is checking behind is excluded from the duration Tc of the improper posture.
 In the subsequent step S16, the posture determination unit 34 compares the measured duration Tc with the reference time To stored in the storage unit 4. If the duration Tc ≤ the reference time To (NO in step S16), the improper posture is judged to be temporary, and the procedure returns to the start without executing steps S17 and S18; the operations from step S1 onward are then performed again. If the duration Tc > the reference time To (YES in step S16), the improper posture is judged to have continued for a long time, and the procedure proceeds to step S17.
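 The duration check of steps S15 and S16 is, in effect, a debounce: the alarm condition becomes true only after the improper posture has persisted longer than To. A minimal sketch of that logic, under assumptions of my own (the helper name, the state dictionary, and the use of explicit timestamps are all hypothetical; the patent describes only the Tc > To comparison):

```python
import time

def monitor_step(improper_now, state, to_seconds, now=None):
    """Return True (raise alarm) only after an improper posture has
    persisted longer than the reference time To (`to_seconds`)."""
    now = time.monotonic() if now is None else now
    if not improper_now:
        state["since"] = None                    # posture recovered: reset Tc
        return False
    if state["since"] is None:
        state["since"] = now                     # improper posture just began
    return (now - state["since"]) > to_seconds   # alarm when Tc > To

state = {"since": None}
print(monitor_step(True, state, to_seconds=2.0, now=0.0))   # -> False (Tc just started)
print(monitor_step(True, state, to_seconds=2.0, now=2.5))   # -> True  (Tc > To)
print(monitor_step(False, state, to_seconds=2.0, now=3.0))  # -> False (posture recovered)
```

A real implementation would additionally pause the Tc measurement while, for example, the gear is in the R range, as the preceding paragraph describes.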
 In step S17, the alarm output unit 35 outputs an alarm, as the voice message described above, to notify the driver DR of the improper posture and prompt correction.
 In the next step S18, the control unit 3 determines whether the engine of the vehicle 50 has been turned off (stopped). This determination is made on the basis of whether an ignition switch (not shown) has been turned off. While the engine is not off (NO in step S18), the procedure returns to step S10, photographing of the driver DR by the camera 1 continues, and the subsequent steps S11 to S18 are executed repeatedly. When the engine is turned off (YES in step S18), the procedure returns to the start, and the operations from step S1 onward are performed.
 Next, a second embodiment of the present invention will be described. The first embodiment described above detects the reclining and forward-leaning postures of the driver DR, that is, improper postures in the front-rear direction as seen from the camera 1, but it has difficulty detecting improper postures in the left-right, up-down, or diagonal directions as seen from the camera 1. For example, as shown in FIG. 17, when a driver DR in a proper posture moves in the left-right direction as seen from the camera 1 (to the left in the figure) and reaches the position indicated by the broken line, the driver DR' is displaced from the normal seating position and is therefore in an improper posture in the left-right direction.
 However, the size of the improperly postured driver DR' in the image is almost the same as that of the properly postured driver DR, so there is no large difference between the difference totals Y of the driving difference images in the two cases. Consequently, if the suitability of the posture is judged only from the difference between the difference totals X and Y as in the first embodiment, both the driver DR and the driver DR' may be judged to be in a proper posture. The second embodiment solves this problem.
 FIG. 18 schematically shows the principle of the second embodiment. FIG. 18(a) shows the binarized reference difference image corresponding to the driver DR of FIG. 17, and FIG. 18(b) shows the binarized driving difference image corresponding to the driver DR' of FIG. 17. The difference totals (total numbers of black pixels) of these binarized images are almost the same, so, as described above, the suitability of the posture cannot be judged from the difference totals alone.
 In the second embodiment, therefore, the centroid positions of the "1" pixel regions (the black regions in FIG. 18) of the reference difference image and the driving difference image are calculated. In FIG. 18(a), the centroid position M (first centroid position) of the reference difference image G is indicated by an x mark, and in FIG. 18(b), the centroid position N of the driving difference image H is indicated by an x mark. These centroid positions M and N can be calculated by assigning an X coordinate and a Y coordinate to each pixel of the matrix-like pixel array and using these coordinate values.
 Here, the difference between the X coordinates of the centroid positions M and N is written |Mx-Nx|, the difference between their Y coordinates is written |My-Ny|, and the two together are denoted |M-N|. When the centroid position N has moved relative to the centroid position M, the distance moved by the centroid position N can be obtained by calculating the centroid position difference |M-N|.
 For simplicity, FIG. 18 shows an example in which the centroid position N has moved horizontally to the left by a distance δ relative to the centroid position M. The X coordinate of the centroid position N has changed from that of the centroid position M, but the Y coordinate of the centroid position N is the same as that of the centroid position M; that is, |Mx-Nx| = δ and |My-Ny| = 0. By comparing this movement distance δ of the centroid position N with the reference value Q (second reference value) stored in the storage unit 4, it can be judged whether the driver DR has moved horizontally into an improper posture. Specifically, if the movement distance δ is at most the reference value Q (δ ≤ Q), the posture of the driver DR is proper; if the movement distance δ exceeds the reference value Q (δ > Q), the posture of the driver DR is judged improper.
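 The centroid of the "1" pixel region and its displacement between the two binarized images can be sketched as follows. This is a minimal illustration with NumPy under assumptions of my own (function name and toy 6x6 images are hypothetical); it reproduces the FIG. 18 situation of a purely horizontal shift.

```python
import numpy as np

def centroid(binary):
    """Centroid (x, y) of the '1' pixels of a binarized difference image."""
    ys, xs = np.nonzero(binary)       # row (Y) and column (X) indices of '1' pixels
    return xs.mean(), ys.mean()

# Driver region shifted 2 pixels to the left between G and H, same size.
g = np.zeros((6, 6), dtype=bool); g[2:4, 3:5] = True   # reference difference image G
h = np.zeros((6, 6), dtype=bool); h[2:4, 1:3] = True   # driving difference image H
mx, my = centroid(g)                  # centroid position M
nx, ny = centroid(h)                  # centroid position N
print(abs(mx - nx), abs(my - ny))     # -> 2.0 0.0  (|Mx-Nx| = delta, |My-Ny| = 0)
```

Comparing the printed displacement with the reference value Q then yields the left-right judgment described above, even though the difference totals of g and h are identical.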
 When the centroid position M of the reference difference image G and the centroid position N of the driving difference image H are compared, the movement of the centroid position N is small if the posture of the driver DR is proper, so |M-N| (here |Mx-Nx|, that is, δ) is at most the reference value Q. When the driver DR has moved in the left-right direction into an improper posture, the movement of the centroid position N is large, so |M-N| (likewise) exceeds the reference value Q.
 When the driver DR moves in the up-down direction or in a diagonal direction, the suitability of the posture of the driver DR can likewise be judged, on the same principle, by comparing the movement |M-N| of the centroid position with the reference value Q. Individual reference values Q1, Q2, and Q3 may also be set for the left-right, up-down, and diagonal directions, respectively.
 As described above, the second embodiment judges the suitability of the posture taking into account not only the difference totals X and Y of the first embodiment but also the centroid positions M and N of the driver in the image. It can therefore detect not only improper postures of the driver DR in the front-rear direction but also improper postures of the driver DR in the left-right, up-down, and diagonal directions.
 FIG. 19 shows the configuration of the driver monitor 100 according to the second embodiment. In FIG. 19, the same parts as in FIG. 1 are given the same reference numerals. FIG. 19 differs from FIG. 1 in that the control unit 3 is provided with a centroid calculation unit 36 and the storage unit 4 stores the reference value Q. The rest of the configuration is the same as in FIG. 1, and its description is omitted.
 図20は、第2実施形態のドライバモニタ100における姿勢判定手順をさらに詳細に示したフローチャートである。図20では、図16と同じ処理を行うステップに同じ符号を付してある。 FIG. 20 is a flowchart showing the posture determination procedure in the driver monitor 100 of the second embodiment in more detail. In FIG. 20, steps that perform the same processing as in FIG. 16 are assigned the same reference numerals.
 図20のステップS1~S9、S10~S13、およびS15~S18については、図16の場合と同じ処理であるので、説明を省略する。 Steps S1 to S9, S10 to S13, and S15 to S18 in FIG. 20 are the same processes as in FIG. 16, so description thereof will be omitted.
 ステップS9aでは、重心算出部36が、2値化した基準差分画像Gにおける“1”の画素領域の重心位置M(図18(a))を算出する。ステップS13aでは、重心算出部36が、2値化した運転時差分画像Hにおける“1”の画素領域の重心位置N(図18(b))を算出する。 In step S9a, the center-of-gravity calculating unit 36 calculates the center-of-gravity position M (FIG. 18A) of the pixel region of "1" in the binarized reference difference image G. In step S13a, the center of gravity calculation unit 36 calculates the center of gravity position N (FIG. 18B) of the pixel region “1” in the binarized operating difference image H.
 ステップS14aでは、姿勢判定部34が、差分総計の差|X-Y|を基準値Pと比較するとともに、重心位置の差|M-N|を基準値Qと比較する。そして、|X-Y|≦Pであって、かつ|M-N|≦Qである場合(ステップS14aの判定:NO)は、姿勢判定部34は運転者DRの姿勢が適正であると判定する。この場合は、ステップS15~S18を実行することなく、最初に戻る。一方、|X-Y|>Pである場合、または|M-N|>Qである場合(ステップS14aの判定:YES)は、姿勢判定部34は運転者DRの姿勢が不適正であると判定する。この場合は、ステップS15へ進んで、第1実施形態と同様にステップS15~S18の処理が実行される。 In step S14a, the posture determination unit 34 compares the difference |XY| in the total difference with the reference value P and compares the difference |MN| in the center of gravity position with the reference value Q. Then, if |XY|≦P and |MN|≦Q (determination in step S14a: NO), the posture determination unit 34 determines that the posture of the driver DR is appropriate. To do. In this case, the process returns to the beginning without executing steps S15 to S18. On the other hand, if |XY|>P or |MN|>Q (determination in step S14a: YES), the posture determination unit 34 determines that the posture of the driver DR is incorrect. judge. In this case, the process proceeds to step S15, and the processes of steps S15 to S18 are executed in the same manner as in the first embodiment.
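By way of illustration only, the combined decision of step S14a can be sketched as below. The numeric values, the scalar treatment of M and N (one axis only), and the thresholds P and Q are assumptions for the example, not values from the disclosure.

```python
# Minimal sketch of the step S14a decision: posture is judged proper
# only when BOTH the area change and the centroid shift are small.
def posture_is_proper(x_total, y_total, m_x, n_x, p, q):
    """Return True iff |X - Y| <= P and |M - N| <= Q (one axis shown)."""
    return abs(x_total - y_total) <= p and abs(m_x - n_x) <= q

# Small change in both difference total and centroid -> proper.
print(posture_is_proper(1200, 1250, m_x=64.0, n_x=66.0, p=100, q=5.0))  # True
# Centroid shifted sideways beyond Q -> improper, even though |X-Y| <= P.
print(posture_is_proper(1200, 1250, m_x=64.0, n_x=80.0, p=100, q=5.0))  # False
```

The AND/OR structure matches the flowchart: the NO branch (proper) requires both conditions, while either violated condition alone takes the YES branch to steps S15 to S18.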
 In addition to the embodiments described above, the present invention can adopt various other embodiments, such as the following.
 In the above embodiments, the difference |X-Y| between the difference total X and the difference total Y is compared with the reference value P, but the ratio Y/X (or X/Y) of the difference total X to the difference total Y may instead be compared with a predetermined reference value. In short, it suffices to determine the suitability of the posture of the driver DR based on the result of comparing the difference total X with the difference total Y.
 Likewise, in the above embodiments the difference |M-N| between the centroid position M and the centroid position N is compared with the reference value Q, but the ratio M/N (or N/M) of the two centroid positions may instead be compared with a predetermined reference value. In short, it suffices to determine the suitability of the posture of the driver DR based on the result of comparing the centroid position M with the centroid position N.
 In the above embodiments, the camera 1 captures the reference image B after the driver DR has started driving the vehicle 50, but the camera 1 may instead capture the reference image B between the time the driver DR boards the vehicle 50 and the time driving starts.
 In the above embodiments, the image processing unit 2 is provided independently of the control unit 3, but the image processing unit 2 may be incorporated into the control unit 3. Similarly, although the storage unit 4 is provided independently of the control unit 3 in the above embodiments, the storage unit 4 may be incorporated into the control unit 3.
 In the above embodiments, the driver monitor 100 is mounted on a four-wheeled motor vehicle, but the present invention can also be applied to driver monitors mounted on other vehicles such as trucks and buses.
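By way of illustration only, the ratio-based variant mentioned above could take a form like the following. The symmetric ratio band and all numeric values are assumptions for the example; the disclosure only states that the ratio is compared with a predetermined reference value.

```python
# Illustrative sketch: compare the ratio Y/X against a reference band
# instead of comparing |X - Y| against the reference value P.
def totals_ok_by_ratio(x_total, y_total, ratio_limit):
    """True when Y/X lies within [1/ratio_limit, ratio_limit]."""
    if x_total == 0:
        return False  # no reference region; treat as improper/indeterminate
    ratio = y_total / x_total
    return 1 / ratio_limit <= ratio <= ratio_limit

print(totals_ok_by_ratio(1200, 1250, ratio_limit=1.5))  # True  (Y/X ~ 1.04)
print(totals_ok_by_ratio(1200, 2400, ratio_limit=1.5))  # False (Y/X = 2.0)
```

A ratio test has the practical property of scaling with the size of the driver region, whereas the absolute difference |X-Y| uses the same threshold P regardless of how many pixels the driver occupies.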
  1  Camera
  2  Image processing unit
  3  Control unit
  4  Storage unit
  5  Communication unit
  6  Power supply circuit
  11 Imaging unit
  12 Light emitting unit
  31 Difference calculation unit
  32 Difference total calculation unit
  33 Vehicle state determination unit
  34 Posture determination unit
  35 Alarm output unit
  36 Centroid calculation unit
  50 Vehicle
  100 Driver monitor (driver monitoring device)
  200 ECU
  300 Electronic key
  DR Driver

Claims (8)

  1.  A driver monitoring device comprising:
     a camera provided so as to face a driver of a vehicle, the camera capturing an image of the driver on board the vehicle;
     a posture determination unit that determines the posture of the driver based on images captured by the camera; and
     an alarm output unit that outputs an alarm when the posture determination unit determines that the posture of the driver is improper,
     the driver monitoring device further comprising:
     a difference calculation unit that calculates the difference between two images captured by the camera; and
     a difference total calculation unit that binarizes a difference image obtained by the calculation of the difference calculation unit using a predetermined threshold, and calculates, as a difference total, the total number of pixels at or above the threshold in the binarized difference image, wherein
     the camera generates:
     a first reference image captured while the driver is not on board the vehicle;
     a second reference image captured while the driver is on board the vehicle and the posture of the driver is appropriate; and
     a driving image captured while the driver is driving,
     the difference calculation unit:
     calculates the difference between the first reference image and the second reference image to generate a reference difference image that is the difference between the two images; and
     calculates the difference between the first reference image and the driving image to generate a driving difference image that is the difference between the two images,
     the difference total calculation unit:
     calculates, as a first difference total X, the total number of pixels at or above the threshold in the binarized reference difference image; and
     calculates, as a second difference total Y, the total number of pixels at or above the threshold in the binarized driving difference image, and
     the posture determination unit determines whether or not the posture of the driver is appropriate based on the result of comparing the first difference total X with the second difference total Y.
  2.  The driver monitoring device according to claim 1, wherein
     the posture determination unit, where |X-Y| is the difference between the first difference total X and the second difference total Y and P is a first reference value,
     determines that the posture of the driver is appropriate when |X-Y| ≤ P, and
     determines that the posture of the driver is improper when |X-Y| > P.
  3.  The driver monitoring device according to claim 1, further comprising
     a centroid calculation unit that calculates, as a first centroid position M, the centroid position of the pixel region at or above the threshold in the binarized reference difference image, and calculates, as a second centroid position N, the centroid position of the pixel region at or above the threshold in the binarized driving difference image, wherein
     the posture determination unit determines whether or not the posture of the driver is appropriate based on the result of comparing the first difference total X with the second difference total Y and the result of comparing the first centroid position M with the second centroid position N.
  4.  The driver monitoring device according to claim 3, wherein
     the posture determination unit, where |X-Y| is the difference between the first difference total X and the second difference total Y, |M-N| is the difference between the first centroid position M and the second centroid position N, P is a first reference value, and Q is a second reference value,
     determines that the posture of the driver is appropriate when |X-Y| ≤ P and |M-N| ≤ Q, and
     determines that the posture of the driver is improper when |X-Y| > P or |M-N| > Q.
  5.  The driver monitoring device according to any one of claims 1 to 4, wherein,
     when the posture determination unit determines that the posture of the driver is improper, the posture determination unit measures, as a duration, the time for which the improper posture continues, and
     the alarm output unit outputs an alarm when the duration has passed a predetermined reference time.
  6.  The driver monitoring device according to any one of claims 1 to 5, wherein,
     when the camera generates the second reference image, the posture determination unit determines whether or not the posture of the driver on board the vehicle is appropriate based on face information of the driver and vehicle information of the vehicle.
  7.  The driver monitoring device according to claim 6, wherein
     the posture determination unit determines whether or not the posture of the driver on board the vehicle is appropriate by taking into account personal learning data on the posture of the driver, in addition to the face information and the vehicle information.
  8.  The driver monitoring device according to any one of claims 1 to 7, wherein
     the camera captures the first reference image when it is confirmed that the driver, carrying an electronic key for opening and closing a door of the vehicle, has approached the vehicle.
PCT/JP2020/008262 2019-03-06 2020-02-28 Driver monitoring device WO2020179656A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-040285 2019-03-06
JP2019040285A JP2020144573A (en) 2019-03-06 2019-03-06 Driver monitoring device

Publications (1)

Publication Number Publication Date
WO2020179656A1 true WO2020179656A1 (en) 2020-09-10

Family

ID=72338651

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/008262 WO2020179656A1 (en) 2019-03-06 2020-02-28 Driver monitoring device

Country Status (2)

Country Link
JP (1) JP2020144573A (en)
WO (1) WO2020179656A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI741892B (en) * 2020-12-01 2021-10-01 咸瑞科技股份有限公司 In-car driving monitoring system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015120584A (en) * 2013-12-25 2015-07-02 株式会社日立製作所 Image monitoring device and elevator monitoring device
WO2018225176A1 (en) * 2017-06-07 2018-12-13 三菱電機株式会社 State determination device and state determination method

Also Published As

Publication number Publication date
JP2020144573A (en) 2020-09-10

Similar Documents

Publication Publication Date Title
JP5549721B2 (en) Driver monitor device
US9616809B1 (en) Lane change prediction and turn signal activation upon observation of head and eye movement
US7379089B2 (en) Apparatus and method for monitoring the immediate surroundings of a vehicle
JP5092776B2 (en) Gaze direction detection device and gaze direction detection method
JP2008199515A (en) Fellow passenger sitting posture detecting/determining apparatus and method
JP2005247224A (en) Vehicular display device
US9981598B2 (en) Alighting notification device
KR20180119258A (en) Driver state sensing system, driver state sensing method, and vehicle including thereof
US20210016804A1 (en) Driving assistance apparatus
KR102420289B1 (en) Method, control device and vehicle for detecting at least one object present on a vehicle
JP2016172526A (en) Display device for vehicle
KR20120074820A (en) Control system for vehicle using face recognition function
WO2014034065A1 (en) Moving body warning device and moving body warning method
JP6575404B2 (en) VEHICLE DISPLAY CONTROL DEVICE, VEHICLE DISPLAY SYSTEM, VEHICLE DISPLAY CONTROL METHOD, AND PROGRAM
JP6525345B2 (en) Vehicle display system and control method of vehicle display system
KR20200072897A (en) Apparatus of managing vehicle invasion, system having the same and method thereof
JP2017019419A (en) Vehicle door opening/closing control device and door opening/closing control system, door opening/closing advance notice system
WO2020179656A1 (en) Driver monitoring device
JP2008305190A (en) Doze warning device and car
US10713829B2 (en) Accident report device and accident report method
US11881054B2 (en) Device and method for determining image data of the eyes, eye positions and/or a viewing direction of a vehicle user in a vehicle
US11772563B2 (en) In-vehicle multi-monitoring device for vehicle
US12015876B2 (en) In-vehicle monitoring device for vehicle
CN113879317B (en) driver monitoring device
WO2013114871A1 (en) Driving assistance device and driving assistance method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20766911

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20766911

Country of ref document: EP

Kind code of ref document: A1