WO2021001944A1 - In-vehicle image processing device and in-vehicle image processing method - Google Patents

In-vehicle image processing device and in-vehicle image processing method

Info

Publication number
WO2021001944A1
WO2021001944A1 (PCT/JP2019/026376)
Authority
WO
WIPO (PCT)
Prior art keywords
face
driver
unit
detection
image
Prior art date
Application number
PCT/JP2019/026376
Other languages
English (en)
Japanese (ja)
Inventor
安田 太郎
Original Assignee
Mitsubishi Electric Corporation (三菱電機株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation (三菱電機株式会社)
Priority to PCT/JP2019/026376 priority Critical patent/WO2021001944A1/fr
Priority to JP2021529614A priority patent/JP7183420B2/ja
Priority to DE112019007522.5T priority patent/DE112019007522T5/de
Publication of WO2021001944A1 publication Critical patent/WO2021001944A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168: Feature extraction; Face representation
    • G06V 40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18: Eye characteristics, e.g. of the iris

Definitions

  • the present invention relates to an in-vehicle image processing device that detects a driver's face based on an image, and an in-vehicle image processing method.
  • Patent Document 1 discloses an in-vehicle image processing device having a function of detecting the driver's face area from an image received from a camera, calculating an exposure time so as to obtain an image with optimum brightness in the detected face area, and instructing a CCD controller accordingly, and a function of analyzing the face image to detect the driver's inattentive driving, drowsy driving, and the like and sounding an alarm.
  • However, the conventional in-vehicle image processing device disclosed in Patent Document 1 has a problem in that it cannot take into account the possibility that the face of an occupant other than the driver appears in the image.
  • the conventional in-vehicle image processing device may erroneously detect the face of the occupant as the face of the driver.
  • In that case, controlling the optical settings based on the erroneously detected driver's face is not appropriate.
  • the present invention has been made to solve the above problems, and an object of the present invention is to prevent improper control of optical settings based on a falsely detected driver's face.
  • The in-vehicle image processing device according to the present invention includes: an image acquisition unit that acquires, from the image pickup device, an image capturing a range inside the vehicle including the range in which the face of the driver sitting in the driver's seat should exist; a face detection unit that detects the driver's face and the driver's face area on the image based on the image acquired by the image acquisition unit; a false detection determination unit that determines whether or not the face detection unit has erroneously detected the driver's face, depending on whether or not the information about the driver's face detected by the face detection unit satisfies a face erroneous detection condition; and an optical setting control unit that controls the optical settings according to the average brightness of the pixels in the driver's face region when it is determined that the driver's face has not been erroneously detected, and maintains the optical settings when it is determined that the driver's face has been erroneously detected.
  • FIGS. 2A and 2B are diagrams for explaining an example of the installation position and the imaging range of the image pickup apparatus in the first embodiment.
  • FIG. 3 is a diagram for explaining an example of a method by which the face detection unit determines the driver's face in the first embodiment.
  • FIG. 4 is a diagram for explaining an example of the state inside the vehicle when the face detection unit erroneously detects the driver's face in the first embodiment.
  • FIG. 5 is a diagram for explaining an example of a layout of the image pickup apparatus that can image a passenger in the passenger seat in the first embodiment.
  • FIGS. 6A and 6B are diagrams for explaining an example of the face detection range reduced by the area reduction unit in the first embodiment.
  • FIG. 7 is a flowchart for explaining the operation of the in-vehicle image processing device according to the first embodiment.
  • FIG. 8A is a flowchart explaining details of an example of the operation of the false detection determination by the false detection determination unit when the face erroneous detection condition is "whether or not the change in the position of the driver's face region satisfies the first determination condition".
  • FIG. 8B is a flowchart explaining details of an example of the operation of the false detection determination when the face erroneous detection condition is the combination of "whether or not the change in the position of the driver's face region satisfies the first determination condition" and "whether or not the change in the size of the frame of the driver's face region satisfies the second determination condition".
  • FIG. 8C is a flowchart explaining details of an example of the operation of the false detection determination when the face erroneous detection condition is the combination of "whether or not the change in the size of the frame of the driver's face region satisfies the second determination condition" and "whether or not the change in the feature amount of the driver's face satisfies the third determination condition".
  • FIGS. 11A and 11B are diagrams showing an example of the hardware configuration of the in-vehicle image processing device according to the first embodiment and the second embodiment.
  • FIG. 1 is a diagram showing a configuration example of an in-vehicle image processing device 10 according to the first embodiment.
  • the in-vehicle image processing device 10 is mounted on a vehicle (not shown).
  • the in-vehicle image processing device 10 is connected to the image pickup device 20 and the lighting device 30.
  • the image pickup device 20 is shared with the image pickup device included in the so-called "driver monitoring system" mounted on the vehicle for monitoring the state of the driver in the vehicle.
  • The image pickup device 20 is, for example, a camera installed in the vehicle for the purpose of monitoring the interior, and may be any device installed so that it can image the range in the vehicle including at least the range in which the face of the driver sitting in the driver's seat should exist.
  • the range in which the face of the driver seated in the driver's seat should exist is, for example, a range corresponding to the space near the front of the headrest of the driver's seat.
  • the image pickup apparatus 20 may be, for example, a visible light camera or an infrared camera.
  • FIGS. 2A and 2B are diagrams for explaining an example of the installation position and the imaging range of the imaging device 20 in the first embodiment, showing an image of the interior of the vehicle as viewed from above.
  • In FIGS. 2A and 2B, the driver is indicated by 21a, the occupant in the rear seat by 21b, and the passenger in the passenger seat by 21c.
  • In the first embodiment, it is assumed that the image pickup apparatus 20 is installed in the center of the instrument panel of the vehicle (hereinafter referred to as "instrument panel") and images the driver's seat side from the center of the instrument panel, as shown in FIG. 2A.
  • Alternatively, the image pickup device 20 may be installed in the center of the instrument panel of the vehicle and image both the driver's seat and the passenger seat from the center of the instrument panel, for example, as shown in FIG. 2B.
  • the imaging device 20 is installed so as to have an installation position and an imaging range as shown in FIG. 2A.
  • the lighting device 30 is a light emitter that emits illumination light, such as an LED (Light Emitting Diode).
  • the lighting device 30 is installed near the image pickup device 20.
  • the lighting device 30 is mounted on the image pickup device 20.
  • In FIG. 2A and FIG. 2B, the illustration of the lighting device 30 is omitted.
  • The in-vehicle image processing device 10 acquires an image from the image pickup device 20 and detects the driver's face and the driver's face region on the image. At that time, the in-vehicle image processing device 10 makes an erroneous detection determination as to whether or not the driver's face has been erroneously detected. When the in-vehicle image processing device 10 determines, as a result of the erroneous detection determination, that the driver's face has not been erroneously detected, it controls the optical settings of the image pickup device 20 according to the average brightness of the pixels in the driver's face region.
  • the face area is a predetermined area on the image set based on the detection position on the image of the driver's face part.
  • the in-vehicle image processing device 10 monitors the driver's state based on the detected driver's face, and controls such as outputting an alarm according to the driver's state.
  • When the in-vehicle image processing device 10 determines in the above-mentioned erroneous detection determination that the driver's face has been erroneously detected, it does not control the optical settings of the image pickup device 20, but reduces the face detection area and then re-detects the driver's face.
  • As a result, the in-vehicle image processing device 10 neither controls the optical settings of the image pickup device 20 according to the average brightness of the pixels of a face region that includes the face of an occupant other than the driver, nor outputs an alarm based on that occupant's state.
  • the details of the face detection area and the details of the process of reducing the face detection area will be described later.
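The control flow described above can be sketched in Python as follows. This is an illustrative outline only; all function and key names are hypothetical, not from the patent. It shows the essential gate: the optical settings are updated from the face-region brightness only when the erroneous detection determination passes, and are otherwise maintained while the detection area is reduced.

```python
def average_brightness(image, region):
    """Mean pixel value inside the face region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(pixels) / len(pixels)

def process_frame(image, detect_face, is_false_detection, shrink_area,
                  apply_optical_settings, face_detection_area):
    """One iteration of the in-vehicle image processing loop (simplified).

    The callables stand in for the face detection unit, the erroneous
    detection determination unit, the area reduction unit, and the optical
    setting control unit described in the text."""
    face = detect_face(image, face_detection_area)
    if face is None:
        return face_detection_area, "no_face"
    if is_false_detection(face):
        # Erroneous detection: maintain current optical settings,
        # shrink the face detection area, and re-detect on a later frame.
        return shrink_area(face_detection_area), "maintain"
    # Not erroneous: control the optical settings according to the
    # average brightness of the pixels in the driver's face region.
    apply_optical_settings(average_brightness(image, face["region"]))
    return face_detection_area, "controlled"
```

The sketch deliberately keeps the collaborating units as parameters, since the patent describes them as separate functional blocks (face detection unit 102, erroneous detection determination unit 104, area reduction unit 105, optical setting control unit 109).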
  • The in-vehicle image processing device 10 includes an image acquisition unit 101, a face detection unit 102, a driver monitoring unit 103, an erroneous detection determination unit 104, an area reduction unit 105, a re-detection instruction unit 106, a brightness calculation unit 107, a brightness determination unit 108, and an optical setting control unit 109.
  • the optical setting control unit 109 includes an exposure control unit 1091 and an image processing unit 1092.
  • the image acquisition unit 101 acquires an image obtained by capturing an image of a range in the vehicle including a range in which the face of the driver sitting in the driver's seat should exist from the image pickup device 20.
  • the image pickup apparatus 20 converts the light acquired from the inside of the vehicle into an electric signal, and generates image data based on the electric signal.
  • the image pickup apparatus 20 outputs the generated image data to the image acquisition unit 101.
  • the image acquisition unit 101 acquires the image data output from the image pickup apparatus 20.
  • Alternatively, the image pickup device 20 may convert the light acquired from the inside of the vehicle into an electric signal and output the electric signal to the image acquisition unit 101; in this case, the image acquisition unit 101 converts the electric signal output from the image pickup apparatus 20 to acquire the image data.
  • In the first embodiment, both cases, namely the image acquisition unit 101 acquiring the image data output from the image pickup device 20 and the image acquisition unit 101 converting the electric signal output from the image pickup device 20 into image data, are referred to as the image acquisition unit 101 "acquiring an image from the image pickup apparatus 20".
  • the image acquisition unit 101 outputs the acquired image to the face detection unit 102 and the brightness calculation unit 107.
  • the face detection unit 102 detects the driver's face and the driver's face region on the image based on the image acquired by the image acquisition unit 101. Specifically, for example, the face detection unit 102 detects the driver's face by detecting the face parts in the face detection area set on the image. The face detection unit 102 may detect a face part by using a known image recognition technique.
  • The face detection area is an area on the image in which the face detection unit 102 detects the driver's face; its initial value is set in advance by, for example, the user, and information regarding the face detection area is stored in a storage unit (not shown).
  • The face detection area is defined in this information, which includes, for example, the coordinates specifying the frame indicating the face detection area on the image, or other information that can identify the face detection area.
  • The face detection unit 102 identifies the face detection area with reference to the storage unit, and detects the driver's face and the driver's face area. In the first embodiment, for example, among the facial parts, the position of the eyebrows is taken as the upper end of the face, the position of the mouth as the lower end of the face, and the outer corners of the left and right eyes as the left and right edges of the face, respectively.
  • The face area may be set as a rectangle obtained by expanding, at a preset ratio or the like, the rectangle passing through the upper end, the lower end, the left end, and the right end of the face determined as described above.
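As an illustration of the face-area construction just described, the following Python sketch builds the rectangle from face-part coordinates and expands it by a preset ratio. The part names and the default ratio are assumptions for illustration, not values from the patent.

```python
def face_region(parts, expand_ratio=0.2):
    """Build a face region: a rectangle through the eyebrows (upper end),
    the mouth (lower end), and the outer corners of both eyes (left and
    right edges), expanded by a preset ratio on each side.

    `parts` maps part names to (x, y) image coordinates."""
    top = min(parts["left_eyebrow"][1], parts["right_eyebrow"][1])
    bottom = parts["mouth"][1]
    left = parts["left_eye_outer"][0]
    right = parts["right_eye_outer"][0]
    width, height = right - left, bottom - top
    dx, dy = width * expand_ratio, height * expand_ratio
    return (left - dx, top - dy, right + dx, bottom + dy)
```

For example, with the mouth well below the eyebrows, the returned rectangle encloses all four anchor parts with a margin proportional to the face size.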
  • In the following description, "the face detection unit 102 detects the driver's face" includes detecting the driver's face area.
  • the face detection unit 102 determines the face with the highest reliability as the driver's face based on the reliability of the detected faces. In the first embodiment, the face detection unit 102 sets, for example, the face having the largest face region as the face with the highest reliability.
  • FIG. 3 is a diagram for explaining an image of a method in which the face detection unit 102 determines the driver's face in the first embodiment.
  • the image acquired by the image acquisition unit 101 is shown by 200, and the face detection area on the image is shown by 201.
  • the face detection unit 102 has detected two faces, the first face 202a and the second face 202b, in the face detection region 201 on the image 200.
  • The face detection unit 102 detects the face region of the first face 202a (hereinafter referred to as "first face region 203a") and the face region of the second face 202b (hereinafter referred to as "second face region 203b").
  • The face detection unit 102 determines that the first face 202a, corresponding to the first face area 203a, which is the larger of the first face area 203a and the second face area 203b, has the highest reliability. Therefore, of the detected first face 202a and second face 202b, the face detection unit 102 detects the first face 202a as the driver's face.
  • As shown in FIG. 3, for example, when the face detection area includes the face of an occupant sitting in the back seat, the face detection unit 102 may detect the second face 202b, the face of that occupant, together with the first face 202a. Normally, however, the driver sits closer to the image pickup device 20, so the driver's face appears larger in the image. Therefore, even when the face detection unit 102 detects both the second face 202b, the occupant's face, and the first face 202a, the driver's face, it determines the first face 202a, which has the larger face area, to be the most reliable face and thus detects the first face 202a as the driver's face.
  • the face detection unit 102 determines the reliability of the detected face based on the size of the face area, but this is only an example.
  • the face detection unit 102 may determine the reliability of the face according to the high reliability of the detected face parts, for example.
  • The reliability of a facial part is, for example, the degree to which a detected eye looks like an eye, or a detected nose looks like a nose.
  • the face detection unit 102 may calculate the reliability of the face parts by using a known image recognition technique such as pattern matching.
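The reliability-based selection described above, using the size of the face region as the reliability measure, might be sketched as follows. The dictionary layout is an illustrative assumption.

```python
def select_driver_face(faces):
    """Pick the most reliable detected face as the driver's face.

    As in the example above, reliability here is simply the area of the
    face region (x0, y0, x1, y1); a part-based reliability score could be
    substituted as the key instead."""
    def region_area(face):
        x0, y0, x1, y1 = face["region"]
        return (x1 - x0) * (y1 - y0)
    return max(faces, key=region_area) if faces else None
```

Because the driver normally sits closest to the camera, the largest region usually corresponds to the driver, which is exactly the heuristic this selection encodes.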
  • the face detection unit 102 outputs the detected information about the driver's face (hereinafter referred to as “face information”) to the driver monitoring unit 103, the erroneous detection determination unit 104, and the brightness calculation unit 107.
  • the driver's face information includes information about the driver's face parts and information about the driver's face area. More specifically, the information about the driver's face part is, for example, information in which the information indicating the driver's face part is associated with the coordinates on the image indicating the position of the part. .. Further, the information regarding the driver's face area is, for example, coordinates on an image for specifying a frame indicating the driver's face area.
  • The face detection unit 102 may output the image output from the image acquisition unit 101 together with the face information to the driver monitoring unit 103, the erroneous detection determination unit 104, and the brightness calculation unit 107. Further, the face detection unit 102 stores the detected face information of the driver in the storage unit.
  • When the erroneous detection determination unit 104 outputs the face detection presence information, the driver monitoring unit 103 monitors the driver's state based on the face information output from the face detection unit 102. On the other hand, if the erroneous detection determination unit 104 does not output the face detection presence information, the driver monitoring unit 103 does not monitor the driver's state based on that face information.
  • The state of the driver monitored by the driver monitoring unit 103 refers to a state, such as dozing or looking aside, in which the driver is not concentrating on driving and which may interfere with driving. The erroneous detection determination unit 104 will be described later.
  • The driver monitoring unit 103 monitors, for example, whether the driver is dozing or looking aside, based on the driver's eye opening rate or the angle of the bridge of the nose. Since techniques for calculating the eye opening rate or the angle of the bridge of the nose based on the information on the facial parts are known, detailed description thereof is omitted.
  • In this way, the driver monitoring unit 103 monitors whether the driver is in a state that may interfere with driving. When the driver monitoring unit 103 determines that the driver is in such a state, it outputs alarm output instruction information to an alarm output control unit (not shown). When the alarm output instruction information is output from the driver monitoring unit 103, the alarm output control unit causes an output device (not shown) to output an alarm.
  • The output device is, for example, an audio output device provided in the vehicle and connected to the in-vehicle image processing device 10.
  • the alarm output control unit causes the output device to output, for example, an alarm sound.
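The patent does not specify how the eye opening rate is computed. As one common, purely illustrative formulation, the vertical distance between the eyelids can be divided by the horizontal eye width, and dozing flagged when the rate stays low over recent frames; the threshold and window are assumptions.

```python
def eye_opening_rate(upper_lid, lower_lid, inner_corner, outer_corner):
    """Rough eye opening rate: vertical lid separation over eye width.

    Each argument is an (x, y) landmark coordinate. This is one common
    way to quantify eye openness, not the patent's own formula."""
    width = abs(outer_corner[0] - inner_corner[0])
    opening = abs(lower_lid[1] - upper_lid[1])
    return opening / width if width else 0.0

def is_dozing(recent_rates, threshold=0.15):
    """Flag dozing when the opening rate stays below the (illustrative)
    threshold for every recent frame in the window."""
    return all(rate < threshold for rate in recent_rates)
```

A real system would smooth the per-frame rates and calibrate the threshold per driver; this sketch only shows the shape of the check that precedes the alarm output.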
  • The erroneous detection determination unit 104 makes an erroneous detection determination as to whether or not the face detection unit 102 has erroneously detected the driver's face. Specifically, based on the face information output from the face detection unit 102, the erroneous detection determination unit 104 determines whether or not the driver's face information detected by the face detection unit 102 satisfies the face erroneous detection condition. When the driver's face information satisfies the face erroneous detection condition, the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face.
  • The face erroneous detection condition is a condition for determining whether or not the face detection unit 102 has erroneously detected the driver's face.
  • When the driver's face information detected by the face detection unit 102 does not satisfy the face erroneous detection condition, the erroneous detection determination unit 104 determines that the face detection unit 102 has not erroneously detected the driver's face.
  • FIG. 4 is a diagram for explaining an image of an example of the state inside the vehicle when the face detection unit 102 erroneously detects the driver's face in the first embodiment.
  • In FIG. 3, the face detection unit 102 detects the first face 202a as the driver's face; that is, the face detection unit 102 correctly detects the driver's face. After that, it is assumed that the driver moves his or her hand to a position in front of the driver's face as viewed from the image pickup device 20. Then, as shown in FIG. 4, the only face detected in the face detection area 201 is the second face 202b, the face of the occupant in the rear seat.
  • The face detection unit 102 can no longer detect the parts of the first face 202a, the driver's face that it had been detecting, but can still detect the face parts of the second face 202b in the face detection area 201 on the image 200. Therefore, the face detection unit 102 detects the second face 202b as the driver's face. In this way, the face detection unit 102 can erroneously detect the driver's face.
  • the erroneous detection determination unit 104 considers that the face detection unit 102 may detect the face of an occupant other than the driver as the driver's face, and the face detection unit 102 By determining whether or not the detected driver's face information satisfies the face erroneous detection condition, it is determined whether or not the face detection unit 102 erroneously detects the driver's face.
  • the face erroneous detection condition is, for example, "whether or not the change in the position of the driver's face region satisfies a preset condition (hereinafter referred to as" first determination condition ")".
  • Specifically, the erroneous detection determination unit 104 acquires the coordinates of the frame indicating the driver's face area based on the face information output from the face detection unit 102 (hereinafter referred to as "current face frame coordinates"), and the coordinates of the frame indicating the driver's face area based on the face information immediately preceding, in chronological order, the face information output from the face detection unit 102, which is stored in the storage unit (hereinafter referred to as "immediately preceding face frame coordinates").
  • the erroneous detection determination unit 104 compares the immediately preceding face frame coordinates with the current face frame coordinates, and determines whether or not the difference in coordinates satisfies the first determination condition.
  • the first determination condition is, for example, whether or not the difference in coordinates is equal to or greater than a preset value (hereinafter referred to as "first threshold value").
  • The erroneous detection determination unit 104 calculates, for example, the absolute value of the difference between the coordinates of the center of the immediately preceding face frame and the coordinates of the center of the current face frame as the coordinate difference.
  • the erroneous detection determination unit 104 determines that the face detection unit 102 erroneously detects the driver's face when the difference in coordinates is equal to or greater than the first threshold value.
  • Alternatively, the erroneous detection determination unit 104 compares, for example, the coordinates of the four corners of the immediately preceding face frame with the coordinates of the four corners of the current face frame, calculates the absolute values of the four differences, and takes the average of the four calculated differences as the coordinate difference.
  • the erroneous detection determination unit 104 determines that the face detection unit 102 erroneously detects the driver's face when the difference in coordinates is equal to or greater than the first threshold value.
  • Alternatively, the erroneous detection determination unit 104 compares the coordinates of the four corners of the immediately preceding face frame with the coordinates of the four corners of the current face frame, and determines that the face detection unit 102 has erroneously detected the driver's face when at least one of the four calculated absolute differences is equal to or greater than the first threshold value.
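The first determination condition can be illustrated with the following sketch. Here the center-coordinate difference is taken as a Euclidean distance, which is an assumption (the patent speaks only of the absolute value of the difference between center coordinates), and the first threshold value is application-specific.

```python
def center(frame):
    """Center of a face frame given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = frame
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def center_shift(prev_frame, cur_frame):
    """Distance between the centers of the immediately preceding face
    frame and the current face frame (one variant described above)."""
    (px, py), (cx, cy) = center(prev_frame), center(cur_frame)
    return ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5

def is_false_detection_by_position(prev_frame, cur_frame, first_threshold):
    """First determination condition: the coordinate difference is equal
    to or greater than the first threshold value."""
    return center_shift(prev_frame, cur_frame) >= first_threshold
```

The four-corner variants differ only in taking the average, or the maximum, of the per-corner differences instead of the center shift.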
  • For example, when the face detection unit 102 erroneously detects the occupant's face as shown in FIG. 4, the frame showing the driver's face area changes from the frame showing the first face area 203a to the frame showing the second face area 203b. The second face 202b corresponding to the second face area 203b is the face of the occupant in the rear seat, erroneously detected as the driver's face. On the image, the position of the frame indicating the area that had until then been detected as the driver's face moves from left to right in an instant.
  • In this way, the erroneous detection determination unit 104 determines whether or not the face detection unit 102 has erroneously detected the driver's face depending on, for example, whether the change in the position of the driver's face region satisfies the first determination condition. When the change in the position of the driver's face region satisfies the first determination condition, the erroneous detection determination unit 104 determines that the face erroneous detection condition is satisfied and that the face detection unit 102 has erroneously detected the driver's face.
  • The face erroneous detection condition may be, for example, "whether or not the change in the size of the frame indicating the driver's face area satisfies a preset condition (hereinafter referred to as "second determination condition")".
  • the false detection determination unit 104 compares the sizes of the immediately preceding face frame and the current face frame, and the change in the size of the current face frame with respect to the size of the immediately preceding face frame satisfies the second determination condition. Judge whether or not.
  • the second determination condition is, for example, whether or not the ratio of the current face frame to the immediately preceding face frame is equal to or less than a preset value (hereinafter, “second threshold value”).
  • the erroneous detection determination unit 104 calculates, for example, the area of the immediately preceding face frame and the area of the current face frame, and sets the ratio of the two areas as the ratio of the current face frame to the immediately preceding face frame.
  • Alternatively, the erroneous detection determination unit 104 calculates, for example, the perimeter of the immediately preceding face frame and the perimeter of the current face frame, and sets the ratio of the two perimeters as the ratio of the current face frame to the immediately preceding face frame.
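A minimal sketch of the second determination condition, using the area-ratio variant described above (the perimeter variant is analogous); the second threshold value here is an illustrative assumption.

```python
def frame_area(frame):
    """Area of a face frame given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = frame
    return (x1 - x0) * (y1 - y0)

def is_false_detection_by_size(prev_frame, cur_frame, second_threshold):
    """Second determination condition: the ratio of the current face frame
    to the immediately preceding face frame is equal to or less than the
    second threshold value, i.e. the frame suddenly became much smaller."""
    ratio = frame_area(cur_frame) / frame_area(prev_frame)
    return ratio <= second_threshold
```

This captures the observation in the text: when a rear-seat occupant's face replaces the driver's, the detected frame shrinks abruptly, so a small size ratio signals an erroneous detection.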
  • For example, when the face detection unit 102 erroneously detects the occupant's face as shown in FIG. 4, the frame showing the driver's face area changes from the frame showing the first face area 203a to the frame showing the second face area 203b. The second face 202b corresponding to the second face area 203b is the face of the occupant in the rear seat, erroneously detected as the driver's face. On the image, the face of the occupant in the rear seat is smaller than the driver's face, so the size of the frame indicating the region that had until then been detected as the driver's face becomes smaller.
  • In this way, the erroneous detection determination unit 104 determines whether or not the face detection unit 102 has erroneously detected the driver's face depending on whether the change in the size of the frame indicating the driver's face region satisfies the second determination condition. When the change satisfies the second determination condition, the erroneous detection determination unit 104 determines that the face erroneous detection condition is satisfied and that the face detection unit 102 has erroneously detected the driver's face.
  • the face erroneous detection condition may be, for example, "whether or not the change in the feature amount of the driver's face satisfies a preset condition (hereinafter referred to as" third determination condition ")".
  • Specifically, the erroneous detection determination unit 104 calculates the driver's facial feature amount based on the face information output from the face detection unit 102 (hereinafter referred to as "current face feature amount") and the driver's facial feature amount based on the face information immediately preceding it in chronological order, which is stored in the storage unit (hereinafter referred to as "immediately preceding face feature amount").
  • the facial feature amount is, for example, the distance between both eyes or the ratio of the sizes of both eyes.
• the facial feature amount may also be, for example, the distance between the nose and the mouth, the ratio of the sizes of the nose and the mouth, the distances among both eyes, the nose, and the mouth, or the ratio of the sizes of both eyes, the nose, and the mouth.
  • the erroneous detection determination unit 104 compares the immediately preceding face feature amount with the current face feature amount, and determines whether or not the difference between the feature amounts satisfies the third determination condition.
• the third determination condition may be, for example, whether or not there is a difference equal to or greater than a preset value (hereinafter referred to as the "third threshold value"), or simply whether or not the feature amounts differ.
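A minimal sketch of such a feature-amount comparison might look as follows (the part coordinates, the choice of features, and the third threshold value of 10 pixels are illustrative assumptions, not values from this disclosure):

```python
import math

def face_feature_amounts(parts):
    # parts: pixel coordinates of detected face parts, e.g.
    # {"left_eye": (x, y), "right_eye": (x, y), "nose": (x, y), "mouth": (x, y)}
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return {
        "eye_distance": dist(parts["left_eye"], parts["right_eye"]),
        "nose_mouth_distance": dist(parts["nose"], parts["mouth"]),
    }

def satisfies_third_condition(immediately_preceding, current, third_threshold=10.0):
    # True when any feature amount differs from its immediately preceding
    # value by the third threshold or more.
    return any(abs(current[k] - immediately_preceding[k]) >= third_threshold
               for k in immediately_preceding)
```

When the detected face switches from one person to another, the inter-eye distance typically jumps, so the third condition is satisfied even if the face frame itself barely moved.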
• for example, suppose that the state captured in the image 200 changes from the state shown in FIG. 3 to the state shown in FIG. 4. In this case, the face detected by the face detection unit 102 as the driver's face changes from the first face 202a to the second face 202b.
• the second face 202b is the face of the occupant in the rear seat, which is erroneously detected as the face of the driver.
• the facial feature amounts change between the first face 202a, which had been detected as the driver's face, and the newly detected second face 202b. If the facial feature amounts change, it is possible that the detected face has changed from the previously detected person's face to another person's face.
• the erroneous detection determination unit 104 judges whether or not the face detection unit 102 has correctly detected the driver's face depending on, for example, whether or not the change in the feature amount of the driver's face satisfies the third determination condition. When the change in the feature amount of the driver's face satisfies the third determination condition, the erroneous detection determination unit 104 determines that the face erroneous detection condition is satisfied, and determines that the face detection unit 102 has erroneously detected the driver's face.
• the erroneous detection determination unit 104 may determine "whether or not the change in the position of the driver's face area satisfies the first determination condition" and further determine "whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition". The erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face when the change in the position of the driver's face area satisfies the first determination condition and the change in the size of the frame of the driver's face area satisfies the second determination condition.
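The combined check could be sketched as follows (the centre-distance measure of "position change", the perimeter-ratio measure of "size change", and both threshold values are assumptions for illustration, not values from this disclosure):

```python
import math

def center(frame):
    x, y, w, h = frame
    return (x + w / 2.0, y + h / 2.0)

def satisfies_first_condition(prev, curr, first_threshold=60.0):
    # Position change: distance the face-frame centre moved, in pixels.
    (px, py), (cx, cy) = center(prev), center(curr)
    return math.hypot(cx - px, cy - py) >= first_threshold

def satisfies_second_condition(prev, curr, second_threshold=0.7):
    # Size change: perimeter ratio outside [second_threshold, 1/second_threshold].
    perim = lambda f: 2 * (f[2] + f[3])
    ratio = perim(curr) / perim(prev)
    return ratio <= second_threshold or ratio >= 1.0 / second_threshold

def is_erroneous_detection(prev, curr):
    # Both conditions must hold before the detection is judged erroneous,
    # so a sideways head movement (position changes, size does not) is not flagged.
    return satisfies_first_condition(prev, curr) and satisfies_second_condition(prev, curr)
```

Note how a frame that merely moves sideways at the same size fails the second condition, which is exactly why combining the two conditions avoids flagging an ordinary head movement.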
• if only one condition were used, the erroneous detection determination unit 104 might determine that the face detection unit 102 has erroneously detected the driver's face even though it has not. For example, when the driver moves his or her face laterally toward the center of the vehicle from the state captured in the image 200 shown in FIG. 3, the position of the frame indicating the first face region 203a on the image 200 changes significantly toward the frame indicating the second face region 203b.
• even in this case, the face detection unit 102 detects the first face region 203a as the driver's face; that is, the face detection unit 102 correctly detects the driver's face and does not erroneously detect it. However, if the erroneous detection determination unit 104 makes the erroneous detection determination based only on the face erroneous detection condition of "whether or not the change in the position of the driver's face region satisfies the first determination condition", the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face.
• by further determining "whether or not the change in the size of the frame indicating the driver's face region satisfies the second determination condition", the erroneous detection determination unit 104 can avoid determining that the face detection unit 102 has erroneously detected the driver's face when the driver has merely moved his or her face as described above, and can improve the accuracy of the erroneous detection determination.
• the erroneous detection determination unit 104 may determine "whether or not the change in the size of the frame indicating the driver's face region satisfies the second determination condition" and further determine "whether or not the change in the feature amount of the driver's face satisfies the third determination condition".
• the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face when the change in the size of the frame indicating the driver's face area satisfies the second determination condition and the change in the feature amount of the driver's face satisfies the third determination condition.
• for example, suppose that, from the state captured in the image 200 shown in FIG. 3, the driver moves his or her hand to a position in front of the driver's face as viewed from the image pickup device 20, and the occupant in the rear seat leans forward.
• in this case, the size of the second face 202b, which is the face of the rear-seat occupant, is almost the same on the image 200 as the size of the first face 202a, which is the driver's face, and hardly changes. Since the first face 202a is hidden by the hand, the face detection unit 102 detects the second face 202b as the driver's face.
• however, if the erroneous detection determination unit 104 makes the erroneous detection determination based only on the face erroneous detection condition of "whether or not the change in the size of the frame indicating the driver's face region satisfies the second determination condition", the erroneous detection determination unit 104 determines that the size of the frame indicating the first face region 203a has not changed significantly; that is, it determines that the face detection unit 102 has not erroneously detected the driver's face.
• by further determining "whether or not the change in the feature amount of the driver's face satisfies the third determination condition", the erroneous detection determination unit 104 can avoid determining that the face detection unit 102 has not erroneously detected the driver's face even though it has, in the case where the occupant of the rear seat moves his or her face to a position at about the same distance from the image pickup device 20 as the driver as described above, and can improve the accuracy of the erroneous detection determination.
  • the case where the occupant in the back seat leans forward has been described, but this is only an example.
• the erroneous detection determination unit 104 can, for example, likewise determine the case where the face detection unit 102 erroneously detects the face of the passenger in the passenger seat as the face of the driver, and can improve the accuracy of the erroneous detection determination.
• when the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face, the erroneous detection determination unit 104 outputs an area reduction instruction for reducing the face detection area to the area reduction unit 105.
• further, in this case, the optical setting control unit 109 controls to maintain the optical setting of the image pickup apparatus 20 as it is. Specifically, the optical setting control unit 109 outputs a command instructing the same optical setting as the current optical setting to the image pickup apparatus 20.
• alternatively, the optical setting control unit 109 may simply not output any new command to the image pickup apparatus 20.
• controlling to maintain the optical setting of the image pickup device 20 as it is means, as described above, control including outputting a command instructing the same optical setting as the current optical setting to the image pickup device 20, or not outputting any new command to the image pickup device 20.
• controlling to maintain the optical setting of the imaging device 20 as it is also includes control in which, when the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face, the optical setting control unit 109 sets the optical setting to the default optical setting.
• in this case, the exposure control unit 1091 of the optical setting control unit 109 performs exposure control with the exposure time of the image pickup apparatus 20 and the lighting time set to preset initial values.
  • the initial value of the exposure time and the initial value of the lighting time are set in advance to the exposure time and the lighting time which are assumed to be sufficient to clearly image the driver's face.
• the image processing unit 1092 of the optical setting control unit 109 adjusts the gain by setting the gain value of the image pickup apparatus 20 to a preset initial value.
  • the initial value of the gain value is set in advance to a gain value that is assumed to be sufficient to clearly image the driver's face.
• when the face erroneous detection condition is not satisfied, the erroneous detection determination unit 104 determines that the face detection unit 102 has not erroneously detected the driver's face, and outputs the face detection presence information to the driver monitoring unit 103 and the brightness calculation unit 107.
• the area reduction unit 105 reduces the face detection area when an area reduction instruction is output from the erroneous detection determination unit 104. At this time, the area reduction unit 105 reduces the face detection area so that the area does not include the driver's face area that was erroneously detected by the face detection unit 102.
• the area reduction unit 105 may acquire the face information output by the face detection unit 102 via the erroneous detection determination unit 104, and identify the driver's face area erroneously detected by the face detection unit 102 from the acquired face information.
  • FIGS. 6A and 6B are diagrams for explaining an image of an example of the face detection region reduced by the region reduction unit 105 in the first embodiment.
• it is assumed that the image 200 shown in FIGS. 6A and 6B has returned, from the state captured as shown in FIG. 4, to a state in which the part of the first face 202a, which is the driver's face, is captured again.
  • the area reduction unit 105 moves one side on the right side of the rectangle showing the face detection area to the left to narrow the face detection area. At that time, the area reduction unit 105 narrows the face detection area until the area does not include the driver's face area detected by the face detection unit 102 (see FIG. 6A).
  • the area reduction unit 105 narrows the face detection area by moving the lower right apex of the rectangle indicating the face detection area toward the upper left apex while maintaining the aspect ratio of the face detection area. At that time, the area reduction unit 105 narrows the face detection area until the area does not include the driver's face area detected by the face detection unit 102 (see FIG. 6B).
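The two reduction strategies above can be sketched as follows (the (x, y, width, height) rectangle format, the margin, and the shrink step are illustrative assumptions, not part of this disclosure):

```python
def shrink_right_edge(detection_region, face_area, margin=5):
    # Strategy of FIG. 6A: move the right side of the face detection region
    # leftward until the erroneously detected face area falls outside it.
    rx, ry, rw, rh = detection_region
    fx, _fy, _fw, _fh = face_area
    new_right = min(rx + rw, fx - margin)
    return (rx, ry, max(0, new_right - rx), rh)

def shrink_keep_aspect(detection_region, face_area, step=0.05):
    # Strategy of FIG. 6B: move the lower-right vertex toward the upper-left
    # vertex, keeping the aspect ratio, until the face area is excluded.
    rx, ry, rw, rh = detection_region
    fx, fy, _fw, _fh = face_area
    w, h = float(rw), float(rh)
    while w > 0 and h > 0 and fx < rx + w and fy < ry + h:
        w -= rw * step          # width and height shrink by the same fraction,
        h -= rh * step          # so the aspect ratio is preserved
    return (rx, ry, int(w), int(h))
```

Either way, the erroneously detected face area ends up outside the reduced region, so the subsequent re-detection cannot pick up the same rear-seat face again.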
  • the area reduction unit 105 updates, for example, the face detection area defined in the information about the face detection area stored in the storage unit to the reduced face detection area. Then, the area reduction unit 105 outputs information to the effect that the face detection area has been reduced to the re-detection instruction unit 106.
• when information indicating that the face detection area has been reduced is output from the area reduction unit 105, the re-detection instruction unit 106 instructs the face detection unit 102 to re-detect the driver's face within the face detection area reduced by the area reduction unit 105.
• the brightness calculation unit 107 calculates the average brightness of the pixels in the driver's face region detected by the face detection unit 102 (hereinafter referred to as the "driver's face area pixels") in the image acquired by the image acquisition unit 101.
• the brightness calculation unit 107 outputs the calculated information on the average brightness of the driver's face area pixels (hereinafter referred to as "face area brightness information") to the brightness determination unit 108.
• the brightness determination unit 108 determines whether or not the average brightness of the driver's face area pixels is equal to or greater than a threshold value (hereinafter referred to as the "brightness determination threshold") based on the face area brightness information output from the brightness calculation unit 107.
• the brightness determination threshold is set based on the average brightness of the driver's face area pixels assumed when the driver's face is appropriately detected from an image captured in advance under standard light, for example at the time of product shipment; for example, it is set to the lowest average brightness value that can be assumed as the average brightness of the driver's face area pixels.
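A sketch of this brightness determination (the grayscale image layout and the threshold value of 80 are illustrative assumptions, not values from this disclosure):

```python
def face_area_average_brightness(image, face_frame):
    # image: 2-D list of pixel luminance values (rows of a grayscale image);
    # face_frame: (x, y, width, height) of the detected driver's face region.
    x, y, w, h = face_frame
    pixels = [image[row][col] for row in range(y, y + h) for col in range(x, x + w)]
    return sum(pixels) / len(pixels)

def is_bright_enough(image, face_frame, brightness_determination_threshold=80):
    # The threshold stands in for the lowest average brightness assumed for
    # a properly illuminated driver's face.
    return face_area_average_brightness(image, face_frame) >= brightness_determination_threshold
```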
• when the average brightness of the driver's face area pixels is less than the brightness determination threshold, the brightness determination unit 108 outputs information to the effect that the average brightness of the driver's face area pixels is less than the brightness determination threshold to the exposure control unit 1091 of the optical setting control unit 109 or to the image processing unit 1092 of the optical setting control unit 109.
  • the luminance determination unit 108 first outputs information to the exposure control unit 1091 that the average luminance of the driver's face region pixels is less than the luminance determination threshold. In response to this, the exposure control unit 1091 performs exposure control. The details of the exposure control unit 1091 will be described later.
  • the brightness determination unit 108 determines whether or not the average brightness of the driver's face region pixels after the exposure control is equal to or greater than the brightness determination threshold value.
• the brightness determination unit 108 may acquire the average brightness of the driver's face area pixels after exposure control by having the brightness calculation unit 107 calculate it.
• when the average brightness of the driver's face area pixels is still less than the brightness determination threshold after the exposure control, the brightness determination unit 108 outputs information to that effect to the image processing unit 1092.
  • the image processing unit 1092 adjusts the gain. The details of the image processing unit 1092 will be described later.
  • the brightness determination unit 108 determines whether or not the average brightness of the driver's face region pixels after the gain adjustment is equal to or greater than the brightness determination threshold value.
• the brightness determination unit 108 may acquire the average brightness of the driver's face area pixels after gain adjustment by having the brightness calculation unit 107 calculate it. In this way, the brightness determination unit 108 causes the optical settings to be changed in the order of the exposure control unit 1091 and then the image processing unit 1092.
• when the brightness determination unit 108 determines that the average brightness of the driver's face area pixels is equal to or greater than the brightness determination threshold, the brightness determination unit 108 does not output, to the optical setting control unit 109, information to the effect that the average brightness of the driver's face area pixels is less than the threshold.
• the optical setting control unit 109 controls the optical settings of the image pickup apparatus 20 according to the average brightness of the driver's face region pixels calculated by the brightness calculation unit 107.
  • the exposure control unit 1091 of the optical setting control unit 109 executes the exposure control. Specifically, the exposure control unit 1091 changes the exposure time of the image pickup apparatus 20 to be longer. Further, the exposure control unit 1091 changes the lighting time of the lighting device 30 to be longer according to the changed exposure time, if necessary. How long the exposure control unit 1091 changes the exposure time or the lighting time is determined in advance according to the average brightness of the driver's face region pixels.
• the image processing unit 1092 of the optical setting control unit 109 adjusts the gain when the brightness determination unit 108 outputs information that the average brightness of the driver's face region pixels after exposure control is less than the brightness determination threshold. Specifically, the image processing unit 1092 raises the gain value of the image pickup apparatus 20 to brighten the image. How much the image processing unit 1092 raises the gain value is determined in advance according to the average brightness of the driver's face region pixels.
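The order of operations described above, exposure control first and gain adjustment only if the image is still too dark, can be sketched as follows (the settings container, the step sizes, and the measurement callback are assumptions for illustration, not the device's actual interface):

```python
class OpticalSettings:
    def __init__(self, exposure_ms=10.0, lighting_ms=10.0, gain=1.0):
        self.exposure_ms = exposure_ms    # exposure time of the image pickup device
        self.lighting_ms = lighting_ms    # lighting time of the lighting device
        self.gain = gain                  # gain value applied to the image

def adjust_optics(settings, measure_brightness, threshold=80,
                  exposure_step=5.0, gain_step=0.5):
    # measure_brightness(settings) stands in for re-imaging the driver and
    # computing the average brightness of the driver's face area pixels.
    if measure_brightness(settings) >= threshold:
        return settings                       # bright enough: leave optics as they are
    settings.exposure_ms += exposure_step     # exposure control first
    settings.lighting_ms += exposure_step     # lighting time follows the exposure time
    if measure_brightness(settings) >= threshold:
        return settings
    settings.gain += gain_step                # gain adjustment only if still too dark
    return settings
```

Re-measuring between the two stages mirrors the brightness determination unit 108 checking the average brightness again after the exposure control before handing over to the image processing unit 1092.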
• the vehicle-mounted image processing device 10 has the configuration described above. Now suppose that the in-vehicle image processing device 10 did not include the erroneous detection determination unit 104, that is, that it had no function of determining whether or not the face detection unit 102 has erroneously detected the driver's face. In that case, as in the conventional technique described above, the possibility that the face detected from the image is a face other than the driver's would not be considered. Then, as in the state shown in the image of FIG. 4 above, if some disturbance caused a state in which the driver's face parts temporarily cannot be detected, the face detection unit 102 could detect the face of an occupant other than the driver as the face of the driver.
• the face detection unit 102 would then output, to the driver monitoring unit 103 and the brightness calculation unit 107, face information in which the face of the occupant other than the driver is regarded as the face of the driver.
• if, as a result, the driver monitoring unit 103 determines that the occupant other than the driver is in a dozing state, an alarm is output from the output device. This can be a useless alarm, since it is not really an alarm for the driver being asleep. Further, since the light of, for example, the lighting device 30 usually reaches the rear seats less easily than the driver's seat, the occupants of the rear seats are imaged darker in the image. Therefore, when the face detection unit 102 erroneously detects the face of the occupant in the rear seat as the driver's face, the average brightness calculated by the brightness calculation unit 107 as the average brightness of the driver's face area pixels is low, and the optical setting control unit 109 may control the optical settings according to the face region of the occupant in the rear seat.
• as a result, even if the state in which the driver's face parts temporarily cannot be detected returns to the state in which they can be detected (see FIG. 3), the brightness of the pixels of the driver's face area on the image is not optimized, and it becomes difficult for the face detection unit 102 to re-detect the driver's face as a face.
• specifically, the vicinity of the driver's seat, which light from, for example, the lighting device 30 reaches more easily than the rear seat, becomes too bright, so-called "whiteout" occurs on the image, and it becomes more difficult to detect the driver's face as a face than in an image captured with the optical settings in the default state. The face detection unit 102 then continues to detect the face of the occupant in the rear seat as the face of the driver, and the driver monitoring unit 103 continues to monitor the state of the face of the occupant in the rear seat.
  • the in-vehicle image processing device 10 includes the erroneous detection determination unit 104 as described above.
• the erroneous detection determination unit 104 determines whether or not the driver's face has been erroneously detected by determining whether or not the driver's face information detected by the face detection unit 102 satisfies the face erroneous detection condition. When the erroneous detection determination unit 104 determines that the driver's face has not been erroneously detected, the optical setting control unit 109 controls the optical settings according to the average brightness of the driver's face region pixels.
• when the erroneous detection determination unit 104 determines that the driver's face has been erroneously detected, the optical setting control unit 109 controls to maintain the optical settings as they are, and the driver monitoring unit 103 does not perform monitoring based on the driver's face detected by the face detection unit 102.
• as described above, when the in-vehicle image processing device 10 detects the driver's face from the image, the optical settings are controlled according to the average brightness of the driver's face area if the driver's face has not been erroneously detected, and the face detection area for detecting the driver's face is reduced if the driver's face has been erroneously detected. It is therefore possible to detect the driver's face from the image in consideration of the fact that the image pickup device 20 can also capture the face of an occupant other than the driver.
  • FIG. 7 is a flowchart for explaining the operation of the vehicle-mounted image processing device 10 according to the first embodiment.
  • the in-vehicle image processing device 10 repeats the operation shown in the flowchart of FIG. 7 while the vehicle is running.
  • the image acquisition unit 101 acquires an image from the image pickup apparatus 20 (step ST701).
  • the image acquisition unit 101 outputs the acquired image to the face detection unit 102 and the brightness calculation unit 107.
• the face detection unit 102 detects the driver's face in the face detection area set on the image, based on the image acquired by the image acquisition unit 101 in step ST701 (when "YES" in step ST702).
  • the face detection unit 102 outputs the detected face information of the driver to the driver monitoring unit 103, the erroneous detection determination unit 104, and the brightness calculation unit 107. Further, the face detection unit 102 stores the detected face information of the driver in the storage unit.
• if the face detection unit 102 cannot detect the driver's face in the face detection area (when "NO" in step ST702), the processing by the in-vehicle image processing device 10 returns to step ST701.
• the erroneous detection determination unit 104 makes an erroneous detection determination (step ST703).
• when the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face (when "YES" in step ST704), it outputs an area reduction instruction for reducing the face detection area to the area reduction unit 105.
  • the optical setting control unit 109 controls to maintain the optical setting as it is.
  • the area reduction unit 105 reduces the face detection area (step ST705). Then, the area reduction unit 105 outputs information to the effect that the face detection area has been reduced to the re-detection instruction unit 106.
• when information indicating that the face detection area has been reduced is output from the area reduction unit 105 in step ST705, the re-detection instruction unit 106 instructs the face detection unit 102 to re-detect the driver's face within the face detection area reduced by the area reduction unit 105 (step ST706).
• after that, the processing by the in-vehicle image processing device 10 returns to step ST701, and the face detection unit 102 re-detects the driver's face within the face detection area reduced by the area reduction unit 105 in step ST705, based on the image acquired by the image acquisition unit 101 in step ST701.
• when the erroneous detection determination unit 104 determines that the face detection unit 102 has not erroneously detected the driver's face (when "NO" in step ST704), it outputs the face detection presence information to the driver monitoring unit 103.
  • the driver monitoring unit 103 monitors the driver's condition based on the face information output from the face detection unit 102.
• the brightness calculation unit 107 calculates the average brightness of the driver's face region pixels in the image acquired by the image acquisition unit 101 in step ST701 (step ST707).
  • the brightness calculation unit 107 outputs the face area brightness information to the brightness determination unit 108.
• the brightness determination unit 108 determines whether or not the average brightness of the driver's face region pixels calculated by the brightness calculation unit 107 in step ST707 is equal to or greater than the brightness determination threshold (step ST708). When the brightness determination unit 108 determines in step ST708 that the average brightness of the driver's face area pixels is less than the brightness determination threshold (when "NO" in step ST708), it outputs information to the effect that the average brightness is less than the brightness determination threshold to the optical setting control unit 109.
• when the brightness determination unit 108 determines in step ST708 that the average brightness of the driver's face region pixels is equal to or greater than the brightness determination threshold (when "YES" in step ST708), step ST709 is skipped and the in-vehicle image processing device 10 ends the processing.
• the optical setting control unit 109 controls the optical setting of the image pickup apparatus 20 according to the average brightness of the driver's face region pixels calculated by the brightness calculation unit 107 in step ST707 (step ST709).
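The per-frame flow of the flowchart can be summarised in code as follows (the collaborator objects and their method names are illustrative stand-ins for the units described above, not an API defined by this disclosure):

```python
def process_frame(image, face_detector, false_detection_judge,
                  driver_monitor, optical_controller):
    # One iteration, repeated while the vehicle is running.
    face = face_detector.detect(image)                      # steps ST701-ST702
    if face is None:
        return "no_face"
    if false_detection_judge.is_erroneous(face):            # steps ST703-ST704
        face_detector.reduce_detection_area(face)           # step ST705
        optical_controller.hold_settings()                  # keep optics as they are
        return "re_detect"                                  # step ST706: try again
    driver_monitor.update(face)                             # monitor the driver's state
    brightness = optical_controller.face_area_brightness(image, face)
    if brightness < optical_controller.threshold:           # step ST708
        optical_controller.adjust(brightness)               # step ST709
    return "monitored"
```

Note that the erroneous-detection branch deliberately skips both the driver monitoring and the optics adjustment, matching the behavior described above.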
• FIGS. 8A, 8B, and 8C are flowcharts for explaining the details of examples of the operation of the erroneous detection determination by the erroneous detection determination unit 104 in steps ST703 to ST704 of FIG. 7.
  • FIG. 8A shows an example of the operation of the erroneous detection determination by the erroneous detection determination unit 104 when the face erroneous detection condition is "whether or not the change in the position of the driver's face region satisfies the first determination condition". It is a flowchart explaining the detail.
• FIG. 8B is a flowchart explaining the details of an example of the operation of the erroneous detection determination by the erroneous detection determination unit 104 when the face erroneous detection condition is a combination of "whether or not the change in the position of the driver's face area satisfies the first determination condition" and "whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition".
• FIG. 8C is a flowchart explaining the details of an example of the operation of the erroneous detection determination by the erroneous detection determination unit 104 when the face erroneous detection condition is a combination of "whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition" and "whether or not the change in the feature amount of the driver's face satisfies the third determination condition".
• in the operation of FIG. 8A, the erroneous detection determination unit 104 determines whether or not the change in the position of the driver's face region satisfies the first determination condition (step ST801). When it is determined in step ST801 that the change in the position of the driver's face region satisfies the first determination condition (when "YES" in step ST801), the erroneous detection determination unit 104 determines that the face detection unit 102 has erroneously detected the driver's face, and outputs an area reduction instruction for reducing the face detection area to the area reduction unit 105 (step ST802).
  • the optical setting control unit 109 controls to maintain the optical setting as it is.
• when "NO" in step ST801, the processing by the in-vehicle image processing device 10 proceeds to step ST707 of FIG. 7.
• next, the operation in the case where the face erroneous detection condition is a combination of "whether or not the change in the position of the driver's face area satisfies the first determination condition" and "whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition" will be described.
• in this case, the erroneous detection determination unit 104 first determines whether or not the change in the position of the driver's face region satisfies the first determination condition (step ST801a). When it is determined in step ST801a that the change in the position of the driver's face area satisfies the first determination condition (when "YES" in step ST801a), the erroneous detection determination unit 104 determines whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition (step ST8011).
• when it is determined in step ST8011 that the change in the size of the frame indicating the driver's face area satisfies the second determination condition (when "YES" in step ST8011), the erroneous detection determination unit 104 proceeds to step ST802a.
• on the other hand, when it is determined in step ST8011 that the change in the size of the frame indicating the driver's face area does not satisfy the second determination condition (when "NO" in step ST8011), the processing by the in-vehicle image processing device 10 proceeds to step ST707 of FIG. 7. Since the specific operation of step ST802a is the same as the specific operation of step ST802 of FIG. 8A, duplicate description is omitted. Since the specific operation in the case of "NO" in step ST801a is the same as the specific operation in the case of "NO" in step ST801 of FIG. 8A, duplicate description is omitted.
• next, the operation in the case where the face erroneous detection condition is a combination of "whether or not the change in the size of the frame indicating the driver's face area satisfies the second determination condition" and "whether or not the change in the feature amount of the driver's face satisfies the third determination condition" will be described.
• in this case, the erroneous detection determination unit 104 first determines whether or not the change in the size of the frame indicating the driver's face region satisfies the second determination condition (step ST8011a).
• when it is determined in step ST8011a that the change in the size of the frame indicating the driver's face area satisfies the second determination condition (when "YES" in step ST8011a), the erroneous detection determination unit 104 determines whether or not the change in the feature amount of the driver's face satisfies the third determination condition (step ST8012). When it is determined in step ST8012 that the change in the feature amount of the driver's face satisfies the third determination condition (when "YES" in step ST8012), the erroneous detection determination unit 104 proceeds to step ST802b.
  • step ST8012 When it is determined in step ST8012 that the change in the feature amount of the driver's face does not satisfy the third determination condition (when "NO" in step ST8012), the processing by the in-vehicle image processing device 10 is performed in the step of FIG. Proceed to ST707. Since the specific operation of step ST802b is the same as the specific operation of step ST802 of FIG. 8A, duplicate description will be omitted. Since the specific operation of “NO” in step ST8011a has already been explained, detailed description thereof will be omitted.
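The two-stage check above (steps ST8011a and ST8012) can be sketched as follows. This is only an illustration of the control flow: the threshold values, the relative-area measure of frame-size change, and the averaged-difference measure of feature change are all assumptions, not taken from the specification.

```python
def face_erroneously_detected(prev_frame, cur_frame, prev_features, cur_features,
                              size_change_threshold=0.5, feature_change_threshold=0.4):
    """Return True when both the second and third determination conditions hold.

    prev_frame / cur_frame: (width, height) of the face frame in consecutive images.
    prev_features / cur_features: face feature vectors from consecutive images.
    Thresholds and metrics are illustrative assumptions.
    """
    prev_area = prev_frame[0] * prev_frame[1]
    cur_area = cur_frame[0] * cur_frame[1]
    # Second determination condition (step ST8011a): large relative change in frame size.
    size_change = abs(cur_area - prev_area) / prev_area
    if size_change <= size_change_threshold:
        return False  # "NO" in step ST8011a
    # Third determination condition (step ST8012): large change in the face feature amount.
    feature_change = sum(abs(c - p) for c, p in zip(cur_features, prev_features)) / len(cur_features)
    return feature_change > feature_change_threshold
```

Only when both conditions are satisfied does the sketch report an erroneous detection, mirroring the fall-through to step ST707 on either "NO" branch.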
  • The following is a flowchart-style description of the operation in which the optical setting control unit 109 controls the optical setting of the image pickup apparatus 20 according to the average brightness of the driver's face region pixels calculated by the brightness calculation unit 107 in step ST707 of FIG. 7.
  • The brightness calculation unit 107 calculates the average brightness of the driver's face area pixels in step ST707 of FIG. 7.
  • The brightness determination unit 108 determines whether or not the average brightness of the driver's face area pixels is equal to or greater than the brightness determination threshold value (step ST708).
  • the operation of step ST708 is the operation described with reference to FIG. 7.
  • step ST708 when the brightness determination unit 108 determines that the average brightness of the driver's face region pixels is less than the brightness determination threshold value (when “NO” in step ST708), the exposure control unit 1091 is an image pickup device. 20 exposure control is performed (step ST901).
  • the brightness determination unit 108 again determines whether or not the average brightness of the driver's face region pixels is equal to or greater than the brightness determination threshold value (step ST902).
  • When the average brightness of the driver's face region pixels is still less than the brightness determination threshold value ("NO" in step ST902), the image processing unit 1092 adjusts the gain of the image pickup device 20 (step ST903). After that, the in-vehicle image processing device 10 returns to step ST708.
  • When the brightness determination unit 108 determines in step ST708 that the average brightness of the driver's face region pixels is equal to or greater than the brightness determination threshold value ("YES" in step ST708), or when the brightness determination unit 108 determines in step ST902 that the average brightness is equal to or greater than the brightness determination threshold value ("YES" in step ST902), the in-vehicle image processing device 10 ends the process.
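The brightness-driven control loop (steps ST708 and ST901 to ST903) can be sketched roughly as below. The `DemoCamera` stand-in, its method names, and the threshold value are assumptions for illustration; the specification only defines the order of the checks and adjustments.

```python
class DemoCamera:
    """Toy stand-in for the image pickup device 20 (illustrative only)."""
    def __init__(self, brightness):
        self.brightness = brightness

    def average_face_brightness(self):
        return self.brightness

    def adjust_exposure(self):
        self.brightness += 15  # pretend exposure control brightens the face region

    def adjust_gain(self):
        self.brightness += 10  # pretend gain adjustment brightens it further


def control_optical_setting(camera, brightness_threshold=80):
    """Exposure control first, then gain adjustment, repeated until the
    driver's face region is bright enough (steps ST708 / ST901-ST903)."""
    while camera.average_face_brightness() < brightness_threshold:    # step ST708
        camera.adjust_exposure()                                      # step ST901
        if camera.average_face_brightness() >= brightness_threshold:  # step ST902
            break
        camera.adjust_gain()                                          # step ST903
```

Starting from a dark face region, the sketch alternates exposure and gain adjustments until either recheck passes, matching the "NO" paths of steps ST708 and ST902.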
  • In the above description, after the area reduction unit 105 reduces the face detection area (see step ST705 in FIG. 7), the area reduction unit 105 returns the face detection area to its initial value in the information about the face detection area stored in the storage unit.
  • For example, if the face detection unit 102 does not detect the driver's face even after a certain period of time has elapsed since the area reduction unit 105 reduced the face detection area (see step ST702 in FIG. 7), the area reduction unit 105 may return the face detection area to the initial value. In that case, the area reduction unit 105 acquires, for example, information on whether or not the driver's face has been detected from the face detection unit 102.
  • In the above description, the face detection area is set on the image, but the present invention is not limited to this; it is not essential that the face detection area be set on the image.
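The reduce-then-reset bookkeeping of the area reduction unit 105 can be sketched as follows. The initial area, the strategy of shrinking to the last plausible face frame, and the timeout value are assumptions; the specification states only that the area is reduced after an erroneous detection and later returned to its initial value when the face still goes undetected.

```python
import time

class FaceDetectionArea:
    """Sketch of the area reduction unit 105's bookkeeping (illustrative)."""
    INITIAL = (0, 0, 640, 480)  # x, y, width, height covering the whole image (assumed)

    def __init__(self):
        self.area = self.INITIAL
        self.reduced_at = None

    def reduce(self, face_frame):
        # Shrink the detection area, e.g. to the region around the last
        # plausible driver's face frame (assumed reduction strategy).
        self.area = face_frame
        self.reduced_at = time.monotonic()

    def maybe_reset(self, face_detected, timeout_s=5.0):
        # Return to the initial value when the driver's face has not been
        # detected after the timeout has elapsed (see step ST702 in FIG. 7).
        if (self.reduced_at is not None and not face_detected
                and time.monotonic() - self.reduced_at >= timeout_s):
            self.area = self.INITIAL
            self.reduced_at = None
```

The caller would feed `maybe_reset` the detection result obtained from the face detection unit 102 on each cycle.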
  • As described above, the in-vehicle image processing device 10 according to the first embodiment includes: the image acquisition unit 101 that acquires, from the image pickup device 20, an image in which a range in the vehicle including the range in which the face of the driver sitting in the driver's seat should exist is imaged; the face detection unit 102 that detects the driver's face and the driver's face area on the image based on the image acquired by the image acquisition unit 101; the erroneous detection determination unit 104 that makes an erroneous detection determination as to whether or not the face detection unit 102 erroneously detected the driver's face, depending on whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition; and the optical setting control unit 109 that controls the optical setting according to the average brightness of the pixels in the driver's face region when the erroneous detection determination unit 104 determines that the face detection unit 102 did not erroneously detect the driver's face, and controls to maintain the optical setting when the erroneous detection determination unit 104 determines that the face detection unit 102 erroneously detected the driver's face.
  • Therefore, the in-vehicle image processing device 10 can prevent improper control of the optical settings based on an erroneously detected driver's face.
  • The in-vehicle image processing device 10 is also configured to include: the area reduction unit 105 that reduces the face detection area, which is the area on the image in which the face detection unit 102 detects the driver's face based on the image acquired by the image acquisition unit 101; and the re-detection instruction unit 106 that causes the face detection unit 102 to re-detect the driver's face and the driver's face area in the face detection area after it has been reduced by the area reduction unit 105.
  • Therefore, when the in-vehicle image processing device 10 detects the driver's face from the image, if the driver's face was erroneously detected, the face detection area for detecting the driver's face is reduced, so that the driver's face can be detected from the image while taking into account that the face of an occupant other than the driver may appear in the image.
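The top-level behavior of the first embodiment summarized above can be sketched as a small dispatch function. The function and parameter names are illustrative; the three callables stand in for the optical setting control unit 109 and the area reduction / re-detection units 105 and 106.

```python
def process_frame(face_info, erroneously_detected, control_optical_setting,
                  maintain_optical_setting, reduce_area_and_redetect):
    """One cycle of the embodiment-1 behavior (illustrative sketch).

    face_info: information about the detected face (e.g. frame coordinates).
    erroneously_detected: result of the erroneous detection determination
    made by the erroneous detection determination unit 104.
    """
    if erroneously_detected:
        maintain_optical_setting()      # keep the current exposure and gain
        reduce_area_and_redetect()      # shrink the face detection area and retry
        return "maintained"
    control_optical_setting(face_info)  # adjust to the face-region average brightness
    return "controlled"
```

The key property is the branch: optical settings are only adjusted when the detected face is trusted, which is what prevents improper control based on a falsely detected face.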
  • Embodiment 2. The second embodiment describes a configuration in which the accuracy of the erroneous detection determination can be improved by taking the pull-out amount of the seat belt into account.
  • FIG. 10 is a diagram showing a configuration example of the vehicle-mounted image processing device 10a according to the second embodiment.
  • the same reference numerals are given to the same configurations as those of the in-vehicle image processing apparatus 10 described with reference to FIG. 1 in the first embodiment, and duplicate description will be omitted.
  • The in-vehicle image processing device 10a according to the second embodiment differs from the in-vehicle image processing device 10 according to the first embodiment in that it includes a pull-out amount detection unit 110.
  • The pull-out amount detection unit 110 detects the pull-out amount of a seat belt installed in a seat of the vehicle.
  • The pull-out amount detection unit 110 detects the pull-out amount of the seat belt, for example, from the amount of movement on the image of the position of a mark attached to the seat belt, based on the image acquired by the image acquisition unit 101. In this case, it is assumed that the seat belt is marked in advance.
  • The mark is one that can be detected by the image pickup apparatus 20. For example, if the image pickup apparatus 20 is an infrared camera, the mark can be detected by the infrared camera.
  • Alternatively, the pull-out amount detection unit 110 may detect the pull-out amount of the seat belt by calculating it, for example, based on the rotation amount detected by a take-up sensor (not shown) that is installed near the seat belt of each seat and has a portion that rotates according to the movement amount of the seat belt.
  • The pull-out amount of the seat belt detected by the pull-out amount detection unit 110 is, for example, a pull-out amount relative to the seat belt in the unused state (hereinafter referred to as the "pull-out amount based on the unused state").
  • The pull-out amount detected by the pull-out amount detection unit 110 may instead be a pull-out amount relative to the seat belt in a state where the occupant is wearing it in a normal sitting posture (hereinafter referred to as the "pull-out amount based on the usage state"). When the occupant changes his or her posture from the normal sitting posture for some purpose, the pull-out amount of the seat belt also changes.
  • When the pull-out amount detection unit 110 detects the pull-out amount based on the usage state, it can calculate a pull-out amount of the seat belt that corresponds more accurately to the change in the occupant's posture, taking the occupant's physical differences into account, than the pull-out amount based on the unused state.
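As a rough sketch, the image-based variant reduces to tracking the mark's displacement on the image and scaling it to a belt length. The scale factor `mm_per_px` (which would come from the camera geometry) and the one-dimensional tracking are assumptions for illustration; the reference position corresponds to whichever baseline is chosen, the unused state or the normally-worn usage state.

```python
def seatbelt_pullout_mm(mark_y_reference_px, mark_y_current_px, mm_per_px=1.8):
    """Estimate the seat-belt pull-out amount from the on-image movement of a
    mark attached to the belt (illustrative sketch; mm_per_px is assumed).

    mark_y_reference_px: mark position at the chosen baseline state.
    mark_y_current_px: mark position in the current image.
    """
    displacement_px = mark_y_current_px - mark_y_reference_px
    # The belt can only be pulled out relative to the baseline, so clamp at zero.
    return max(0.0, displacement_px * mm_per_px)
```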
  • The pull-out amount detection unit 110 outputs the detected pull-out amount to the erroneous detection determination unit 104.
  • At this time, the pull-out amount detection unit 110 attaches information that can identify which seat's seat belt the pull-out amount belongs to, and outputs the pull-out amount.
  • The pull-out amount detection unit 110 repeats the detection operation at least at the same timing as the face detection unit 102 performs its detection operation. The detected pull-out amount is stored in the storage unit in association with the face information. For example, when the face detection unit 102 stores the face information in the storage unit, it acquires the pull-out amount information from the pull-out amount detection unit 110 and stores it in the storage unit in association with the face information.
  • In the second embodiment, the erroneous detection determination unit 104 determines whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition and the pull-out amount of the driver's seat belt detected by the pull-out amount detection unit 110 is equal to or less than a threshold value (hereinafter referred to as the "first pull-out amount determination threshold").
  • When the information about the driver's face satisfies the face erroneous detection condition and the pull-out amount of the driver's seat belt is equal to or less than the first pull-out amount determination threshold, the erroneous detection determination unit 104 determines that the face detection unit 102 erroneously detected the driver's face. For example, suppose the face erroneous detection condition is "whether or not the change in the position of the driver's face area satisfies the set first determination condition". In this case, in the first embodiment, the erroneous detection determination unit 104 determined that the face detection unit 102 erroneously detected the driver's face whenever the difference between the immediately preceding face frame coordinates and the current face frame coordinates was large.
  • In the second embodiment, by contrast, the erroneous detection determination unit 104 makes the erroneous detection determination in consideration of the pull-out amount of the driver's seat belt in addition to the face erroneous detection condition. A small pull-out amount of the driver's seat belt indicates that the driver has hardly changed posture, so a large change in the face position in that situation suggests that a face other than the driver's was detected.
  • That is, in step ST703, the erroneous detection determination unit 104 makes the erroneous detection determination by determining whether or not the face erroneous detection condition is satisfied and the pull-out amount of the driver's seat belt detected by the pull-out amount detection unit 110 is equal to or less than the first pull-out amount determination threshold.
  • In this way, the in-vehicle image processing device 10a makes the erroneous detection determination in consideration of the pull-out amount of the driver's seat belt, so the accuracy of the erroneous detection determination can be improved.
  • In the above description, the erroneous detection determination unit 104 makes the erroneous detection determination depending on whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition and the pull-out amount of the driver's seat belt detected by the pull-out amount detection unit 110 is equal to or less than the first pull-out amount determination threshold.
  • Alternatively, the erroneous detection determination unit 104 may make the erroneous detection determination depending on whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition and the pull-out amount of a seat belt of a seat other than the driver's seat detected by the pull-out amount detection unit 110 is equal to or greater than a threshold value (hereinafter referred to as the "second pull-out amount determination threshold").
  • That is, the erroneous detection determination unit 104 may make the erroneous detection determination in consideration of the pull-out amount of the seat belt of a seat other than the driver's seat in addition to the face erroneous detection condition.
  • In this case as well, the accuracy of the erroneous detection determination can be improved, as in the case where the erroneous detection determination unit 104 makes the erroneous detection determination in consideration of the pull-out amount of the driver's seat belt in addition to the face erroneous detection condition.
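Both threshold variants described above can be sketched in one function. The threshold values and the interpretation comments are assumptions; the specification defines only the two threshold comparisons and their conjunction with the face erroneous detection condition.

```python
def erroneous_detection(face_condition_met, driver_belt_pullout_mm=None,
                        other_belt_pullout_mm=None,
                        first_threshold_mm=30.0, second_threshold_mm=150.0):
    """Erroneous detection determination of embodiment 2 (illustrative sketch).

    face_condition_met: whether the face erroneous detection condition holds.
    driver_belt_pullout_mm: driver's belt pull-out (first variant), or None.
    other_belt_pullout_mm: another seat's belt pull-out (second variant), or None.
    Threshold values are assumed for illustration.
    """
    if not face_condition_met:
        return False
    if driver_belt_pullout_mm is not None:
        # First variant: the driver's belt pull-out is at or below the first
        # threshold, i.e. the driver barely moved, so the face change is suspect.
        return driver_belt_pullout_mm <= first_threshold_mm
    if other_belt_pullout_mm is not None:
        # Second variant: another seat's belt pull-out is at or above the second
        # threshold, i.e. another occupant may have leaned into the camera's view.
        return other_belt_pullout_mm >= second_threshold_mm
    return False
```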
  • As described above, the in-vehicle image processing device 10a according to the second embodiment includes, in addition to the configuration of the in-vehicle image processing device 10 according to the first embodiment, the pull-out amount detection unit 110 that detects the pull-out amount of a seat belt installed in a seat of the vehicle, and the erroneous detection determination unit 104 makes the erroneous detection determination depending on whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition and the pull-out amount of the driver's seat belt detected by the pull-out amount detection unit 110 is equal to or less than the first pull-out amount determination threshold.
  • Alternatively, the erroneous detection determination unit 104 makes the erroneous detection determination depending on whether or not the information about the driver's face detected by the face detection unit 102 satisfies the face erroneous detection condition and the pull-out amount of a seat belt of a seat other than the driver's seat detected by the pull-out amount detection unit 110 is equal to or greater than the second pull-out amount determination threshold.
  • Therefore, when the in-vehicle image processing device 10a detects the driver's face from the image, if the driver's face was not erroneously detected, the optical setting is controlled according to the average brightness of the driver's face region; the driver's face can be detected from the image in consideration of the possibility that the face of an occupant other than the driver appears in the image; and the accuracy of the erroneous detection determination as to whether or not the driver's face was erroneously detected can be improved by taking the pull-out amount of the driver's seat belt into account.
  • Further, when the in-vehicle image processing device 10a detects the driver's face from the image, if the driver's face was erroneously detected, the face detection area for detecting the driver's face is reduced, so that the driver's face can be detected from the image, and the accuracy of the erroneous detection determination can likewise be improved by taking the pull-out amount of the driver's seat belt into account.
  • FIGS. 11A and 11B are diagrams showing an example of the hardware configuration of the vehicle-mounted image processing devices 10 and 10a according to the first and second embodiments.
  • In the first and second embodiments, the functions of the image acquisition unit 101, the face detection unit 102, the driver monitoring unit 103, the erroneous detection determination unit 104, the area reduction unit 105, the re-detection instruction unit 106, the brightness calculation unit 107, the brightness determination unit 108, the optical setting control unit 109, and the pull-out amount detection unit 110 are realized by a processing circuit 1101.
  • That is, the in-vehicle image processing devices 10 and 10a include the processing circuit 1101 for making the erroneous detection determination as to whether or not the driver's face was erroneously detected, and for controlling the optical setting of the image pickup device 20 based on the result of the erroneous detection determination.
  • The processing circuit 1101 may be dedicated hardware as shown in FIG. 11A, or may be a CPU (Central Processing Unit) 1105 that executes a program stored in the memory 1106 as shown in FIG. 11B.
  • When the processing circuit 1101 is dedicated hardware, the processing circuit 1101 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these.
  • When the processing circuit 1101 is the CPU 1105, the functions of the image acquisition unit 101, the face detection unit 102, the driver monitoring unit 103, the erroneous detection determination unit 104, the area reduction unit 105, the re-detection instruction unit 106, the brightness calculation unit 107, the brightness determination unit 108, the optical setting control unit 109, and the pull-out amount detection unit 110 are realized by software, firmware, or a combination of software and firmware. That is, these units are realized by a processing circuit such as the CPU 1105 or a system LSI (Large-Scale Integration) that executes a program stored in an HDD (Hard Disk Drive) 1102, the memory 1106, or the like.
  • It can also be said that the program stored in the HDD 1102, the memory 1106, or the like causes a computer to execute the procedures or methods of the image acquisition unit 101, the face detection unit 102, the driver monitoring unit 103, the erroneous detection determination unit 104, the area reduction unit 105, the re-detection instruction unit 106, the brightness calculation unit 107, the brightness determination unit 108, the optical setting control unit 109, and the pull-out amount detection unit 110.
  • Here, the memory 1106 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory).
  • Note that the functions of the image acquisition unit 101, the face detection unit 102, the driver monitoring unit 103, the erroneous detection determination unit 104, the area reduction unit 105, the re-detection instruction unit 106, the brightness calculation unit 107, the brightness determination unit 108, the optical setting control unit 109, and the pull-out amount detection unit 110 may be partially realized by dedicated hardware and partially realized by software or firmware.
  • For example, the functions of the image acquisition unit 101 and the pull-out amount detection unit 110 can be realized by the processing circuit 1101 as dedicated hardware, while the functions of the face detection unit 102, the driver monitoring unit 103, the erroneous detection determination unit 104, and the remaining units can be realized by a processing circuit reading and executing a program stored in the memory 1106.
  • Further, the in-vehicle image processing devices 10 and 10a include an input interface device 1103 and an output interface device 1104 that perform wired or wireless communication with devices such as the image pickup device 20 and the lighting device 30.
  • the in-vehicle image processing devices 10 and 10a are mounted on the vehicle, but this is only an example.
  • a part or all of the components of the in-vehicle image processing devices 10 and 10a as described with reference to FIG. 1 or 10 may be provided in the server.
  • In the first and second embodiments described above, the optical setting control unit 109 performs both exposure control and gain adjustment as control of the optical settings, but the optical setting control unit 109 may perform only one of exposure control and gain adjustment.
  • Since the in-vehicle image processing device according to the present invention is configured to prevent improper control of optical settings based on an erroneously detected driver's face, it can be applied to an in-vehicle image processing device that detects a driver's face based on an image of an occupant captured in the vehicle.
  • 10, 10a in-vehicle image processing device, 101 image acquisition unit, 102 face detection unit, 103 driver monitoring unit, 104 erroneous detection determination unit, 105 area reduction unit, 106 re-detection instruction unit, 107 brightness calculation unit, 108 brightness determination unit, 109 optical setting control unit, 1091 exposure control unit, 1092 image processing unit, 110 pull-out amount detection unit, 20 image pickup device, 30 lighting device, 1101 processing circuit, 1102 HDD, 1103 input interface device, 1104 output interface device, 1105 CPU, 1106 memory.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention comprises: an image acquisition unit (101) that acquires an image captured by an image pickup device (20), the image showing a range in a vehicle including a range in which the face of a driver sitting in a driver's seat should be present; a face detection unit (102) that detects the driver's face and the driver's face regions in the image based on the image acquired by the image acquisition unit (101); an erroneous detection determination unit (104) that makes an erroneous detection determination as to whether the driver's face was erroneously detected by the face detection unit (102); and an optical setting control unit (109) that controls the optical setting based on the average brightness of pixels in the driver's face regions if the erroneous detection determination unit (104) determines that the face detection unit (102) did not erroneously detect the driver's face, and that performs control to maintain the optical setting if the erroneous detection determination unit (104) determines that the face detection unit (102) erroneously detected the driver's face.
PCT/JP2019/026376 2019-07-02 2019-07-02 Dispositif de traitement d'image embarqué et procédé de traitement d'image embarqué WO2021001944A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2019/026376 WO2021001944A1 (fr) 2019-07-02 2019-07-02 Dispositif de traitement d'image embarqué et procédé de traitement d'image embarqué
JP2021529614A JP7183420B2 (ja) 2019-07-02 2019-07-02 車載用画像処理装置、および、車載用画像処理方法
DE112019007522.5T DE112019007522T5 (de) 2019-07-02 2019-07-02 Onboard-Bildverarbeitungsvorrichtung und Onboard-Bildverarbeitungsverfahren

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/026376 WO2021001944A1 (fr) 2019-07-02 2019-07-02 Dispositif de traitement d'image embarqué et procédé de traitement d'image embarqué

Publications (1)

Publication Number Publication Date
WO2021001944A1 true WO2021001944A1 (fr) 2021-01-07

Family

ID=74100754

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/026376 WO2021001944A1 (fr) 2019-07-02 2019-07-02 Dispositif de traitement d'image embarqué et procédé de traitement d'image embarqué

Country Status (3)

Country Link
JP (1) JP7183420B2 (fr)
DE (1) DE112019007522T5 (fr)
WO (1) WO2021001944A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005108033A (ja) * 2003-09-30 2005-04-21 Toshiba Corp 運転者状況判定装置および運転者状況判定方法
JP2009116742A (ja) * 2007-11-08 2009-05-28 Aisin Seiki Co Ltd 車載用画像処理装置、画像処理方法、および、プログラム
WO2018150485A1 (fr) * 2017-02-15 2018-08-23 三菱電機株式会社 Dispositif de détermination d'état de conduite et procédé de détermination d'état de conduite
WO2018225176A1 (fr) * 2017-06-07 2018-12-13 三菱電機株式会社 Dispositif de détermination d'état et procédé de détermination d'état


Also Published As

Publication number Publication date
JPWO2021001944A1 (ja) 2021-11-04
JP7183420B2 (ja) 2022-12-05
DE112019007522T5 (de) 2022-05-05

Similar Documents

Publication Publication Date Title
RU2453884C1 (ru) Устройство для формирования изображения водителя и способ формирования изображения водителя
JP4497305B2 (ja) 運転者状態判定装置
US20090141147A1 (en) Auto zoom display system and method
CN107960989B (zh) 脉搏波计测装置以及脉搏波计测方法
US11034305B2 (en) Image processing device, image display system, and image processing method
JP2008199515A (ja) 同乗者着座姿勢検出・判定装置及び方法
JP2005323180A (ja) 撮像制御装置及びプログラム
JP6201956B2 (ja) 視線検出装置および視線検出方法
JP2008243031A (ja) 漫然運転判定装置
JP6857695B2 (ja) 後方表示装置、後方表示方法、およびプログラム
US20160140760A1 (en) Adapting a display on a transparent electronic display
JP2009201756A (ja) 情報処理装置および方法、並びに、プログラム
JP7138175B2 (ja) バーチャルコンテンツを表示するためのヘッドマウント式電子ディスプレイ装置の操作方法およびバーチャルコンテンツを表示するためのディスプレイシステム
WO2018061413A1 (fr) Dispositif de détection de geste
US11915495B2 (en) Information processing apparatus, and recording medium
JP2018069026A (ja) 脈波計測装置、及び、脈波計測方法
JP6587254B2 (ja) 輝度制御装置、輝度制御システム及び輝度制御方法
JP6945775B2 (ja) 車載用画像処理装置、および、車載用画像処理方法
JP2009125518A (ja) ドライバの瞬き検知方法及びドライバの覚醒度判断方法並びに装置
WO2021001944A1 (fr) Dispositif de traitement d'image embarqué et procédé de traitement d'image embarqué
US20190156133A1 (en) Vehicle driving support apparatus and vehicle driving support program
WO2019030855A1 (fr) Dispositif et procédé de détermination d'état d'incapacité de conduite
WO2022113275A1 (fr) Dispositif de détection de sommeil et système de détection de sommeil
JP7212251B2 (ja) 眼の開閉検出装置、乗員監視装置
JP2009096323A (ja) カメラ照明制御装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19936107

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021529614

Country of ref document: JP

Kind code of ref document: A

122 Ep: pct application non-entry in european phase

Ref document number: 19936107

Country of ref document: EP

Kind code of ref document: A1