WO2014061175A1 - State monitoring device - Google Patents

State monitoring device

Info

Publication number
WO2014061175A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
image
face
light
imaging
Prior art date
Application number
PCT/JP2013/003046
Other languages
French (fr)
Japanese (ja)
Inventor
Taito Watanabe (泰斗 渡邉)
Original Assignee
DENSO CORPORATION (株式会社デンソー)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DENSO CORPORATION (株式会社デンソー)
Publication of WO2014061175A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/74: Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Definitions

  • The present disclosure relates to a technique, for use on a vehicle, for monitoring the state of a driver by using a face image that captures the face of the driver who operates the vehicle.
  • Conventionally, a state monitoring device mounted on a vehicle performs face recognition using a face image of the driver's face in order to monitor the driver's state.
  • To perform such face recognition, the state monitoring device includes a configuration that emits light toward a prescribed region defined in advance as the region where the driver's face is located, a configuration that photographs the face by receiving light incident from the prescribed region, and a configuration that recognizes the face based on the position of the eyes in the captured face image.
  • The image processing apparatus of Patent Document 1 (JP 2003-30647 A), one example of such a face recognition configuration, performs face recognition using red eyes. More specifically, in a face image shot with flash emission, a person's eyes can appear as red bright parts. This red-eye phenomenon occurs mainly because light in the red band of the flash is reflected by the blood vessels of the retina. The red eye has high contrast against its surroundings, so the image processing apparatus can accurately identify the eye position in the face image from the position of the red bright part.
  • The inventor of the present disclosure attempted to employ the above-described red-eye-based face recognition in a state monitoring device, and found the following. When a driver wearing eyeglasses is photographed in low ambient light while a light emitting unit such as a flash illuminates the prescribed region, the captured face image contains not only the driver's red eyes but also the reflection of the light emitting unit in the eyeglasses as a bright part. When several bright parts that are eye candidates are captured around the driver's eyes in this way, the identification of the eye position in the face image can be wrong, and the accuracy of face recognition performed based on the eye position may then not be ensured.
  • An object of the present disclosure is to provide a state monitoring device that can ensure the accuracy of face recognition performed based on the position of the eye.
  • According to a first aspect of the present disclosure, a state monitoring device is mounted on a vehicle and monitors the state of the driver using a face image of the face of the driver who operates the vehicle. The device comprises: a light emitting unit that emits light in the red to near-infrared band toward a prescribed region defined in advance as the region where the face is located; a first image acquisition unit that acquires, as the face image, a first face image photographed by receiving incident light from the prescribed region; a second image acquisition unit that acquires, as a face image separate from the first face image, a second face image photographed by receiving attenuated light obtained by attenuating red band light from the incident light; an eye determination unit that, when a plurality of bright parts that are candidates for the driver's eyes are captured in the first face image, determines that a bright part among them that is not captured in the second face image is an eye; and a face recognition unit that recognizes the face based on the position of the eye determined by the eye determination unit.
  • According to a second aspect of the present disclosure, a state monitoring method causes a computer to execute processing for monitoring the state of a driver using a face image, captured on board a vehicle, of the face of the driver who operates the vehicle. The method comprises: a light emitting step of emitting light in the red to near-infrared band toward a prescribed region defined in advance as the region where the face is located; a first image acquisition step of acquiring, as the face image, a first face image photographed by receiving incident light from the prescribed region; a second image acquisition step of acquiring, as a face image separate from the first face image, a second face image photographed by receiving attenuated light obtained by attenuating red band light from the incident light; an eye determination step of determining, when a plurality of bright parts that are candidates for the driver's eyes are captured in the first face image, that a bright part among them that is not captured in the second face image is an eye; and a face recognition step of recognizing the face based on the position of the eye determined in the eye determination step.
  • According to a third aspect of the present disclosure, a non-transitory storage medium includes instructions that are read and executed by a computer, the instructions implementing the state monitoring method of the second aspect described above; the method is computer-implemented.
  • According to these aspects, in the second face image photographed by receiving the attenuated light, the red band light has been attenuated from the incident light, so the driver's eyes are unlikely to appear as bright parts. Therefore, when a plurality of bright parts that are eye candidates are captured in the first face image, determining those among them that are not captured in the second face image to be eyes allows the eye position in the face image to be identified accurately even when the light emitting unit is reflected in the driver's eyeglasses. The accuracy of face recognition performed based on the eye position can thus be ensured.
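  • As a minimal illustration of this determination (a sketch, not the patented implementation; the coordinate representation of bright parts and the matching tolerance are assumptions for the example), the comparison of the two face images can be written in Python as follows:

```python
def determine_eyes(bright_parts_first, bright_parts_second, tolerance=5):
    """Return the bright parts of the first face image judged to be eyes.

    A candidate from the first face image is kept only if no bright part
    appears near the same position in the second face image: red-eye
    reflections vanish once the red band is attenuated, while glare from
    the light emitting unit persists in the near-infrared band.
    """
    eyes = []
    for (x1, y1) in bright_parts_first:
        matched = any(
            abs(x1 - x2) <= tolerance and abs(y1 - y2) <= tolerance
            for (x2, y2) in bright_parts_second
        )
        if not matched:  # absent from the second image -> likely a red eye
            eyes.append((x1, y1))
    return eyes

# Two candidates in the first image; the glare at (130, 82) also appears
# in the second image, so only (100, 80) is kept as an eye.
print(determine_eyes([(100, 80), (130, 82)], [(131, 83)]))  # [(100, 80)]
```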
  • As shown in FIG. 1, the state monitoring device 100 according to the first embodiment of the present disclosure is a driver status monitor that is mounted on a vehicle 1 as a moving body and monitors the state of the driver who drives or operates the vehicle.
  • the state monitoring apparatus 100 includes an imaging unit 10, a light emitting unit 15, a control circuit 20, and a housing 60 (see FIG. 3) that houses these configurations.
  • the state monitoring device 100 is connected to an actuation unit 90 and a vehicle control device 96 mounted on the vehicle.
  • The imaging unit 10 shown in FIGS. 1 and 2 is the device, within the state monitoring device 100 installed on the upper surface of the steering column 81, that generates the face image 51 (see also FIG. 4) of the driver's face.
  • The imaging unit 10 photographs a defined area PA specified in advance in the interior of the vehicle 1.
  • This defined area PA includes the area where the face of a driver seated in the driver's seat is assumed to be located. Specifically, the defined area PA is specified based on the eyellipse assumed from the eye range of the driver's eyes, and is specified to include, for example, the 99th percentile eyellipse.
  • the imaging unit 10 is a so-called near-infrared camera, and is configured by combining the imaging element 11 with an optical lens, an optical filter, and the like.
  • the imaging element 11 generates an electrical signal corresponding to the intensity of received light by a plurality of pixels arranged along the imaging surface.
  • the image sensor 11 is arranged in a posture in which the imaging surface is directed to the defined area PA.
  • The image sensor 11 enters an exposure state based on a control signal from the control circuit 20 and receives the incident light from the defined area PA. A monochrome face image 51, drawn in shades of white and black, is thereby generated.
  • the face images 51 thus photographed are sequentially output from the imaging unit 10 to the control circuit 20.
  • The light projecting unit 15 has a plurality of light emitting diodes 16. The light emitting diodes 16 are disposed so as to sandwich the imaging unit 10 between them (see FIG. 3), and emit illumination light in the red to near-infrared band toward the defined area PA. The on and off states of light emission of the light emitting diodes 16 are controlled by the current supplied from the control circuit 20.
  • the control circuit 20 is connected to the imaging unit 10, the light projecting unit 15, the actuation unit 90, and the like, and is a circuit that controls the operation of these components.
  • the control circuit 20 is mainly configured by a microcomputer including a processor that performs various arithmetic processes, a RAM that functions as a work area for the arithmetic processes, and a flash memory that stores programs used for the arithmetic processes.
  • the control circuit 20 includes a power supply circuit that supplies power to the imaging unit 10, the light projecting unit 15, and the like.
  • By having its processor execute a state monitoring program stored in advance, the control circuit 20 provides a plurality of functional blocks, such as a light emission control unit 21, an imaging control unit 23, an image recognition unit 24, a state determination unit 31, and a warning control unit 33. A functional block is also referred to as a functional section.
  • the light emission control unit 21 is a functional block related to the light emission control of the light projecting unit 15.
  • the light emission control unit 21 causes the light emitting diode 16 to emit light by applying a predetermined current to the light emitting diode 16. Based on the control value calculated by the image recognition unit 24, the light emission control unit 21 causes the light projecting unit 15 to emit illumination light in accordance with the timing at which the imaging unit 10 is in the exposure state.
  • the imaging control unit 23 is a functional block related to imaging control of the imaging unit 10.
  • the imaging control unit 23 controls the exposure start timing, gain, exposure time, and the like in the imaging unit 10 based on the control value calculated by the image recognition unit 24.
  • the image recognition unit 24 is a functional block related to image processing of the face image 51 and the like.
  • the image recognition unit 24 sets an imaging condition in the imaging unit 10 and a light emission condition in the light projecting unit 15 in order to acquire a face image 51 from which the driver's face can be extracted.
  • The image recognition unit 24 calculates control values for the imaging control unit 23 and the light emission control unit 21 so that the imaging unit 10 and the light projecting unit 15 operate in accordance with the set imaging conditions and light emission conditions.
  • the image recognition unit 24 acquires the face image 51 thus photographed from the imaging unit 10.
  • The image recognition unit 24 performs image processing on the acquired face image 51 to calculate values related to the driver's face orientation and degree of eye opening (hereinafter, "eye open degree"), and values related to the driver's degree of sleepiness.
  • The state determination unit 31 compares the values calculated by the image recognition unit 24 with preset thresholds. Through this comparison, the state determination unit 31 detects, for example, a sign of inattentive driving or a sign of drowsy driving. Having detected such a sign, the state determination unit 31 determines that the driver is in a state that warrants a warning.
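  • A rough sketch of this comparison step follows; the metric names and threshold values are assumptions for illustration, since the disclosure only states that calculated values are compared against preset thresholds:

```python
# Hypothetical thresholds; the disclosure does not specify concrete values.
EYE_OPEN_THRESHOLD = 0.3        # below this, a sign of dozing is suspected
FACE_YAW_THRESHOLD_DEG = 30.0   # beyond this, a sign of looking aside

def should_warn(eye_open_degree, face_yaw_deg):
    """Return True when a sign of dozing or inattentive driving is detected."""
    dozing = eye_open_degree < EYE_OPEN_THRESHOLD
    looking_aside = abs(face_yaw_deg) > FACE_YAW_THRESHOLD_DEG
    return dozing or looking_aside
```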
  • the warning control unit 33 is connected to the actuation unit 90.
  • the warning control unit 33 outputs a control signal to the actuation unit 90 when the state determination unit 31 determines that a situation that should warn the driver is occurring.
  • the warning control unit 33 issues a warning to the driver by operating the actuation unit 90.
  • the housing 60 includes a main body member 63, a front cover member 66, a rear cover member (not shown), and the like as shown in FIG.
  • the main body member 63 holds the sub-board 62 on which the light projecting unit 15 and the imaging unit 10 are mounted.
  • a main board 61 on which the control circuit 20 is formed is attached to the sub board 62 in a posture orthogonal to the sub board 62.
  • the body member 63 is provided with an insertion hole 64 and a light distribution portion 65.
  • the insertion hole 64 is provided in the central portion of the main body member 63 in the horizontal direction, and allows the imaging unit 10 mounted on the sub-board 62 to be inserted.
  • The insertion hole 64 cooperates with a light blocking hole provided in the sub-board 62 to provide a light blocking function between the light projecting unit 15 and the imaging unit 10, thereby preventing illumination light from leaking to the imaging unit 10.
  • the light distribution unit 65 is disposed so as to sandwich the insertion hole 64 in the horizontal direction, and faces the light projecting unit 15 mounted on the sub-board 62.
  • the light distribution unit 65 distributes light to the defined area PA (see FIG. 1) while transmitting the light emitted from the light projecting unit 15.
  • the front cover member 66 is provided with a visible light filter 67.
  • The visible light filter 67 mainly transmits light in the red to near-infrared band used for generating the face image 51 (see FIG. 4), and shields light in the visible band that is unnecessary for generating the face image 51.
  • the visible light filter 67 covers an opening 68 formed at a position facing the light distribution portion 65 in the front cover member 66.
  • the rear cover member is disposed on the opposite side of the front cover member 66 with the main body member 63 interposed therebetween. The rear cover member covers the substrates 61 and 62 to protect them from dust and dirt in the atmosphere.
  • The actuation unit 90 shown in FIG. 2 includes, for example, a speaker 91, a seat vibration device 93, and an air conditioner 95 mounted on the vehicle 1 (see FIG. 1).
  • the speaker 91 alerts the driver by reproducing audio data based on a control signal from the warning control unit 33.
  • the seat vibration device 93 is installed inside the seat surface of the driver's seat or the like, and alerts the driver by vibrating the driver's seat based on a control signal from the warning control unit 33.
  • the air conditioner 95 alerts the driver by an operation such as introducing outside air into the vehicle 1 based on a control signal from the warning control unit 33.
  • In the face image 51 shown in FIG. 4, both eyes of the driver appear as bright portions 55.
  • This so-called red-eye phenomenon occurs because light in the red band contained in ambient light and in the illumination light is reflected by the capillaries of the retina. The red eye has high contrast against its surroundings, so the image recognition unit 24 (see FIG. 2) can accurately identify the eye position in the face image 51 based on the position of the bright portion 55 appearing in the face image 51.
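  • Because the red-eye bright portion stands out against its surroundings, locating candidates reduces to a brightness-threshold search. A minimal sketch (the 0-255 grayscale representation and the threshold value are assumptions for the example):

```python
def find_bright_parts(image, threshold=200):
    """Return (x, y) coordinates of pixels whose intensity exceeds the threshold.

    `image` is a 2-D list of 0-255 grayscale values, as a monochrome
    near-infrared camera would produce; high-contrast red-eye spots and
    eyeglass glare both end up in the returned candidate list.
    """
    return [
        (x, y)
        for y, row in enumerate(image)
        for x, value in enumerate(row)
        if value >= threshold
    ]
```

  • A practical implementation would additionally merge adjacent above-threshold pixels into connected regions and treat each region as a single candidate.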
  • the imaging element 11 is formed with a covering region 77 and a non-covering region 76.
  • the covering region 77 is a region covered with the color filter 78 on the imaging surface.
  • the color filter 78 allows light in the near infrared band to pass through while attenuating light in the red band. Therefore, the covering region 77 receives the attenuated light that is incident from the defined region PA (see FIG. 1) and passes through the color filter 78 and in which the red band light is attenuated.
  • the uncovered area 76 is located outside the covered area 77 and is not covered with the color filter 78. Therefore, the uncovered area 76 receives incident light incident from the defined area PA.
  • the image sensor 11 is a so-called VGA size element in which, for example, 640 ⁇ 480 pixels are arranged in each of the horizontal direction H and the vertical direction V.
  • the image sensor 11 has a first pixel 70 and a second pixel 73 as a plurality of pixels.
  • the first pixel 70 is a pixel that is not covered with the color filter 78. Therefore, the entire area of the first pixel 70 becomes the uncovered area 76.
  • the second pixel 73 is a pixel covered with the color filter 78. Therefore, the entire area of the second pixel 73 becomes the covering region 77.
  • the number of second pixels 73 provided on the imaging surface is smaller than the number of first pixels 70. Therefore, on the imaging surface of the imaging device 11, the area of the covered region 77 is narrower than the area of the non-covered region 76.
  • Based on the output from the first pixels 70 (see FIG. 5), the imaging unit 10 shown in FIG. 2 generates a face image photographed by receiving the incident light (hereinafter referred to as the "first face image" 51 for convenience).
  • Based on the output from the second pixels 73 (see FIG. 5), the imaging unit 10 also generates a face image photographed by receiving the attenuated light (hereinafter referred to as the "second face image" 52 for convenience).
  • the number of pixels of the first face image 51 is larger than the number of pixels of the second face image 52.
  • the imaging unit 10 can acquire the output from the second pixel 73 while acquiring the output from the first pixel 70 (see FIG. 5). Therefore, the first face image 51 and the second face image 52 are images taken substantially simultaneously.
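  • Since the first pixels and the second pixels are read out from the same exposure, the two face images can be separated purely by the filter layout. A schematic sketch, assuming the positions of the covered pixels are available as a boolean mask (this data representation is an assumption, not the sensor's actual readout format):

```python
def split_sensor_output(frame, covered_mask):
    """Split one sensor readout into the first and second face images.

    `frame` is a 2-D list of pixel values from a single exposure, and
    `covered_mask` a same-shaped 2-D list of booleans marking the second
    pixels (those under the red-attenuating color filter). Because both
    images come from one readout, they share one shooting timing.
    """
    first_image = []   # uncovered pixels -> first face image
    second_image = []  # covered pixels   -> second face image
    for row, mask_row in zip(frame, covered_mask):
        first_image.append([v for v, m in zip(row, mask_row) if not m])
        second_image.append([v for v, m in zip(row, mask_row) if m])
    return first_image, second_image
```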
  • The image recognition unit 24 includes a first image acquisition block 25, a second image acquisition block 26, a dark place determination block 28, and an eye determination block 27.
  • the first image acquisition block 25 acquires the first face image 51 from the imaging unit 10.
  • the second image acquisition block 26 acquires the second face image 52 from the imaging unit 10 as a face image different from the first face image 51.
  • the dark place determination block 28 determines whether or not the room of the vehicle 1 (see FIG. 1) is a dark place based on the information acquired from the vehicle control device 96.
  • the vehicle control device 96 is a device that controls various devices mounted on the vehicle 1.
  • the vehicle control device 96 can control the operation of the headlamp of the vehicle 1 as one of its functions.
  • the dark place determination block 28 determines that the interior of the vehicle 1 is a dark place based on the state information of the headlamp of the vehicle 1 when the headlamp is in a lighting state. On the other hand, the dark place determination block 28 determines that the interior of the vehicle 1 is not a dark place when the headlamp is turned off.
  • The switching between the lighting state and the unlighting state of the headlamps by the vehicle control device 96 may be performed based on the detection result of an external light sensor mounted on the vehicle 1 (see FIG. 1), or based on the operation state of a changeover switch provided in the vehicle.
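  • A minimal sketch of this dark place determination, assuming the vehicle control device exposes its state information as a dict-like snapshot (the interface below is hypothetical; the disclosure does not specify it):

```python
def is_dark_place(vehicle_status):
    """Return True when the vehicle interior is judged to be a dark place.

    Only the headlamp lighting state is consulted, as in the dark place
    determination block 28: the headlamps are lit (manually or via an
    external light sensor) exactly when ambient light is low.
    """
    return vehicle_status.get("headlamp_on", False)
```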
  • When a plurality of bright portions 55 and 56 that are candidates for the driver's eyes appear in the first face image 51 shown in FIG. 4, the eye determination block 27 determines which of these bright portions corresponds to an actual eye. Specifically, as shown in FIG. 6, in the second face image 52 photographed by receiving the attenuated light, the red band light has been attenuated from the incident light, so the driver's eyes are unlikely to be captured as a bright portion 55. Therefore, among the bright portions 55 and 56 that are eye candidates photographed in the first face image 51, one that is not photographed in the second face image 52 is highly likely to be a bright portion 55 produced by an actual eye.
  • By performing a process of comparing the first face image 51 and the second face image 52, the eye determination block 27 shown in FIG. 2 determines, among the plurality of bright portions 55 and 56 appearing in the first face image 51 (see FIG. 4), the bright portion 55 that is not photographed in the second face image 52 to be an eye.
  • This determination identifying the eye position is performed on the condition that the dark place determination block 28 has determined that the interior is a dark place.
  • The processing shown in FIG. 7 is started by the image recognition unit 24 when the ignition of the vehicle 1 (see FIG. 1) is turned on.
  • In S101, the light emission control unit 21 outputs a control signal instructing the light projecting unit 15 to emit light, the imaging control unit 23 outputs a control signal instructing the imaging unit 10 to perform imaging, and the process proceeds to S102.
  • Based on the control signals output in S101, the light projecting unit 15 emits illumination light toward the defined area PA, and the imaging unit 10 photographs the defined area PA.
  • Each section of the flowchart is denoted, for example, S101.
  • Each section can be divided into a plurality of subsections, while a plurality of sections can be combined into a single section.
  • Each section configured in this manner can also be referred to as a device, module, or means.
  • In S102, the first face image 51 and the second face image 52, captured based on the control signals output in S101, are acquired by the first image acquisition block 25 and the second image acquisition block 26, and the process proceeds to S103.
  • In S103, based on the headlamp state information acquired from the vehicle control device 96, it is determined whether the interior of the vehicle 1 is a dark place. If an affirmative determination is made in S103, the process proceeds to S105; if a negative determination is made, the process proceeds to S104. In S104, face recognition without using red eyes is performed, and the process proceeds to S110. In S105, performed on the condition that an affirmative determination was made in S103, image processing is applied to the first face image 51 acquired in S102 to extract from it the bright portions 55 and 56 that are candidates for the driver's eyes, and the process proceeds to S106.
  • In S106, it is determined whether, as a result of the image processing performed in S105, a plurality of bright portions 55 and 56 that are eye candidates have been photographed within a range assumed in advance to be the vicinity of the eyes. If a negative determination is made in S106, the process proceeds to S107, where face recognition is performed using the bright portion 55, i.e., the red eye extracted in S105, and the process proceeds to S110. If an affirmative determination is made in S106, the process proceeds to S108.
  • In S108, among the plurality of bright portions 55 and 56 photographed in the first face image 51, the bright portion 55 that is not photographed in the second face image 52 is determined to be the driver's eye, and the process proceeds to S109.
  • In S109, face recognition is performed using the bright portion 55, i.e., the red eye determined to be an eye in S108, and the process proceeds to S110.
  • In S110, the state determination unit 31 determines whether the driver is in a state that warrants a warning, such as showing a sign of inattentive driving or a sign of drowsy driving.
  • In S111, it is determined whether the ignition of the vehicle 1 remains on. If a negative determination is made in S111 because the ignition has been turned off, the process ends; if an affirmative determination is made, the process returns to S101.
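  • The overall flow of FIG. 7 can be condensed into the following sketch; every method on `device` is a placeholder standing in for a functional block described above, not an API of any real library, and the section numbers are noted in comments:

```python
def monitoring_loop(device):
    """One pass per frame of the FIG. 7 processing, repeated while ignition is on."""
    while device.ignition_on():                               # checked at S111
        device.emit_light_and_expose()                        # S101
        first, second = device.acquire_face_images()          # S102
        if not device.is_dark_place():                        # S103
            device.recognize_face_without_red_eye(first)      # S104
        else:
            candidates = device.extract_eye_candidates(first)     # S105
            if len(candidates) <= 1:                          # S106: one candidate
                device.recognize_face(first, candidates)      # S107
            else:
                eyes = [c for c in candidates                 # S108: keep candidates
                        if not device.appears_in(second, c)]  # absent from 2nd image
                device.recognize_face(first, eyes)            # S109
        device.judge_driver_state()                           # S110
```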
  • As described above, in the first embodiment, the red band light is attenuated from the incident light in the second face image 52, so the driver's eyes are unlikely to appear as the bright portion 55. Therefore, among the bright portions 55 and 56 photographed in the first face image 51, the bright portion 55 that is not photographed in the second face image 52 is determined to be an eye. Thereby, even when a light emitting diode 16 is reflected in the driver's eyeglasses, the eye position in the first face image 51 can be accurately identified, and the accuracy of face recognition performed based on the eye position can be ensured.
  • the imaging unit 10 can obtain both an output for generating the second face image 52 and an output for generating the first face image 51 from one image sensor 11. Therefore, a configuration in which a part of the image sensor 11 is covered with the color filter 78 is particularly suitable for the state monitoring apparatus 100 that identifies the eye position by comparing the first face image 51 and the second face image 52.
  • The second face image 52 in the first embodiment is needed mainly for identifying the eye position, and therefore need not be as clear as the first face image 51. Accordingly, the number of second pixels 73 in the image sensor 11 is smaller than the number of first pixels 70. Since the area of the covered region 77 is thereby smaller than that of the uncovered region 76, the first face image 51 based on the output from the uncovered region 76 maintains a high resolution, makes effective use of the illumination light, and can be a clear image. The accuracy of face recognition by the image recognition unit 24 using the first face image 51 can therefore be reliably ensured.
  • the imaging unit 10 can generate the first face image 51 and the second face image 52 based on outputs acquired at substantially the same timing. Therefore, the difference in photographing timing between the first face image 51 and the second face image 52 can be substantially eliminated. As described above, it is possible to avoid a situation in which the position of the photographed driver is shifted between the first face image 51 and the second face image 52. Therefore, the accuracy of the eye position specified by the comparison between the first face image 51 and the second face image 52, and hence the accuracy of face recognition, can be further ensured.
  • In the first embodiment, the eye position is identified by comparing the first face image 51 and the second face image 52 on the condition that the interior of the vehicle 1 is a dark place.
  • The bright portion 55 due to red eyes and the bright portion 56 due to the reflection of the light emitting diodes 16 in the eyeglasses both appear in the first face image 51 mainly under conditions with little ambient light. Therefore, identifying the eye position only when the interior is judged to be a dark place reduces the processing load in the state monitoring device 100 while maintaining high face recognition accuracy.
  • the vehicle 1 is also referred to as a moving body.
  • the imaging unit 10 is also referred to as an imaging device or imaging means.
  • the light projecting unit 15 is also referred to as a light emitting unit, a light emitting device, or a light emitting means.
  • the image recognition unit 24 is also referred to as a face recognition unit, a face recognition device, or a face recognition means.
  • the first image acquisition block 25 is also referred to as a first image acquisition unit, a first image acquisition device, or a first image acquisition means.
  • the second image acquisition block 26 is also referred to as a second image acquisition unit, a second image acquisition device, or second image acquisition means.
  • the eye determination block 27 is also referred to as an eye determination unit, an eye determination device, or an eye determination means.
  • the dark place determination block 28 is also referred to as a dark place determination unit, a dark place determination device, or a dark place determination means.
  • the color filter 78 is also referred to as an attenuation filter.
  • S101 is also referred to as a light emission section or a light emission step.
  • S102 is also referred to as a first image acquisition section or first image acquisition step and a second image acquisition section or second image acquisition step.
  • S108 is also referred to as an eye determination section or an eye determination step.
  • S109 is also referred to as a face recognition section or a face recognition step.
  • the second embodiment of the present disclosure shown in FIG. 8 is a modification of the first embodiment.
  • the state monitoring apparatus 200 according to the second embodiment includes a first imaging unit 110 and a second imaging unit 210 instead of the imaging unit 10 (see FIG. 2) of the first embodiment.
  • Hereinafter, the configuration by which the state monitoring apparatus 200 acquires the first face image 51 and the second face image 52 will be described in detail.
  • the first imaging unit 110 and the second imaging unit 210 are both near-infrared cameras and have a configuration corresponding to the imaging unit 10 of the first embodiment.
  • The first imaging unit 110 includes a first imaging element 111 corresponding to the imaging element 11 (see FIG. 5) of the first embodiment, and is arranged in a posture in which the imaging surface of the element 111 faces the defined area PA (see FIG. 1).
  • the first image sensor 111 receives incident light incident from the defined area PA. With the above configuration, the first imaging unit 110 generates the first face image 51 based on the output from the first imaging element 111 and sequentially outputs it to the image recognition unit 24.
  • the second imaging unit 210 includes a second imaging element 211 corresponding to the imaging element 11 (see FIG. 5), and a color filter 278 that attenuates red band light.
  • the second imaging unit 210 is arranged in a posture in which the imaging surface of the second imaging element 211 is directed to the defined area PA (see FIG. 1).
  • the second imaging element 211 receives the attenuated light by being covered with the color filter 278.
  • the second imaging unit 210 generates the second face image 52 based on the output from the second imaging element 211 and sequentially outputs it to the image recognition unit 24.
  • the entire area of the first image sensor 111 becomes the uncovered area 76
  • the entire area of the second image sensor 211 becomes the covered area 77.
  • the pixel pitches and the number of pixels of the image sensors 111 and 211 are the same. Therefore, the area of the covering region 77 is substantially equal to the area of the non-covering region 76.
  • the imaging control unit 23 outputs a control signal to each of the first imaging unit 110 and the second imaging unit 210.
  • the imaging control unit 23 sets both the first imaging element 111 and the second imaging element 211 to the exposure state in accordance with the timing when the light emission control unit 21 sets the light emitting diode 16 of the light projecting unit 15 to the light emitting state.
  • the imaging control unit 23 synchronizes the imaging timings of the first imaging element 111 and the second imaging element 211, so that the first face image 51 and the second face image 52 are captured at substantially the same timing. Is done.
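  • A schematic sketch of this synchronized capture follows; `first_camera`, `second_camera`, and `light` are hypothetical objects standing in for the first imaging unit 110, the second imaging unit 210, and the light projecting unit 15 (their methods are assumptions, not a real device API):

```python
import threading

def capture_synchronized(first_camera, second_camera, light):
    """Expose both imaging elements within a single light emission window.

    Starting both exposures inside the same emission keeps the first and
    second face images at substantially the same shooting timing, as in
    the second embodiment.
    """
    results = {}

    def expose(name, camera):
        results[name] = camera.expose()  # blocks for the exposure time

    light.on()
    threads = [
        threading.Thread(target=expose, args=("first", first_camera)),
        threading.Thread(target=expose, args=("second", second_camera)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    light.off()
    return results["first"], results["second"]
```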
  • the first image acquisition block 25 acquires the first face image 51 photographed using the first imaging element 111 from the first imaging unit 110.
  • the second image acquisition block 26 acquires the second face image 52 captured using the second image sensor 211 from the second imaging unit 210.
  • Even in this configuration, in which the units that photograph the first face image 51 and the second face image 52 are provided separately, the second face image 52 photographed using the attenuated light is generated by the second imaging unit 210.
  • Therefore, the eye position can be identified by comparing the first face image 51 and the second face image 52, and the accuracy of the face recognition performed based on the eye position is thus ensured.
  • In addition, in the second embodiment, the second imaging unit 210 is provided as a configuration separate from the first imaging unit 110. This increases the degree of freedom in selecting the imaging elements employed in each imaging unit.
  • the first imaging unit 110 is also referred to as a first imaging device or a first imaging means.
  • the second imaging unit 210 is also referred to as a second imaging device or a second imaging means.
  • the color filter 278 is also referred to as an attenuation filter.
  • the third embodiment of the present disclosure shown in FIG. 9 is another modification of the first embodiment.
  • the state monitoring apparatus 300 according to the third embodiment includes an imaging unit 310 instead of the imaging unit 10 (see FIG. 2) of the first embodiment.
  • Hereinafter, the configuration by which the state monitoring apparatus 300 acquires the first face image 51 and the second face image 52 will be described in detail.
  • the imaging unit 310 includes a color filter 378 and a switching mechanism 313 in addition to the imaging element 11.
  • the color filter 378 is configured to attenuate red band light and pass near infrared band light.
  • the color filter 378 is formed in a size that can cover the image sensor 11.
  • the switching mechanism 313 is a mechanism for moving the color filter 378.
  • The imaging unit 310 described above is provided with two mutually switchable shooting modes: an attenuation shooting mode and a non-attenuation shooting mode.
  • In the attenuation shooting mode, the imaging unit 310 moves the color filter 378 to a position covering the imaging surface of the imaging element 11 by the operation of the switching mechanism 313, so that the imaging element 11 receives the attenuated light. The imaging unit 310 can thereby generate the second face image 52 based on the output from the imaging element 11.
  • In the non-attenuation shooting mode, the imaging unit 310 retracts the color filter 378 from the imaging surface of the imaging element 11 by the operation of the switching mechanism 313, so that the imaging element 11 receives the incident light. The imaging unit 310 can thereby generate the first face image 51 based on the output from the imaging element 11.
  • The imaging control unit 23 outputs a control signal to the imaging unit 310 so that the attenuation shooting mode and the non-attenuation shooting mode are alternately repeated.
  • The image recognition unit 24 alternately acquires the first face image 51 captured in the non-attenuation shooting mode through the first image acquisition block 25 and the second face image 52 captured in the attenuation shooting mode through the second image acquisition block 26.
  • the light emission control unit 21 emits illumination light from the light projecting unit 15 in accordance with the timing at which the image sensor 11 is exposed in each shooting mode.
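  • The alternating capture of the third embodiment can be sketched as a generator; `imaging_unit` and `light` are hypothetical objects, with `set_filter` standing in for the switching mechanism 313 (the method names are assumptions for illustration):

```python
def alternate_capture(imaging_unit, light):
    """Yield (first_face_image, second_face_image) pairs by alternating modes.

    Unlike the single-exposure scheme of the first embodiment, the two
    images of each pair are taken at slightly different times, one per
    shooting mode.
    """
    while True:
        imaging_unit.set_filter(covered=False)  # non-attenuation shooting mode
        light.pulse()                           # illuminate during exposure
        first = imaging_unit.capture()          # first face image 51

        imaging_unit.set_filter(covered=True)   # attenuation shooting mode
        light.pulse()
        second = imaging_unit.capture()         # second face image 52

        yield first, second
```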
  • Even in the third embodiment described above, the second face image 52 photographed using the attenuated light is generated in the attenuation shooting mode of the imaging unit 310. Accordingly, the eye position can be identified by comparing the first face image 51 and the second face image 52, and the accuracy of the face recognition performed based on the eye position is thus ensured.
  • the imaging unit 310 is also referred to as an imaging device or imaging means.
  • the color filter 378 is also referred to as an attenuation filter.
  • In the modification shown in FIG. 10, an image sensor 411 is used instead of the image sensor 11 (see FIG. 5).
  • Each second pixel 473 of the image sensor 411 is provided with a sub pixel 474 covered with the color filter 78.
  • The imaging unit 410 having the above configuration generates the first face image 51 (see FIG. 4) based on the output from the first pixels 70 and the output from the region of each second pixel 473 excluding the sub pixel 474.
  • The imaging unit 410 generates the second face image 52 (see FIG. 4) based on the output from the sub pixels 474.
  • According to this modification, as in the first embodiment, the second face image 52 can be photographed together with the first face image 51 while the sensitivity of the image sensor 411 to near infrared light is maintained, as sketched below.
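  • The sketch below separates a frame from such a sensor into the two face images; the (main, sub) tuple representation is an assumption for illustration, not the sensor's actual readout format:

```python
def split_subpixel_frame(frame):
    """Separate one readout of image sensor 411 into the two face images.

    Each element of `frame` is assumed to be a (main_value, sub_value)
    pair for a second pixel 473 carrying a covered sub pixel 474, or
    (main_value, None) for a first pixel 70. All main outputs form the
    first face image; the sub-pixel outputs alone form the second one.
    """
    first_image, second_image = [], []
    for row in frame:
        first_image.append([main for main, sub in row])
        second_image.append([sub for main, sub in row if sub is not None])
    return first_image, second_image
```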
  • In another modification, the first imaging element and the second imaging element are provided in one imaging unit.
  • The imaging unit is further provided with a splitter that divides the incident light from the defined area PA between the imaging elements, and a color filter positioned between the splitter and the second imaging element.
  • In this way, the first imaging element forming the uncovered area and the second imaging element forming the covered area may be provided together in one imaging unit.
  • In another modification, the dark place determination block is omitted. In this modification, the eye determination block performs the determination identifying the bright part corresponding to the eyes, from among the plurality of bright parts that are eye candidates captured in the first face image 51, regardless of whether the interior is a dark place.
  • In another modification, the number of first pixels and the number of second pixels are set to be approximately the same, so that the area of the covered region is approximately the same as the area of the uncovered region.
  • In yet another modification, the number of second pixels is larger than the number of first pixels, so that the area of the covered region is wider than the area of the uncovered region.
  • The placement and retraction of the color filter by the switching mechanism may be performed electrically.
  • For example, the color filter may be configured to block light in the red band when a voltage is applied.
  • In such a modification, the configuration that applies the voltage to the color filter corresponds to the switching mechanism.
  • The ratio and layout of the pixels provided with the color filter can be changed as appropriate.
  • For example, the pixels having the color filter may be provided only in the range of the image sensor that captures the region near the eyes.
  • In a modification, the timing at which the output from the uncovered area is acquired differs from the timing at which the output from the covered area is acquired in the imaging unit.
  • Likewise, the timing at which the output from the first imaging element is acquired in the first imaging unit may differ from the timing at which the output from the second imaging element is acquired in the second imaging unit.
  • In these modifications, a shift in shooting timing between the first face image and the second face image is acceptable.
  • Image sensors such as a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor) can be appropriately employed for each image sensor in the above embodiment.
  • the frequency band of light detected by the image sensor is not limited to the near infrared band, and may include the visible light band in addition to the near infrared band.
  • The frequency band, number, arrangement, and the like of the light emitted by the light emitting diodes are desirably changed as appropriate to correspond to the specifications of the image sensor.
  • The installation position of the imaging unit and the state monitoring device, which in the above embodiments is the upper surface of the steering column 81, may be changed as appropriate as long as the defined area PA can be imaged.
  • the state monitoring device may be installed on the upper surface of the instrument panel, for example, or may be attached to a ceiling portion near the sun visor.
  • the imaging unit may be provided separately from the main body of the state monitoring device and at a position suitable for photographing the defined area PA.
  • the method for determining the prescribed area PA in the above embodiment may be changed as appropriate.
  • For example, the defined area PA may be defined to include the 95th percentile eyellipse.
  • The method for determining the defined area PA is not limited to determination from the eyellipse.
  • For example, the defined area PA may be determined experimentally by actually seating a plurality of drivers of different races, genders, ages, and so on in the driver's seat. Such a defined area PA is desirably defined in consideration of the movement of the face accompanying driving operations.
  • the plurality of functions provided by the control circuit 20 that has executed the state monitoring program may be provided by hardware and software different from the above-described control device, or a combination thereof.
  • functions corresponding to each functional block and sub-functional block may be provided by an analog circuit that performs a predetermined function without depending on a program.
  • The present disclosure is applicable not only to so-called driver status monitors for automobiles but also to state monitoring devices that monitor the state of the operators of various moving bodies (transport equipment) such as motorcycles, tricycles, ships, and aircraft.

Abstract

This state monitoring device (100) is mounted in a vehicle (1) and monitors the state of a driver using a facial image (51) of the driver. The state monitoring device emits illumination light from a projection unit (15) towards the face surface of the driver presumed to be positioned in a stipulated region (PA). The state monitoring device acquires a first face image (51) captured by receiving incoming light coming in from the stipulated region, and a second face image (52) captured by receiving attenuated light resulting from the light from the red region of the incoming light having been attenuated. Here, when a plurality of bright sections (55, 56) that are candidates for the eyes of the driver have been captured in the first face image, the bright sections not captured in the second face image among the plurality of bright sections are determined to be eyes, and on the basis of the locations of the determined eyes, face recognition is performed.

Description

State monitoring device

Cross-reference to related applications

 This disclosure is based on Japanese Patent Application No. 2012-228178 filed on October 15, 2012, the contents of which are incorporated herein by reference.
 The present disclosure relates to a technique, for use on a vehicle, for monitoring the state of a driver by using a face image that captures the face of the driver who operates the vehicle.
 Conventionally, a state monitoring device mounted on a vehicle performs face recognition using a face image of the driver's face in order to monitor the driver's state. To perform such face recognition, the state monitoring device includes a configuration that emits light toward a prescribed region defined in advance as the region where the driver's face is located, a configuration that photographs the face by receiving light incident from the prescribed region, and a configuration that recognizes the face based on the position of the eyes in the captured face image.
 The image processing apparatus of Patent Document 1, one example of such a face recognition configuration, performs face recognition using red eyes. More specifically, in a face image shot with flash emission, a person's eyes can appear as red bright parts. This red-eye phenomenon occurs mainly because light in the red band of the flash is reflected by the blood vessels of the retina. The red eye has high contrast against its surroundings, so the image processing apparatus can accurately identify the eye position in the face image from the position of the red bright part.
Patent Document 1: JP 2003-30647 A
 The inventor of the present disclosure attempted to employ the above-described red-eye-based face recognition in a state monitoring device, and found the following. When a driver wearing eyeglasses is photographed in low ambient light while a light emitting unit such as a flash illuminates the prescribed region, the captured face image contains not only the driver's red eyes but also the reflection of the light emitting unit in the eyeglasses as a bright part. When several bright parts that are eye candidates are captured around the driver's eyes in this way, the identification of the eye position in the face image can be wrong, and the accuracy of face recognition performed based on the eye position may then not be ensured.
 An object of the present disclosure is to provide a state monitoring device that can ensure the accuracy of face recognition performed based on the position of the eyes.
 To achieve the above object, according to a first aspect of the present disclosure, a state monitoring device that is mounted on a vehicle and monitors the state of the driver using a face image of the face of the driver who operates the vehicle is provided as follows. The device comprises: a light emitting unit that emits light in the red to near-infrared band toward a prescribed region defined in advance as the region where the face is located; a first image acquisition unit that acquires, as the face image, a first face image photographed by receiving incident light from the prescribed region; a second image acquisition unit that acquires, as a face image separate from the first face image, a second face image photographed by receiving attenuated light obtained by attenuating red band light from the incident light; an eye determination unit that, when a plurality of bright parts that are candidates for the driver's eyes are captured in the first face image, determines that a bright part among them that is not captured in the second face image is an eye; and a face recognition unit that recognizes the face based on the position of the eye determined by the eye determination unit.
 According to a second aspect of the present disclosure, a state monitoring method causes a computer to execute processing for monitoring the state of a driver using a face image, captured on board a vehicle, of the face of the driver who operates the vehicle. The method comprises: a light emitting step of emitting light in the red to near-infrared band toward a prescribed region defined in advance as the region where the face is located; a first image acquisition step of acquiring, as the face image, a first face image photographed by receiving incident light from the prescribed region; a second image acquisition step of acquiring, as a face image separate from the first face image, a second face image photographed by receiving attenuated light obtained by attenuating red band light from the incident light; an eye determination step of determining, when a plurality of bright parts that are candidates for the driver's eyes are captured in the first face image, that a bright part among them that is not captured in the second face image is an eye; and a face recognition step of recognizing the face based on the position of the eye determined in the eye determination step.
 According to a third aspect of the present disclosure, a non-transitory storage medium includes instructions that are read and executed by a computer, the instructions implementing the state monitoring method of the second aspect; the method is computer-implemented.
 According to these aspects, in the second face image photographed by receiving the attenuated light, the red band light has been attenuated from the incident light, so the driver's eyes are unlikely to appear as bright parts. Therefore, when a plurality of bright parts that are eye candidates are captured in the first face image, determining those among them that are not captured in the second face image to be eyes allows the eye position in the face image to be identified accurately even when the light emitting unit is reflected in the driver's eyeglasses. The accuracy of face recognition performed based on the eye position can thus be ensured.
 The above and other objects, features, and advantages of the present disclosure will become clearer from the following detailed description with reference to the accompanying drawings. In the drawings:
FIG. 1 is a diagram for explaining the arrangement in a vehicle of the state monitoring device according to a first embodiment of the present disclosure;
FIG. 2 is a block diagram for explaining the electrical configuration of the state monitoring device according to the first embodiment;
FIG. 3 is a perspective view for explaining the mechanical configuration of the state monitoring device;
FIG. 4 is a diagram schematically showing an example of the first face image;
FIG. 5 is a schematic diagram for explaining the configuration of the image sensor;
FIG. 6 is a diagram schematically showing an example of the second face image;
FIG. 7 is a flowchart showing the processing performed by the image recognition unit;
FIG. 8 is a block diagram for explaining the electrical configuration of the state monitoring device according to a second embodiment;
FIG. 9 is a block diagram for explaining the electrical configuration of the state monitoring device according to a third embodiment; and
FIG. 10 is a schematic diagram showing a modification of FIG. 5.
 Hereinafter, a plurality of embodiments of the present disclosure will be described with reference to the drawings. Corresponding components in the embodiments are given the same reference numerals, and duplicate description may be omitted. When only part of a configuration is described in an embodiment, the configuration of a previously described embodiment can be applied to the remaining part. Besides the combinations of configurations explicitly indicated in the description of the embodiments, configurations of the embodiments can also be partially combined even if not explicitly indicated, provided no problem arises from the combination. Such unstated combinations of the configurations described in the embodiments are also disclosed by the following description.
 本開示の第一実施形態による状態監視装置100は、図1に示すように、移動体としての車両1搭載され、車両を運転あるいは操縦する操縦者(以下、運転者とも言及する)の状態を監視するドライバステータスモニタである。状態監視装置100は、図2に示すように、撮像部10、発光部15、及び制御回路20、並びにこれらの構成を収容する筐体60(図3参照)を備えている。また状態監視装置100は、車両に搭載されたアクチュエーション部90及び車両制御装置96と接続されている。 As shown in FIG. 1, the state monitoring device 100 according to the first embodiment of the present disclosure is mounted on a vehicle 1 as a moving body, and indicates the state of a driver (hereinafter also referred to as a driver) that drives or controls the vehicle. It is a driver status monitor to be monitored. As shown in FIG. 2, the state monitoring apparatus 100 includes an imaging unit 10, a light emitting unit 15, a control circuit 20, and a housing 60 (see FIG. 3) that houses these configurations. The state monitoring device 100 is connected to an actuation unit 90 and a vehicle control device 96 mounted on the vehicle.
The imaging unit 10 shown in FIGS. 1 and 2 is a device that, within the state monitoring device 100 installed on the upper surface of a steering column 81, generates a face image 51 (see also FIG. 4) in which the driver's face is photographed. The imaging unit 10 photographs a prescribed area PA defined in advance within the cabin of the vehicle 1. The prescribed area PA includes the region in which the face of a driver seated in the driver's seat is assumed to be located. Specifically, the prescribed area PA is defined based on the eyellipse derived from the eye range of each of the driver's eyes, and is defined, for example, so as to encompass the 99th-percentile eyellipse.
The imaging unit 10 is a so-called near-infrared camera, configured by combining an imaging element 11 with an optical lens, an optical filter, and the like. The imaging element 11 uses a plurality of pixels arranged along an imaging surface to generate electrical signals corresponding to the intensity of the received light. The imaging element 11 is arranged with its imaging surface facing the prescribed area PA. The imaging element 11 enters an exposure state in response to a control signal from the control circuit 20 and receives incident light from the prescribed area PA, whereby a monochrome face image 51 rendered in shades of white and black is generated. The face images 51 thus captured are sequentially output from the imaging unit 10 to the control circuit 20.
The light projecting unit 15 has a plurality of light emitting diodes 16. The light emitting diodes 16 are arranged so as to flank the imaging unit 10 (see FIG. 3) and emit illumination light in the red to near-infrared band toward the prescribed area PA. The on and off states of the light emitting diodes 16 are controlled by the current supplied from the control circuit 20.
The control circuit 20 is connected to the imaging unit 10, the light projecting unit 15, the actuation unit 90, and the like, and controls the operation of these components. The control circuit 20 is built around a microcomputer that includes a processor for performing various kinds of arithmetic processing, a RAM serving as a work area for that processing, and a flash memory storing the programs used for the processing. In addition, the control circuit 20 includes a power supply circuit that supplies power to the imaging unit 10, the light projecting unit 15, and the like.
By executing a prestored state monitoring program on the processor, the control circuit 20 provides a plurality of functional blocks: a light emission control unit 21, an imaging control unit 23, an image recognition unit 24, a state determination unit 31, a warning control unit 33, and the like. Each functional block is also referred to as a functional section.
The light emission control unit 21 is a functional block related to light emission control of the light projecting unit 15. By applying a predetermined current to the light emitting diodes 16, the light emission control unit 21 places them in the light emitting state. Based on control values calculated by the image recognition unit 24, the light emission control unit 21 causes the light projecting unit 15 to emit illumination light in synchronization with the timing at which the imaging unit 10 is placed in the exposure state.
The imaging control unit 23 is a functional block related to imaging control of the imaging unit 10. Based on control values calculated by the image recognition unit 24, the imaging control unit 23 controls the exposure start timing, gain, exposure time, and the like of the imaging unit 10.
The image recognition unit 24 is a functional block related to image processing of the face image 51 and the like. To obtain a face image 51 from which the driver's face can be extracted, the image recognition unit 24 sets the imaging conditions of the imaging unit 10 and the light emission conditions of the light projecting unit 15. The image recognition unit 24 then calculates the control values used by the imaging control unit 23 and the light emission control unit 21 so that the imaging unit 10 and the light projecting unit 15 operate in accordance with the set imaging and light emission conditions. The image recognition unit 24 acquires the face image 51 thus captured from the imaging unit 10. By image processing of the acquired face image 51, the image recognition unit 24 calculates values related to the orientation of the driver's face and the degree to which each eye is open (hereinafter, the "eye-open degree"), values related to the driver's degree of drowsiness, and so on.
The state determination unit 31 compares the values calculated by the image recognition unit 24 with preset thresholds. Through this comparison, the state determination unit 31 estimates whether signs of, for example, inattentive driving or drowsy driving have been detected. When such a sign is detected, the state determination unit 31 determines that a state requiring a warning to the driver has arisen.
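For illustration only, the threshold comparison performed by the state determination unit 31 might look like the minimal Python sketch below; the input names and every threshold value are hypothetical placeholders of this sketch, since the disclosure does not specify them.

    def needs_warning(face_yaw_deg, eye_open_degree, drowsiness,
                      yaw_limit=30.0, open_limit=0.2, drowsy_limit=0.7):
        """Compare values computed from the face image with preset
        thresholds and flag signs of inattentive or drowsy driving.
        All limit values here are hypothetical placeholders."""
        looking_aside = abs(face_yaw_deg) > yaw_limit
        dozing = eye_open_degree < open_limit or drowsiness > drowsy_limit
        return looking_aside or dozing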
The warning control unit 33 is connected to the actuation unit 90. When the state determination unit 31 determines that a situation requiring a warning to the driver has arisen, the warning control unit 33 outputs a control signal to the actuation unit 90. The warning control unit 33 thereby warns the driver by operating the actuation unit 90.
As shown in FIG. 3, the housing 60 is composed of a main body member 63, a front cover member 66, a rear cover member (not shown), and the like.
The main body member 63 holds a sub-board 62 on which the light projecting unit 15, the imaging unit 10, and the like are mounted. A main board 61 on which the control circuit 20 is formed is attached to the sub-board 62 in a posture orthogonal to it. The main body member 63 is provided with an insertion hole 64 and light distribution portions 65. The insertion hole 64 is provided in the horizontally central portion of the main body member 63 and receives the imaging unit 10 mounted on the sub-board 62. In cooperation with a light-shielding hole provided in the sub-board 62, the insertion hole 64 performs a light-shielding function between the light projecting unit 15 and the imaging unit 10, preventing light from leaking from the light projecting unit 15 into the imaging unit 10. The light distribution portions 65 are arranged so as to flank the insertion hole 64 in the horizontal direction and face the light projecting unit 15 mounted on the sub-board 62. The light distribution portions 65 transmit the light emitted from the light projecting unit 15 while distributing it toward the prescribed area PA (see FIG. 1).
The front cover member 66 is provided with a visible light filter 67. The visible light filter 67 mainly transmits light in the red to near-infrared band used for generating the face image 51 (see FIG. 4), while blocking light in the visible band that is unnecessary for generating the face image 51. The visible light filter 67 covers an opening 68 formed in the front cover member 66 at a position facing the light distribution portions 65. The rear cover member is arranged on the opposite side of the main body member 63 from the front cover member 66. By covering the boards 61 and 62, the rear cover member protects them from dust and the like in the surrounding atmosphere.
The actuation unit 90 shown in FIG. 2 includes, for example, a speaker 91, a seat vibration device 93, and an air conditioner 95 mounted on the vehicle 1 (see FIG. 1). The speaker 91 alerts the driver by reproducing audio data based on a control signal from the warning control unit 33. The seat vibration device 93, installed inside the seat surface of the driver's seat or the like, alerts the driver by vibrating the driver's seat based on a control signal from the warning control unit 33. The air conditioner 95 alerts the driver through operations such as introducing outside air into the cabin of the vehicle 1 based on a control signal from the warning control unit 33.
Next, the face image 51 captured by the state monitoring device 100 will be described in more detail.
As shown in FIG. 4, when the driver's face is photographed under low ambient light while the prescribed area PA is irradiated with the illumination light, both of the driver's eyes appear as bright portions 55. This so-called red-eye phenomenon occurs because red-band light contained in the ambient and illumination light is reflected by the capillaries of the retina. The red eyes thus exhibit high contrast against their surroundings. The image recognition unit 24 (see FIG. 2) can therefore accurately identify the positions of the eyes in the face image 51 based on the positions of the bright portions 55 appearing in it.
Suppose, however, that a driver wearing eyeglasses is photographed under low ambient light while the prescribed area PA is irradiated with the illumination light. In this case, not only the driver's so-called red eyes but also the light emitting diodes 16 (see FIG. 2) reflected in the eyeglasses are captured in the face image 51, as bright portions 56. When multiple eye candidates, the bright portions 55 and 56, are thus captured around the driver's eyes, the identification of the eye positions in the face image 51 can go wrong, and the accuracy of the face recognition performed based on the eye positions may no longer be ensured.
To improve the accuracy of face recognition, the configuration of the state monitoring device 100 in FIG. 2 and the processing performed by the control circuit 20 are described in detail below.
As shown in FIG. 5, a covered region 77 and an uncovered region 76 are formed on the imaging element 11. The covered region 77 is the portion of the imaging surface covered by a color filter 78. The color filter 78 attenuates red-band light while passing near-infrared-band light. The covered region 77 therefore receives attenuated light, namely light that has entered from the prescribed area PA (see FIG. 1) and passed through the color filter 78 with its red band attenuated. The uncovered region 76 lies outside the covered region 77 and is not covered by the color filter 78; it therefore receives the incident light entering from the prescribed area PA.
The imaging element 11 is a so-called VGA-size element in which, for example, 640 pixels in the horizontal direction H and 480 pixels in the vertical direction V are arrayed. Among its pixels, the imaging element 11 has first pixels 70 and second pixels 73. The first pixels 70 are pixels not covered by the color filter 78, so the entirety of each first pixel 70 belongs to the uncovered region 76. The second pixels 73 are pixels covered by the color filter 78, so the entirety of each second pixel 73 belongs to the covered region 77. The number of second pixels 73 provided on the imaging surface is smaller than the number of first pixels 70; hence, on the imaging surface of the imaging element 11, the area of the covered region 77 is smaller than that of the uncovered region 76.
With the imaging element 11 configured as above, the imaging unit 10 shown in FIG. 2 generates, from the output of the first pixels 70, a face image captured by receiving the incident light (hereinafter, for convenience, the "first face image" 51). From the output of the second pixels 73 (see FIG. 5), the imaging unit 10 generates a face image captured by receiving the attenuated light (hereinafter, for convenience, the "second face image" 52). It follows that the first face image 51 has more pixels than the second face image 52. Furthermore, the imaging unit 10 can read the output of the second pixels 73 while reading the output of the first pixels 70 (see FIG. 5), so the first face image 51 and the second face image 52 are captured substantially simultaneously.
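As a rough illustration of this single-sensor readout, the Python sketch below splits one raw VGA frame into the two face images using a boolean mask of the filter-covered pixels. The sparse mask layout, the NumPy usage, and the neighbor fill for covered positions are assumptions of this sketch; the disclosure does not specify the actual filter pattern.

    import numpy as np

    def covered_mask(h=480, w=640, step=4):
        """Hypothetical layout: one filter-covered second pixel per
        step x step block, so the covered region stays much smaller
        than the uncovered region."""
        m = np.zeros((h, w), dtype=bool)
        m[::step, ::step] = True
        return m

    def split_frame(raw, mask, step=4):
        """Build the high-resolution first face image from the
        uncovered pixels and the low-resolution second face image
        from the covered pixels of one simultaneously exposed frame."""
        first = raw.astype(float).copy()
        # Covered positions hold attenuated values; patch them from a
        # horizontally neighboring uncovered pixel (illustrative fill).
        first[mask] = np.roll(raw, 1, axis=1)[mask]
        second = raw[::step, ::step].astype(float)  # covered pixels only
        return first, second

Because both images come from a single exposure of the same element, they are aligned and captured at substantially the same timing, mirroring the behavior described above.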
To realize highly accurate face recognition using the first face image 51 and the second face image 52, the image recognition unit 24 has, as sub-functional blocks, a first image acquisition block 25, a second image acquisition block 26, a dark place determination block 28, and an eye determination block 27.
The first image acquisition block 25 acquires the first face image 51 from the imaging unit 10, while the second image acquisition block 26 acquires the second face image 52 from the imaging unit 10 as a face image separate from the first face image 51.
The dark place determination block 28 determines, based on information acquired from the vehicle control device 96, whether the cabin of the vehicle 1 (see FIG. 1) is dark. The vehicle control device 96 controls various devices mounted on the vehicle 1 and, as one of its functions, can control the operation of the headlamps of the vehicle 1. Based on the headlamp state information, the dark place determination block 28 determines that the cabin of the vehicle 1 is dark when the headlamps are lit, and that it is not dark when the headlamps are off. The switching of the headlamps between the lit and unlit states by the vehicle control device 96 may be performed based on the detection result of an ambient light sensor mounted on the vehicle 1 (see FIG. 1), or based on the operation state of a changeover switch provided on the steering column 81 (see FIG. 1).
When multiple bright portions 55 and 56 that are candidates for the driver's eyes shown in FIG. 4 are captured in the first face image 51, the eye determination block 27 determines which of those candidate bright portions corresponds to the actual eyes. Specifically, as shown in FIG. 6, in the second face image 52 captured by receiving the attenuated light, the red-band light has been attenuated from the incident light, so the driver's eyes are unlikely to become red eyes. That is, in the second face image 52, the driver's eyes hardly appear as bright portions 55 as they do in FIG. 4. Consequently, among the candidate bright portions 55 and 56 captured in the first face image 51, those not captured in the second face image 52 of FIG. 6 are highly likely to be the bright portions 55 produced by the actual eyes.
Based on the principle described above, the eye determination block 27 shown in FIG. 2 performs a process of comparing the first face image 51 with the second face image 52, and determines that, among the multiple bright portions 55 and 56 (see FIG. 4) appearing in the first face image 51, the bright portions 55 not captured in the second face image 52 are the eyes. This determination identifying the eye positions is performed on the condition that the dark place determination block 28 has determined that the cabin is dark.
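This comparison can be illustrated by the following sketch, which labels connected bright regions in the first face image and keeps only those with no bright counterpart at the same location in the second face image. The brightness threshold, the use of scipy.ndimage, and the assumption that the two images are aligned and equal in size are all simplifications of this sketch, not requirements stated in the disclosure.

    import numpy as np
    from scipy import ndimage

    def eye_positions(first_img, second_img, thresh=200):
        """Return centroids of bright regions present in the first face
        image but absent from the second (red-attenuated) face image.
        Red-eye reflections vanish in the second image, whereas LED
        glints on eyeglasses remain bright in both and are rejected."""
        bright1 = first_img > thresh
        bright2 = second_img > thresh
        labels, n = ndimage.label(bright1)
        eyes = []
        for i in range(1, n + 1):
            region = labels == i
            if not np.any(bright2 & region):  # no counterpart: actual eye
                eyes.append(ndimage.center_of_mass(region))
        return eyes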
Next, the processing performed by the image recognition unit 24 to realize the eye position determination described so far will be explained in detail with reference to FIG. 7. The processing shown in FIG. 7 is started by the image recognition unit 24 when the ignition of the vehicle 1 (see FIG. 1) is turned on.
In S101, the light emission control unit 21 outputs a control signal instructing the light projecting unit 15 to emit light, the imaging control unit 23 outputs a control signal instructing the imaging unit 10 to capture images, and the process proceeds to S102. Based on the control signals output in S101, the light projecting unit 15 emits illumination light toward the prescribed area PA, and the imaging unit 10 captures the first face image 51 and the second face image 52 by receiving light, including the illumination light, from the prescribed area PA.
Here, each flowchart described in this application, and the processing it represents, is composed of a plurality of sections (also referred to as steps), each expressed as, for example, S101. Each section can be divided into a plurality of subsections, and conversely a plurality of sections can be combined into a single section. Each section configured in this way can also be referred to as a device, a module, or a means.
In S102, the first face image 51 and the second face image 52 based on the control signals output in S101 are acquired by the first image acquisition block 25 and the second image acquisition block 26, and the process proceeds to S103.
In S103, based on the headlamp state information acquired from the vehicle control device 96, it is determined whether the cabin of the vehicle 1 is dark. If the determination in S103 is affirmative, the process proceeds to S105; if negative, the process proceeds to S104. In S104, face recognition that does not use red eyes is performed, and the process proceeds to S110. In S105, which is performed on the condition of an affirmative determination in S103, the first face image 51 acquired in S102 is image-processed so as to extract from it the bright portions 55 and 56 that are candidates for the driver's eyes, and the process proceeds to S106.
In S106, it is determined whether, as a result of the image processing performed in S105, multiple candidate bright portions 55 and 56 have been captured within the range assumed in advance to be the vicinity of the eyes. If the determination in S106 is negative, the process proceeds to S107, where face recognition is performed using the bright portions 55, the red eyes extracted in S105, and then to S110. If the determination in S106 is affirmative, the process proceeds to S108.
In S108, by comparing the first face image 51 with the second face image 52, the bright portions 55 that, among the multiple bright portions 55 and 56 captured in the first face image 51, are not captured in the second face image 52 are determined to be the driver's eyes, and the process proceeds to S109. In S109, face recognition is performed using the bright portions 55 determined in S108 to be the eyes (the red eyes), and the process proceeds to S110.
In S110, the face recognition result of S104, S107, or S109 is output to the state determination unit 31, and the process proceeds to S111. Based on the face recognition result output in S110, the state determination unit 31 determines whether a state requiring a warning to the driver, such as a sign of inattentive driving or drowsy driving, has arisen.
In S111, it is determined whether the ignition of the vehicle 1 remains on. If the ignition has been turned off and the determination in S111 is negative, the processing ends; if the determination is affirmative, the process returns to S101.
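Read end to end, the flowchart of FIG. 7 (S101 to S111) amounts to the loop sketched below, with every hardware and recognition step stubbed out as a caller-supplied function; all names and signatures are assumptions of this sketch rather than an interface defined by the disclosure.

    def monitoring_loop(ignition_on, emit_and_capture, cabin_is_dark,
                        extract_candidates, reject_glints,
                        recognize_face, report):
        """One pass per captured frame through S101-S111 of FIG. 7."""
        while ignition_on():                                # S111
            first, second = emit_and_capture()              # S101, S102
            if not cabin_is_dark():                         # S103: negative
                result = recognize_face(first, eyes=None)   # S104
            else:
                cands = extract_candidates(first)           # S105
                if len(cands) > 2:                          # S106: more than two eyes' worth
                    cands = reject_glints(cands, second)    # S108
                result = recognize_face(first, eyes=cands)  # S107 or S109
            report(result)                                  # S110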
According to the first embodiment described so far, in the second face image 52 captured by receiving the attenuated light, the red-band light has been attenuated from the incident light, so the driver's eyes hardly appear as bright portions 55. Therefore, among the multiple bright portions 55 and 56 captured in the first face image 51, the bright portions 55 not captured in the second face image 52 are determined to be the eyes. Even when the light emitting diodes 16 are reflected in the driver's eyeglasses, the eye positions in the first face image 51 can thus be identified accurately, and the accuracy of the face recognition performed based on the eye positions can be ensured.
In addition, in the imaging element 11 used in the first embodiment, the second pixels 73, which form part of the element, are covered by the color filter 78 as the covered region 77. The imaging unit 10 can therefore obtain, from a single imaging element 11, both the output for generating the second face image 52 and the output for generating the first face image 51. A configuration in which part of the imaging element 11 is covered by the color filter 78 is thus particularly well suited to the state monitoring device 100, which identifies the eye positions by comparing the first face image 51 with the second face image 52.
The second face image 52 in the first embodiment is a face image needed mainly for identifying the eye positions, so it need not be as sharp as the first face image 51. The number of second pixels 73 in the imaging element 11 is accordingly made smaller than the number of first pixels 70. Since the area of the covered region 77 is thereby made smaller than that of the uncovered region 76, the first face image 51 based on the output of the uncovered region 76 can remain a sharp image that maintains high resolution and makes effective use of the illumination light. The accuracy of the face recognition performed by the image recognition unit 24 using the first face image 51 can therefore be reliably ensured.
Furthermore, according to the first embodiment, the imaging unit 10 can generate the first face image 51 and the second face image 52 from outputs acquired at substantially the same timing, so the difference in capture timing between the two images can be substantially eliminated. A situation in which the captured position of the driver shifts between the first face image 51 and the second face image 52 can thus be avoided. The accuracy of the eye positions identified by comparing the first face image 51 with the second face image 52, and hence the accuracy of the face recognition, can therefore be ensured all the more.
In addition, in the first embodiment, the identification of the eye positions by comparing the first face image 51 with the second face image 52 is performed on the condition that the cabin of the vehicle 1 has been determined to be dark. As described above, it is mainly under low ambient light that the bright portions 55 due to red eyes and the bright portions 56 due to the light emitting diodes 16 reflected in eyeglasses both appear in the first face image 51. Performing the eye position identification only when the cabin has been determined to be dark therefore keeps the face recognition accuracy high while reducing the processing load on the state monitoring device 100.
In the first embodiment, the vehicle 1 is also referred to as a moving body. The imaging unit 10 is also referred to as an imaging device or imaging means. The light projecting unit 15 is also referred to as a light emitting unit, light emitting device, or light emitting means. The image recognition unit 24 is also referred to as a face recognition unit, face recognition device, or face recognition means. The first image acquisition block 25 is also referred to as a first image acquisition unit, first image acquisition device, or first image acquisition means. The second image acquisition block 26 is also referred to as a second image acquisition unit, second image acquisition device, or second image acquisition means. The eye determination block 27 is also referred to as an eye determination unit, eye determination device, or eye determination means. The dark place determination block 28 is also referred to as a dark place determination unit, dark place determination device, or dark place determination means. The color filter 78 is also referred to as an attenuation filter. S101 is also referred to as a light emission section or light emission step. S102 is also referred to as a first image acquisition section or first image acquisition step, and as a second image acquisition section or second image acquisition step. S108 is also referred to as an eye determination section or eye determination step. S109 is also referred to as a face recognition section or face recognition step.
(Second embodiment)
The second embodiment of the present disclosure, shown in FIG. 8, is a modification of the first embodiment. A state monitoring device 200 according to the second embodiment has a first imaging unit 110 and a second imaging unit 210 in place of the imaging unit 10 (see FIG. 2) of the first embodiment. The configuration of the state monitoring device 200 for acquiring the first face image 51 and the second face image 52 is described in detail below with reference to FIG. 8.
The first imaging unit 110 and the second imaging unit 210 are both near-infrared cameras corresponding to the imaging unit 10 of the first embodiment. The first imaging unit 110 has a first imaging element 111, corresponding to the imaging element 11 (see FIG. 5) of the first embodiment, and is arranged with the imaging surface of that element facing the prescribed area PA (see FIG. 1). The first imaging element 111 receives the incident light entering from the prescribed area PA. With this configuration, the first imaging unit 110 generates the first face image 51 based on the output of the first imaging element 111 and sequentially outputs it to the image recognition unit 24.
The second imaging unit 210 has a second imaging element 211, corresponding to the imaging element 11 (see FIG. 5), and a color filter 278 that attenuates red-band light. The second imaging unit 210 is arranged with the imaging surface of the second imaging element 211 facing the prescribed area PA (see FIG. 1). Being covered by the color filter 278, the second imaging element 211 receives the attenuated light. With this configuration, the second imaging unit 210 generates the second face image 52 based on the output of the second imaging element 211 and sequentially outputs it to the image recognition unit 24.
In this configuration, the entire first imaging element 111 forms the uncovered region 76, and the entire second imaging element 211 forms the covered region 77. In the second embodiment, the pixel pitch and pixel count of the imaging elements 111 and 211 are identical, so the area of the covered region 77 is substantially equal to that of the uncovered region 76.
The imaging control unit 23 outputs a control signal to each of the first imaging unit 110 and the second imaging unit 210. The imaging control unit 23 places both the first imaging element 111 and the second imaging element 211 in the exposure state in accordance with the timing at which the light emission control unit 21 places the light emitting diodes 16 of the light projecting unit 15 in the light emitting state. With the imaging control unit 23 thus synchronizing the capture timing of the first imaging element 111 and the second imaging element 211, the first face image 51 and the second face image 52 are captured at substantially the same timing.
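For illustration, this synchronization could be sketched as follows, with the LEDs and the two sensor exposures represented by caller-supplied functions; the disclosure defines no software interface, so every name here is hypothetical.

    import threading

    def capture_pair(led_on, led_off, expose_first, expose_second):
        """Expose both imaging elements while the LEDs are lit, so the
        first and second face images are taken at substantially the
        same timing (second embodiment)."""
        led_on()
        try:
            t1 = threading.Thread(target=expose_first)   # element 111
            t2 = threading.Thread(target=expose_second)  # element 211
            t1.start(); t2.start()
            t1.join(); t2.join()
        finally:
            led_off()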
With this configuration, the first image acquisition block 25 acquires the first face image 51 captured using the first imaging element 111 from the first imaging unit 110, while the second image acquisition block 26 acquires the second face image 52 captured using the second imaging element 211 from the second imaging unit 210. In this way, separate imaging units may be provided for capturing the first face image 51 and the second face image 52.
In the second embodiment described so far as well, the second face image 52 captured using the attenuated light is generated by the second imaging unit 210. The eye positions can therefore be identified by comparing the first face image 51 with the second face image 52, and the accuracy of the face recognition performed based on the eye positions is ensured.
In addition, in the second embodiment, the second imaging unit 210 is provided separately from the first imaging unit 110. This preserves freedom in choosing the imaging elements employed as the elements 111 and 211, making it even easier to acquire the first face image 51 and the second face image 52 at resolutions suited to identifying the eye positions.
In the second embodiment, the first imaging unit 110 is also referred to as a first imaging device or first imaging means. The second imaging unit 210 is also referred to as a second imaging device or second imaging means. The color filter 278 is also referred to as an attenuation filter.
(Third embodiment)
The third embodiment of the present disclosure, shown in FIG. 9, is another modification of the first embodiment. A state monitoring device 300 according to the third embodiment has an imaging unit 310 in place of the imaging unit 10 (see FIG. 2) of the first embodiment. The configuration of the state monitoring device 300 for acquiring the first face image 51 and the second face image 52 is described in detail below with reference to FIG. 9.
The imaging unit 310 has, in addition to the imaging element 11, a color filter 378 and a switching mechanism 313. The color filter 378 attenuates red-band light and passes near-infrared-band light, and is formed large enough to cover the imaging element 11. The switching mechanism 313 is a mechanism for moving the color filter 378.
The imaging unit 310 is provided with two mutually switchable capture modes: an attenuation capture mode and a non-attenuation capture mode. In the attenuation capture mode, the imaging unit 310 moves the color filter 378, by operation of the switching mechanism 313, to a position covering the imaging surface of the imaging element 11. With the imaging element 11 thus covered by the color filter 378, the element receives the attenuated light, so the imaging unit 310 can generate the second face image 52 based on the output of the imaging element 11.
In the non-attenuation capture mode, on the other hand, the imaging unit 310 retracts the color filter 378 from the imaging surface of the imaging element 11 by operation of the switching mechanism 313. The imaging element 11 then receives the incident light, so the imaging unit 310 can generate the first face image 51 based on its output.
The imaging control unit 23 outputs a control signal to the imaging unit 310 so that the attenuation capture mode and the non-attenuation capture mode are repeated alternately. In the imaging unit 310, the first face image 51 and the second face image 52 are thereby captured alternately, with only a slight time difference. In the image recognition unit 24, the acquisition by the first image acquisition block 25 of the first face image 51 captured in the non-attenuation capture mode and the acquisition by the second image acquisition block 26 of the second face image 52 captured in the attenuation capture mode are accordingly performed alternately. The light emission control unit 21 causes the light projecting unit 15 to emit illumination light in accordance with the timing at which the imaging element 11 is exposed in each capture mode.
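A minimal sketch of this alternation follows, assuming a movable-filter control and a capture function as placeholders; neither is an interface defined by the disclosure.

    def alternating_frames(set_filter_over_sensor, capture):
        """Yield (first, second) face-image pairs by alternating the
        non-attenuation mode (filter retracted) and the attenuation
        mode (filter over the sensor), as in the third embodiment."""
        while True:
            set_filter_over_sensor(False)   # non-attenuation mode
            first = capture()               # first face image 51
            set_filter_over_sensor(True)    # attenuation mode
            second = capture()              # second face image 52
            yield first, second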
In the third embodiment described so far as well, the second face image 52 captured using the attenuated light is generated in the attenuation capture mode of the imaging unit 310. The eye positions can therefore be identified by comparing the first face image 51 with the second face image 52, and the accuracy of the face recognition performed based on the eye positions is ensured.
In the third embodiment, the imaging unit 310 is also referred to as an imaging device or imaging means. The color filter 378 is also referred to as an attenuation filter.
(Other embodiments)
Although a plurality of embodiments of the present disclosure have been described above, the present disclosure is not to be construed as limited to those embodiments and can be applied to various embodiments and combinations without departing from the gist of the present disclosure.
In yet another modification of the first embodiment, an imaging element 411 shown in FIG. 10 is used in place of the imaging element 11 (see FIG. 5). Each second pixel 473 of the imaging element 411 is provided with a sub-pixel 474 covered by the color filter 78. An imaging unit 410 with this configuration generates the first face image 51 (see FIG. 4) based on the output of the first pixels 70 and the output of the regions of the second pixels 473 excluding the sub-pixels 474, and generates the second face image 52 (see FIG. 6) based on the output of the sub-pixels 474. Even in this form, in which the covered region 77 is provided as the sub-pixels 474, the second face image 52 can be captured together with the first face image 51 while maintaining the sensitivity of the imaging element 411 to near-infrared light, as in the first embodiment.
In a modification of the second embodiment, the first imaging element and the second imaging element are provided in a single imaging unit. This imaging unit is further provided with a splitter that divides the incident light from the prescribed area PA so that it reaches each imaging element, and a color filter positioned between the splitter and the second imaging element. As in this configuration, the first imaging element forming the uncovered region and the second imaging element forming the covered region may be provided together in one imaging unit.
In a modification of the above embodiments, the dark place determination block is omitted. In that case, regardless of whether the vehicle cabin is dark, the eye determination block performs the determination identifying the bright portion corresponding to the eyes from among the candidate bright portions whenever multiple eye candidates are captured in the first face image 51.
In a modification of the first embodiment, the number of first pixels and the number of second pixels are set to be roughly equal, so the area of the covered region is roughly equal to that of the uncovered region. In yet another modification, the number of second pixels is made larger than the number of first pixels, so the area of the covered region is wider than that of the uncovered region.
In a modification of the third embodiment, the placement and retraction of the color filter by the switching mechanism may be performed electrically. For example, the color filter may be configured to block red-band light when a voltage is applied. In such a form, the arrangement that applies the voltage to the color filter corresponds to the switching mechanism.
In modifications of the above embodiments, the proportion and layout of the pixels provided with the color filter can be changed as appropriate. For example, the pixels having the color filter may be confined to the range of the imaging element that images the vicinity of the eyes.
In a modification of the first embodiment, the timing at which the imaging unit reads the output of the uncovered region differs from the timing at which it reads the output of the covered region. Likewise, in a modification of the second embodiment, the timing at which the first imaging unit reads the output of the first imaging element differs from the timing at which the second imaging unit reads the output of the second imaging element. A difference in capture timing between the first face image and the second face image is thus acceptable.
For each imaging element in the above embodiments, an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor can be employed as appropriate. The frequency band of the light detected by the imaging element is not limited to the near-infrared band and may include the visible band in addition to it. It is further desirable that the frequency band, number, and arrangement of the light emitting diodes be changed as appropriate to match the specifications of the imaging element.
The installation position of the imaging unit and the state monitoring device, which in the above embodiments is the upper surface of the steering column 81, may be changed as appropriate as long as the prescribed area PA can be photographed. The state monitoring device may, for example, be installed on the upper surface of the instrument panel, or attached to the ceiling near the sun visor. The imaging unit may also be provided separately from the main body of the state monitoring device, at a position suited to photographing the prescribed area PA.
The method of determining the prescribed area PA in the above embodiments may be changed as appropriate. For example, the prescribed area PA may be defined so as to encompass the 95th-percentile eyellipse. The method of determining the prescribed area PA is also not limited to derivation from the eyellipse; for example, the prescribed area PA may be determined experimentally by actually seating a plurality of drivers of different races, sexes, ages, and so on in the driver's seat. Such a prescribed area PA is desirably defined with the movement of the face during driving taken into account.
The plurality of functions provided in the above embodiments by the control circuit 20 executing the state monitoring program may be provided by hardware and software different from the control device described above, or by a combination thereof. For example, the functions corresponding to the functional blocks and sub-functional blocks may be provided by analog circuits that perform the prescribed functions without relying on a program.
The above embodiments show an example in which the present disclosure is applied to a state monitoring device that is mounted on a vehicle and monitors the state of the vehicle's driver. However, the present disclosure is applicable not only to so-called driver status monitors for automobiles but also to state monitoring devices that monitor the state of the operator of various moving bodies (transport equipment) serving as vehicles, such as two-wheeled vehicles, three-wheeled vehicles, ships, and aircraft.
Although the present disclosure has been described with reference to the embodiments, it is to be understood that the disclosure is not limited to those embodiments or structures. The present disclosure encompasses various modifications and variations within an equivalent range. In addition, various combinations and forms, as well as other combinations and forms including only one element, more, or less, fall within the scope and spirit of the present disclosure.

Claims (9)

1.  A state monitoring device that is mounted on a vehicle (1) and monitors the state of an operator who steers the vehicle using face images (51, 52) in which the operator's face is photographed, the state monitoring device comprising:
     a light emitting unit (15, S101) that emits light in a red to near-infrared band toward a prescribed area (PA) defined in advance as the area in which the face is located;
     a first image acquisition unit (25, S102) that acquires, as one of the face images, a first face image (51) captured by receiving incident light entering from the prescribed area;
     a second image acquisition unit (26, S102) that acquires, as a face image separate from the first face image, a second face image (52) captured by receiving attenuated light obtained by attenuating red-band light from the incident light;
     an eye determination unit (27, S108) that, when multiple bright portions (55, 56) that are candidates for the operator's eyes are captured in the first face image, determines that a bright portion (55) among the multiple bright portions that is not captured in the second face image is an eye; and
     a face recognition unit (24, S109) that recognizes the face based on the eye position determined by the eye determination unit.
2.  The state monitoring device according to claim 1, further comprising an imaging unit (10) that has an attenuation filter (78) attenuating red-band light and an imaging element (11) forming a covered region (77), which receives the attenuated light by being covered by the attenuation filter, and an uncovered region (76), which receives the incident light at positions outside the covered region, the imaging unit generating the first face image based on output from the uncovered region and the second face image based on output from the covered region, wherein:
     the first image acquisition unit acquires from the imaging unit the first face image captured using the uncovered region; and
     the second image acquisition unit acquires from the imaging unit the second face image captured using the covered region.
3.  The state monitoring device according to claim 2, wherein, in the imaging element, the area of the covered region is smaller than the area of the uncovered region.
4.  The state monitoring device according to claim 2 or 3, wherein the imaging unit acquires the output from the covered region while acquiring the output from the uncovered region.
  5.  The state monitoring device according to claim 1, further comprising:
     a first imaging unit (110) that has a first imaging element (111) that receives the incident light, and that generates the first face image based on an output from the first imaging element; and
     a second imaging unit (210) that has a second imaging element (211) that receives the attenuated light by being covered by an attenuation filter (278) that attenuates light in the red band, and that generates the second face image based on an output from the second imaging element,
     wherein the first image acquisition unit acquires, from the first imaging unit, the first face image captured using the first imaging element, and
     the second image acquisition unit acquires, from the second imaging unit, the second face image captured using the second imaging element.
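Claim 5 obtains the same image pair with two separate sensors, one of which sits permanently behind the attenuation filter; in software this amounts to reading two devices together. A rough sketch, with grab standing in for whatever capture call the camera interface actually exposes (an assumption for illustration):

    def capture_pair_dual(first_camera, second_camera):
        """First sensor (111) receives the incident light directly; the
        second sensor (211) sits behind the red-attenuating filter (278)."""
        first_face_image = first_camera.grab()
        second_face_image = second_camera.grab()
        return first_face_image, second_face_image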
  6.  The state monitoring device according to claim 1, further comprising an imaging unit (310) that has an imaging element (311) that receives the incident light and an attenuation filter (378) that attenuates light in the red band, and that is switchable between an attenuated shooting mode, in which the imaging element is covered by the attenuation filter and the second face image is generated based on an output from the imaging element receiving the attenuated light, and a non-attenuated shooting mode, in which the attenuation filter is retracted from the imaging surface of the imaging element and the first face image is generated based on an output from the imaging element receiving the incident light,
     wherein the first image acquisition unit acquires, from the imaging unit, the first face image captured in the non-attenuated shooting mode, and
     the second image acquisition unit acquires, from the imaging unit, the second face image captured in the attenuated shooting mode.
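Claim 6 instead time-multiplexes a single sensor by moving the attenuation filter over or away from the imaging surface, so the two face images are taken back to back in the two shooting modes. A schematic sketch follows; set_filter and grab are placeholder interfaces, not part of the disclosure.

    from enum import Enum

    class ShootingMode(Enum):
        NON_ATTENUATED = 0  # filter retracted: sensor receives incident light
        ATTENUATED = 1      # filter covers sensor: red band is attenuated

    def capture_pair_switched(camera, set_filter):
        """Toggle the filter between exposures (claim 6's two modes)."""
        set_filter(ShootingMode.NON_ATTENUATED)
        first_face_image = camera.grab()
        set_filter(ShootingMode.ATTENUATED)
        second_face_image = camera.grab()
        return first_face_image, second_face_image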
  7.  The state monitoring device according to any one of claims 1 to 6, further comprising a dark place determination unit (28, S103) that determines whether the interior of the vehicle is a dark place,
     wherein, on the condition that the dark place determination unit has determined that the interior is a dark place, the eye determination unit determines, when a plurality of bright parts that are candidates for the driver's eyes are captured in the first face image, that a bright part not captured in the second face image among the plurality of bright parts is the eye.
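The gating in claim 7 reflects that the two-image comparison is only reliable when the cabin is dark enough for the emitted red/near-infrared light to dominate the scene. A sketch of the condition, with the mean-luminance threshold chosen arbitrarily for illustration:

    DARK_MEAN_LEVEL = 40  # assumed mean 8-bit luminance below which the
                          # cabin counts as a dark place (S103)

    def is_dark_place(frame) -> bool:
        """Crude stand-in for the dark place determination unit (28)."""
        return float(frame.mean()) < DARK_MEAN_LEVEL

    def determine_eyes(first_img, second_img):
        # Run the two-image comparison only when the cabin is dark;
        # find_eyes is the claim 1 sketch above.
        if is_dark_place(first_img):
            return find_eyes(first_img, second_img)
        return None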
  8.  A state monitoring method for causing a computer mounted on a vehicle (1) to execute processing for monitoring the state of a driver using face images (51, 52) obtained by photographing the face of the driver who operates the vehicle, the method comprising:
     a light emitting step (S101) of emitting light in a band from red to near infrared toward a prescribed region (PA) defined in advance as a region where the face is located;
     a first image acquisition step (S102) of acquiring, as the face image, a first face image (51) captured by receiving incident light arriving from the prescribed region;
     a second image acquisition step (S102) of acquiring, as a face image separate from the first face image, a second face image (52) captured by receiving attenuated light obtained by attenuating the red band of the incident light;
     an eye determination step (S108) of determining, when a plurality of bright parts (55, 56) that are candidates for the driver's eyes are captured in the first face image, that a bright part (55) not captured in the second face image among the plurality of bright parts is the eye; and
     a face recognition step (S109) of recognizing the face based on the position of the eye determined in the eye determination step.
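Taken together, the steps of claim 8 form a simple loop: emit (S101), acquire both face images (S102), determine the eyes (S108), and recognize the face (S109). A schematic composition of the earlier sketches; emitter, camera, and recognize_face are placeholders, not interfaces defined by the disclosure.

    def monitoring_loop(emitter, camera, recognize_face):
        """Schematic realization of the claimed monitoring method."""
        while True:
            emitter.on()                                   # S101
            first_img, second_img = camera.capture_pair()  # S102
            eyes = find_eyes(first_img, second_img)        # S108
            if eyes:
                recognize_face(first_img, eyes)            # S109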
  9.  A non-transitory storage medium containing instructions that are read and executed by a computer, the instructions comprising the state monitoring method according to claim 8, the method being computer-implemented.
PCT/JP2013/003046 2012-10-15 2013-05-13 State monitoring device WO2014061175A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-228178 2012-10-15
JP2012228178A JP2014082585A (en) 2012-10-15 2012-10-15 State monitor device and state monitor program

Publications (1)

Publication Number Publication Date
WO2014061175A1 (en) 2014-04-24

Family

ID=50487758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/003046 WO2014061175A1 (en) 2012-10-15 2013-05-13 State monitoring device

Country Status (2)

Country Link
JP (1) JP2014082585A (en)
WO (1) WO2014061175A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6801274B2 (en) 2016-07-11 2020-12-16 株式会社デンソー Driving support device and driving support method
KR101844243B1 (en) * 2016-07-27 2018-05-14 쌍용자동차 주식회사 Driver status check system using a smart-phone camera and method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002513176A (en) * 1998-04-29 2002-05-08 カーネギー−メロン ユニバーシティ Apparatus and method for monitoring a subject's eye using two different wavelengths of light
JP2002352229A (en) * 2001-05-30 2002-12-06 Mitsubishi Electric Corp Face region detector
JP2005296382A (en) * 2004-04-13 2005-10-27 Honda Motor Co Ltd Visual line detector
JP2009216423A (en) * 2008-03-07 2009-09-24 Omron Corp Measurement instrument and method, imaging device, and program
JP2009254525A (en) * 2008-04-15 2009-11-05 Calsonic Kansei Corp Pupil detecting method and apparatus

Also Published As

Publication number Publication date
JP2014082585A (en) 2014-05-08

Similar Documents

Publication Publication Date Title
WO2014054199A1 (en) State monitoring device
JP5045212B2 (en) Face image capturing device
US10189396B2 (en) Vehicle headlamp control device
JP2006252138A (en) Apparatus for photographing driver and apparatus for monitoring driver
JP5983693B2 (en) Mirror device with display function and display switching method
JP2008052029A (en) Photographing system, vehicle crew detection system, operation device controlling system, and vehicle
JP6605615B2 (en) Image processing apparatus, imaging apparatus, camera monitor system, and image processing method
US20110035099A1 (en) Display control device, display control method and computer program product for the same
WO2014103223A1 (en) Night-vision device
JP2016055782A5 (en)
CN111886541B (en) Imaging device and electronic apparatus
JP4927647B2 (en) Vehicle periphery monitoring device
JP2006248365A (en) Back monitoring mirror of movement body, driver photographing device, driver monitoring device and safety driving support device
JP5712821B2 (en) Shooting display control system
WO2014061175A1 (en) State monitoring device
JP6229769B2 (en) Mirror device with display function and display switching method
KR102006780B1 (en) Lamp having camera module
JP2010012995A (en) Lighting system
JP2020137053A (en) Control device and imaging system
JP6322723B2 (en) Imaging apparatus and vehicle
JP2009096323A (en) Camera illumination control device
JP2017193277A (en) Visually recognizing device for vehicle
JP2017168960A (en) Image processing apparatus
US20240100946A1 (en) Occupant imaging device
JP2018002152A (en) Mirror device with display function and display switching method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13847290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13847290

Country of ref document: EP

Kind code of ref document: A1