WO2014061175A1 - Dispositif de surveillance d'état - Google Patents

Dispositif de surveillance d'état Download PDF

Info

Publication number
WO2014061175A1
Authority
WO
WIPO (PCT)
Prior art keywords
face image
image
face
light
imaging
Prior art date
Application number
PCT/JP2013/003046
Other languages
English (en)
Japanese (ja)
Inventor
泰斗 渡邉
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社デンソー filed Critical 株式会社デンソー
Publication of WO2014061175A1 publication Critical patent/WO2014061175A1/fr

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means

Definitions

  • The present disclosure relates to a technique, for a device mounted on a vehicle, of monitoring the state of the driver using a face image obtained by photographing the face of the driver who operates the vehicle.
  • Conventionally, a state monitoring device mounted on a vehicle performs face recognition using a face image obtained by photographing the driver's face in order to monitor the driver's state.
  • Such a state monitoring device has a configuration that emits light toward a prescribed region defined in advance as the region where the driver's face is located and receives the light incident from that region, and a configuration that recognizes the face based on the position of the eyes in the photographed face image.
  • The image processing apparatus of Patent Document 1, which is one example of a configuration that recognizes a face as described above, performs face recognition using red eyes. More specifically, in a face image shot with flash emission, a human eye can appear as a red bright part. This red-eye phenomenon occurs mainly because light in the red band of the flash is reflected by the blood vessels of the retina. The red eye has high contrast with respect to its surroundings, so the image processing apparatus can accurately specify the position of the eye in the face image based on the position of the red bright part.
  • The inventor of the present disclosure attempted to employ the above-described face recognition using red eyes in a state monitoring device, and found the following. When a driver wearing glasses is photographed while a light emitting unit such as a flash illuminates the specified region under conditions with little ambient light, the photographed face image contains not only the driver's red eyes: the light emitting unit reflected in the glasses is also photographed as a bright part. When a plurality of bright parts that are eye candidates are photographed around the driver's eyes in this way, the position of the eyes in the face image may be specified incorrectly, and the accuracy of face recognition performed based on the eye position may not be ensured.
  • An object of the present disclosure is to provide a state monitoring device that can ensure the accuracy of face recognition performed based on the position of the eye.
  • A first aspect of the present disclosure provides a state monitoring device that is mounted on a vehicle and monitors the state of the driver using a face image obtained by photographing the face of the driver who operates the vehicle.
  • The device comprises: a light emitting unit that emits light in a band from red to near infrared toward a prescribed region defined in advance as the region where the face is located; a first image acquisition unit that acquires, as a first face image, an image photographed by receiving the incident light from the prescribed region; a second image acquisition unit that acquires, as a second face image distinct from the first face image, an image photographed by receiving attenuated light obtained by attenuating the red-band light from the incident light; an eye determination unit that, when a plurality of bright parts that are candidates for the driver's eyes are photographed in the first face image, determines that a bright part among them that is not photographed in the second face image is an eye; and a face recognition unit that recognizes the face based on the position of the eye determined by the eye determination unit.
  • A second aspect of the present disclosure provides a state monitoring method for causing a computer to execute processing that monitors the state of the driver of a vehicle using a face image obtained by photographing the face of the driver who operates the vehicle.
  • The method comprises: a light emitting step of emitting light in a band from red to near infrared toward a prescribed region defined in advance as the region where the face is located; a first image acquisition step of acquiring, as a first face image, an image photographed by receiving the incident light from the prescribed region; a second image acquisition step of acquiring, as a second face image distinct from the first face image, an image photographed by receiving attenuated light obtained by attenuating the red-band light from the incident light; an eye determination step of determining, when a plurality of bright parts that are candidates for the driver's eyes are photographed in the first face image, that a bright part among them that is not photographed in the second face image is an eye; and a face recognition step of recognizing the face based on the position of the eye determined in the eye determination step.
  • A further aspect of the present disclosure provides a storage medium. The storage medium is a non-transitory storage medium that includes instructions read and executed by a computer, the instructions causing the computer to perform the state monitoring method according to the second aspect described above.
  • In the second face image, the red-band light is attenuated from the incident light, so the driver's eyes are unlikely to appear as a bright part. Therefore, when a plurality of bright parts that are candidates for the driver's eyes are photographed in the first face image, by determining as eyes those bright parts that are not photographed in the second face image, the position of the eyes in the face image can be accurately specified even when the light emitting unit is reflected in the driver's glasses. Therefore, the accuracy of face recognition performed based on the eye position can be ensured.
  • The state monitoring device 100 is mounted on a vehicle 1 as a moving body and is a driver status monitor that monitors the state of the driver who drives and operates the vehicle.
  • the state monitoring apparatus 100 includes an imaging unit 10, a light emitting unit 15, a control circuit 20, and a housing 60 (see FIG. 3) that houses these configurations.
  • the state monitoring device 100 is connected to an actuation unit 90 and a vehicle control device 96 mounted on the vehicle.
  • The imaging unit 10 shown in FIGS. 1 and 2 is a device that generates a face image 51 (see also FIG. 4) by photographing the driver's face; the state monitoring device 100 containing it is installed on the upper surface of the steering column 81.
  • the image capturing unit 10 captures a predetermined area PA defined in advance in the vehicle 1.
  • This prescribed area PA includes the area where the face of the driver seated in the driver's seat is assumed to be located. Specifically, the defined area PA is set based on the eyellipse, the statistical distribution of driver eye positions, and is defined to include, for example, the 99th-percentile eyellipse.
  • the imaging unit 10 is a so-called near-infrared camera, and is configured by combining the imaging element 11 with an optical lens, an optical filter, and the like.
  • the imaging element 11 generates an electrical signal corresponding to the intensity of received light by a plurality of pixels arranged along the imaging surface.
  • the image sensor 11 is arranged in a posture in which the imaging surface is directed to the defined area PA.
  • The image sensor 11 enters an exposure state based on a control signal from the control circuit 20 and receives the incident light from the defined area PA. As a result, a monochrome face image 51 rendered in shades of black and white is generated.
  • the face images 51 thus photographed are sequentially output from the imaging unit 10 to the control circuit 20.
  • The light projecting unit 15 has a plurality of light emitting diodes 16. The light emitting diodes 16 are disposed on either side of the imaging unit 10 (see FIG. 3) and emit illumination light in a band from red to near infrared toward the defined area PA. The on and off states of the light emitting diodes 16 are controlled by the current supplied from the control circuit 20.
  • the control circuit 20 is connected to the imaging unit 10, the light projecting unit 15, the actuation unit 90, and the like, and is a circuit that controls the operation of these components.
  • the control circuit 20 is mainly configured by a microcomputer including a processor that performs various arithmetic processes, a RAM that functions as a work area for the arithmetic processes, and a flash memory that stores programs used for the arithmetic processes.
  • the control circuit 20 includes a power supply circuit that supplies power to the imaging unit 10, the light projecting unit 15, and the like.
  • The control circuit 20 has a plurality of functional blocks, such as a light emission control unit 21, an imaging control unit 23, an image recognition unit 24, a state determination unit 31, and a warning control unit 33, which are realized by the processor executing a state monitoring program stored in advance. Each functional block is also referred to as a functional section.
  • the light emission control unit 21 is a functional block related to the light emission control of the light projecting unit 15.
  • the light emission control unit 21 causes the light emitting diode 16 to emit light by applying a predetermined current to the light emitting diode 16. Based on the control value calculated by the image recognition unit 24, the light emission control unit 21 causes the light projecting unit 15 to emit illumination light in accordance with the timing at which the imaging unit 10 is in the exposure state.
  • the imaging control unit 23 is a functional block related to imaging control of the imaging unit 10.
  • the imaging control unit 23 controls the exposure start timing, gain, exposure time, and the like in the imaging unit 10 based on the control value calculated by the image recognition unit 24.
  • the image recognition unit 24 is a functional block related to image processing of the face image 51 and the like.
  • the image recognition unit 24 sets an imaging condition in the imaging unit 10 and a light emission condition in the light projecting unit 15 in order to acquire a face image 51 from which the driver's face can be extracted.
  • To cause the imaging unit 10 and the light projecting unit 15 to operate in accordance with the set imaging and light emission conditions, the image recognition unit 24 calculates the control values used by the imaging control unit 23 and the light emission control unit 21 to control the imaging unit 10 and the light projecting unit 15.
  • the image recognition unit 24 acquires the face image 51 thus photographed from the imaging unit 10.
  • The image recognition unit 24 performs image processing on the acquired face image 51 to calculate values related to the driver's face orientation and degree of eye opening (hereinafter, "eye-open degree"), and values related to the driver's drowsiness.
  • The state determination unit 31 compares the values calculated by the image recognition unit 24 with preset threshold values. Through this comparison processing, the state determination unit 31 detects, for example, a sign of inattentive driving or a sign of drowsy driving. When such a sign is detected, the state determination unit 31 determines that a state requiring a warning to the driver has occurred.
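  • As a loose illustration of this comparison processing, the following Python sketch checks calculated values against preset thresholds; all value names and threshold numbers are hypothetical assumptions for illustration, not taken from the disclosure.

      # Hypothetical sketch of the state determination; thresholds and field
      # names are illustrative, not from the patent.
      from dataclasses import dataclass

      @dataclass
      class RecognitionResult:
          eye_open_degree: float   # 0.0 = fully closed, 1.0 = fully open
          face_yaw_deg: float      # face orientation relative to straight ahead
          drowsiness_score: float  # 0.0 = alert, 1.0 = asleep

      EYE_CLOSED_THRESHOLD = 0.2
      LOOK_ASIDE_YAW_DEG = 30.0
      DROWSINESS_THRESHOLD = 0.7

      def should_warn_driver(r: RecognitionResult) -> bool:
          """True when a sign of drowsy or inattentive driving is detected."""
          dozing = (r.eye_open_degree < EYE_CLOSED_THRESHOLD
                    or r.drowsiness_score > DROWSINESS_THRESHOLD)
          looking_aside = abs(r.face_yaw_deg) > LOOK_ASIDE_YAW_DEG
          return dozing or looking_aside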
  • the warning control unit 33 is connected to the actuation unit 90.
  • the warning control unit 33 outputs a control signal to the actuation unit 90 when the state determination unit 31 determines that a situation that should warn the driver is occurring.
  • the warning control unit 33 issues a warning to the driver by operating the actuation unit 90.
  • the housing 60 includes a main body member 63, a front cover member 66, a rear cover member (not shown), and the like as shown in FIG.
  • the main body member 63 holds the sub-board 62 on which the light projecting unit 15 and the imaging unit 10 are mounted.
  • a main board 61 on which the control circuit 20 is formed is attached to the sub board 62 in a posture orthogonal to the sub board 62.
  • the body member 63 is provided with an insertion hole 64 and a light distribution portion 65.
  • the insertion hole 64 is provided in the central portion of the main body member 63 in the horizontal direction, and allows the imaging unit 10 mounted on the sub-board 62 to be inserted.
  • The insertion hole 64 cooperates with a light blocking hole provided in the sub-board 62 to provide a light blocking function between the light projecting unit 15 and the imaging unit 10, thereby preventing light from leaking to the imaging unit 10.
  • the light distribution unit 65 is disposed so as to sandwich the insertion hole 64 in the horizontal direction, and faces the light projecting unit 15 mounted on the sub-board 62.
  • the light distribution unit 65 distributes light to the defined area PA (see FIG. 1) while transmitting the light emitted from the light projecting unit 15.
  • the front cover member 66 is provided with a visible light filter 67.
  • The visible light filter 67 mainly transmits light in the red to near-infrared band used for generating the face image 51 (see FIG. 4) and blocks light in the visible band that is unnecessary for generating the face image 51.
  • the visible light filter 67 covers an opening 68 formed at a position facing the light distribution portion 65 in the front cover member 66.
  • the rear cover member is disposed on the opposite side of the front cover member 66 with the main body member 63 interposed therebetween. The rear cover member covers the substrates 61 and 62 to protect them from dust and dirt in the atmosphere.
  • The actuation unit 90 shown in FIG. 2 includes, for example, a speaker 91, a seat vibration device 93, an air conditioner 95, and the like mounted on the vehicle 1 (see FIG. 1).
  • the speaker 91 alerts the driver by reproducing audio data based on a control signal from the warning control unit 33.
  • the seat vibration device 93 is installed inside the seat surface of the driver's seat or the like, and alerts the driver by vibrating the driver's seat based on a control signal from the warning control unit 33.
  • the air conditioner 95 alerts the driver by an operation such as introducing outside air into the vehicle 1 based on a control signal from the warning control unit 33.
  • In the face image 51 shown in FIG. 4, both eyes of the driver appear as bright portions 55.
  • This so-called red-eye phenomenon occurs because light in the red band contained in the ambient light and the illumination light is reflected by the capillaries of the retina. The red eye has high contrast with respect to its surroundings, so the image recognition unit 24 (see FIG. 2) can accurately specify the position of the eyes in the face image 51 based on the positions of the bright portions 55 appearing in the face image 51.
  • the imaging element 11 is formed with a covering region 77 and a non-covering region 76.
  • the covering region 77 is a region covered with the color filter 78 on the imaging surface.
  • the color filter 78 allows light in the near infrared band to pass through while attenuating light in the red band. Therefore, the covering region 77 receives the attenuated light that is incident from the defined region PA (see FIG. 1) and passes through the color filter 78 and in which the red band light is attenuated.
  • the uncovered area 76 is located outside the covered area 77 and is not covered with the color filter 78. Therefore, the uncovered area 76 receives incident light incident from the defined area PA.
  • The image sensor 11 is, for example, a so-called VGA-size element in which 640 pixels are arranged in the horizontal direction H and 480 pixels in the vertical direction V.
  • the image sensor 11 has a first pixel 70 and a second pixel 73 as a plurality of pixels.
  • the first pixel 70 is a pixel that is not covered with the color filter 78. Therefore, the entire area of the first pixel 70 becomes the uncovered area 76.
  • the second pixel 73 is a pixel covered with the color filter 78. Therefore, the entire area of the second pixel 73 becomes the covering region 77.
  • the number of second pixels 73 provided on the imaging surface is smaller than the number of first pixels 70. Therefore, on the imaging surface of the imaging device 11, the area of the covered region 77 is narrower than the area of the non-covered region 76.
  • The imaging unit 10 shown in FIG. 2 generates a face image photographed by receiving the incident light, based on the output from the first pixels 70 (hereinafter referred to as the "first face image 51" for convenience).
  • The imaging unit 10 also generates a face image photographed by receiving the attenuated light, based on the output from the second pixels 73 (see FIG. 5) (hereinafter referred to as the "second face image 52" for convenience).
  • the number of pixels of the first face image 51 is larger than the number of pixels of the second face image 52.
  • the imaging unit 10 can acquire the output from the second pixel 73 while acquiring the output from the first pixel 70 (see FIG. 5). Therefore, the first face image 51 and the second face image 52 are images taken substantially simultaneously.
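  • The following Python sketch illustrates the idea of deriving both face images from a single sensor readout; the regular grid spacing of the covered pixels and the neighbor fill-in are assumptions for illustration, since the disclosure does not specify the exact pixel layout.

      import numpy as np

      H, W = 480, 640    # VGA-size imaging element, as in the embodiment
      STRIDE = 4         # hypothetical spacing of the covered "second pixels"

      covered_mask = np.zeros((H, W), dtype=bool)
      covered_mask[::STRIDE, ::STRIDE] = True

      def split_frame(raw: np.ndarray):
          """Split one raw readout into the first (dense) and second (sparse) images."""
          first = raw.astype(np.float32)
          # Fill each covered pixel from its horizontal neighbor so the first
          # face image stays dense despite the filtered pixels.
          first[covered_mask] = np.roll(raw, 1, axis=1)[covered_mask]
          # The second face image is the low-resolution grid of covered pixels.
          second = raw[::STRIDE, ::STRIDE].astype(np.float32)
          return first, second

  • In this sketch the first image keeps the full 640 x 480 resolution while the second is only 160 x 120, mirroring the description that the second face image needs fewer pixels than the first.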
  • The image recognition unit 24 includes a first image acquisition block 25, a second image acquisition block 26, a dark place determination block 28, and an eye determination block 27.
  • the first image acquisition block 25 acquires the first face image 51 from the imaging unit 10.
  • the second image acquisition block 26 acquires the second face image 52 from the imaging unit 10 as a face image different from the first face image 51.
  • the dark place determination block 28 determines whether or not the room of the vehicle 1 (see FIG. 1) is a dark place based on the information acquired from the vehicle control device 96.
  • the vehicle control device 96 is a device that controls various devices mounted on the vehicle 1.
  • the vehicle control device 96 can control the operation of the headlamp of the vehicle 1 as one of its functions.
  • the dark place determination block 28 determines that the interior of the vehicle 1 is a dark place based on the state information of the headlamp of the vehicle 1 when the headlamp is in a lighting state. On the other hand, the dark place determination block 28 determines that the interior of the vehicle 1 is not a dark place when the headlamp is turned off.
  • Switching between the lit and unlit states of the headlamps by the vehicle control device 96 may be performed based on the detection result of an external light sensor mounted on the vehicle 1 (see FIG. 1), or based on the operation state of a changeover switch provided in the vehicle 1.
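  • A minimal sketch of this dark-place determination follows, assuming a boolean headlamp signal; the ambient-light fallback and its threshold are hypothetical additions, not part of the embodiment.

      from typing import Optional

      def is_cabin_dark(headlamp_on: bool, ambient_lux: Optional[float] = None) -> bool:
          """Dark-place determination: lit headlamps imply a dark vehicle interior."""
          if ambient_lux is not None:      # hypothetical external-light-sensor path
              return ambient_lux < 50.0    # illustrative threshold, not from the patent
          return headlamp_on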
  • When a plurality of bright portions 55 and 56 that are candidates for the driver's eyes shown in FIG. 4 are photographed in the first face image 51, the eye determination block 27 determines which of these bright portions corresponds to the actual eyes. Specifically, as shown in FIG. 6, in the second face image 52 photographed by receiving the attenuated light, the red-band light has been attenuated from the incident light, so the red-eye phenomenon is unlikely to occur. That is, in the second face image 52, the driver's eyes are unlikely to be captured as a bright portion 55 as in FIG. 4. Therefore, among the eye-candidate bright portions 55 and 56 photographed in the first face image 51, one that is not photographed in the second face image 52 is highly likely to be a bright portion 55 caused by an actual eye.
  • The eye determination block 27 shown in FIG. 2 compares the first face image 51 with the second face image 52 and, among the plurality of bright portions 55, 56 appearing in the first face image 51 (see FIG. 4), determines as an eye the bright portion 55 that is not photographed in the second face image 52.
  • This determination for specifying the eye position is performed on the condition that the dark place determination block 28 has determined that the vehicle interior is dark.
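  • The comparison can be pictured with the following Python sketch, which treats as eyes the bright spots of the first image that have no counterpart in the second; the brightness threshold, the blob-matching rule, and the use of scipy are illustrative assumptions, and the two images are assumed to be registered at the same resolution (e.g., the second image upsampled).

      import numpy as np
      from scipy import ndimage

      BRIGHT_THRESHOLD = 200   # illustrative 8-bit cut-off for a "bright part"

      def bright_centroids(img: np.ndarray) -> list:
          """Label connected bright pixels and return their centroids (row, col)."""
          labels, n = ndimage.label(img > BRIGHT_THRESHOLD)
          centers = ndimage.center_of_mass(img, labels, range(1, n + 1))
          return [(int(r), int(c)) for r, c in centers]

      def determine_eyes(first_img: np.ndarray, second_img: np.ndarray,
                         match_radius: int = 6) -> list:
          """Keep the bright parts of the first image with no counterpart in the second."""
          candidates = bright_centroids(first_img)
          persistent = bright_centroids(second_img)   # e.g. glasses reflections
          eyes = []
          for r, c in candidates:
              matched = any(abs(r - pr) <= match_radius and abs(c - pc) <= match_radius
                            for pr, pc in persistent)
              if not matched:   # absent from the red-attenuated image: likely a red eye
                  eyes.append((r, c))
          return eyes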
  • The processing shown in FIG. 7 is started by the image recognition unit 24 when the ignition of the vehicle 1 (see FIG. 1) is turned on.
  • In S101, the light emission control unit 21 outputs a control signal instructing the light projecting unit 15 to emit light,
  • and the imaging control unit 23 outputs a control signal instructing the imaging unit 10 to perform imaging; the process then proceeds to S102.
  • Based on the control signals output in S101, the light projecting unit 15 emits illumination light toward the defined area PA, and the imaging unit 10 photographs the defined area PA.
  • In the flowchart, each section is denoted as, for example, S101.
  • each section can be divided into a plurality of subsections, while a plurality of sections can be combined into one section.
  • each section configured in this manner can be referred to as a device, module, or means.
  • In S102, the first face image 51 and the second face image 52 photographed based on the control signals output in S101 are acquired by the first image acquisition block 25 and the second image acquisition block 26, respectively, and the process proceeds to S103.
  • In S103, based on the headlamp state information acquired from the vehicle control device 96, it is determined whether the interior of the vehicle 1 is dark. If a positive determination is made in S103, the process proceeds to S105; if a negative determination is made, the process proceeds to S104. In S104, face recognition that does not use red eyes is performed, and the process proceeds to S110. In S105, which is carried out on the condition that an affirmative determination was made in S103, the first face image 51 acquired in S102 is subjected to image processing, the bright portions 55 and 56 that are candidates for the driver's eyes are extracted from the image 51, and the process proceeds to S106.
  • In S106, as a result of the image processing performed in S105, it is determined whether a plurality of bright portions 55 and 56 that are eye candidates have been photographed within a range assumed in advance as the vicinity of the eyes. If a negative determination is made in S106, the process proceeds to S107, where face recognition is performed using the bright portion 55, the red eye extracted in S105, and the process proceeds to S110. If a positive determination is made in S106, the process proceeds to S108.
  • In S108, among the plurality of bright portions 55 and 56 photographed in the first face image 51, the bright portion 55 that is not photographed in the second face image 52 is determined to be the driver's eye, and the process proceeds to S109.
  • In S109, face recognition is performed using the bright portion 55, the red eye determined to be an eye in S108, and the process proceeds to S110.
  • In S110, the state determination unit 31 determines whether a state requiring a warning to the driver exists, such as a sign of inattentive driving or a sign of drowsy driving.
  • In S111, it is determined whether the ignition of the vehicle 1 remains on. If a negative determination is made in S111 because the ignition has been turned off, the process ends. If a positive determination is made in S111, the process returns to S101.
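  • The control flow of S101 to S111 can be summarized in the following sketch, which reuses determine_eyes and bright_centroids from the earlier sketch; the device interface and the recognition stubs are placeholders for the functional blocks described above, not an API from the disclosure.

      # Control-flow sketch of FIG. 7 (S101-S111). The device methods and the
      # recognition stubs are placeholders; determine_eyes and bright_centroids
      # come from the sketch above.

      def recognize_face(img, eye_positions):
          """Placeholder for the face recognition processing (S107/S109)."""

      def recognize_face_without_red_eye(img):
          """Placeholder for the red-eye-free recognition path (S104)."""

      def monitoring_loop(device):
          while device.ignition_on():                          # S111
              device.emit_and_expose()                         # S101
              first, second = device.acquire_face_images()     # S102
              if device.is_cabin_dark():                       # S103
                  candidates = bright_centroids(first)         # S105
                  if len(candidates) > 1:                      # S106
                      eyes = determine_eyes(first, second)     # S108
                      recognize_face(first, eyes)              # S109
                  else:
                      recognize_face(first, candidates)        # S107
              else:
                  recognize_face_without_red_eye(first)        # S104
              device.judge_driver_state()                      # S110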
  • In the second face image 52, the red-band light is attenuated from the incident light, so the driver's eyes are unlikely to appear as the bright portion 55. Therefore, among the bright portions 55 and 56 photographed in the first face image 51, the bright portion 55 that is not photographed in the second face image 52 is determined to be the eye. Thus, even when the light emitting diodes 16 are reflected in the driver's glasses, the position of the eyes in the first face image 51 can be accurately specified, and the accuracy of face recognition performed based on the eye position can be ensured.
  • the imaging unit 10 can obtain both an output for generating the second face image 52 and an output for generating the first face image 51 from one image sensor 11. Therefore, a configuration in which a part of the image sensor 11 is covered with the color filter 78 is particularly suitable for the state monitoring apparatus 100 that identifies the eye position by comparing the first face image 51 and the second face image 52.
  • The second face image 52 in the first embodiment is needed mainly for specifying the position of the eyes, so it need not be as sharp as the first face image 51. For this reason, the number of second pixels 73 in the imaging element 11 is smaller than the number of first pixels 70. Since the area of the covered region 77 is thus smaller than the area of the uncovered region 76, the first face image 51 based on the output from the uncovered region 76 can remain a high-resolution, sharp image that makes effective use of the illumination light. Therefore, the accuracy of face recognition by the image recognition unit 24 using the first face image 51 can be reliably ensured.
  • the imaging unit 10 can generate the first face image 51 and the second face image 52 based on outputs acquired at substantially the same timing. Therefore, the difference in photographing timing between the first face image 51 and the second face image 52 can be substantially eliminated. As described above, it is possible to avoid a situation in which the position of the photographed driver is shifted between the first face image 51 and the second face image 52. Therefore, the accuracy of the eye position specified by the comparison between the first face image 51 and the second face image 52, and hence the accuracy of face recognition, can be further ensured.
  • In addition, in the first embodiment, the eye position is specified by comparing the first face image 51 and the second face image 52 on the condition that the interior of the vehicle 1 is dark.
  • The bright portion 55 due to red eyes and the bright portion 56 due to reflection of the light emitting diodes 16 in the glasses both appear in the first face image 51 mainly under conditions with little ambient light. Therefore, by specifying the eye position only when the interior is determined to be dark, the processing load on the state monitoring device 100 can be reduced while maintaining high face recognition accuracy.
  • the vehicle 1 is also referred to as a moving body.
  • the imaging unit 10 is also referred to as an imaging device or imaging means.
  • the light projecting unit 15 is also referred to as a light emitting unit, a light emitting device, or a light emitting means.
  • the image recognition unit 24 is also referred to as a face recognition unit, a face recognition device, or a face recognition means.
  • the first image acquisition block 25 is also referred to as a first image acquisition unit, a first image acquisition device, or a first image acquisition means.
  • The second image acquisition block 26 is also referred to as a second image acquisition unit, a second image acquisition device, or a second image acquisition means.
  • the eye determination block 27 is also referred to as an eye determination unit, an eye determination device, or an eye determination means.
  • the dark place determination block 28 is also referred to as a dark place determination unit, a dark place determination device, or a dark place determination means.
  • the color filter 78 is also referred to as an attenuation filter.
  • S101 is also referred to as a light emission section or a light emission step.
  • S102 is also referred to as a first image acquisition section or first image acquisition step and a second image acquisition section or second image acquisition step.
  • S108 is also referred to as an eye determination section or an eye determination step.
  • S109 is also referred to as a face recognition section or a face recognition step.
  • the second embodiment of the present disclosure shown in FIG. 8 is a modification of the first embodiment.
  • the state monitoring apparatus 200 according to the second embodiment includes a first imaging unit 110 and a second imaging unit 210 instead of the imaging unit 10 (see FIG. 2) of the first embodiment.
  • The configuration of the state monitoring device 200 for acquiring the first face image 51 and the second face image 52 is described in detail below.
  • the first imaging unit 110 and the second imaging unit 210 are both near-infrared cameras and have a configuration corresponding to the imaging unit 10 of the first embodiment.
  • The first imaging unit 110 includes a first imaging element 111 corresponding to the imaging element 11 (see FIG. 5) of the first embodiment, and is arranged in a posture in which the imaging surface of the element 111 faces the defined area PA (see FIG. 1).
  • the first image sensor 111 receives incident light incident from the defined area PA. With the above configuration, the first imaging unit 110 generates the first face image 51 based on the output from the first imaging element 111 and sequentially outputs it to the image recognition unit 24.
  • the second imaging unit 210 includes a second imaging element 211 corresponding to the imaging element 11 (see FIG. 5), and a color filter 278 that attenuates red band light.
  • the second imaging unit 210 is arranged in a posture in which the imaging surface of the second imaging element 211 is directed to the defined area PA (see FIG. 1).
  • the second imaging element 211 receives the attenuated light by being covered with the color filter 278.
  • the second imaging unit 210 generates the second face image 52 based on the output from the second imaging element 211 and sequentially outputs it to the image recognition unit 24.
  • the entire area of the first image sensor 111 becomes the uncovered area 76
  • the entire area of the second image sensor 211 becomes the covered area 77.
  • the pixel pitches and the number of pixels of the image sensors 111 and 211 are the same. Therefore, the area of the covering region 77 is substantially equal to the area of the non-covering region 76.
  • the imaging control unit 23 outputs a control signal to each of the first imaging unit 110 and the second imaging unit 210.
  • the imaging control unit 23 sets both the first imaging element 111 and the second imaging element 211 to the exposure state in accordance with the timing when the light emission control unit 21 sets the light emitting diode 16 of the light projecting unit 15 to the light emitting state.
  • the imaging control unit 23 synchronizes the imaging timings of the first imaging element 111 and the second imaging element 211, so that the first face image 51 and the second face image 52 are captured at substantially the same timing. Is done.
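  • A minimal sketch of this synchronized capture follows, assuming hypothetical camera and LED driver objects; the thread-based triggering is only one way to approximate simultaneous exposure of the two imaging elements.

      import threading

      def capture_synchronized(cam_first, cam_second, leds, exposure_s: float = 0.01):
          """Expose both imaging elements while the LEDs emit, as in the second
          embodiment; the cam/led interfaces are hypothetical stand-ins."""
          leds.on()
          t1 = threading.Thread(target=cam_first.expose, args=(exposure_s,))
          t2 = threading.Thread(target=cam_second.expose, args=(exposure_s,))
          t1.start(); t2.start()
          t1.join(); t2.join()
          leds.off()
          return cam_first.read_frame(), cam_second.read_frame()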
  • the first image acquisition block 25 acquires the first face image 51 photographed using the first imaging element 111 from the first imaging unit 110.
  • the second image acquisition block 26 acquires the second face image 52 captured using the second image sensor 211 from the second imaging unit 210.
  • As in the second embodiment, imaging units that photograph the first face image 51 and the second face image 52 may thus be provided separately.
  • Even in the second embodiment described so far, the second face image 52 photographed using the attenuated light is generated by the second imaging unit 210. The eye position can therefore be specified by comparing the first face image 51 and the second face image 52, so the accuracy of face recognition performed based on the eye position is ensured.
  • In addition, in the second embodiment, the second imaging unit 210 is provided as a configuration separate from the first imaging unit 110, which increases the degree of freedom in choosing the imaging elements employed in each imaging unit.
  • the first imaging unit 110 is also referred to as a first imaging device or a first imaging means.
  • the second imaging unit 210 is also referred to as a second imaging device or a second imaging means.
  • the color filter 278 is also referred to as an attenuation filter.
  • the third embodiment of the present disclosure shown in FIG. 9 is another modification of the first embodiment.
  • the state monitoring apparatus 300 according to the third embodiment includes an imaging unit 310 instead of the imaging unit 10 (see FIG. 2) of the first embodiment.
  • The configuration of the state monitoring device 300 for acquiring the first face image 51 and the second face image 52 is described in detail below.
  • the imaging unit 310 includes a color filter 378 and a switching mechanism 313 in addition to the imaging element 11.
  • the color filter 378 is configured to attenuate red band light and pass near infrared band light.
  • the color filter 378 is formed in a size that can cover the image sensor 11.
  • the switching mechanism 313 is a mechanism for moving the color filter 378.
  • the imaging unit 310 described above is provided with an attenuation imaging mode and a non-attenuation imaging mode that can be switched to each other as the imaging mode.
  • In the attenuation imaging mode, the imaging unit 310 moves the color filter 378 to a position covering the imaging surface of the imaging element 11 by operating the switching mechanism 313.
  • The imaging element 11 thus receives the attenuated light, and the imaging unit 310 can generate the second face image 52 based on its output.
  • In the non-attenuation imaging mode, the imaging unit 310 retracts the color filter 378 from the imaging surface of the imaging element 11 by operating the switching mechanism 313. The imaging element 11 thus receives the incident light, and the imaging unit 310 can generate the first face image 51 based on its output.
  • the imaging control unit 23 outputs a control signal to the imaging unit 310 so that the attenuation imaging mode and the non-attenuation imaging mode are alternately repeated.
  • The image recognition unit 24 alternately acquires the first face image 51 captured in the non-attenuation imaging mode through the first image acquisition block 25, and the second face image 52 captured in the attenuation imaging mode through the second image acquisition block 26.
  • the light emission control unit 21 emits illumination light from the light projecting unit 15 in accordance with the timing at which the image sensor 11 is exposed in each shooting mode.
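  • A sketch of this alternating capture follows, under the same kind of hypothetical hardware interface as the earlier sketches; the method names are placeholders, not the disclosure's API.

      def capture_pair(imaging_unit, leds, switcher):
          """Alternate the two imaging modes of the third embodiment; the
          hardware interface is a hypothetical stand-in."""
          switcher.retract_filter()          # non-attenuation imaging mode
          leds.pulse()
          first = imaging_unit.expose()      # -> first face image 51
          switcher.cover_sensor()            # attenuation imaging mode
          leds.pulse()
          second = imaging_unit.expose()     # -> second face image 52
          return first, second

  • Because the two exposures happen one after the other, a small shift in photographing timing between the two face images is inherent to this embodiment.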
  • the second face image 52 photographed using the attenuated light is generated in the attenuation photographing mode of the imaging unit 310. Accordingly, the eye position can be specified by comparing the first face image 51 and the second face image 52, and thus the accuracy of the face recognition performed based on the eye position is ensured.
  • the imaging unit 310 is also referred to as an imaging device or imaging means.
  • the color filter 378 is also referred to as an attenuation filter.
  • In a modification of the first embodiment, an imaging element 411 shown in FIG. 10 is used instead of the imaging element 11 (see FIG. 5).
  • a second pixel 473 included in the image sensor 411 is provided with a sub-pixel 474 covered with a color filter 78.
  • the imaging unit 410 having the above configuration generates the first face image 51 (see FIG. 4) based on the output from the first pixel 70 and the output from the region excluding the sub pixel 474 in the second pixel 473. .
  • the imaging unit 410 generates the second face image 52 (see FIG. 4) based on the output from the sub-pixel 474.
  • According to this modification, as in the first embodiment, the second face image 52 can be photographed together with the first face image 51 while maintaining the sensitivity of the imaging element 411 to near infrared light.
  • In another modification, the first imaging element and the second imaging element are provided together in one imaging unit.
  • The imaging unit is further provided with a splitter that splits the incident light from the defined area PA toward each imaging element, and a color filter positioned between the splitter and the second imaging element.
  • In this way, the first imaging element forming the uncovered area and the second imaging element forming the covered area may be provided together in one imaging unit.
  • In yet another modification, the dark place determination block is omitted.
  • In this modification, regardless of the brightness of the vehicle interior, whenever a plurality of bright portions that are eye candidates are captured in the first face image 51, the eye determination block performs the determination that identifies which bright portion corresponds to the eyes.
  • In still other modifications, the number of first pixels and the number of second pixels may be set approximately equal, so that the area of the covered region is approximately the same as the area of the uncovered region.
  • Alternatively, the number of second pixels may be made larger than the number of first pixels, whereby the area of the covered region becomes wider than the area of the uncovered region.
  • In the third embodiment, the insertion and retraction of the color filter by the switching mechanism may be performed electrically.
  • For example, the color filter may be configured to block red-band light when a voltage is applied.
  • In such a configuration, the arrangement for applying the voltage to the color filter corresponds to the switching mechanism.
  • the ratio and layout of the pixels provided with the color filter can be changed as appropriate.
  • For example, the pixels having the color filter may be concentrated in a range of the imaging element that captures the region near the eyes.
  • In other modifications, the timing at which the output from the uncovered region is acquired may differ from the timing at which the output from the covered region is acquired in the imaging unit.
  • Likewise, the timing at which the output from the first imaging element is acquired in the first imaging unit may differ from the timing at which the output from the second imaging element is acquired in the second imaging unit.
  • In these modifications, a shift in photographing timing between the first face image and the second face image is acceptable.
  • Image sensors such as a CCD (Charge Coupled Device) and a CMOS (Complementary Metal Oxide Semiconductor) can be appropriately employed for each image sensor in the above embodiment.
  • the frequency band of light detected by the image sensor is not limited to the near infrared band, and may include the visible light band in addition to the near infrared band.
  • It is desirable that the frequency band of the emitted light, the number, and the arrangement of the light emitting diodes be changed as appropriate to match the specifications of the imaging element.
  • The installation position of the imaging unit and the state monitoring device, which in the above embodiments is the upper surface of the steering column 81, may be changed as appropriate as long as the defined area PA can be photographed.
  • the state monitoring device may be installed on the upper surface of the instrument panel, for example, or may be attached to a ceiling portion near the sun visor.
  • the imaging unit may be provided separately from the main body of the state monitoring device and at a position suitable for photographing the defined area PA.
  • the method for determining the prescribed area PA in the above embodiment may be changed as appropriate.
  • For example, the defined area PA may be defined to include the 95th-percentile eyellipse.
  • The method for determining the defined area PA is not limited to determination from the eyellipse.
  • The prescribed area PA may instead be determined experimentally by actually seating a plurality of drivers of different races, genders, ages, and so on in the driver's seat. Such a defined area PA is desirably defined in consideration of the movement of the face accompanying the driving operation.
  • The plurality of functions provided by the control circuit 20 executing the state monitoring program may also be provided by hardware or software different from the above-described control circuit, or by a combination thereof.
  • functions corresponding to each functional block and sub-functional block may be provided by an analog circuit that performs a predetermined function without depending on a program.
  • The present disclosure is applicable not only to a so-called driver status monitor for automobiles, but also to state monitoring devices that monitor the state of the operator of various moving bodies (transport equipment) such as motorcycles, three-wheeled vehicles, ships, and aircraft.

Abstract

The present invention concerns a state monitoring device (100) that is mounted in a vehicle (1) and monitors the state of the driver using a face image (51) of the driver. The state monitoring device emits illumination light from a light projecting unit (15) toward the driver's face, which is assumed to be positioned in a prescribed region (PA). The state monitoring device acquires a first face image (51) captured by receiving the light arriving from the prescribed region, and a second face image (52) captured by receiving attenuated light resulting from the red-band component of the arriving light having been attenuated. When a plurality of bright sections (55, 56) that are candidates for the driver's eyes have been captured in the first face image, the bright sections among them that are not captured in the second face image are determined to be the eyes, and the device performs face recognition based on the position of the determined eyes.
PCT/JP2013/003046 2012-10-15 2013-05-13 Dispositif de surveillance d'état WO2014061175A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012-228178 2012-10-15
JP2012228178A JP2014082585A (ja) 2012-10-15 2012-10-15 状態監視装置及び状態監視プログラム

Publications (1)

Publication Number Publication Date
WO2014061175A1 true WO2014061175A1 (fr) 2014-04-24

Family

ID=50487758

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/003046 WO2014061175A1 (fr) 2012-10-15 2013-05-13 Dispositif de surveillance d'état

Country Status (2)

Country Link
JP (1) JP2014082585A (fr)
WO (1) WO2014061175A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6801274B2 (ja) 2016-07-11 2020-12-16 株式会社デンソー 運転支援装置、及び運転支援方法
KR101844243B1 (ko) * 2016-07-27 2018-05-14 쌍용자동차 주식회사 스마트폰 카메라를 이용한 운전자 상태 확인시스템 및 그 방법

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002513176A (ja) * 1998-04-29 2002-05-08 カーネギー−メロン ユニバーシティ 2つの異なる波長の光を用いて対象者の眼を監視する装置および方法
JP2002352229A (ja) * 2001-05-30 2002-12-06 Mitsubishi Electric Corp 顔部位検出装置
JP2005296382A (ja) * 2004-04-13 2005-10-27 Honda Motor Co Ltd 視線検出装置
JP2009216423A (ja) * 2008-03-07 2009-09-24 Omron Corp 測定装置および方法、撮像装置、並びに、プログラム
JP2009254525A (ja) * 2008-04-15 2009-11-05 Calsonic Kansei Corp 瞳孔検知方法及び装置


Also Published As

Publication number Publication date
JP2014082585A (ja) 2014-05-08

Similar Documents

Publication Publication Date Title
WO2014054199A1 (fr) Dispositif de surveillance d'état
JP5045212B2 (ja) 顔画像撮像装置
US10189396B2 (en) Vehicle headlamp control device
JP2006252138A (ja) 運転者撮影装置および運転者監視装置
JP5983693B2 (ja) 表示機能付きミラー装置および表示切替方法
JP2008052029A (ja) 撮像システム、車両乗員検出システム、作動装置制御システム、車両
JP6605615B2 (ja) 画像処理装置、撮像装置、カメラモニタシステム、および画像処理方法
US20110035099A1 (en) Display control device, display control method and computer program product for the same
CN111886541B (zh) 成像装置和电子设备
JP2016055782A5 (fr)
JP4927647B2 (ja) 車両周辺監視装置
JP2006248365A (ja) 移動体の後方監視ミラー、運転者撮影装置、運転者監視装置および安全運転支援装置
JP5712821B2 (ja) 撮影表示制御システム
WO2014061175A1 (fr) Dispositif de surveillance d'état
JP6229769B2 (ja) 表示機能付きミラー装置および表示切替方法
KR102006780B1 (ko) 카메라 모듈이 설치된 램프
JP2010012995A (ja) 照明装置
JP2020137053A (ja) 制御装置及び撮影システム
JP6322723B2 (ja) 撮像装置および車両
JP2009096323A (ja) カメラ照明制御装置
JP2017193277A (ja) 車両用視認装置
JP2017168960A (ja) 画像処理装置
US20240100946A1 (en) Occupant imaging device
JP2018002152A (ja) 表示機能付きミラー装置および表示切替方法
WO2022269732A1 (fr) Unité de caméra et système de surveillance d'occupant

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13847290

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13847290

Country of ref document: EP

Kind code of ref document: A1