WO2019117001A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2019117001A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
driver
unit
image processing
Prior art date
Application number
PCT/JP2018/044832
Other languages
French (fr)
Japanese (ja)
Inventor
山本 和夫
正樹 諏訪
航一 木下
向井 仁志
Original Assignee
オムロン株式会社 (OMRON Corporation)
Priority date
Filing date
Publication date
Application filed by オムロン株式会社 (OMRON Corporation)
Publication of WO2019117001A1 publication Critical patent/WO2019117001A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Definitions

  • Embodiments of the present invention relate to an image processing apparatus and an image processing method.
  • Conventionally, as methods of detecting the three-dimensional shape of an object, a method using a stereo camera and a method combining a two-dimensional image camera with a TOF (Time of Flight) range-image camera are known.
  • For example, JP 2012-220479 A discloses a technique in which an object such as a container is imaged almost simultaneously by an infrared camera and a range-image camera, and the contour or boundary of the object is detected based on the two-dimensional infrared image of the object obtained by the infrared camera and the range image of the object obtained by the range-image camera.
  • Meanwhile, for vehicle drivers, technology is being developed that images the driver's face before driving and authenticates the driver based on the captured image, or that images the driver's face during driving and detects the line of sight, the direction of the face, and the like based on the captured image. To realize such technology, it is necessary to acquire two-dimensional and three-dimensional images of the driver's face.
  • However, the conventional three-dimensional image detection method uses two types of cameras, an infrared camera and a distance image camera, as described in, for example, Patent Document 1. An increase in the size of the device therefore cannot be avoided, making it difficult to install the device in a relatively narrow space such as a vehicle cockpit, and an increase in the cost of the device is likewise unavoidable. Furthermore, in order to obtain a three-dimensional image of the driver's face for face authentication or gaze-direction detection, the image data obtained by the plurality of cameras must be accurately aligned pixel by pixel, so that troublesome processing such as calibration is indispensable.
  • The present invention has been made in view of the above circumstances, and in one aspect aims to provide a technique capable of detecting a two-dimensional image and a three-dimensional image of a driver's face with a compact configuration and without the need for complicated image processing.
  • To solve the above problem, a first aspect of the image processing device according to the present invention includes a projection unit that projects an optical pattern onto a driver's face, an imaging unit that images the driver's face, an imaging control unit, and an image processing unit. The imaging control unit acquires a first face image captured by the imaging unit in a state where the optical pattern is not projected onto the driver's face by the projection unit, and a second face image captured by the imaging unit in a state where the optical pattern is projected. The image processing unit then associates the acquired first face image with the second face image.
  • According to the first aspect of the image processing device of the present invention, the first face image and the second face image are acquired by a single imaging unit. The size and cost of the apparatus can therefore be reduced compared with the case where the first and second face images are acquired by separate imaging units. This has the particular advantage that the device can be installed even in a place, such as a vehicle cockpit, where it is difficult to secure sufficient space for capturing face images.
  • In addition, since the first face image and the second face image are obtained by a single imaging unit, the process of associating these face images can be performed simply and with high accuracy, without the need for troublesome calibration processing or the like.
  • A second aspect of the image processing device further includes an illumination unit that illuminates the driver's face. The imaging control unit acquires an image captured by the imaging unit in a state where the face is illuminated by the illumination unit and an image captured by the imaging unit in a state where the face is not illuminated, and uses the difference image of the acquired images as the first face image.
  • In a third aspect, the illumination unit includes a light source that emits near-infrared light and a polarization filter that polarizes the near-infrared light emitted from the light source, and is configured to illuminate the face with the near-infrared light that has passed through the polarization filter.
  • According to the third aspect, since the face is illuminated with near-infrared light, the anti-glare property for the driver is maintained, and a face image can also be acquired for a driver wearing sunglasses.
  • In addition, by illuminating through the polarization filter, even if the driver wears glasses, the near-infrared light is not reflected by the glasses and a good first face image can be obtained.
  • In a fourth aspect, the imaging control unit acquires an image captured by the imaging unit in a state where the optical pattern is projected onto the driver's face by the projection unit and an image captured by the imaging unit in a state where the optical pattern is not projected, and uses the difference image of the acquired images as the second face image.
  • In a fifth aspect, the image processing unit includes a processing unit that associates the first face image and the second face image at identical pixel positions; a first detection unit that, based on the second face image, detects the three-dimensional position of an eye of the face in a three-dimensional space defined by the optical axis connecting the imaging unit and the eye and by a two-dimensional plane orthogonal to that optical axis; and a second detection unit that, based on the first face image, detects the two-dimensional position of the pupil on that two-dimensional plane for the image of the eye located at the same position as the eye whose three-dimensional position was detected.
  • According to the fifth aspect, since the separately obtained first face image and second face image are associated at identical pixel positions, the three-dimensional position of an eye and the two-dimensional position of its pupil can each be detected for the same eye.
  • A sixth aspect of the image processing device further includes a gaze detection unit, which detects the driver's line of sight based on the detected three-dimensional position of the eye and the detected two-dimensional position of the pupil, as associated by the image processing unit.
  • According to the sixth aspect, when the gaze direction is calculated from the position of the pupil relative to the position of the driver's eye, the position of the eye in the depth direction with respect to the camera is also taken into account, so the driver's gaze direction can be detected accurately.
  • To solve the above problem, a first aspect of the image processing method according to the present invention includes: a step in which the image processing apparatus images the driver's face with an imaging unit to acquire a first face image; a step of projecting an optical pattern onto the driver's face; a step of imaging the driver's face with the imaging unit in a state where the optical pattern is projected onto the driver's face to acquire a second face image; and a step of associating the first face image with the second face image.
  • According to the first aspect of the image processing method, the first face image and the second face image can be acquired by a single imaging unit, so the size and cost of the apparatus can be reduced compared with the case where they are acquired by separate imaging units.
  • In addition, since the first face image and the second face image are obtained by a single imaging unit, the process of associating these images can be performed simply and with high accuracy, without the need for troublesome calibration processing or the like.
  • A second aspect of the image processing method includes: a step in which the image processing apparatus projects an optical pattern onto the driver's face; a step of imaging the driver's face with the imaging unit in a state where the optical pattern is projected onto the driver's face to acquire a second face image; a step of imaging the driver's face with the imaging unit in a state where the optical pattern is not projected to acquire a first face image; and a step of associating the first face image with the second face image.
  • According to the second aspect of the image processing method, as in the first aspect, the first face image and the second face image can be obtained by a single imaging unit. The size and cost of the device can therefore be reduced, and the first face image and the second face image can be associated with high accuracy by simple processing without performing processing such as calibration.
  • That is, according to each aspect of the present invention, it is possible to provide a technique capable of detecting a two-dimensional image and a three-dimensional image of a driver's face with a compact configuration and without the need for complicated image processing.
  • FIG. 1 is a block diagram for explaining an application example of an image processing apparatus according to the present invention.
  • FIG. 2 is a schematic block diagram of a vehicle equipped with an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a view showing an internal structure of a camera head of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 4 is a perspective view showing the structure of the pattern projection projector of the camera head shown in FIG.
  • FIG. 5 is a block diagram showing the hardware configuration of the face image detection processing unit of the image processing apparatus according to the embodiment of the present invention.
  • FIG. 6 is a block diagram showing the hardware configuration of the camera head and the face image detection processing unit of the image processing apparatus according to the embodiment of the present invention, additionally including a software configuration.
  • FIG. 7 is a flowchart showing the control procedure and control contents by the face image detection processing unit shown in FIG.
  • FIG. 8 is a view showing an example of two-dimensional image data captured by the image processing apparatus shown in FIG.
  • FIG. 9 is a view showing an example of three-dimensional image data captured by the image processing apparatus shown in FIG.
  • FIG. 10 is a diagram used to explain the principle of three-dimensional measurement of the eye position.
  • FIG. 11 is a diagram used to explain the principle of sight line detection.
  • First, an application example of the image processing apparatus according to the present invention will be described with reference to FIG. 1.
  • As shown in FIG. 1, the image processing apparatus includes a camera head 1 and a face image detection processing unit 2.
  • the camera head 1 includes one imaging unit 1a, one projection unit 1b, and two illumination units 1c and 1d.
  • the imaging unit 1a includes an image sensor using a solid-state imaging device, captures an image of the driver's face, and outputs face image data thereof.
  • The projection unit 1b is, for example, a pattern projection projector, and projects, for example, a striped or lattice optical pattern onto the driver's face.
  • the illumination units 1c and 1d use a solid light emitting element such as a light emitting diode (LED) as a light source, and illuminate the driver's face.
  • the face image detection processing unit 2 includes an imaging control unit 2a and an image processing unit 2b.
  • The imaging control unit 2a controls the imaging unit 1a, the projection unit 1b, and the illumination units 1c and 1d of the camera head 1 at predetermined timings according to an imaging control program stored in a program memory (not shown), thereby acquiring two-dimensional image data of the driver's face and pattern projection image data of the driver's face, respectively.
  • For example, the imaging control unit 2a captures an image of the driver's face with the imaging unit 1a in a state where the driver's face is illuminated by turning on the illumination units 1c and 1d, thereby acquiring two-dimensional image data of the driver's face (first face image). The imaging control unit 2a also operates the projection unit 1b to project a striped or lattice optical pattern onto the driver's face and, in this state, causes the imaging unit 1a to capture an image of the driver's face, thereby acquiring pattern projection image data of the driver's face. Since the pattern projection image data of the face is data for obtaining three-dimensional position information of the face, it is hereinafter simply referred to as three-dimensional image data (second face image).
  • Under the control of the imaging control unit 2a, the image processing unit 2b performs a process of associating the two-dimensional image data and the three-dimensional image data obtained by the imaging unit 1a at identical pixel positions. Of the two-dimensional image data and three-dimensional image data associated by pixel position, the driver's eye is detected from the two-dimensional image data by, for example, pattern matching, and the gaze direction is calculated by detecting the position of the pupil with respect to the bright spot (camera position). However, if the depth position of the eye is unknown, an accurate gaze direction cannot be identified. The depth position of the eye is therefore identified using the three-dimensional image, which enables accurate gaze determination.
  • the three-dimensional position is defined by the optical axis connecting the imaging unit 1a and the bright spot of the eyeball as the Z axis, and the two-dimensional plane orthogonal to the optical axis is represented by the X and Y axes. It is expressed as position coordinates in a three-dimensional space defined by axes.
  • The image processing unit 2b also selects, from the two-dimensional image data, the image area of the eye whose position corresponds to the eye whose three-dimensional position was detected, and detects the two-dimensional position of the pupil of that eye from the image area.
  • This two-dimensional position is represented as a position in a two-dimensional plane represented by the X and Y axes.
  • the three-dimensional position of the detected bright spot of the eyeball and the two-dimensional position of the detected pupil are used, for example, to detect the driver's gaze direction.
  • As described above, two-dimensional image data and three-dimensional image data of the driver's face can be obtained by the single imaging unit 1a provided in the camera head 1. The two-dimensional image data and the three-dimensional image data can therefore be associated accurately by a simple process, without performing processing such as calibration. Compared with the case where the two-dimensional image data and the three-dimensional image data are acquired by separate imaging units, the apparatus can also be made smaller and less expensive. This is a very advantageous effect when the camera head 1 is installed in a place where the installable position is limited, such as a vehicle cockpit.
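The following is a minimal structural sketch of the application example above, illustrating how a single imaging unit can supply both face images so that the association step is a simple pixel-wise pairing. The CameraHead, ImagingControlUnit, and ImageProcessingUnit names, the frame sizes, and the Python/NumPy types are illustrative assumptions and are not taken from the patent.

```python
# Minimal structural sketch (assumed names and types) of the FIG. 1 application example:
# one imaging unit supplies both face images, so the association step needs no calibration.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraHead:
    """Stands in for camera head 1 (imaging unit 1a, projection unit 1b, illumination 1c/1d)."""
    height: int = 480
    width: int = 640

    def capture(self, illuminated: bool, pattern_projected: bool) -> np.ndarray:
        # Placeholder for a real frame grab; returns an 8-bit grayscale image.
        return np.zeros((self.height, self.width), dtype=np.uint8)

class ImagingControlUnit:
    """Corresponds to imaging control unit 2a: drives illumination, projection, and capture."""
    def __init__(self, head: CameraHead):
        self.head = head

    def acquire(self):
        first = self.head.capture(illuminated=True, pattern_projected=False)   # 2-D face image
        second = self.head.capture(illuminated=False, pattern_projected=True)  # pattern image
        return first, second

class ImageProcessingUnit:
    """Corresponds to image processing unit 2b: pairs the two images pixel for pixel."""
    @staticmethod
    def associate(first: np.ndarray, second: np.ndarray) -> np.ndarray:
        # Both frames come from the same sensor, so pixel (y, x) in one corresponds
        # directly to pixel (y, x) in the other; no warping or calibration is required.
        assert first.shape == second.shape
        return np.stack([first, second])  # shape (2, H, W)

head = CameraHead()
va, vb = ImagingControlUnit(head).acquire()
paired = ImageProcessingUnit.associate(va, vb)
```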
  • FIG. 2 is a block diagram showing an example of the configuration of a vehicle 3 equipped with an image processing apparatus according to an embodiment of the present invention.
  • the image processing apparatus according to the present embodiment includes, for example, a camera head 10 and a face image detection processing unit 20.
  • a seating sensor 40 is installed at the driver's seat of the vehicle 3.
  • the seating sensor 40 detects the seating of the driver 4 in the driver's seat, and outputs a detection signal to the face image detection processing unit 20.
  • the display control device 50 is provided in the vehicle 3.
  • The display control device 50 controls, for example, the display of a head-up display. Based on the gaze-direction detection information of the driver 4 output from the face image detection processing unit 20, it displays, for example, a frame-shaped pattern superimposed on the image of the object that the driver 4 is visually recognizing.
  • FIG. 3 is a front view showing an example of the internal structure of the camera head 10.
  • The camera head 10 includes a camera 10a as the imaging unit, a pattern projection projector 10b as the projection unit, and two illumination units 10c and 10d, arranged in a row in a head housing 11 shaped as a horizontally long box. The camera 10a and the pattern projection projector 10b are arranged at the center, and the illumination units 10c and 10d are arranged beside them.
  • The camera 10a uses, for example, a CMOS (Complementary MOS) image sensor capable of receiving near-infrared light as an imaging device, images the face of the driver 4, and outputs the face image data. Note that another solid-state imaging device such as a CCD (Charge Coupled Device) may be used as the imaging device.
  • The pattern projection projector 10b projects, for example, a grid-like optical pattern onto the face of the driver 4 and is configured, for example, as follows.
  • FIG. 4 is a perspective view showing this configuration. A light emitting element 32 that emits, for example, near-infrared light is mounted on a projector substrate 31. Further, a cylindrical projector optical system 33, for example, is disposed on the projector substrate 31 in the optical-axis direction of the light emitting element 32.
  • the projector optical system 33 has a mask holder, a lens holder, and a projection lens.
  • the mask holder incorporates a mask for generating the grid-like optical pattern.
  • the projection lens projects the optical pattern generated by the mask onto the driver's face at a predetermined magnification.
  • The optical pattern is not limited to a lattice, and may be a striped or checkered pattern.
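As an illustration of the kind of mask image such a projector could use, the following sketch generates a lattice pattern and a striped pattern as NumPy arrays. The resolution, period, and line width are arbitrary assumptions, not values from the patent.

```python
# Generate illustrative projector mask patterns (lattice of bright lines, and stripes).
import numpy as np

def lattice_pattern(height: int = 480, width: int = 640,
                    period: int = 16, line_width: int = 2) -> np.ndarray:
    y, x = np.mgrid[0:height, 0:width]
    vertical = (x % period) < line_width     # vertical bright lines every `period` pixels
    horizontal = (y % period) < line_width   # horizontal bright lines every `period` pixels
    return (np.logical_or(vertical, horizontal) * 255).astype(np.uint8)

def stripe_pattern(height: int = 480, width: int = 640, period: int = 16) -> np.ndarray:
    x = np.arange(width)
    stripes = ((x // (period // 2)) % 2 == 0)          # alternating bright/dark stripes
    return np.tile((stripes * 255).astype(np.uint8), (height, 1))

grid_mask = lattice_pattern()
stripe_mask = stripe_pattern()
```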
  • The illumination units 10c and 10d each include, for example, a light source having two light emitting elements (11c and 12c, and 11d and 12d, respectively) and a polarization filter.
  • the light source emits near infrared light.
  • The illumination units 10c and 10d irradiate the driver's face, as illumination light, with planar near-infrared light that is emitted from the light emitting elements 11c, 12c, 11d, and 12d and passes through the polarization filters.
  • The reason for using the polarization filters is to prevent the near-infrared light from being reflected by the glasses when the driver 4 wears glasses, and thus to prevent a situation in which the eyes of the driver 4 cannot be detected.
  • The illumination units 10c and 10d are turned on and off according to lighting control signals output from the face image detection processing unit 20.
  • the face image detection processing unit 20 controls the camera head 10 to acquire two-dimensional image data and three-dimensional image data of the face of the driver 4. Then, based on the acquired two-dimensional image data and three-dimensional image data, the three-dimensional position of the eye of the driver 4 and the two-dimensional position of the pupil of the eye are detected, and further, based on these detection results. A process of detecting the line of sight of the driver 4 is performed.
  • FIG. 5 is a block diagram showing an example of the hardware configuration of the face image detection processing unit 20.
  • The face image detection processing unit 20 includes, as hardware, a hardware processor 21A such as a CPU (Central Processing Unit), to which a program memory 21B, a data memory 25, a camera head interface (camera head I/F) 23, and an external interface (external I/F) 24 are connected via a bus 22.
  • The camera head I/F 23 outputs control signals to the camera 10a, the pattern projection projector 10b, and the illumination units 10c and 10d of the camera head 10, and receives the image data output from the camera 10a of the camera head 10.
  • the external I / F 24 receives the seating detection signal output from the seating sensor 40, and outputs information representing the detection result of the gaze direction to the display control device 50.
  • Transmission of signals and data between the camera head I/F 23 and the camera head 10, and between the external I/F 24 and the seating sensor 40 and the display control device 50, is all performed via signal cables. If the vehicle is provided with an in-vehicle wired network such as a LAN (Local Area Network), or an in-vehicle wireless network adopting a low-power wireless data communication standard such as Bluetooth (registered trademark), the above signals and data may be transmitted using these networks.
  • The program memory 21B uses, as a storage medium, for example, a non-volatile memory that can be written and read as needed, such as a hard disk drive (HDD) or a solid state drive (SSD), or a non-volatile memory such as a ROM, and stores the programs necessary to execute the various control processes according to the present embodiment.
  • The data memory 25 uses, as a storage medium, for example, a combination of a non-volatile memory that can be written and read as needed, such as an HDD or an SSD, and a volatile memory such as a RAM, and is used to store face image data captured by the camera 10a and information representing detection results.
  • FIG. 6 is a block diagram showing a software configuration further added to the hardware configuration of the face image detection processing unit 20 shown in FIG.
  • A two-dimensional image storage unit 251, a three-dimensional image storage unit 252, and a detection result storage unit 253 are provided in the storage area of the data memory 25.
  • the two-dimensional image storage unit 251 and the three-dimensional image storage unit 252 are used to store two-dimensional image data and three-dimensional image data acquired by the control unit 21, respectively.
  • the detection result storage unit 253 is used to store information indicating the three-dimensional position of the eye and the two-dimensional position of the pupil of the eye, which is detected by the control unit 21.
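A minimal sketch of how these three storage areas might be represented in software is shown below; the field names and types are assumptions made for illustration only and are not specified by the patent.

```python
# Assumed in-memory layout of the data memory 25 and its three storage units.
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class DetectionResult:
    eye_position_3d: Optional[tuple] = None    # (X, Y, Z) of the corneal-reflection bright spot
    pupil_position_2d: Optional[tuple] = None  # (x, y) of the pupil in the image plane

@dataclass
class DataMemory:
    two_dimensional_image: Optional[np.ndarray] = None    # storage unit 251
    three_dimensional_image: Optional[np.ndarray] = None  # storage unit 252
    detection_result: DetectionResult = field(default_factory=DetectionResult)  # unit 253

memory = DataMemory()
```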
  • The control unit 21 includes the hardware processor 21A and the program memory 21B, and includes, as processing functions implemented in software, a camera head control unit 211, a two-dimensional image acquisition processing unit 212, a three-dimensional image acquisition processing unit 213, an image processing unit 214, and a gaze detection unit 215.
  • the camera head control unit 211, the two-dimensional image acquisition processing unit 212, and the three-dimensional image acquisition processing unit 213 constitute an imaging control unit.
  • The processing units and detection units 211 to 215 of the control unit are all realized by causing the hardware processor 21A to execute the programs stored in the program memory 21B.
  • The camera head control unit 211 is triggered by the seating detection signal output from the seating sensor 40, and controls the illumination units 10c and 10d, the pattern projection projector 10b, and the camera 10a of the camera head 10 in a predetermined procedure at a predetermined detection cycle.
  • An example of the above detection cycle and procedure will be described in detail later.
  • For each detection cycle, the two-dimensional image acquisition processing unit 212 performs a process of taking in, through the camera head I/F 23, the two-dimensional image data of the face of the driver 4 captured by the camera 10a under the control of the camera head control unit 211, and storing it in the two-dimensional image storage unit 251 of the data memory 25.
  • the two-dimensional image data is face image data of the driver 4 captured in a state where the optical pattern is not projected by the pattern projection projector 10b. An example of the acquisition processing method of this two-dimensional image data will be described in detail later.
  • For each detection cycle, the three-dimensional image acquisition processing unit 213 performs a process of taking in, through the camera head I/F 23, the three-dimensional image data of the face of the driver 4 captured by the camera 10a under the control of the camera head control unit 211, and storing it in the three-dimensional image storage unit 252 of the data memory 25.
  • the three-dimensional image data is image data of the face of the driver 4 captured in a state where the optical pattern is projected by the pattern projection projector 10b. An example of this three-dimensional image data acquisition processing method will also be described in detail later.
  • The image processing unit 214 has, for example, the following processing functions. (1) A process of reading out, for each detection cycle, the two-dimensional image data and the three-dimensional image data acquired in the same detection cycle from the two-dimensional image storage unit 251 and the three-dimensional image storage unit 252, and associating the two-dimensional image data and the three-dimensional image data at identical pixel positions.
  • (2) A process of recognizing the eyes of the driver 4 from the three-dimensional image data after the association processing, detecting, from the recognized eye image, the three-dimensional position of the bright spot produced by corneal reflection of the eyeball, and storing information representing the detection result in the detection result storage unit 253.
  • The three-dimensional position of the bright spot of the eyeball is expressed as a position in a three-dimensional space in which the optical axis connecting the camera 10a and the bright spot is defined as the Z axis and the two-dimensional plane orthogonal to the optical axis is defined by the X and Y axes.
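One possible way to express the bright spot in this X/Y/Z frame is to back-project its pixel coordinates through a pinhole camera model once the depth along the Z axis is known. The sketch below assumes calibrated focal lengths and a principal point; these values and the function name are illustrative and are not taken from the patent.

```python
# Back-project a bright-spot pixel position into the camera-centred X/Y/Z frame (assumed model).
import numpy as np

def bright_spot_3d(u: float, v: float, depth_z: float,
                   fx: float = 800.0, fy: float = 800.0,
                   cx: float = 320.0, cy: float = 240.0) -> np.ndarray:
    """(u, v): bright-spot pixel coordinates; depth_z: distance along the optical (Z) axis."""
    x = (u - cx) * depth_z / fx   # lateral offset along the X axis
    y = (v - cy) * depth_z / fy   # lateral offset along the Y axis
    return np.array([x, y, depth_z])

p = bright_spot_3d(350.0, 250.0, 600.0)  # e.g. depth expressed in millimetres
```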
  • the line-of-sight detection unit 215 reads out the three-dimensional position of the bright spot of the eye to be detected and the two-dimensional position of the pupil of the eye from the detection result storage unit 253 for each detection cycle.
  • the direction of the line of sight of the driver 4 is detected from the three-dimensional position of the bright spot of the eye and the two-dimensional position of the pupil of the eye.
  • the line-of-sight detection unit 215 also outputs information representing the detected direction of the line of sight from the external I / F 24 to the display control device 50.
  • FIG. 7 is a flowchart showing an example of the processing procedure and processing contents of the face image detection processing unit 20.
  • In step S11, the control unit 21 of the face image detection processing unit 20 monitors, under the control of the camera head control unit 211, the input of the seating detection signal output from the seating sensor 40. In this state, when the driver sits in the driver's seat, a seating detection signal is output from the seating sensor 40 and is input to the control unit 21 via the external I/F 24. When the input of the seating detection signal is detected in step S11, the control unit 21 first executes control for acquiring two-dimensional image data of the driver's face, in the following procedure.
  • That is, an illumination lighting signal is first output from the camera head I/F 23 to the camera head 10.
  • the lighting units 10c and 10d are turned on, whereby the face of the driver 4 is illuminated.
  • This illumination light is near infrared light which has passed through the polarization filter. For this reason, the driver 4 does not feel dazzling and there is no hindrance to driving.
  • In addition, even if the driver 4 wears glasses, the near-infrared light is not reflected by the glasses, so the problem that the eyes of the driver 4 cannot be detected does not occur.
  • the camera head control unit 211 outputs an imaging control signal to the camera head 10 from the camera head I / F 23 in step S13.
  • When the camera head 10 receives the imaging control signal, the camera 10a images the face of the driver 4 and outputs the two-dimensional image data of the face of the driver 4 obtained by this imaging to the face image detection processing unit 20.
  • the control unit 21 of the face image detection processing unit 20 takes in the two-dimensional image data Va1 through the camera head I / F 23 in step S14 under the control of the two-dimensional image acquisition processing unit 212.
  • the two-dimensional image storage unit 251 of the data memory 25 stores the two-dimensional image data Va1.
  • The two-dimensional image data Va1 is, for example, a face image including the eyeballs, as shown in FIG. 8.
  • the control unit 21 outputs an illumination off signal via the camera head I / F 23 in step S15.
  • the camera head 10 turns off the illumination units 10c and 10d when it receives the above-mentioned illumination off signal.
  • the control unit 21 outputs an imaging control signal from the camera head I / F 23 to the camera head 10 in step S16 under the control of the camera head control unit 211.
  • When the camera head 10 receives the imaging control signal, the camera 10a images the face of the driver 4 and outputs the two-dimensional image data of the face of the driver 4 obtained by this imaging operation to the face image detection processing unit 20.
  • Under the control of the two-dimensional image acquisition processing unit 212, the control unit 21 of the face image detection processing unit 20 takes in this two-dimensional image data Va2 through the camera head I/F 23 in step S17, and stores it in the two-dimensional image storage unit 251 of the data memory 25. Then, in step S18, the two-dimensional image acquisition processing unit 212 generates two-dimensional difference image data Va from the illuminated image data Va1 and the non-illuminated image data Va2 and stores it in the two-dimensional image storage unit 251, thereby removing the influence of ambient light from the first face image.
  • Next, in step S19, under the control of the camera head control unit 211, the control unit 21 outputs a pattern projection signal from the camera head I/F 23 to the camera head 10.
  • As a result, the pattern projection projector 10b lights up, and a lattice optical pattern is projected onto the face of the driver 4.
  • the camera head control unit 211 outputs an imaging control signal to the camera head 10 from the camera head I / F 23 in step S20.
  • When the camera head 10 receives the imaging control signal, the camera 10a images the face of the driver 4 onto which the optical pattern is projected, and outputs the pattern projection image data of the face of the driver 4 obtained by this imaging to the face image detection processing unit 20.
  • In step S21, under the control of the three-dimensional image acquisition processing unit 213, the control unit 21 of the face image detection processing unit 20 takes in, via the camera head I/F 23, the pattern projection image data captured under pattern projection, that is, the three-dimensional image data Vb1, and stores it in the three-dimensional image storage unit 252 of the data memory 25. At this time, the pattern projection projector 10b projects pattern light that does not pass through a polarization filter. Therefore, the three-dimensional image data Vb1 is an image in which the bright spot 42 due to corneal reflection appears in the eye, as illustrated in FIG. 9.
  • Next, in step S22, the three-dimensional image acquisition processing unit 213 generates three-dimensional difference image data Vb from the three-dimensional image data Vb1 stored in the three-dimensional image storage unit 252 and the image data Va2 captured in the non-illuminated state and stored in the two-dimensional image storage unit 251.
  • the purpose of obtaining a difference image for this three-dimensional image data is to remove from the three-dimensional image data the influence of ambient light such as the light of headlights of oncoming vehicles and various meters, as in the case of the two-dimensional image described above.
  • the three-dimensional image acquisition processing unit 213 stores the three-dimensional difference image data Vb in the three-dimensional image storage unit 252.
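The acquisition and difference steps described above (illuminated image Va1, non-illuminated image Va2, pattern image Vb1, difference images Va and Vb) can be condensed into the following sketch. It assumes 8-bit grayscale frames as NumPy arrays and uses OpenCV's saturating subtraction; the patent itself does not prescribe a particular implementation, so the function names and wiring are illustrative.

```python
# Condensed sketch of the capture/difference sequence under stated assumptions.
import cv2
import numpy as np

def acquire_difference_images(capture, set_illumination, set_pattern):
    """capture() -> np.uint8 frame; set_illumination/set_pattern(bool) drive the camera head."""
    set_pattern(False)

    set_illumination(True)
    va1 = capture()              # illuminated 2-D face image
    set_illumination(False)
    va2 = capture()              # non-illuminated image
    va = cv2.subtract(va1, va2)  # 2-D difference image Va: ambient light removed

    set_pattern(True)
    vb1 = capture()              # pattern projection image
    set_pattern(False)
    vb = cv2.subtract(vb1, va2)  # 3-D difference image Vb against the same dark frame

    return va, vb

# Dummy wiring so the sketch runs stand-alone.
frame = np.zeros((480, 640), dtype=np.uint8)
va, vb = acquire_difference_images(lambda: frame.copy(), lambda on: None, lambda on: None)
```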
  • Next, in step S23, under the control of the image processing unit 214, the control unit 21 reads out the two-dimensional difference image data Va stored in the two-dimensional image storage unit 251 and the three-dimensional difference image data Vb stored in the three-dimensional image storage unit 252, and associates the read two-dimensional difference image data Va and three-dimensional difference image data Vb with each other at identical pixel positions. Since the two-dimensional difference image data Va and the three-dimensional difference image data Vb are both generated from images obtained by one and the same camera 10a, this association can be performed simply and accurately by superimposing the frames.
  • Next, the control unit 21 detects the two-dimensional position of the pupil of the driver 4, for example, in the following manner. That is, the two-dimensional difference image data Va is read out from the two-dimensional image storage unit 251, and the eyes of the driver 4 are recognized from the two-dimensional difference image data Va.
  • the recognition of the eyes can be realized, for example, by using an existing image recognition technology such as pattern matching.
  • the pupil 41 is recognized from the recognized eye image as illustrated in FIG. 8, and the two-dimensional position of the pupil 41 is detected.
  • the two-dimensional position of the pupil 41 is represented by the XY coordinates of the two-dimensional plane when the frame of the two-dimensional image is defined as a two-dimensional plane (X-Y coordinate plane).
  • the image processing unit 214 causes the detection result storage unit 253 to store information indicating the detected two-dimensional position of the pupil 41.
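The description above only states that an existing recognition technique such as pattern matching may be used to find the eye and pupil. As one simple stand-in, the following sketch assumes the eye region has already been located and estimates the pupil's two-dimensional position as the centroid of the dark region inside the eye ROI; the thresholding approach and parameter choices are assumptions, not the patent's method.

```python
# One possible pupil-centre estimate inside an already-located eye region (assumed approach).
import cv2
import numpy as np

def pupil_position_2d(eye_roi: np.ndarray, roi_origin=(0, 0)):
    """eye_roi: 8-bit grayscale crop around one eye; returns (x, y) in full-image coordinates."""
    blurred = cv2.GaussianBlur(eye_roi, (5, 5), 0)
    # Otsu threshold, inverted so the dark pupil becomes the foreground region.
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:
        return None
    cx = moments["m10"] / moments["m00"] + roi_origin[0]
    cy = moments["m01"] / moments["m00"] + roi_origin[1]
    return (cx, cy)

pos = pupil_position_2d(np.full((40, 60), 128, dtype=np.uint8), roi_origin=(300, 200))
```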
  • Next, in step S25, the control unit 21 detects the three-dimensional position of the bright spot of the eye (eyeball) of the driver 4.
  • the image processing unit 214 first reads the three-dimensional difference image data Vb from the three-dimensional image storage unit 252, and recognizes the eyes of the driver 4 from the three-dimensional difference image data Vb.
  • the recognition of the eyes here can also be realized, for example, by applying an existing image recognition technology such as pattern matching.
  • the image processing unit 214 detects the three-dimensional position of the bright spot of the eye, for example, using triangulation as follows.
  • FIG. 10 is a diagram for explaining the outline of the detection process.
  • the image processing unit 214 first detects a bright spot due to corneal reflection of the eyeball from the recognized eye image, and defines the optical axis La between the camera 10a and the bright spot as the Z axis. At the same time, a two-dimensional direction orthogonal to the optical axis La is defined by the XY axis. That is, the position of the eye of the driver 4 can be defined by the three-dimensional coordinates represented by the X, Y, Z axes.
  • The pattern projection projector 10b is disposed at a fixed distance from the camera 10a in the direction orthogonal to the optical axis. Therefore, there is a constant angle between the optical axis La connecting the camera 10a and the bright spot due to corneal reflection of the eyeball and the optical axis Lb connecting the pattern projection projector 10b and that bright spot.
  • Accordingly, when the distance between the camera 10a and the face of the driver 4 changes, the two-dimensional position (X-Y coordinate position) of the optical pattern in the three-dimensional difference image data Vb changes according to the amount of that change.
  • For example, when the position of the face of the driver 4 changes from P1 to P2, the position of the optical pattern R in the three-dimensional difference image data Vb changes from Q1 to Q2, as shown in FIG. 10.
  • The image processing unit 214 detects the amount of change Q1-Q2 of the position of the optical pattern R in the three-dimensional difference image data Vb, and obtains by triangulation the position of the bright spot of the eyeball in the depth (Z-axis) direction from this amount of change.
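The passage above states that the depth is obtained by triangulation from the shift of the projected pattern but does not give a formula. The sketch below uses the standard projector-camera relation Z = f * b / d, where d is the pattern shift (disparity) in pixels relative to its reference position, f is the focal length in pixels, and b is the camera-projector baseline; all numeric values are illustrative assumptions.

```python
# Standard structured-light depth relation, used here as a stand-in for the patent's triangulation.
def depth_from_pattern_shift(shift_px: float, baseline_mm: float = 80.0,
                             focal_px: float = 800.0) -> float:
    """Return the depth Z (same unit as the baseline) for a measured pattern shift in pixels."""
    if shift_px <= 0:
        raise ValueError("pattern shift must be positive")
    return focal_px * baseline_mm / shift_px

z_near = depth_from_pattern_shift(100.0)  # larger shift -> face closer to the camera
z_far = depth_from_pattern_shift(80.0)    # smaller shift -> face farther away
```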
  • In step S26, the image processing unit 214 stores, in the detection result storage unit 253, the information representing the detected three-dimensional position of the bright spot due to corneal reflection of the eyeball in association with the information representing the two-dimensional position of the pupil.
  • Next, in step S27, under the control of the gaze detection unit 215, the control unit 21 executes a process of detecting the gaze direction of the driver 4.
  • FIG. 11 is a schematic view for explaining an example of the process of detecting the sight line direction.
  • To detect the gaze direction, the amount of positional deviation of the pupil in the two-dimensional (X, Y) directions with respect to the position of the bright spot due to corneal reflection of the eyeball, and the position of that bright spot in the depth (Z) direction with respect to the camera 10a, are required.
  • When the face of the driver 4 is at a certain position, the gaze direction W1 at this time is calculated from the two-dimensional (X, Y) displacement of the pupil with respect to the position of the bright spot due to corneal reflection of the eyeball and the distance D1 from the camera 10a to that bright spot position.
  • Similarly, when the face of the driver 4 is at a different position, the gaze direction W2 at this time is calculated from the two-dimensional (X, Y) displacement of the pupil with respect to the position of the bright spot due to corneal reflection of the eyeball and the distance D2 from the camera 10a to that bright spot position.
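A highly simplified sketch of this calculation is given below: the pupil's pixel offset from the corneal-reflection bright spot is converted to a physical offset using the depth D, then mapped to a gaze angle with a calibrated gain. The gain and focal-length values are assumptions for illustration; the patent does not specify the exact formula, only that the two-dimensional offset and the depth D are both required.

```python
# Simplified pupil-offset-to-gaze mapping under assumed calibration constants.
def gaze_direction(pupil_xy, bright_spot_xy, depth_d_mm, focal_px=800.0, deg_per_mm=5.0):
    """Return (horizontal, vertical) gaze angles in degrees relative to the camera axis."""
    dx_px = pupil_xy[0] - bright_spot_xy[0]
    dy_px = pupil_xy[1] - bright_spot_xy[1]
    # The same pixel offset corresponds to a larger physical offset when the face is farther
    # from the camera, which is why the depth D must be taken into account.
    dx_mm = dx_px * depth_d_mm / focal_px
    dy_mm = dy_px * depth_d_mm / focal_px
    # Linear mapping from physical pupil offset to eye rotation (deg_per_mm is a calibrated gain).
    return (deg_per_mm * dx_mm, deg_per_mm * dy_mm)

w1 = gaze_direction((352.0, 242.0), (350.0, 240.0), depth_d_mm=600.0)  # face at distance D1
w2 = gaze_direction((352.0, 242.0), (350.0, 240.0), depth_d_mm=900.0)  # same offset, larger D2
```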
  • the gaze detection unit 215 outputs information representing the detection result of the gaze direction from the external I / F 24 to the display control device 50.
  • the display control device 50 specifies the target object visually recognized by the driver 4 based on the detection information of the line-of-sight direction of the driver 4 output from the face image detection processing unit 20. Then, for example, in a head-up display, a frame-shaped pattern is superimposed and displayed on the captured image in front of the vehicle.
  • For example, it can be determined, from the detected gaze direction W1 and an image obtained by imaging the area in front of the vehicle, that the driver 4 is looking at an object K1, and a frame-shaped pattern is displayed superimposed on the image of the object K1. Likewise, it can be determined, from the detected gaze direction W2 and the image obtained by imaging the area in front of the vehicle, that the driver 4 is looking at an object K2, and a frame-shaped pattern is displayed superimposed on the image of the object K2.
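As an illustration of this superimposition, the sketch below picks the detected object whose bounding box contains the gaze point and draws a frame-shaped pattern around it with OpenCV. The object list, labels, and coordinates are assumed example data, not values from the patent.

```python
# Highlight the gazed object in the front-view image (assumed data and layout).
import cv2
import numpy as np

def highlight_gazed_object(frame, gaze_point, objects):
    """objects: list of (label, (x, y, w, h)); gaze_point: (x, y) in image coordinates."""
    out = frame.copy()
    gx, gy = gaze_point
    for label, (x, y, w, h) in objects:
        if x <= gx <= x + w and y <= gy <= y + h:
            cv2.rectangle(out, (x, y), (x + w, y + h), color=255, thickness=2)  # frame pattern
            break
    return out

front_image = np.zeros((480, 640), dtype=np.uint8)
shown = highlight_gazed_object(front_image, (300, 220),
                               [("K1", (260, 180, 120, 90)), ("K2", (450, 200, 80, 60))])
```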
  • the series of processes (1) to (5) described above are repeatedly executed in a preset detection cycle.
  • the detection cycle is set such that, for example, the above processes (1) to (5) are performed 30 times per second.
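A minimal sketch of running the detection repeatedly at roughly that rate is shown below; the process_one_cycle placeholder stands in for the sequence of steps described above and is an assumption for illustration.

```python
# Repeat the detection at approximately 30 cycles per second (illustrative loop only).
import time

def run_detection_loop(process_one_cycle, rate_hz: float = 30.0, cycles: int = 90):
    period = 1.0 / rate_hz
    for _ in range(cycles):
        started = time.monotonic()
        process_one_cycle()
        # Sleep away whatever is left of the cycle so the loop keeps roughly the target rate.
        remaining = period - (time.monotonic() - started)
        if remaining > 0:
            time.sleep(remaining)

run_detection_loop(lambda: None, cycles=3)  # dummy cycle body so the sketch runs stand-alone
```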
  • As described above, in one embodiment, the camera head 10 is provided with a single camera 10a, and this single camera 10a acquires both two-dimensional image data of the face of the driver 4 and three-dimensional image data of the face of the driver 4 captured in a state where a grid-like optical pattern is projected onto it.
  • The acquired two-dimensional image data and three-dimensional image data are then associated with each other at identical pixel positions, whereby, for the same eye of the driver 4, the two-dimensional position of the pupil is detected from the two-dimensional image data and the three-dimensional position of the eye is detected from the three-dimensional image data.
  • the two-dimensional image data and the three-dimensional image data of the face of the driver 4 can be accurately correlated by a simple process without performing a process such as calibration.
  • the camera head can be miniaturized and the cost can be reduced as compared with the case where two-dimensional image data and three-dimensional image data are acquired by separate imaging units. This is particularly effective when the location where the camera head can be installed is limited, such as a cockpit of a vehicle.
  • Furthermore, when the two-dimensional image data is acquired, a difference image is generated between the image data captured in a state of being illuminated by the illumination units 10c and 10d and the image data captured in a state of not being illuminated.
  • When the three-dimensional image data is acquired, a difference image is generated between the pattern projection image and the image captured without illumination. The influence of ambient light can therefore be reduced, and high-quality two-dimensional image data and three-dimensional image data without variations in density can be obtained.
  • Since the illumination units 10c and 10d and the pattern projection projector 10b emit near-infrared light, the anti-glare property for the driver 4 is maintained, and the gaze direction can also be detected for a driver wearing sunglasses.
  • the gaze direction of the driver 4 is detected based on the two-dimensional position of the pupil detected from the two-dimensional image data and the three-dimensional position of the eye detected from the three-dimensional image data. Therefore, even if the position of the face of the driver 4 in the depth direction with respect to the camera 10a changes, it is possible to accurately detect the sight line direction of the driver 4.
  • In the above embodiment, the camera head 10 in which the camera 10a, the pattern projection projector 10b, and the illumination units 10c and 10d are arranged in a row was described as an example.
  • However, the camera head 10 may instead be configured such that the pattern projection projector 10b and the illumination units 10c and 10d are arranged in a circle or an arc surrounding the single camera 10a.
  • Also, although the gaze-direction detection result is output to the display control device 50 in the above embodiment, the present invention is not limited to this; for example, information representing the gaze-direction detection result may be input to an aptitude determination device, and the aptitude of the driver 4 may be determined in the aptitude determination device based on the gaze-direction detection result.
  • Furthermore, information representing the gaze-direction detection result may be input to, for example, an automatic driving control device, and the automatic driving control device may determine, based on the gaze-direction detection result, whether or not it is possible to switch, for example, from the automatic driving mode to the manual driving mode.
  • An image processing apparatus comprising a hardware processor (21A) and a memory (21B), wherein the hardware processor (21A) is configured to: project an optical pattern onto the face of a driver (4) (211); image the face of the driver (4) (211); acquire a first face image captured in a state in which the optical pattern is not projected onto the face of the driver (4) and a second face image captured in a state in which the optical pattern is projected (212, 213); and associate the first face image with the second face image (214).
  • An image processing method comprising: a step in which the hardware processor (21A) controls the imaging unit to image the face of the driver (4) and acquires a first face image captured by the imaging unit (S13 to S18); a step in which the hardware processor (21A) controls the projection unit to project an optical pattern onto the face of the driver (4) (S19); a step in which the hardware processor (21A) controls the imaging unit to image the face of the driver (4) in a state where the optical pattern is projected onto the face of the driver (4) and acquires a second face image captured by the imaging unit (10a) (S20 to S22); and a step in which the hardware processor (21A) associates the first face image with the second face image (S23).
  • An image processing method comprising: a step in which the hardware processor (21A) controls the imaging unit (10a) to image the face of the driver (4) in a state where the optical pattern is not projected and acquires a first face image (S13 to S18); and a step in which the hardware processor (21A) associates the first face image with the second face image (S23).

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

In order to detect a two-dimensional image and a three-dimensional image of a driver's face with a compact configuration and without requiring complicated image processing, one aspect according to the present invention provides a camera head 10 with one camera 10a, and acquires, by means of the one camera 10a, two-dimensional image data of the face of the driver 4 and three-dimensional image data of the face of the driver 4 captured in a state in which a grid-shaped optical pattern is projected onto it. In addition, the acquired two-dimensional image data and three-dimensional image data are associated with each other at identical pixel positions, and accordingly, for the same eyes of the driver 4, the two-dimensional positions of the pupils are detected from the two-dimensional image data and the three-dimensional positions of the eyes are detected from the three-dimensional image data.

Description

Image processing apparatus and image processing method
Embodiments of the present invention relate to an image processing apparatus and an image processing method.
Conventionally, as methods of detecting the three-dimensional shape of an object, a method using a stereo camera and a method combining a two-dimensional image camera with a TOF (Time of Flight) range-image camera are known. For example, JP 2012-220479 A describes a technique in which an object such as a container is imaged almost simultaneously by an infrared camera and a range-image camera, and the contour or boundary of the object is detected based on the two-dimensional infrared image of the object obtained by the infrared camera and the range image of the object obtained by the range-image camera.
Meanwhile, recently, for vehicle drivers, technology is being developed that images the driver's face before driving and authenticates the driver based on the captured image, or that images the driver's face during driving and detects the line of sight, the direction of the face, and the like based on the captured image. To realize such technology, it is necessary to acquire two-dimensional and three-dimensional images of the driver's face.
However, the conventional three-dimensional image detection method uses two types of cameras, an infrared camera and a distance image camera, as described in, for example, Patent Document 1. An increase in the size of the device therefore cannot be avoided, making it difficult to install the device in a relatively narrow space such as a vehicle cockpit, and an increase in the cost of the device is likewise unavoidable. Furthermore, in order to obtain a three-dimensional image of the driver's face for face authentication or gaze-direction detection, the image data obtained by the plurality of cameras must be accurately aligned pixel by pixel, so that troublesome processing such as calibration is indispensable.
The present invention has been made in view of the above circumstances, and in one aspect aims to provide a technique capable of detecting a two-dimensional image and a three-dimensional image of a driver's face with a compact configuration and without the need for complicated image processing.
To solve the above problem, a first aspect of the image processing device according to the present invention includes a projection unit that projects an optical pattern onto a driver's face, an imaging unit that images the driver's face, an imaging control unit, and an image processing unit. The imaging control unit acquires a first face image captured by the imaging unit in a state where the optical pattern is not projected onto the driver's face by the projection unit, and a second face image captured by the imaging unit in a state where the optical pattern is projected. The image processing unit then associates the acquired first face image with the second face image.
According to the first aspect of the image processing device of the present invention, the first face image and the second face image are acquired by a single imaging unit. The size and cost of the apparatus can therefore be reduced compared with the case where the first and second face images are acquired by separate imaging units. This has the particular advantage that the device can be installed even in a place, such as a vehicle cockpit, where it is difficult to secure sufficient space for capturing face images. In addition, since the first face image and the second face image are obtained by a single imaging unit, the process of associating these face images can be performed simply and with high accuracy, without the need for troublesome calibration processing or the like.
A second aspect of the image processing device according to the present invention, in the first aspect, further includes an illumination unit that illuminates the driver's face. The imaging control unit acquires an image captured by the imaging unit in a state where the face is illuminated by the illumination unit and an image captured by the imaging unit in a state where the face is not illuminated, and uses the difference image of the acquired images as the first face image.
According to the second aspect, it is possible to reduce the influence of ambient light and obtain a high-quality first face image without variations in density.
In a third aspect of the image processing device according to the present invention, in the second aspect, the illumination unit includes a light source that emits near-infrared light and a polarization filter that polarizes the near-infrared light emitted from the light source, and is configured to illuminate the face with the near-infrared light that has passed through the polarization filter.
According to the third aspect, since the face is illuminated with near-infrared light, the anti-glare property for the driver is maintained, and a face image can also be acquired for a driver wearing sunglasses. In addition, by illuminating through the polarization filter, even if the driver wears glasses, the near-infrared light is not reflected by the glasses and a good first face image can be obtained.
In a fourth aspect of the image processing device according to the present invention, in any one of the first to third aspects, the imaging control unit acquires an image captured by the imaging unit in a state where the optical pattern is projected onto the driver's face by the projection unit and an image captured by the imaging unit in a state where the optical pattern is not projected, and uses the difference image of the acquired images as the second face image.
According to the fourth aspect, the influence of ambient light can be reduced and the image component produced by the optical pattern can be extracted with high accuracy, so a highly accurate three-dimensional image can be obtained as the second face image.
According to a fifth aspect of the image processing device of the present invention, in any one of the first to fourth aspects, the image processing unit comprises: a processing unit that associates the first face image and the second face image with each other at identical pixel positions; a first detection unit that, based on the second face image, detects the three-dimensional position of an eye of the face in a three-dimensional space defined by an optical axis connecting the imaging unit and the eye and a two-dimensional plane orthogonal to the optical axis; and a second detection unit that, based on the first face image, detects the two-dimensional position in the two-dimensional plane of the pupil of the eye located at the same position as the eye whose three-dimensional position was detected.
According to the fifth aspect, the first face image and the second face image, which are obtained separately, are associated with each other at identical pixel positions, so that the three-dimensional position of an eye and the two-dimensional position of its pupil can each be detected for the same eye.
According to a sixth aspect of the image processing device of the present invention, in the fifth aspect, the device further comprises a line-of-sight detection unit, and the line-of-sight detection unit detects the line of sight of the driver based on the detected three-dimensional position of the eye and the detected two-dimensional position of the pupil, which are associated by the image processing unit.
According to the sixth aspect, when the direction of the line of sight is calculated from the position of the pupil relative to the position of the driver's eye, the position of the eye in the depth direction with respect to the camera is also taken into account, so that the direction of the driver's line of sight can be detected accurately.
In order to solve the above problems, according to a first aspect of the image processing method of the present invention, an image processing apparatus performs: a step of capturing an image of a driver's face with an imaging unit to acquire a first face image; a step of projecting an optical pattern onto the face of the driver; a step of capturing an image of the driver's face with the imaging unit while the optical pattern is projected onto the face, to acquire a second face image; and a step of associating the first face image with the second face image.
According to the first aspect of the image processing method of the present invention, the first face image and the second face image can be acquired by a single imaging unit. Therefore, the size and cost of the apparatus can be reduced compared with the case where the first face image and the second face image are acquired by separate imaging units. In addition, since the first face image and the second face image are obtained by a single imaging unit, the process of associating these images can be performed simply and with high accuracy, without requiring troublesome calibration processing or the like.
According to a second aspect of the image processing method of the present invention, in the first aspect of the image processing method, the image processing apparatus performs: a step of projecting an optical pattern onto the face of the driver; a step of capturing an image of the driver's face with an imaging unit while the optical pattern is projected onto the face, to acquire a second face image; a step of capturing an image of the driver's face with the imaging unit while the optical pattern is not projected, to acquire a first face image; and a step of associating the first face image with the second face image.
According to the second aspect of the image processing method of the present invention, as in the first aspect, the first face image and the second face image can be acquired by a single imaging unit. This makes it possible to reduce the size and cost of the apparatus, and the first face image and the second face image can be associated with high accuracy by simple processing, without performing calibration or similar processing.
That is, according to each aspect of the present invention, it is possible to provide a technique capable of detecting a two-dimensional image and a three-dimensional image of a driver's face with a compact configuration and without requiring complicated image processing.
FIG. 1 is a block diagram for explaining an application example of an image processing apparatus according to the present invention.
FIG. 2 is a schematic block diagram of a vehicle equipped with an image processing apparatus according to an embodiment of the present invention.
FIG. 3 is a view showing the internal structure of a camera head of the image processing apparatus according to the embodiment of the present invention.
FIG. 4 is a perspective view showing the structure of the pattern projection projector of the camera head shown in FIG. 3.
FIG. 5 is a block diagram showing the hardware configuration of a face image detection processing unit of the image processing apparatus according to the embodiment of the present invention.
FIG. 6 is a block diagram showing the hardware configuration of the camera head and the face image detection processing unit of the image processing apparatus according to the embodiment of the present invention, with a software configuration added.
FIG. 7 is a flowchart showing the control procedure and control contents of the face image detection processing unit shown in FIG. 6.
FIG. 8 is a view showing an example of two-dimensional image data captured by the image processing apparatus shown in FIG. 6.
FIG. 9 is a view showing an example of three-dimensional image data captured by the image processing apparatus shown in FIG. 6.
FIG. 10 is a diagram used to explain the principle of three-dimensional measurement of the eye position.
FIG. 11 is a diagram used to explain the principle of line-of-sight detection.
Hereinafter, embodiments of the present invention will be described with reference to the drawings.
[Application Example]
First, an application example of an image processing apparatus according to an embodiment of the present invention will be described.
FIG. 1 is a block diagram for explaining an application example of an image processing apparatus according to the present invention. The image processing apparatus includes a camera head 1 and a face image detection processing unit 2.
The camera head 1 includes one imaging unit 1a, one projection unit 1b, and two illumination units 1c and 1d. The imaging unit 1a consists of an image sensor using a solid-state imaging device; it captures an image of the driver's face and outputs the face image data. The projection unit 1b consists of, for example, a pattern projection projector, and projects, for example, a striped or lattice-shaped optical pattern onto the face of the driver. The illumination units 1c and 1d use solid-state light emitting elements such as LEDs (Light Emitting Diodes) as light sources and illuminate the driver's face.
The face image detection processing unit 2 includes an imaging control unit 2a and an image processing unit 2b. The imaging control unit 2a controls the imaging unit 1a, the projection unit 1b, and the illumination units 1c and 1d of the camera head 1 at predetermined timings in accordance with an imaging control program stored in a program memory (not shown), thereby acquiring two-dimensional image data and pattern projection image data of the driver's face.
For example, the imaging control unit 2a causes the imaging unit 1a to capture the driver's face while the illumination units 1c and 1d are turned on to illuminate the face, thereby acquiring two-dimensional image data of the driver's face (a first face image). The imaging control unit 2a also operates the projection unit 1b to project a striped or lattice-shaped optical pattern onto the driver's face and, in this state, causes the imaging unit 1a to capture the driver's face, thereby acquiring pattern projection image data of the driver's face. Since the pattern projection image data of the face is data for obtaining three-dimensional position information of the face, it is hereinafter simply referred to as three-dimensional image data (a second face image).
Under the control of the imaging control unit 2a, the image processing unit 2b performs processing for associating the two-dimensional image data and the three-dimensional image data obtained by the imaging unit 1a with each other at identical pixel positions. Of the associated two-dimensional and three-dimensional image data, the driver's eye is detected from the two-dimensional image data by, for example, pattern matching, and the direction of the line of sight is calculated by detecting the position of the pupil relative to the bright spot (camera position). However, if the depth position of the eye is unknown, the gaze position cannot be identified accurately, so the depth position of the eye is determined using the three-dimensional image, which enables accurate identification of the gaze position. The three-dimensional position is expressed as position coordinates in a three-dimensional space defined by X, Y, and Z axes, where the optical axis connecting the imaging unit 1a and the bright spot of the eyeball is the Z axis and the two-dimensional plane orthogonal to this optical axis is represented by the X and Y axes.
The image processing unit 2b also selects, from the two-dimensional image data, the image region of the eye whose position corresponds to the eye that was the target of the three-dimensional position detection, and detects the two-dimensional position of the pupil of that eye from the image region. This two-dimensional position is expressed as a position in the two-dimensional plane represented by the X and Y axes.
The detected three-dimensional position of the bright spot of the eyeball and the detected two-dimensional position of the pupil are used, for example, to detect the direction of the driver's line of sight.
With the above configuration, the two-dimensional image data and the three-dimensional image data of the driver's face are obtained by the single imaging unit 1a provided in the camera head 1. Therefore, the two-dimensional image data and the three-dimensional image data can be associated accurately by simple processing, without performing calibration or similar processing. In addition, compared with the case where the two-dimensional image data and the three-dimensional image data are acquired by separate imaging units, the apparatus can be made smaller and less expensive. This is a highly advantageous effect when the camera head 1 is installed in a place where the installable positions are limited, such as the cockpit of a vehicle.
[One Embodiment]
(Configuration Example)
FIG. 2 is a block diagram showing an example of the configuration of a vehicle 3 equipped with an image processing apparatus according to an embodiment of the present invention. The image processing apparatus according to the present embodiment includes, for example, a camera head 10 and a face image detection processing unit 20.
A seating sensor 40 is installed in the driver's seat of the vehicle 3. The seating sensor 40 detects that the driver 4 is seated in the driver's seat and outputs a detection signal to the face image detection processing unit 20. The vehicle 3 is also provided with a display control device 50. The display control device 50 controls, for example, the display of a head-up display and superimposes, for example, a frame-shaped pattern on the image of the object that the driver 4 is visually recognizing, based on detection information on the line-of-sight direction of the driver 4 output from the face image detection processing unit 20.
(1) Camera Head 10
The camera head 10 is installed, for example, at a position on the dashboard facing the driver 4. FIG. 3 is a front view showing an example of the internal structure of the camera head 10.
In the camera head 10, one camera 10a serving as an imaging unit, one pattern projection projector 10b serving as a projection unit, and two illumination units 10c and 10d are arranged in a horizontal row inside a head housing 11 shaped as a horizontally elongated box. The arrangement is set such that the camera 10a and the pattern projection projector 10b are placed in the center and the illumination units 10c and 10d are placed on either side of the camera 10a and the pattern projection projector 10b.
The camera 10a uses, as an imaging device, for example a CMOS (Complementary MOS) image sensor capable of receiving near-infrared light. It captures an image of the face of the driver 4 and outputs the face image data. Another solid-state imaging device such as a CCD (Charge Coupled Device) may also be used as the imaging device.
The pattern projection projector 10b projects, for example, a lattice-shaped optical pattern onto the face of the driver 4 and is configured, for example, as follows.
FIG. 4 is a perspective view showing this configuration. A light emitting element 32 that emits, for example, near-infrared light is mounted on a projector substrate 31. A projector optical system 33, for example cylindrical in shape, is disposed on the projector substrate 31 in the direction of the optical axis of the light emitting element 32. The projector optical system 33 has a mask holder, a lens holder, and a projection lens. The mask holder incorporates a mask for generating the lattice-shaped optical pattern. The projection lens projects the optical pattern generated by the mask onto the driver's face at a predetermined magnification. The optical pattern is not limited to a lattice and may be another pattern such as stripes or a checkered pattern.
The illumination units 10c and 10d each include, for example, a light source having two light emitting elements 11c, 12c and 11d, 12d, respectively, and a polarizing filter. The light sources emit near-infrared light. The illumination units 10c and 10d irradiate the driver's face, as illumination light, with the planar near-infrared light emitted from the light emitting elements 11c, 12c and 11d, 12d after it has passed through the polarizing filter. The polarizing filter is used to prevent the near-infrared light from being reflected by eyeglasses, which would make the eyes of the driver 4 undetectable if the driver 4 were wearing them. The illumination units 10c and 10d are turned on and off in accordance with a lighting control signal output from the face image detection processing unit 20.
(2) Face Image Detection Processing Unit 20
The face image detection processing unit 20 controls the camera head 10 to acquire two-dimensional image data and three-dimensional image data of the face of the driver 4. Based on the acquired two-dimensional and three-dimensional image data, it detects the three-dimensional position of an eye of the driver 4 and the two-dimensional position of the pupil of that eye, and further performs processing for detecting the line of sight of the driver 4 based on these detection results.
(2-1) Hardware Configuration
FIG. 5 is a block diagram showing an example of the hardware configuration of the face image detection processing unit 20.
The face image detection processing unit 20 has, as hardware, a hardware processor 21A such as a CPU (Central Processing Unit), to which a program memory 21B, a data memory 25, a camera head interface (camera head I/F) 23, and an external interface (external I/F) 24 are connected via a bus 22.
The camera head I/F 23 outputs control signals to the camera 10a, the pattern projection projector 10b, and the illumination units 10c and 10d of the camera head 10, and receives the image data output from the camera 10a of the camera head 10. The external I/F 24 receives the seating detection signal output from the seating sensor 40 and outputs information representing the detection result of the line-of-sight direction to the display control device 50.
Signals and data between the camera head I/F 23 and the camera head 10, and between the external I/F 24 and the seating sensor 40 and the display control device 50, are all transmitted via signal cables. If the vehicle is equipped with an in-vehicle wired network such as a LAN (Local Area Network), or an in-vehicle wireless network adopting a low-power wireless data communication standard such as Bluetooth (registered trademark), these networks may be used to transmit the above signals and data.
The program memory 21B uses, as a storage medium, a non-volatile memory that can be written to and read from at any time, such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), or a non-volatile memory such as a ROM, and stores the programs necessary for executing the various control processes according to the present embodiment.
The data memory 25 includes, as a storage medium, a combination of a non-volatile memory that can be written to and read from at any time, such as an HDD or SSD, and a volatile memory such as a RAM, and is used to store the face image data captured by the camera 10a and information representing detection results.
(2-2) Software Configuration
FIG. 6 is a block diagram showing the hardware configuration of the face image detection processing unit 20 shown in FIG. 5 with a software configuration added.
A two-dimensional image storage unit 251, a three-dimensional image storage unit 252, and a detection result storage unit 253 are provided in the storage area of the data memory 25. The two-dimensional image storage unit 251 and the three-dimensional image storage unit 252 are used to store the two-dimensional image data and the three-dimensional image data acquired by the control unit 21, respectively. The detection result storage unit 253 is used to store information representing the three-dimensional position of an eye and the two-dimensional position of the pupil of that eye, detected by the control unit 21.
The control unit 21 consists of the hardware processor 21A and the program memory 21B, and includes, as processing function units implemented in software, a camera head control unit 211, a two-dimensional image acquisition processing unit 212, a three-dimensional image acquisition processing unit 213, an image processing unit 214, and a line-of-sight detection unit 215. Of these, the camera head control unit 211, the two-dimensional image acquisition processing unit 212, and the three-dimensional image acquisition processing unit 213 constitute an imaging control unit.
The control unit, processing units, and detection unit 211 to 215 are all realized by causing the hardware processor 21A to execute the programs stored in the program memory 21B.
The camera head control unit 211 is started with the seating detection signal output from the seating sensor 40 as a trigger, and controls the illumination units 10c and 10d, the pattern projection projector 10b, and the camera 10a of the camera head 10 in a predetermined procedure at a predetermined detection cycle. An example of the detection cycle and procedure will be described in detail later.
For each detection cycle, the two-dimensional image acquisition processing unit 212 takes in, via the camera head I/F 23, the two-dimensional image data of the face of the driver 4 captured by the camera 10a under the control of the camera head control unit 211, and stores it in the two-dimensional image storage unit 251 of the data memory 25. The two-dimensional image data is face image data of the driver 4 captured while no optical pattern is projected by the pattern projection projector 10b. An example of the acquisition processing method for this two-dimensional image data will be described in detail later.
For each detection cycle, the three-dimensional image acquisition processing unit 213 takes in, via the camera head I/F 23, the three-dimensional image data of the face of the driver 4 captured by the camera 10a under the control of the camera head control unit 211, and stores it in the three-dimensional image storage unit 252 of the data memory 25. The three-dimensional image data is image data of the face of the driver 4 captured while the optical pattern is projected by the pattern projection projector 10b. An example of the acquisition processing method for this three-dimensional image data will also be described in detail later.
The image processing unit 214 has, for example, the following processing functions.
(1) For each detection cycle, reading out from the two-dimensional image storage unit 251 and the three-dimensional image storage unit 252 the two-dimensional image data and the three-dimensional image data acquired within the same detection cycle, and associating the two-dimensional image data and the three-dimensional image data with each other at identical pixel positions.
(2) For each detection cycle, recognizing one or both of the eyes of the driver 4 from the three-dimensional image data after the association processing, detecting from the recognized eye image the three-dimensional position of the bright spot produced by corneal reflection of the eyeball, and storing information representing the detection result in the detection result storage unit 253. The three-dimensional position of the bright spot of the eyeball is expressed as a position in a three-dimensional space defined by X, Y, and Z axes, where the optical axis connecting the camera 10a and the bright spot is the Z axis and the two-dimensional plane orthogonal to this optical axis is represented by the X and Y axes.
(3) For each detection cycle, selecting, from the two-dimensional image data after the association processing, the image region of the eye whose position corresponds to the eye that was the target of the detection of the three-dimensional position of the bright spot of the eyeball, detecting the two-dimensional position of the pupil of that eye from the image region, and storing information representing the detection result in the detection result storage unit 253. The two-dimensional position of the pupil is expressed as a position in the two-dimensional plane represented by the X and Y axes.
For each detection cycle, the line-of-sight detection unit 215 reads out from the detection result storage unit 253 the three-dimensional position of the bright spot of the target eye and the two-dimensional position of the pupil of that eye, and detects the direction of the line of sight of the driver 4 from the read three-dimensional position of the bright spot of the eye and the two-dimensional position of the pupil. The line-of-sight detection unit 215 also outputs information representing the detected direction of the line of sight from the external I/F 24 to the display control device 50.
(Operation Example)
Next, an operation example of the image processing apparatus configured as described above will be described.
FIG. 7 is a flowchart showing an example of the processing procedure and processing contents of the face image detection processing unit 20.
(1) Acquisition of Two-Dimensional Image Data
Under the control of the camera head control unit 211, the control unit 21 of the face image detection processing unit 20 monitors, in step S11, the input of the seating detection signal output from the seating sensor 40. In this state, when the driver sits in the seat, a seating detection signal is output from the seating sensor 40. This seating detection signal is input to the control unit 21 via the external I/F 24. When the control unit 21 detects the input of the seating detection signal in step S11, it first executes control for acquiring two-dimensional image data of the driver's face in the following procedure.
That is, first, under the control of the camera head control unit 211, an illumination lighting signal is output from the camera head I/F 23 to the camera head 10 in step S12. When the camera head 10 receives the illumination lighting signal, the illumination units 10c and 10d are turned on, thereby illuminating the face of the driver 4. This illumination light is near-infrared light that has passed through the polarizing filter. Therefore, the driver 4 does not feel dazzled and driving is not hindered. Furthermore, even if the driver 4 is wearing eyeglasses, the problem that the near-infrared light is reflected by the eyeglasses and the eyes of the driver 4 become undetectable does not occur.
In this state, the camera head control unit 211 outputs an imaging control signal from the camera head I/F 23 to the camera head 10 in step S13. When the camera head 10 receives the imaging control signal, the camera 10a captures the face of the driver 4 and outputs the two-dimensional image data of the face of the driver 4 obtained by this imaging to the face image detection processing unit 20. Under the control of the two-dimensional image acquisition processing unit 212, the control unit 21 of the face image detection processing unit 20 takes in this two-dimensional image data Va1 via the camera head I/F 23 in step S14 and stores the two-dimensional image data Va1 in the two-dimensional image storage unit 251 of the data memory 25. At this time, the illumination light illuminating the face from the illumination units 10c and 10d has passed through the polarizing filter. Therefore, even if the driver 4 is wearing eyeglasses, the near-infrared light is not reflected by the eyeglasses and the eyes of the driver 4 remain detectable. Accordingly, the two-dimensional image data Va1 is a face image including the eyeballs, for example as shown in FIG. 8.
Subsequently, under the control of the camera head control unit 211, the control unit 21 outputs an illumination turn-off signal via the camera head I/F 23 in step S15. When the camera head 10 receives the illumination turn-off signal, it turns off the illumination units 10c and 10d.
In this state, under the control of the camera head control unit 211, the control unit 21 outputs an imaging control signal from the camera head I/F 23 to the camera head 10 in step S16. When the camera head 10 receives the imaging control signal, the camera 10a captures the face of the driver 4 and outputs the two-dimensional image data of the face of the driver 4 obtained by this imaging operation to the face image detection processing unit 20.
Under the control of the two-dimensional image acquisition processing unit 212, the control unit 21 of the face image detection processing unit 20 takes in this two-dimensional image data Va2 via the camera head I/F 23 in step S17 and stores the two-dimensional image data Va2 in the two-dimensional image storage unit 251 of the data memory 25.
Subsequently, in step S18, the two-dimensional image acquisition processing unit 212 reads out the two-dimensional image data Va1 captured under illumination and the two-dimensional image data Va2 captured without illumination, both stored in the two-dimensional image storage unit 251, and calculates the difference image data Va (Va = Va1 - Va2) between these two-dimensional image data. The purpose of obtaining a difference image is to remove the influence of ambient light such as the headlights of oncoming vehicles and the light emission of various meters. The two-dimensional image acquisition processing unit 212 then stores this two-dimensional difference image data Va in the two-dimensional image storage unit 251.
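As a rough illustration of this ambient-light suppression, the following sketch computes the difference image from the two captures. It is a minimal example assuming 8-bit grayscale frames held as NumPy arrays; the function name and the clipping of negative values are illustrative choices, not part of the disclosed apparatus.

```python
import numpy as np

def ambient_suppressed_image(lit_frame: np.ndarray, unlit_frame: np.ndarray) -> np.ndarray:
    """Return Va = Va1 - Va2, clipping negative values that arise from sensor noise."""
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Example: Va1 is captured with the near-infrared illumination on, Va2 with it off.
# Va then contains mainly the contribution of the controlled illumination, with
# headlights, instrument-panel glow, and similar ambient light largely cancelled out.
```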
(2) Acquisition of Three-Dimensional Image Data
When the acquisition processing of the two-dimensional image data is completed, the control unit 21 next outputs, under the control of the camera head control unit 211, a pattern projection signal from the camera head I/F 23 to the camera head 10 in step S19. When the camera head 10 receives the pattern projection signal, the pattern projection projector 10b is turned on. As a result, a lattice-shaped pattern is projected onto the face of the driver.
In this state, the camera head control unit 211 outputs an imaging control signal from the camera head I/F 23 to the camera head 10 in step S20. When the camera head 10 receives the imaging control signal, the camera 10a captures the face of the driver 4 onto which the optical pattern is projected, and outputs the pattern projection image data of the face of the driver 4 obtained by this imaging to the face image detection processing unit 20.
Under the control of the three-dimensional image acquisition processing unit 213, the control unit 21 of the face image detection processing unit 20 takes in, in step S21, the pattern projection image data captured under the pattern projection, that is, the three-dimensional image data Vb1, via the camera head I/F 23, and stores the three-dimensional image data Vb1 in the three-dimensional image storage unit 252 of the data memory 25. At this time, the pattern projection projector 10b projects pattern light that does not pass through a polarizing filter. Therefore, the three-dimensional image data Vb1 is an image in which a bright spot 42 produced by corneal reflection appears on the eyeball, for example as illustrated in FIG. 9.
Subsequently, in step S22, the three-dimensional image acquisition processing unit 213 reads out the three-dimensional image data Vb1 stored in the three-dimensional image storage unit 252 and the two-dimensional image data Va2 captured without illumination and stored in the two-dimensional image storage unit 251, and calculates the difference image data Vb (Vb = Vb1 - Va2) between the three-dimensional image data Vb1 and the two-dimensional image data Va2. The purpose of obtaining a difference image for this three-dimensional image data is, as in the case of the two-dimensional image described above, to remove from the three-dimensional image data the influence of ambient light such as the headlights of oncoming vehicles and the light emission of various meters. The three-dimensional image acquisition processing unit 213 then stores this three-dimensional difference image data Vb in the three-dimensional image storage unit 252.
(3) Associating the Two-Dimensional Difference Image Data Va with the Three-Dimensional Difference Image Data Vb
When the three-dimensional difference image data Vb has been acquired, the control unit 21 subsequently reads out, under the control of the image processing unit 214, the two-dimensional difference image data Va stored in the two-dimensional image storage unit 251 and the three-dimensional difference image data Vb stored in the three-dimensional image storage unit 252 in step S23. The read two-dimensional difference image data Va and three-dimensional difference image data Vb are then associated with each other at identical pixel positions. Since the two-dimensional difference image data Va and the three-dimensional difference image data Vb are both generated from images obtained by the single camera 10a, this association processing can be performed simply and accurately merely by superimposing the frames.
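A minimal sketch of this frame-superposition step is shown below, assuming Va and Vb are NumPy arrays of identical shape taken by the same camera; since no rectification or calibration is needed, associating them amounts to addressing both with the same pixel index. The function name is an assumption for illustration.

```python
import numpy as np

def associate_frames(va: np.ndarray, vb: np.ndarray) -> np.ndarray:
    """Stack the 2D difference image Va and the pattern difference image Vb
    so that paired[y, x] holds (Va[y, x], Vb[y, x]) for every pixel."""
    assert va.shape == vb.shape, "single-camera frames share the same geometry"
    return np.stack([va, vb], axis=-1)

# Any pixel coordinate (y, x) now refers to the same point on the driver's face
# in both images, so an eye region found in one image can be reused in the other.
```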
(4) Detection of the Two-Dimensional Position of the Pupil
Next, under the control of the image processing unit 214, the control unit 21 detects the two-dimensional position of the pupil of the driver 4 in step S24, for example as follows.
That is, the two-dimensional difference image data Va is read from the two-dimensional image storage unit 251, and the eyes of the driver 4 are recognized from the two-dimensional difference image data Va. The recognition of the eyes can be realized by using existing image recognition techniques such as pattern matching. Then, the pupil 41 is recognized from the recognized eye image as illustrated in FIG. 8, and the two-dimensional position of the pupil 41 is detected. When the frame of the two-dimensional image is defined as a two-dimensional plane (X-Y coordinate plane), the two-dimensional position of the pupil 41 is represented by the X-Y coordinates in this plane. The image processing unit 214 stores information representing the detected two-dimensional position of the pupil 41 in the detection result storage unit 253.
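The embodiment does not fix a particular recognition algorithm; as one hedged illustration, the pupil center within an already-located eye region could be estimated from the darkest blob, for example with OpenCV. The threshold value, the use of cv2, and the function name are assumptions made for the sketch only.

```python
import cv2
import numpy as np

def pupil_xy(eye_region: np.ndarray) -> tuple[int, int] | None:
    """Estimate the pupil center (x, y) inside a grayscale eye region cut from Va."""
    _, dark = cv2.threshold(eye_region, 40, 255, cv2.THRESH_BINARY_INV)  # pupil is dark
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # centroid of the dark blob
```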
(5) Detection of the Three-Dimensional Position of the Bright Spot of the Eye
Subsequently, under the control of the image processing unit 214, the control unit 21 detects the three-dimensional position of the bright spot of the eye (eyeball) of the driver 4 in step S25, for example as follows.
That is, the image processing unit 214 first reads out the three-dimensional difference image data Vb from the three-dimensional image storage unit 252 and recognizes the eyes of the driver 4 from the three-dimensional difference image data Vb. The recognition of the eyes here can also be realized by applying existing image recognition techniques such as pattern matching.
Subsequently, based on the recognized eye image, the image processing unit 214 detects the three-dimensional position of the bright spot of the eye using, for example, triangulation, as follows. FIG. 10 is a diagram for explaining the outline of this detection processing.
That is, the image processing unit 214 first detects the bright spot produced by corneal reflection of the eyeball from the recognized eye image, and defines the optical axis La between the camera 10a and the bright spot as the Z axis. At the same time, the two-dimensional directions orthogonal to the optical axis La are defined as the X and Y axes. In other words, the position of the eye of the driver 4 can be defined by three-dimensional coordinates represented by the X, Y, and Z axes.
Here, in the camera head 10, the pattern projection projector 10b is disposed at a fixed distance from the camera 10a in a direction orthogonal to its optical axis. Therefore, a fixed angle exists between the optical axis La connecting the camera 10a and the bright spot produced by corneal reflection of the eyeball, and the optical axis Lb connecting the pattern projection projector 10b and that bright spot.
Under these conditions, when the position of the face of the driver 4 relative to the camera 10a changes in the depth direction, the two-dimensional position (X-Y coordinate position) of the optical pattern in the three-dimensional difference image Vb changes according to the distance of that change. For example, in the example of FIG. 10, when the position of the face of the driver 4 changes from P1 to P2, the position of the optical pattern R in the three-dimensional difference image data Vb changes from Q1 to Q2 in FIG. 10. The image processing unit 214 detects the change amount Q1-Q2 of the position of the optical pattern R in the three-dimensional difference image data Vb. Then, based on this change amount Q1-Q2 of the position of the optical pattern R and the angle between the optical axes La and Lb, it calculates the change amount (distance difference) D12 of the position of the bright spot of the eyeball with respect to the camera 10a.
In this way, the position of the face of the driver 4 in the depth direction (Z-axis direction) with respect to the camera 10a can be obtained, and thereby the three-dimensional position of the bright spot produced by corneal reflection of the eyeball can be detected. In step S26, the image processing unit 214 stores information representing the detected three-dimensional position of the bright spot produced by corneal reflection of the eyeball in the detection result storage unit 253, in association with the information representing the two-dimensional position of the pupil.
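As a hedged numerical illustration of this triangulation, the depth change can be recovered from the observed pattern shift and the known camera-projector geometry. The relation used below, and the parameter names (pattern shift in pixels, pixel pitch at the face, projector angle), are assumptions made for the sketch; the description above only states that the shift Q1-Q2 and the angle between La and Lb determine the distance difference D12.

```python
import math

def depth_change_from_pattern_shift(shift_pixels: float,
                                    meters_per_pixel: float,
                                    angle_la_lb_rad: float) -> float:
    """Estimate the depth change D12 of the bright spot along the camera axis.

    shift_pixels     : observed displacement Q1 -> Q2 of the pattern in Vb
    meters_per_pixel : lateral size of one pixel at the distance of the face
    angle_la_lb_rad  : fixed angle between the camera axis La and the projector axis Lb
    """
    lateral_shift_m = shift_pixels * meters_per_pixel
    # For an obliquely projected pattern, a lateral shift of the pattern corresponds
    # to a depth change of lateral_shift / tan(angle).
    return lateral_shift_m / math.tan(angle_la_lb_rad)

# Example: a 12-pixel shift, 0.3 mm per pixel at the face, 10 degrees between the axes
d12 = depth_change_from_pattern_shift(12, 0.0003, math.radians(10))  # roughly 20 mm
```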
(6) Detection of the Direction of the Driver's Line of Sight
When the processing for detecting the two-dimensional position of the pupil and the processing for detecting the three-dimensional position of the eye are completed, the control unit 21 executes, under the control of the line-of-sight detection unit 215, processing for detecting the line-of-sight direction of the driver 4 in step S27.
FIG. 11 is a schematic diagram for explaining an example of the processing for detecting the line-of-sight direction. To detect the direction of the line of sight, the positional deviation of the pupil in the two-dimensional (X, Y) directions relative to the position of the bright spot produced by corneal reflection of the eyeball, and the position of that bright spot in the depth (Z) direction with respect to the camera 10a, are required.
That is, for example, if the position of the eye relative to the camera 10a is P1 in FIG. 11, the line-of-sight direction W1 at this time is calculated from the amount of positional deviation of the pupil in the two-dimensional (X, Y) directions relative to the position of the bright spot produced by corneal reflection of the eyeball, and the distance D1 from the camera 10a to the position of that bright spot. Similarly, if the position of the eye relative to the camera 10a is P2 in FIG. 11, the line-of-sight direction W2 at this time is calculated from the amount of positional deviation of the pupil in the two-dimensional (X, Y) directions relative to the position of the bright spot produced by corneal reflection of the eyeball, and the distance D2 from the camera 10a to the position of that bright spot.
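A minimal sketch of how such a calculation might look is given below, converting the pixel offset between pupil and bright spot into a physical offset using the measured depth and then into a gaze vector. The pinhole conversion, the per-user gain factor, and all names are assumptions for illustration only, not the specific formula of the embodiment.

```python
import numpy as np

def gaze_direction(pupil_offset_px: tuple[float, float],
                   depth_m: float,
                   focal_length_px: float,
                   gain_rad_per_m: float = 60.0) -> np.ndarray:
    """Return a unit gaze vector (x, y, z) in the camera coordinate system.

    pupil_offset_px : (dx, dy) of the pupil relative to the corneal bright spot, in pixels
    depth_m         : distance D from the camera to the bright spot along the Z axis
    focal_length_px : camera focal length expressed in pixels (pinhole model)
    gain_rad_per_m  : per-user factor relating the physical pupil-glint offset to eye rotation
    """
    # Converting the pixel offset to a physical offset at the eye is where the depth
    # D1 or D2 enters: the same pixel offset corresponds to a larger physical offset,
    # and hence a larger gaze angle, when the face is farther from the camera.
    dx_m, dy_m = (p * depth_m / focal_length_px for p in pupil_offset_px)
    theta_x, theta_y = gain_rad_per_m * dx_m, gain_rad_per_m * dy_m
    v = np.array([np.tan(theta_x), np.tan(theta_y), 1.0])
    return v / np.linalg.norm(v)
```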
The line-of-sight detection unit 215 outputs information representing the detection result of the line-of-sight direction from the external I/F 24 to the display control device 50. The display control device 50 identifies the object that the driver 4 is visually recognizing, based on the detection information on the line-of-sight direction of the driver 4 output from the face image detection processing unit 20. Then, for example on a head-up display, a frame-shaped pattern is superimposed on the captured image of the area in front of the vehicle.
For example, in the example of FIG. 11, it can be determined from the detected line-of-sight direction W1 and the video capturing the area in front of the vehicle that the driver 4 is looking at, for example, the object K1, and a frame-shaped pattern is superimposed on the image of this object K1. Likewise, it can be determined from the detected line-of-sight direction W2 and the video capturing the area in front of the vehicle that the driver 4 is looking at, for example, the object K2, and a frame-shaped pattern is superimposed on the image of this object K2.
The series of processes (1) to (6) described above is repeatedly executed at a preset detection cycle. The detection cycle is set so that, for example, the processes (1) to (6) are executed 30 times per second.
(Effects)
As described above in detail, in the embodiment, the camera head 10 is provided with a single camera 10a, and this single camera 10a is used to acquire both the two-dimensional image data of the face of the driver 4 and the three-dimensional image data of the face of the driver 4 captured while the lattice-shaped optical pattern is projected. The acquired two-dimensional image data and three-dimensional image data are then associated with each other at identical pixel positions, and for the same eye of the driver 4, the two-dimensional position of the pupil is detected from the two-dimensional image data and the three-dimensional position of the eye is detected from the three-dimensional image data.
Therefore, the two-dimensional image data and the three-dimensional image data of the face of the driver 4 can be associated accurately by simple processing, without performing calibration or similar processing. In addition, compared with the case where the two-dimensional image data and the three-dimensional image data are acquired by separate imaging units, the camera head can be made smaller and less expensive. This is particularly effective when the places where the camera head can be installed are limited, as in the cockpit of a vehicle.
Furthermore, when acquiring the two-dimensional image data, a difference image is generated between the image data captured while illuminated by the illumination units 10c and 10d and the image data captured without illumination. Similarly, when acquiring the three-dimensional image data, a difference image is generated between the pattern projection image and an image captured without illumination. Therefore, the influence of ambient light can be reduced and high-quality two-dimensional and three-dimensional image data free of density variations can be acquired.
Furthermore, since the illumination units 10c and 10d and the pattern projection projector 10b emit near-infrared light, glare for the driver 4 is avoided and the line-of-sight direction can be detected even for a driver wearing sunglasses.
Furthermore, the line-of-sight direction of the driver 4 is detected based on the two-dimensional position of the pupil detected from the two-dimensional image data and the three-dimensional position of the eye detected from the three-dimensional image data. Therefore, even if the position of the face of the driver 4 in the depth direction with respect to the camera 10a changes, the line-of-sight direction of the driver 4 can be detected accurately.
[Modifications]
(1) In the above embodiment, the case where the camera 10a, the pattern projection projector 10b, and the illumination units 10c and 10d of the camera head 10 are arranged in a row has been illustrated. However, the pattern projection projector 10b and the illumination units 10c and 10d may instead be arranged in a circle or an arc surrounding the single camera 10a.
(2) In the above embodiment, the case where the acquisition processing of the three-dimensional image data is performed after the acquisition processing of the two-dimensional image data of the face has been illustrated. However, the order is not limited to this; the acquisition processing of the three-dimensional image data may be performed first, and the acquisition processing of the two-dimensional image data may be performed thereafter.
 (3) The above embodiment describes the case where the gaze direction of the driver 4 is detected and the result is displayed on the head-up display. However, this is not limiting; for example, information representing the gaze direction detection result may be input to an inattention determination device, which determines, based on the detection result, whether the driver 4 is looking away from the road.
 (4) Alternatively, information representing the gaze direction detection result may be input to, for example, an automatic driving control device, which determines, based on the detection result, whether switching from the automatic driving mode to the manual driving mode is permitted.
 While the embodiments of the present invention have been described in detail, the foregoing description is merely an illustration of the present invention in all respects. It goes without saying that various improvements and modifications can be made without departing from the scope of the present invention. That is, in carrying out the present invention, a specific configuration according to the embodiment may be adopted as appropriate.
 [Supplementary Notes]
 Some or all of the above embodiment may also be described as in the following supplementary notes, in addition to the claims, but the invention is not limited thereto.
 (Supplementary Note 1)
 An image processing apparatus comprising a hardware processor (21A) and a memory (21B), wherein the hardware processor (21A) is configured to:
  project an optical pattern onto the face of a driver (4) (211);
  image the face of the driver (4) (211);
  acquire a first face image captured while the optical pattern is not projected onto the face of the driver (4), and a second face image captured while the optical pattern is projected (212, 213); and
  associate the first face image with the second face image (214).
 (Supplementary Note 2)
 An image processing method executed by an apparatus having a hardware processor (21A) and a memory (21B), the method comprising:
  the hardware processor (21A) controlling an imaging unit so as to image the face of a driver (4), and acquiring a first face image captured by the imaging unit (S13 to S18);
  the hardware processor (21A) controlling a projection unit so as to project an optical pattern onto the face of the driver (4) (S19);
  the hardware processor (21A) controlling the imaging unit so as to image the face of the driver (4) while the optical pattern is projected onto the face of the driver (4), and acquiring a second face image captured by the imaging unit (10a) (S20 to S22); and
  the hardware processor (21A) associating the first face image with the second face image (S23).
 (Supplementary Note 3)
 An image processing method executed by an apparatus having a hardware processor (21A) and a memory (21B), the method comprising:
  the hardware processor (21A) controlling a projection unit so as to project an optical pattern onto the face of a driver (4) (S19);
  the hardware processor (21A) controlling an imaging unit (10a) so as to image the face of the driver (4) while the optical pattern is projected onto the face of the driver (4), and acquiring a second face image (S20 to S22);
  the hardware processor (21A) controlling the imaging unit (10a) so as to image the face of the driver (4) while the optical pattern is not projected, and acquiring a first face image (S13 to S18); and
  the hardware processor (21A) associating the first face image with the second face image (S23).
 Reference Signs List
 1, 10: Camera head
 1a: Imaging unit
 10a: Camera
 1b: Projection unit
 10b: Pattern projection projector
 1c, 10c, 10d: Illumination units
 11: Head housing
 11c, 12c, 11d, 12d: Light emitting elements for illumination
 2, 20: Face image detection processing unit
 2a: Imaging control unit
 2b: Image processing unit
 21: Control unit
 21A: CPU
 21B: Program memory
 22: Bus
 23: Camera head I/F
 24: External I/F
 25: Data memory
 3: Vehicle
 4: Driver
 31: Projector substrate
 32: Light emitting element
 33: Projector optical system
 40: Seating sensor
 50: Display control device
 211: Camera head control unit
 212: Two-dimensional image acquisition processing unit
 213: Three-dimensional image acquisition processing unit
 214: Image processing unit
 215: Line-of-sight detection unit
 251: Two-dimensional image storage unit
 252: Three-dimensional image storage unit
 253: Detection result storage unit

Claims (8)

  1.  An image processing apparatus installed in a vehicle and configured to detect a face image of a driver from a captured image, the apparatus comprising:
     a projection unit that projects an optical pattern onto the face of the driver;
     an imaging unit that images the face of the driver;
     an imaging control unit that acquires a first face image captured by the imaging unit while the optical pattern is not projected onto the face of the driver by the projection unit, and a second face image captured by the imaging unit while the optical pattern is projected; and
     an image processing unit that associates the first face image with the second face image.
  2.  The image processing apparatus according to claim 1, further comprising an illumination unit that illuminates the face of the driver,
     wherein the imaging control unit acquires an image captured by the imaging unit while the face is illuminated by the illumination unit and an image captured by the imaging unit while the face is not illuminated by the illumination unit, and uses a difference image of the acquired images as the first face image.
  3.  The image processing apparatus according to claim 2, wherein the illumination unit comprises a light source that emits near-infrared light and a polarization filter that polarizes the near-infrared light emitted from the light source, and illuminates the face with the near-infrared light that has passed through the polarization filter.
  4.  The image processing apparatus according to any one of claims 1 to 3, wherein the imaging control unit acquires an image captured by the imaging unit while the optical pattern is projected onto the face of the driver by the projection unit and an image captured by the imaging unit while the optical pattern is not projected, and uses a difference image of the acquired images as the second face image.
  5.  The image processing apparatus according to any one of claims 1 to 4, wherein the image processing unit comprises:
     a processing unit that associates pixels at identical pixel positions of the first face image and the second face image with each other;
     a first detection unit that detects, based on the second face image, a three-dimensional position of an eye of the face in a three-dimensional space defined by an optical axis connecting the imaging unit and the eye and a two-dimensional plane orthogonal to the optical axis; and
     a second detection unit that detects, based on the first face image, a two-dimensional position on the two-dimensional plane of the pupil of the eye whose three-dimensional position has been detected.
  6.  The image processing apparatus according to claim 5, further comprising a line-of-sight detection unit that detects the line of sight of the driver based on the detected three-dimensional position of the eye and the detected two-dimensional position of the pupil associated by the image processing unit.
  7.  An image processing method executed by an image processing apparatus installed in a vehicle and configured to detect a face image of a driver from a captured image, the method comprising:
     the image processing apparatus imaging the face of the driver with an imaging unit to acquire a first face image;
     the image processing apparatus projecting a striped or grid-like optical pattern onto the face of the driver;
     the image processing apparatus imaging, with the imaging unit, the face of the driver while the optical pattern is projected onto the face of the driver, to acquire a second face image; and
     the image processing apparatus associating pixels at identical pixel positions of the first face image and the second face image with each other.
  8.  An image processing method executed by an image processing apparatus installed in a vehicle and configured to detect an image of the face of a driver, the method comprising:
     the image processing apparatus projecting a striped or grid-like optical pattern onto the face of the driver;
     the image processing apparatus imaging, with an imaging unit, the face of the driver while the optical pattern is projected onto the face of the driver, to acquire a second face image;
     the image processing apparatus imaging, with the imaging unit, the face of the driver while the optical pattern is not projected, to acquire a first face image; and
     the image processing apparatus associating pixels at identical pixel positions of the first face image and the second face image with each other.
PCT/JP2018/044832 2017-12-11 2018-12-06 Image processing device and image processing method WO2019117001A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017236683A JP6946993B2 (en) 2017-12-11 2017-12-11 Image processing device and image processing method
JP2017-236683 2017-12-11

Publications (1)

Publication Number Publication Date
WO2019117001A1 true WO2019117001A1 (en) 2019-06-20

Family

ID=66820343

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/044832 WO2019117001A1 (en) 2017-12-11 2018-12-06 Image processing device and image processing method

Country Status (2)

Country Link
JP (1) JP6946993B2 (en)
WO (1) WO2019117001A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002133446A (en) * 2000-08-30 2002-05-10 Microsoft Corp Face image processing method and system
JP2008261662A (en) * 2007-04-10 2008-10-30 Denso Corp Three-dimensional shape restoration device
WO2010026983A1 (en) * 2008-09-03 2010-03-11 日本電気株式会社 Image processing device, image processing method, and image processing program
JP2012177671A (en) * 2011-02-27 2012-09-13 Plex International Design Co Ltd Fine aperiodic pattern projection device and method and three-dimensional measuring device using the same
JP2015059849A (en) * 2013-09-19 2015-03-30 凸版印刷株式会社 Method and device for measuring color and three-dimensional shape


Also Published As

Publication number Publication date
JP2019105908A (en) 2019-06-27
JP6946993B2 (en) 2021-10-13

Similar Documents

Publication Publication Date Title
JP7519126B2 (en) Augmented reality display with active alignment and corresponding method
JP6036065B2 (en) Gaze position detection device and gaze position detection method
JP5341789B2 (en) Parameter acquisition apparatus, parameter acquisition system, parameter acquisition method, and program
US20180157035A1 (en) Projection type display device, projection display method, and projection display program
JP2003131319A (en) Optical transmission and reception device
JP5772714B2 (en) Light detection device and vehicle control system
JP2015179254A (en) Spectacle wearing image analysis device, spectacle wearing image analysis method and spectacle wearing image analysis program
US11902501B2 (en) Dynamic illumination for eye-tracking
TW201800822A (en) Method and system for multi-lens module alignment
CN111417885B (en) System and method for determining pose of augmented reality glasses
KR101739768B1 (en) Gaze tracking system at a distance using stereo camera and narrow angle camera
JP5895792B2 (en) Work assistance system and program
JP6717330B2 (en) Eye-gaze detecting device, control method of the eye-gaze detecting device, method of detecting corneal reflection image position, and computer program
JP6555707B2 (en) Pupil detection device, pupil detection method, and pupil detection program
WO2019117001A1 (en) Image processing device and image processing method
JP6653048B1 (en) Lens shape measuring device, lens shape measuring method, lens optical characteristic measuring device, program, and recording medium
WO2019117000A1 (en) Image processing device and image processing method
CN115409973A (en) Augmented reality head-up display imaging method, device, equipment and storage medium
JP2009006968A (en) Vehicular display device
JP2016122380A (en) Position detection device, position detection method, gazing point detection device, and image generation device
JP2015066046A (en) Spectacles-wearing image analysis apparatus and spectacles-wearing image analysis program
CN111971527B (en) Image pickup apparatus
JPWO2017179280A1 (en) Gaze measurement apparatus and gaze measurement method
JP4080183B2 (en) Anterior segment imaging device
JP5893384B2 (en) Parameter acquisition apparatus, parameter acquisition method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18887940

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18887940

Country of ref document: EP

Kind code of ref document: A1