WO2017203769A1 - Line-of-sight detection device - Google Patents

Line-of-sight detection device

Info

Publication number
WO2017203769A1
WO2017203769A1 (application PCT/JP2017/007189 / JP2017007189W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
eye region
detected
face
Prior art date
Application number
PCT/JP2017/007189
Other languages
English (en)
Japanese (ja)
Inventor
山下 龍麿
正行 中西
Original Assignee
アルプス電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by アルプス電気株式会社 filed Critical アルプス電気株式会社
Priority to JP2018519093A, granted as JP6767482B2
Publication of WO2017203769A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras

Definitions

  • The present invention relates to a gaze detection method for detecting the gaze direction of a subject.
  • In a conventional method, the center position of the face, the center positions of the parts constituting the face, and organ positions such as the pupil position are detected from acquired image data, and, using the detected center and organ positions, the image is normalized so that the face has a predetermined size and an upright orientation. The normalized image data is then used to extract a feature amount corresponding to the face direction and a feature amount of the eye region, and the gaze direction is estimated from these feature amounts.
  • An object of the present invention is to provide a gaze detection method that can be made faster by suppressing the burden of calculation processing while ensuring the accuracy of gaze direction detection.
  • The gaze detection method of the present invention includes a first determination step of determining, at a constant cycle, whether an eye region image of the subject is included in an image of a predetermined range acquired for extracting the eye region image. When the eye region image of the subject is included in the image of the predetermined range in the first determination step, the eye region image is extracted and the gaze direction of the subject is detected based on the extracted eye region image. When the eye region image of the subject is not included in the image of the predetermined range in the first determination step, a whole image is newly acquired, the face image of the subject is detected from the whole image, the eye region image of the subject is extracted from the detected face image, the gaze direction of the subject is detected based on the extracted eye region image, and the range including the extracted eye region image is updated as the predetermined range.
  • Since the gaze direction is calculated in this way, the burden of calculation processing can be suppressed while the accuracy of the gaze direction calculation is maintained, and the processing speed can be increased.
  • Preferably, the whole image acquired in a second determination step has a lower resolution than the image of the predetermined range determined in the first determination step, and when the face image of the subject cannot be detected from the whole image acquired in the second determination step, a new whole image is acquired without waiting for the next first determination step, the face image of the subject is detected from this image, and the eye region image of the subject is detected from the detected face image.
  • Preferably, image acquisition is performed by an image sensor in which a plurality of pixels are arranged in the horizontal and vertical directions and which is driven by a rolling shutter system, and the predetermined range is composed of one or more pixel lines aligned in the horizontal direction of the image sensor. As a result, the cost of the image sensor can be reduced, the burden of calculation processing can be lowered, and high-speed, high-accuracy gaze direction detection can be realized.
  • According to the line-of-sight detection method of the present invention, the processing load can be reduced and high-speed processing can be achieved while the accuracy of line-of-sight detection is ensured.
  • FIG. 5 schematically shows the light emission periods of a first light source and a second light source. FIG. 6 is a flowchart showing the flow of gaze detection according to the first embodiment of the present invention. FIG. 7 is a flowchart showing the flow of gaze detection according to the second embodiment of the present invention.
  • FIG. 1 is a functional block diagram showing the configuration of the line-of-sight detection device 10 according to the first embodiment.
  • FIG. 2 is a functional block diagram showing the configuration of the image acquisition unit 20 of the first embodiment.
  • FIG. 3 is a functional block diagram showing the configuration of the line-of-sight detection unit 60 of the first embodiment.
  • FIG. 4 is a diagram illustrating an example of an image of a subject.
  • The line-of-sight detection device 10 includes a control unit 11, a memory 12, an image acquisition unit 20, a face detection unit 30, a normalization processing unit 40, an eye region image acquisition unit 50, and a line-of-sight detection unit 60.
  • The line-of-sight detection device 10 is installed, for example, on the instrument panel of an automobile interior or at the upper part of the windshield so as to face the face of the driver, who is the subject.
  • The face detection unit 30 extracts the face image A2 (FIG. 4) from the whole image A1 (FIG. 4) of the subject SB acquired by the image acquisition unit 20, for example an image of a range corresponding to the upper body.
  • The normalization processing unit 40 performs normalization processing on the face image A2.
  • In the eye region image acquisition unit 50, a predetermined range A3 (FIG. 4) including the eye region is set, and the eye region image within this predetermined range is extracted and output to the line-of-sight detection unit 60.
  • The line-of-sight detection unit 60 extracts feature amounts from the received image and detects the gaze direction of the subject based on those feature amounts.
  • The processing from image acquisition by the image acquisition unit 20 to detection of the line-of-sight direction by the line-of-sight detection unit 60 is executed under the control of the control unit 11; information necessary for the processing, processing results, and the like are stored in the memory 12 and read out as needed.
  • The predetermined range set by the eye region image acquisition unit 50 is stored in the memory 12. After the gaze direction is detected by the line-of-sight detection unit 60, the next image is acquired within this predetermined range, and the control unit 11, acting as a determination unit, determines whether this image includes an eye region image. If an eye region image is included in the acquired image, the line-of-sight direction is detected in the same manner as described above. If, as a result of the determination by the control unit 11, no eye region image is included in the image of the predetermined range, a whole image is acquired by the image acquisition unit 20, and face image detection and normalization processing are performed on this image.
  • A predetermined range is then newly set, and the predetermined-range data stored in the memory 12 is updated with this range. When an eye region image is included in the image acquired within the predetermined range, the line-of-sight direction is detected based on the feature amounts extracted from the eye region image. This loop is sketched in code below.
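  • The control loop just described can be summarized as follows. This is a minimal Python-style sketch under assumed interfaces: the helpers passed in (contains_eye_region, detect_face, eye_region_range, detect_gaze_direction) and the camera/memory objects are hypothetical stand-ins for the hardware and software blocks of FIG. 1, not the patent's implementation.

    def gaze_loop(camera, memory, contains_eye_region, detect_face,
                  eye_region_range, detect_gaze_direction):
        """Track gaze inside the predetermined range while the eye region
        stays in it; otherwise fall back to a whole image and reset it."""
        roi = memory.get("predetermined_range")        # may be None at start-up
        while True:
            if roi is not None:
                img = camera.capture(roi)              # image of the predetermined range only
                if contains_eye_region(img):           # first determination step
                    yield detect_gaze_direction(img)   # feature extraction + gaze calculation
                    continue                           # keep using the small image
            whole = camera.capture_full()              # acquire a new whole image
            face = detect_face(whole)                  # face detection and normalization
            roi = eye_region_range(face)               # range containing the eye region
            memory["predetermined_range"] = roi        # update the predetermined range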
  • Each constituent member and block will now be described.
  • The image acquisition unit 20 includes a first light source 21, a second light source 22, a first camera 23, a second camera 24, an exposure control unit 25, and a light source control unit 26.
  • The first light source 21 includes a plurality of LED (light emitting diode) light sources. These LED light sources are arranged outside the lens of the first camera 23 so as to surround the lens.
  • The second light source 22 is likewise composed of a plurality of LED light sources, arranged outside the lens of the second camera 24 so as to surround it.
  • The LED light sources of the first light source 21 and the second light source 22 emit infrared light (near-infrared light) of 800 nm to 1000 nm, and this detection light is directed at the driver's eyes.
  • In particular, 850 nm is a wavelength with a low absorption rate in the human eyeball, and light of this wavelength is easily reflected by the retina at the back of the eyeball.
  • The cameras 23 and 24 have, for example, a CMOS (complementary metal oxide semiconductor) image sensor.
  • This image sensor acquires an image of the face including the driver's eyes; light is detected by a plurality of pixels arranged in the horizontal and vertical directions.
  • It is preferable to arrange band-pass filters matched to the wavelengths of the detection light emitted from the two light sources 21 and 22. This allows the pupil image extraction in the bright pupil image detection unit 61 and the dark pupil image detection unit 62, and the gaze direction calculation in the gaze direction calculation unit 65, to be performed with high accuracy.
  • The cameras 23 and 24 can switch their shooting range and resolution under the control of the control unit 11.
  • The shooting range can be switched, for example, between a whole image and a partial image.
  • The whole image is an image of the upper body of the driver seated in the driver's seat, the position targeted for gaze detection.
  • The partial image is an image of the predetermined range set by the eye region image acquisition unit 50 based on the whole image, that is, an image of the range corresponding to the driver's eye region.
  • The shooting resolution can be switched, for example, between high resolution and low resolution.
  • A high-resolution image has a resolution from which at least the feature amounts necessary for detecting the gaze direction can be extracted; a low-resolution image has a resolution at which at least the feature parts of the face can be detected.
  • Considering the distance between the line-of-sight detection device 10 and the driver, the distance between the optical axis of the first camera 23 and the optical axes of the LED light sources of the first light source 21 is sufficiently short compared with the distance between the optical axes of the first camera 23 and the second camera 24. Therefore, the first light source 21 can be regarded as substantially coaxial with the first camera 23.
  • Likewise, the distance between the optical axis of the second camera 24 and the optical axes of the LED light sources of the second light source 22 is sufficiently short compared with the distance between the optical axes of the first camera 23 and the second camera 24, so the second light source 22 can be regarded as substantially coaxial with the second camera 24.
  • The optical axes of the first camera 23 and the second camera 24 are not coaxial with each other.
  • In the following, this arrangement is expressed by saying that two members are substantially coaxial, or that two members are non-coaxial.
  • The lighting (light emission) timing of the first light source 21 and the second light source 22 is controlled by the light source control unit 26.
  • The lighting timing is set by an instruction signal from the exposure control unit 25. Under the control of the control unit 11, the exposure control unit 25 causes the first camera 23 and the second camera 24 to capture images in synchronization with the lighting of the first light source 21 and the second light source 22, under the imaging conditions described later (bright pupil imaging conditions and dark pupil imaging conditions).
  • As preprocessing, the face detection unit 30 downsizes the whole image A1 (FIG. 4) acquired by the image acquisition unit 20, reducing the number of pixels by binning processing or the like.
  • This downsizing reduces the resolution and the size of the image data by combining a predetermined number of adjacent pixels of the whole image A1 into one pixel. A software analogue is sketched below.
  • The downsizing is set to a level at which the subsequent face detection processing can still be performed, and the number of pixels combined into one pixel is determined accordingly. As a result, the data size of the image is reduced, and the processing can be sped up while the accuracy of the subsequent face detection is ensured.
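  • As a way to picture the binning-style downsizing, the following NumPy sketch averages each k-by-k block of adjacent pixels into one pixel. Real binning is typically done on the sensor itself; this software analogue, and the assumption of a single-channel image, are only for illustration.

    import numpy as np

    def bin_image(img: np.ndarray, k: int = 2) -> np.ndarray:
        """Combine each k-by-k block of adjacent pixels into one pixel."""
        h, w = img.shape[:2]
        h, w = h - h % k, w - w % k                    # crop to a multiple of k
        blocks = img[:h, :w].reshape(h // k, k, w // k, k)
        return blocks.mean(axis=(1, 3)).astype(img.dtype)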
  • The face detection unit 30 performs face detection by applying various detection methods to the downsized image. For example, initial detection is performed with the Haar-like face detection method, and the face is detected by collation with information on general facial feature parts registered in the memory 12 in advance, for example data on the position, shape, and size of the eyebrows, eyeballs, irises, nose, and lips.
  • The face orientation is also detected by comparing the acquired image with information on each feature part for a plurality of face orientations, for example front, diagonally right, and diagonally left.
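  • The text names only "the Haar-like face detection method"; as one concrete illustration, OpenCV ships a Haar cascade detector that can stand in for the initial detection step (the library choice and parameters are assumptions, not the patent's implementation).

    import cv2

    # Bundled frontal-face Haar cascade, used here purely as a stand-in.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_face(gray):
        """Return (x, y, w, h) of the first detected face, or None."""
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        return tuple(faces[0]) if len(faces) else None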
  • Based on the color and brightness of the detected face image, the face detection unit 30 also detects a plurality of landmarks corresponding to each feature part, such as the eyebrows, the eyeballs, the irises, the contour lines of the lips, and the ridge line of the nose.
  • A combination of the facial feature information of a specific individual and a name or other identification information identifying that individual may be registered in advance, so that the individual can be authenticated together with face detection by collation with the image acquired by the image acquisition unit 20.
  • The normalization processing unit 40 converts the face image, for example by affine transformation, so that the face faces forward and has a predetermined size while the relationship between the plurality of landmarks detected by the face detection unit 30 is maintained; this normalizes the face image.
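  • One common special case of such an affine normalization is a similarity transform computed from the two eye landmarks: rotate so the eyes are level and scale so they sit a fixed distance apart. The following OpenCV sketch illustrates this under assumed landmark inputs and output geometry; the patent does not prescribe these specifics.

    import cv2
    import numpy as np

    def normalize_face(img, left_eye, right_eye, out_size=128, eye_dist=48):
        """Warp img so the eye line is horizontal, the inter-eye distance is
        eye_dist pixels, and the eye midpoint lands at the image center."""
        (lx, ly), (rx, ry) = left_eye, right_eye
        angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # eye-line tilt
        scale = eye_dist / np.hypot(rx - lx, ry - ly)      # size correction
        center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
        M = cv2.getRotationMatrix2D(center, angle, scale)
        M[0, 2] += out_size / 2.0 - center[0]              # recenter horizontally
        M[1, 2] += out_size / 2.0 - center[1]              # recenter vertically
        return cv2.warpAffine(img, M, (out_size, out_size))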
  • Based on the position and range information of the eyeballs detected as landmarks, the eye region image acquisition unit 50 sets, as the predetermined range, the range of the image normalized by the normalization processing unit 40 that includes both eyes. Furthermore, the eye region image acquisition unit 50 acquires, as the eye region image, the bright pupil image and dark pupil image corresponding to the predetermined range among the images acquired by the image acquisition unit 20. The set predetermined range is stored in the memory 12, and the acquired eye region image is output to the line-of-sight detection unit 60.
  • FIG. 5 is a diagram schematically illustrating an example of the image acquisition timing of a rolling shutter type imaging device and the light emission timing of the light sources.
  • FIG. 5A shows the image acquisition timing of the image sensor, and FIG. 5B shows the light emission periods of the first light source 21 and the second light source 22.
  • Image sensor drive systems include the global shutter system and the rolling shutter system. The cameras 23 and 24 of the first embodiment can use either type of image sensor, but the rolling shutter case is described here.
  • H000, H100, H200, H300, H400, H500, H600, H700, and H800 denote lines of pixels arranged in the horizontal direction, in order from top to bottom in the vertical direction of the image sensor.
  • The image sensor is driven line by line by the rolling shutter system.
  • "VSYNC" in FIG. 5A is the vertical synchronization signal output from the cameras 23 and 24, determined by the frame rate of each camera; in synchronization with this signal, the control unit 11 captures the image data corresponding to the pixel lines of the corresponding camera's image sensor.
  • B11 to B18, B21 and onward indicate the timing of capturing the image data corresponding to each pixel line of the image sensor, that is, the horizontal synchronization signals.
  • FIG. 5B shows the detection light emission periods I11, I12, and I13 from the first light source 21, and the detection light emission periods I21 and I22 from the second light source 22, respectively.
  • The light emission times of the light sources 21 and 22 are equal, and the two emit light alternately at a constant cycle.
  • The image sensor is driven frame by frame from line H000 to line H800.
  • The image obtained by driving one frame corresponds to the whole image of the subject, and one or more pixel lines can be set as the predetermined range corresponding to the eye region in that image, as in the sketch below.
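  • Reading out only the pixel lines that cover the eye region is what keeps the per-cycle data small. A sketch under assumed geometry; the 100-row pitch of the lines H000, H100, ... is inferred from FIG. 5's labels, and the helper name is hypothetical:

    def lines_for_eye_region(y_top, y_bottom, pitch=100):
        """Map the eye region's vertical extent (sensor row coordinates)
        to the pixel lines H000, H100, ... that must be read out."""
        first = (y_top // pitch) * pitch
        last = (y_bottom // pitch) * pitch
        return ["H%03d" % n for n in range(first, last + pitch, pitch)]

    # e.g. an eye region spanning rows 180 to 240 needs lines H100 and H200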
  • The line-of-sight detection unit 60 is composed of a computer CPU and memory, and the processing of each block shown in FIG. 3 is performed by executing pre-installed software.
  • The gaze detection unit 60 includes a bright pupil image detection unit 61, a dark pupil image detection unit 62, a pupil center calculation unit 63, a corneal reflection light center detection unit 64, and a gaze direction calculation unit 65.
  • The image given to the line-of-sight detection unit 60 is read into the bright pupil image detection unit 61 and the dark pupil image detection unit 62, respectively.
  • The bright pupil image detection unit 61 detects an eye image captured when the combination of light source and camera satisfies one of the following bright pupil imaging conditions (a), and the dark pupil image detection unit 62 detects an eye image captured when the combination satisfies one of the following dark pupil imaging conditions (b).
  • (a) Bright pupil imaging conditions: (a-1) an image is acquired by the substantially coaxial first camera 23 during the lighting period of the first light source 21; (a-2) an image is acquired by the substantially coaxial second camera 24 during the lighting period of the second light source 22. (b) Dark pupil imaging conditions: (b-1) an image is acquired by the non-coaxial second camera 24 during the lighting period of the first light source 21; (b-2) an image is acquired by the non-coaxial first camera 23 during the lighting period of the second light source 22.
  • When the first light source 21 is lit, the infrared light reflected by the retina hardly enters the non-coaxial second camera 24, so the pupil appears dark in its image. This image is therefore extracted by the dark pupil image detection unit 62 as a dark pupil image. The same applies to the image acquired by the non-coaxial first camera 23 when the second light source 22 is lit.
  • The pupil center calculation unit 63 subtracts the dark pupil image detected by the dark pupil image detection unit 62 from the bright pupil image detected by the bright pupil image detection unit 61 to acquire a pupil image signal in which the shape of the pupil appears bright.
  • The pupil image signal is image-processed and binarized, and an area image corresponding to the shape and area of the pupil is calculated. An ellipse containing this area image is then extracted, and the intersection of the major and minor axes of the ellipse is calculated as a feature amount, the center position of the pupil.
  • Alternatively, the center position of the pupil may be calculated from the luminance distribution of the pupil image.
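  • A minimal OpenCV sketch of this subtraction-and-ellipse procedure, assuming 8-bit grayscale bright/dark pupil images and an illustrative binarization threshold:

    import cv2
    import numpy as np

    def pupil_center(bright: np.ndarray, dark: np.ndarray):
        """Subtract dark from bright, binarize, fit an ellipse to the pupil
        blob, and return the ellipse center (major/minor axis crossing)."""
        diff = cv2.subtract(bright, dark)              # pupil remains bright
        _, bw = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)      # largest blob = pupil
        if len(blob) < 5:                              # fitEllipse needs 5+ points
            return None
        (cx, cy), _axes, _angle = cv2.fitEllipse(blob)
        return cx, cy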
  • The dark pupil image signal detected by the dark pupil image detection unit 62 is given to the corneal reflection light center detection unit 64.
  • The dark pupil image signal includes a luminance signal due to the light reflected from the reflection point on the cornea.
  • The light reflected from the reflection point on the cornea forms a Purkinje image and is captured by the imaging devices of the cameras 23 and 24 as a spot image with a very small area.
  • The corneal reflection light center detection unit 64 performs image processing on this spot image and obtains the center of the light reflected from the corneal reflection point as a feature amount.
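  • Because the Purkinje image is a tiny, very bright spot, its center can be located, for illustration, by thresholding the dark pupil image near the top of the intensity range and taking the centroid; the threshold value is an assumption:

    import cv2

    def corneal_reflection_center(dark_gray):
        """Centroid of the brightest spot in an 8-bit dark pupil image."""
        _, bw = cv2.threshold(dark_gray, 230, 255, cv2.THRESH_BINARY)
        m = cv2.moments(bw, binaryImage=True)
        if m["m00"] == 0:                              # no spot found
            return None
        return m["m10"] / m["m00"], m["m01"] / m["m00"]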
  • The pupil center value calculated by the pupil center calculation unit 63 and the corneal reflection light center value calculated by the corneal reflection light center detection unit 64 are given to the gaze direction calculation unit 65.
  • The gaze direction calculation unit 65 detects the direction of the line of sight from the pupil center value and the corneal reflection light center value.
  • The gaze direction calculation unit 65 calculates the linear distance between the center of the pupil and the center of the corneal reflection point. It also sets XY coordinates with the center of the pupil as the origin and calculates the inclination angle between the X axis and the line connecting the pupil center and the reflection point center. The gaze direction is then calculated from this linear distance and inclination angle.
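  • In code, these two feature amounts reduce to a distance and an angle between the two detected centers; in this sketch the names r and phi are illustrative stand-ins for the patent's symbols.

    import numpy as np

    def gaze_features(pupil_center, reflection_center):
        """Linear distance r and inclination angle phi (from the X axis) of
        the corneal reflection center, in XY coordinates whose origin is
        the pupil center."""
        dx = reflection_center[0] - pupil_center[0]
        dy = reflection_center[1] - pupil_center[1]
        r = np.hypot(dx, dy)
        phi = np.arctan2(dy, dx)
        return r, phi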
  • The calculated gaze direction data is output by the gaze direction calculation unit 65 to the control unit 11 as the detection result.
  • The line-of-sight direction may also be calculated using the iris center instead of the pupil center.
  • To obtain the iris center, for example, the difference in reflectance between the iris (the dark part of the eye) and the white of the eye in an image satisfying the bright pupil imaging condition is used to extract the iris as an ellipse or circle, and the center of the extracted figure is calculated.
  • FIG. 6 is a flowchart showing the flow of gaze detection according to the first embodiment.
  • First, the whole image A1 (FIG. 4) of the subject SB is acquired by the image acquisition unit 20 (step S11 in FIG. 6).
  • The first light source 21 and the second light source 22 emit light alternately. In synchronization with the lighting of the first light source 21, the first camera 23 and the second camera 24 capture images simultaneously; a bright pupil image is acquired by the first camera 23 and a dark pupil image by the second camera 24.
  • In synchronization with the lighting of the second light source 22, the first camera 23 and the second camera 24 again capture images simultaneously; this time a dark pupil image is acquired by the first camera 23 and a bright pupil image by the second camera 24.
  • The captured image data is stored in the memory 12, and the bright pupil image acquired by the first camera 23 or the second camera 24 is given to the face detection unit 30 as the whole image A1.
  • The face detection unit 30 performs face detection processing on the whole image A1 (FIG. 4) given from the image acquisition unit 20 (step S12 in FIG. 6).
  • Prior to the face detection processing, the face detection unit 30 downsizes the image, reducing the number of pixels by binning processing or the like.
  • The face detection unit 30 performs face detection by applying various detection methods to the downsized image and extracts the face image A2. For example, initial detection is performed with the Haar-like face detection method, and the face image A2 is extracted by collation with information on general facial feature parts registered in the memory 12 in advance, for example the positions, shapes, and sizes of the eyebrows BR and eyeballs EB in the whole image A1 shown in FIG. 4.
  • The face orientation is also detected by comparing the acquired image with information on each feature part for a plurality of face orientations, for example front, diagonally right, and diagonally left.
  • Based on the color and brightness of the detected face image, the face detection unit 30 detects a plurality of landmarks corresponding to each feature part, for example the eyebrows BR, the eyeballs EB, the irises IR, the contour lines of the lips LP, and the ridge line of the nose NS. The detection information on the detected face image A2 and landmarks is output to the normalization processing unit 40.
  • The normalization processing unit 40 converts the image, for example by affine transformation, so that the face faces forward and has a predetermined size while the relationship between the plurality of landmarks detected by the face detection unit 30 is maintained; this conversion normalizes the face image (step S13).
  • The normalized image data is sent to the eye region image acquisition unit 50, which, based on the position and range information of the eyeballs detected as landmarks, sets the range including the eyeballs of both eyes as the initial predetermined range A3 (FIG. 4) (step S14).
  • The eye region image acquisition unit 50 reads the bright pupil image and the dark pupil image acquired by the image acquisition unit 20 from the memory 12, and extracts the images of the range corresponding to the predetermined range A3 (step S15).
  • The images acquired in this way are given to the control unit 11, which acts as the determination unit.
  • The control unit 11 determines whether an eye region image is included in the image received from the eye region image acquisition unit 50 (step S16, first determination step). This determination is performed by comparison with general eye feature information registered in advance in the memory 12, for example the position, shape, and size of the eyeball and iris.
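  • As a stand-in for this first determination step, an off-the-shelf eye detector can play the role of the comparison with registered eye feature information. The OpenCV cascade below is an assumption made purely for illustration; the patent describes collation with pre-registered eyeball/iris position, shape, and size data, not a specific detector.

    import cv2

    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def contains_eye_region(gray_roi) -> bool:
        """True if at least one eye-like region is found in the ROI image."""
        eyes = eye_cascade.detectMultiScale(gray_roi, scaleFactor=1.1,
                                            minNeighbors=5)
        return len(eyes) > 0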
  • When the eye region image is included in the image received from the eye region image acquisition unit 50 (YES in step S16), the control unit 11 outputs that image to the line-of-sight detection unit 60.
  • In the line-of-sight detection unit 60, the bright pupil image detection unit 61 detects the bright pupil image and the dark pupil image detection unit 62 detects the dark pupil image.
  • The pupil center calculation unit 63 subtracts the dark pupil image from the bright pupil image to obtain a pupil image signal in which the shape of the pupil appears bright; based on this signal, an area image corresponding to the shape and area of the pupil is calculated, and the pupil center position is calculated as a feature amount from an ellipse containing this area image (step S17).
  • The corneal reflection light center detection unit 64 performs image processing on the spot image included in the dark pupil image signal and obtains the center of the light reflected from the corneal reflection point as a feature amount (step S17).
  • The gaze direction calculation unit 65 then detects the gaze direction from the pupil center value calculated by the pupil center calculation unit 63 and the corneal reflection light center value calculated by the corneal reflection light center detection unit 64 (step S18).
  • After the line-of-sight direction is detected, the next image is acquired within the predetermined range set in step S14 (step S15).
  • At this time, the first light source 21 and the second light source 22 are lit alternately, and the pupil images corresponding to the imaging conditions are captured by the two cameras 23 and 24.
  • The control unit 11 again determines whether an eye region image is included (step S16, first determination step).
  • When an eye region image is included, the line-of-sight detection unit 60 extracts the positions of the pupil center and the corneal reflection light center as feature amounts (step S17), and the line-of-sight direction calculation unit 65 detects the line-of-sight direction based on these feature amounts (step S18).
  • When, as a result of the determination by the control unit 11 (step S16), no eye region image is included (NO in step S16), the whole image is acquired again (step S11).
  • Face image detection (step S12) and normalization processing (step S13) are executed, and a new predetermined range is set for the normalized image.
  • The predetermined-range data stored in the memory 12 is updated (step S14), and the subsequent processing from image acquisition (step S15) onward is performed.
  • Cases in which no eye region image is deemed to be included are, for example, when only the image of one eye's eyeball is included, or when the images of both eyes' eyeballs do not have sufficient density and resolution for feature amount detection.
  • As described above, the line-of-sight detection method of the first embodiment has the following effects.
  • (1) In the first determination step (step S16 in FIG. 6), it is determined whether the eye region image of the subject is included in the image of the predetermined range; while it is, the line-of-sight direction is detected continuously based on the image of the predetermined range, without acquiring the whole image. The data size of each acquired image can therefore be kept small while the accuracy of line-of-sight detection is ensured, reducing the processing load and increasing the processing speed.
  • (2) When an image sensor driven by a rolling shutter system is used, the cost of the image sensor can be reduced, the burden of calculation processing can be lowered, and high-speed, high-accuracy gaze direction detection can be realized.
  • In the second embodiment, a second determination step is executed by periodically acquiring a low-resolution whole image.
  • The line-of-sight detection device according to the second embodiment has the same configuration as the line-of-sight detection device 10 of the first embodiment.
  • Detailed description of the configuration, processing, and operation common to the first embodiment is omitted.
  • FIG. 7 is a flowchart showing the flow of gaze detection according to the second embodiment.
  • The image acquisition unit 20 acquires the whole image A1 (FIG. 4) of the subject SB (step S21 in FIG. 7), and using this whole image A1, the face detection unit 30 performs face detection processing and extracts the face image A2 (step S22). The face detection unit 30 also detects the face orientation and a plurality of landmarks corresponding to each feature part.
  • The normalization of the face image in the normalization processing unit 40 (step S23), the setting of the predetermined range A3 in the eye region image acquisition unit 50 (step S24), and the determination in the control unit 11 as the determination unit (step S26, first determination step) are the same as in the first embodiment.
  • In steps S24 and S25, since the predetermined range A3 is set based on an image including the eyeballs of both eyes, the control unit 11 determines that the eye region image is included in the image received from the eye region image acquisition unit 50 (YES in step S26) and outputs that image to the line-of-sight detection unit 60.
  • In the line-of-sight detection unit 60, the bright pupil image detection unit 61 detects the bright pupil image and the dark pupil image detection unit 62 detects the dark pupil image.
  • The pupil center calculation unit 63 subtracts the dark pupil image from the bright pupil image to obtain a pupil image signal in which the shape of the pupil appears bright; based on this signal, an area image corresponding to the shape and area of the pupil is calculated, and the pupil center position is calculated as a feature amount from an ellipse containing this area image (step S27).
  • The corneal reflection light center detection unit 64 performs image processing on the spot image included in the dark pupil image signal and obtains the center of the light reflected from the corneal reflection point as a feature amount (step S27). The gaze direction calculation unit 65 then detects the gaze direction from the pupil center value calculated by the pupil center calculation unit 63 and the corneal reflection light center value calculated by the corneal reflection light center detection unit 64 (step S28).
  • Next, the whole image of the subject SB is acquired by the image acquisition unit 20 (step S29).
  • This image has a resolution lower than that of the image acquired in step S21, the minimum resolution that enables the simple face image detection described below.
  • Face image detection processing is executed based on this image (step S30, second determination step).
  • In this face image detection, it is confirmed, by comparing feature parts, that the position and orientation of the face have not deviated by a predetermined amount or more from the face image detection of step S22; landmark detection is omitted.
  • This predetermined amount is set as a reference amount such that, given the general arrangement of feature parts, the eye region image is still included in the once-set predetermined range A3. A sketch of this check follows.
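  • A sketch of this second determination step, assuming the face detection result is summarized as (x, y) pixel coordinates plus a yaw angle and compared against the step-S22 result; the threshold values are illustrative, not from the patent.

    def face_still_in_range(prev_face, cur_face, max_shift=20, max_turn=10):
        """Return True while the low-resolution face detection stays within
        the reference deviation from the step-S22 detection (returning
        False corresponds to NO in step S30)."""
        if cur_face is None:                     # face not detected at all
            return False
        (px, py, pyaw), (cx, cy, cyaw) = prev_face, cur_face
        return (abs(cx - px) <= max_shift and
                abs(cy - py) <= max_shift and
                abs(cyaw - pyaw) <= max_turn)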
  • When the face image can be detected in step S30 (YES in step S30), an image is acquired for the predetermined range set in step S24 (step S25).
  • The resolution of the image acquired here is as high as that of the image acquired in step S21 and higher than that of the image acquired in step S29.
  • At this time, the first light source 21 and the second light source 22 are lit alternately, and the pupil images corresponding to the imaging conditions are captured by the two cameras 23 and 24, as in step S21.
  • The control unit 11 determines whether an eye region image is included (step S26, first determination step).
  • When an eye region image is included, the line-of-sight detection unit 60 extracts the positions of the pupil center and the corneal reflection light center as feature amounts (step S27), and the line-of-sight direction calculation unit 65 detects the line-of-sight direction based on these feature amounts (step S28).
  • When, as a result of the determination by the control unit 11 (step S26), (1) no eye region image is included (NO in step S26), or (2) the face image cannot be detected in step S30 (NO in step S30), that is, when the face position and orientation have deviated by the predetermined amount or more from the result of the face image detection in step S22, the whole image is acquired again (step S21). Face image detection (step S22) and normalization processing (step S23) are executed on this whole image, a new predetermined range is set for the normalized image, the predetermined-range data stored in the memory 12 is updated with it (step S24), and the subsequent processing from image acquisition (step S25) onward is performed.
  • The second determination step in step S30 of FIG. 7 is executed every time detection of the line-of-sight direction (step S28) ends, but it may instead be executed once every predetermined number of iterations. Alternatively, the second determination step may be executed at a constant period, independently of the processing flow shown in FIG. 7.
  • According to the line-of-sight detection method of the second embodiment, the determination in the second determination step can be performed on an image with a small data amount, so the burden of calculation processing can be reduced while the accuracy of line-of-sight detection is ensured.
  • Other operations, effects, and modifications are the same as those in the first embodiment.
  • As described above, the line-of-sight detection method according to the present invention is useful in that the processing load can be reduced and the processing speed increased while the accuracy of line-of-sight detection is ensured.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of the present invention is to provide a line-of-sight detection method that, while ensuring the accuracy of line-of-sight direction detection, can reduce the computational processing load and achieve higher speed. The invention therefore relates to a line-of-sight detection method comprising a first determination step of periodically determining whether eye region images of a subject are included in an image of a predetermined range acquired in order to extract the eye region images. If the subject's eye region images are included in the image of the predetermined range in the first determination step, the eye region images are extracted and the line-of-sight direction of the subject is detected on the basis of the extracted eye region images. If the subject's eye region images are not included in the image of the predetermined range in the first determination step, a new whole image is acquired, a face image of the subject is detected from the whole image, eye region images of the subject are extracted from the detected face image, the line-of-sight direction of the subject is detected on the basis of the extracted eye region images, and furthermore, a range that includes the extracted eye region images is updated as the predetermined range.
PCT/JP2017/007189 2016-05-23 2017-02-24 Line-of-sight detection device WO2017203769A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018519093A JP6767482B2 (ja) Line-of-sight detection method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-102080 2016-05-23
JP2016102080 2016-05-23

Publications (1)

Publication Number Publication Date
WO2017203769A1 (fr)

Family

ID=60412231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/007189 WO2017203769A1 (fr) 2016-05-23 2017-02-24 Dispositif de détection de ligne de visée

Country Status (2)

Country Link
JP (1) JP6767482B2 (fr)
WO (1) WO2017203769A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021524975A (ja) * 2018-05-04 2021-09-16 Google LLC Invoking automated assistant function(s) based on detected gesture and gaze
WO2021210041A1 (fr) * 2020-04-13 2021-10-21 三菱電機株式会社 Face detection processing device and face detection processing method
JP2023509750A (ja) * 2020-01-08 2023-03-09 Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. Facial expression recognition method and apparatus, device, computer-readable storage medium, and computer program
US11614794B2 (en) 2018-05-04 2023-03-28 Google Llc Adapting automated assistant based on detected mouth movement and/or gaze
US11688417B2 (en) 2018-05-04 2023-06-27 Google Llc Hot-word free adaptation of automated assistant function(s)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004062393A * 2002-07-26 2004-02-26 Japan Science & Technology Corp Attention determination method and attention determination device
JP2012038106A * 2010-08-06 2012-02-23 Canon Inc Information processing apparatus, information processing method, and program
JP2012187178A * 2011-03-09 2012-10-04 Fujitsu Ltd Line-of-sight detection device and line-of-sight detection method
JP2016049258A * 2014-08-29 2016-04-11 アルプス電気株式会社 Illumination imaging device and line-of-sight detection device equipped with same

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021524975A (ja) * 2018-05-04 2021-09-16 Google LLC Invoking automated assistant function(s) based on detected gesture and gaze
US11493992B2 (en) 2018-05-04 2022-11-08 Google Llc Invoking automated assistant function(s) based on detected gesture and gaze
US11614794B2 (en) 2018-05-04 2023-03-28 Google Llc Adapting automated assistant based on detected mouth movement and/or gaze
JP7277569B2 (ja) 2018-05-04 2023-05-19 Google LLC Invoking automated assistant function(s) based on detected gesture and gaze
US11688417B2 (en) 2018-05-04 2023-06-27 Google Llc Hot-word free adaptation of automated assistant function(s)
JP2023509750A (ja) * 2020-01-08 2023-03-09 Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. Facial expression recognition method and apparatus, device, computer-readable storage medium, and computer program
JP7317241B2 (ja) 2020-01-08 2023-07-28 Shanghai SenseTime Lingang Intelligent Technology Co., Ltd. Facial expression recognition method and apparatus, device, computer-readable storage medium, and computer program
WO2021210041A1 (fr) * 2020-04-13 2021-10-21 三菱電機株式会社 Face detection processing device and face detection processing method

Also Published As

Publication number Publication date
JPWO2017203769A1 (ja) 2019-04-18
JP6767482B2 (ja) 2020-10-14

Similar Documents

Publication Publication Date Title
US11699293B2 (en) Neural network image processing apparatus
WO2017203769A1 (fr) Line-of-sight detection device
JP4895797B2 (ja) Eyelid detection device, eyelid detection method, and program
US10896324B2 (en) Line-of-sight detection device and method for detecting line of sight
CN110703904B (zh) Gaze-tracking-based augmented virtual reality projection method and system
JP5366028B2 (ja) Face image capturing device
WO2018030515A1 (fr) Line-of-sight detection device
JP6631951B2 (ja) Line-of-sight detection device and line-of-sight detection method
KR20120057033A (ko) Remote gaze tracking device and method for IPTV control
JP4452836B2 (ja) Method and device for detecting pupils
US11361560B2 (en) Passenger state detection device, passenger state detection system, and passenger state detection method
US20160063334A1 (en) In-vehicle imaging device
JP6957048B2 (ja) Eye image processing device
CN109415020B (zh) Luminance control device, luminance control system, and luminance control method
JPWO2016031666A1 (ja) Line-of-sight detection device
JP6555707B2 (ja) Pupil detection device, pupil detection method, and pupil detection program
JP2016028669A (ja) Pupil detection device and pupil detection method
JP2016051317A (ja) Line-of-sight detection device
JP7228885B2 (ja) Pupil detection device
JP7046347B2 (ja) Image processing device and image processing method
WO2017154356A1 (fr) Line-of-sight detection device and line-of-sight detection method
JP2016051312A (ja) Line-of-sight detection device
JP6370168B2 (ja) Illumination imaging device and line-of-sight detection device equipped with same
JP2018190213A (ja) Face recognition device and line-of-sight detection device
JP2017162233A (ja) Line-of-sight detection device and line-of-sight detection method

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2018519093

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17802371

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17802371

Country of ref document: EP

Kind code of ref document: A1