US10192135B2 - 3D image analyzer for determining the gaze direction - Google Patents

3D image analyzer for determining the gaze direction Download PDF

Info

Publication number
US10192135B2
US10192135B2 US15/221,847 US201615221847A
Authority
US
United States
Prior art keywords
image
pattern
gaze
hough
pupil
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US15/221,847
Other languages
English (en)
Other versions
US20160335475A1 (en)
Inventor
Daniel KRENZER
Albrecht HESS
András Kátai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of US20160335475A1 publication Critical patent/US20160335475A1/en
Assigned to FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. reassignment FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HESS, Albrecht, KÁTAI, András, KRENZER, Daniel
Application granted granted Critical
Publication of US10192135B2 publication Critical patent/US10192135B2/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/14Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
    • G06F17/145Square transforms, e.g. Hadamard, Walsh, Haar, Hough, Slant transforms
    • G06K9/4671
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06K9/00335
    • G06K9/00597
    • G06K9/00604
    • G06K9/0061
    • G06K9/00986
    • G06K9/4633
    • G06K9/481
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/60Rotation of whole images or parts thereof
    • G06T5/002
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/77Determining position or orientation of objects or cameras using statistical methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/955Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20008Globally adaptive
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20061Hough transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • Embodiments of the present invention relate to a 3D image analyzer for determining the gaze direction (i.e. the direction vector) or a line of sight (consisting of a position vector and a direction vector) within a 3D space, without the need for a calibration by the user whose gaze direction is to be determined. Further embodiments relate to an image analyzing system with a 3D image analyzer for recognizing an alignment and/or gaze direction, and to a corresponding method for recognizing the alignment and/or gaze direction.
  • One common category is video-based systems, which record the person's eyes with one or more cameras and analyze these video recordings online or offline in order to determine the gaze direction from them.
  • Systems for video-based determination of the gaze direction generally require a calibration procedure for each user prior to use, and in some cases additionally during use (e.g. when the user leaves the camera's detection zone or when the position between user and system changes), in order to be able to determine the user's gaze direction.
  • Some of these systems require a very specific, defined arrangement of the camera(s) and the illumination relative to each other, or a very specific arrangement of the camera(s) relative to the user and prior knowledge of the user's position (as disclosed e.g. in German patent DE 10 2004 046 617 A1), in order to be able to determine the gaze direction.
  • An image analyzing system for the determination of a gaze direction based on a previously detected or tracked pupil or iris may have: at least one Hough path for at least one camera of a monoscopic camera assembly, or at least two Hough paths for at least two cameras of a stereoscopic or multiscopic camera assembly, wherein every Hough path has a Hough processor with the following features: a pre-processor, which is configured to receive a plurality of samples each comprising an image, to rotate and/or reflect the image of the respective sample, and to output a plurality of versions of the image of the respective sample for each sample; and a Hough transformation unit, which is configured to collect a predetermined searched pattern within the plurality of samples on the basis of the plurality of versions, wherein a characteristic of the Hough transformation unit that depends on the searched pattern is adjustable; a unit for analyzing the collected pattern and for outputting a set of image data which describes a position and/or a geometry of the pattern; and a 3D image analyzer as mentioned above.
  • A method for the determination of a gaze direction may have the steps of: receiving at least one first set of image data, which is determined on the basis of a first image, and a further set of image data, which is determined on the basis of a further image, wherein the first image maps a pattern of a three-dimensional object from a first perspective into a first image plane, and wherein the further set has a further image or information which describes a relation between at least one point of the three-dimensional object and the first image plane; calculating a position of the pattern in three-dimensional space based on the first set, the further set, and a geometric relation between the perspectives of the first and the further image, or calculating the position of the pattern in three-dimensional space based on the first set and a statistically evaluated relation between at least two characteristic features in the first image, or calculating the position of the pattern in three-dimensional space based on the first set and a position relation between at least one point of the three-dimensional object and the first image plane; and calculating a 3D gaze vector according to which the pattern is aligned in three-dimensional space, based on the first set, the further set, and the calculated position of the pattern.
  • Still another embodiment may have a computer-readable digital storage medium on which a computer program is stored with a program code for the execution of a method for the determination of a gaze direction with the following steps: receiving at least one first set of image data, which is determined on the basis of a first image, and a further set of image data, which is determined on the basis of a further image, wherein the first image maps a pattern of a three-dimensional object from a first perspective into a first image plane, and wherein the further set has a further image or information which describes a relation between at least one point of the three-dimensional object and the first image plane; calculating a position of the pattern in three-dimensional space based on the first set, the further set, and a geometric relation between the perspectives of the first and the further image, or calculating the position of the pattern in three-dimensional space based on the first set and a statistically evaluated relation between at least two characteristic features in the first image, or calculating the position of the pattern in three-dimensional space based on the first set and a position relation between at least one point of the three-dimensional object and the first image plane; and calculating a 3D gaze vector according to which the pattern is aligned in three-dimensional space, based on the first set, the further set, and the calculated position of the pattern.
  • A method for the determination of a gaze direction may have the steps of: receiving at least one first set of image data, which is determined on the basis of a first image, and a further set of image data, which is determined on the basis of the first image or of a further image, wherein the first image maps a pattern of a three-dimensional object from a first perspective into a first image plane, and wherein the further set has a further image or information which describes a relation between at least one point of the three-dimensional object and the first image plane; calculating a position of the pattern in three-dimensional space based on the first set, the further set, and a geometric relation between the perspectives of the first and the further image, or calculating the position of the pattern in three-dimensional space based on the first set and a statistically evaluated relation between at least two characteristic features in the first image, or calculating the position of the pattern in three-dimensional space based on the first set and a position relation between at least one point of the three-dimensional object and the first image plane; and calculating a 3D gaze vector according to which the pattern is aligned in three-dimensional space, based on the first set, the further set, and the calculated position of the pattern.
  • The embodiments of the present invention create a 3D image analyzer for the determination of a gaze direction or a line of sight (comprising e.g. a gaze direction vector and a location vector, which e.g. indicates the pupil midpoint at which the gaze direction vector starts) or of a point of view, wherein the 3D image analyzer is configured to receive at least one first set of image data, which is determined on the basis of a first image, and a further set of information, which is determined on the basis of the first image, wherein the first image contains a pattern resulting from the depiction of a three-dimensional object (e.g. a pupil).
  • The 3D image analyzer comprises a position calculator and an alignment calculator.
  • The position calculator is configured to calculate a position of the pattern within three-dimensional space based on the first set, a further set determined on the basis of the further image, and a geometric relation between the perspectives of the first and the further image; or to calculate the position of the pattern within three-dimensional space based on the first set and a statistically evaluated relation of at least two characteristic features to each other in the first image; or to calculate the position of the pattern within three-dimensional space based on the first set and a position relation between at least one point of the three-dimensional object and the first image plane.
  • The alignment calculator is configured to calculate two possible 3D gaze vectors per image and to determine from these possible 3D gaze vectors the 3D gaze vector according to which the pattern in three-dimensional space is aligned, wherein the calculation and the determination are based on the first set, the further set, and the calculated position of the pattern.
  • The gist of the present invention is the recognition that, based on the position of the pattern determined by the above-mentioned position calculator, an alignment of an object in space, such as the alignment of a pupil in space (and thus the gaze direction) and/or a line of sight (consisting of a gaze direction vector and a location vector, which e.g. indicates the pupil midpoint at which the gaze direction vector starts), can be determined based on at least one set of image data (e.g. from a first perspective) and additional information and/or a further set of image data (from a further perspective). The determination of the alignment builds on the position calculator, which in a first step determines the position of the pattern.
  • These two possible 3D gaze vectors are determined e.g. by comparing the optical distortion of the pattern with a basic form of the pattern, from which it is determined by which amount the pattern is tilted relative to the optical plane of the image (cf. first set of image data).
  • Consider e.g. a (round) pupil, which when tilted is depicted as an ellipse.
  • There are two possible tilts of the pupil relative to the optical plane that lead to the same ellipse-shaped depiction of the pupil.
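The two-fold ambiguity described above can be illustrated numerically: a circular pupil tilted by an angle θ relative to the image plane projects to an ellipse whose semi-axes a ≥ b satisfy cos θ = b/a, and tilts of +θ and −θ about the ellipse's major axis produce the same ellipse. The following sketch (an illustration in camera coordinates; the function name and the coordinate convention are assumptions, not the patent's equations) computes both candidate pupil normals from the ellipse parameters:

```python
import numpy as np

def candidate_normals(a, b, phi):
    """Return the two unit normals of a circle whose projection is an
    ellipse with semi-axes a >= b and rotation angle phi (radians).

    The tilt magnitude theta satisfies cos(theta) = b / a; tilting by
    +theta or -theta about the major axis yields the same ellipse.
    """
    theta = np.arccos(b / a)                             # tilt magnitude
    minor = np.array([-np.sin(phi), np.cos(phi), 0.0])   # minor-axis direction
    view = np.array([0.0, 0.0, 1.0])                     # optical axis
    # Rotate the viewing direction by +/-theta about the major axis:
    n1 = np.cos(theta) * view + np.sin(theta) * minor
    n2 = np.cos(theta) * view - np.sin(theta) * minor
    return n1, n2
```

For a = 2 and b = 1 the tilt is 60°, and the two normals differ only in the sign of their in-plane component, which is exactly the ambiguity the alignment calculator has to resolve.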
  • The alignment calculator determines, on the basis of the further set of image data or on the basis of additional information likewise obtained from the first set of image data, which of the theoretically possible tilts corresponds to the real 3D gaze vector, i.e. to the actual gaze direction.
  • In this way, the gaze direction vector and/or the line of sight (consisting of position vector and direction vector) can be determined without prior knowledge of the distance between pupil and camera and without an exact positioning of the camera's optical axis (e.g. through the pupil midpoint).
  • The determination and/or the selection of the applicable 3D gaze vector takes place in such a way that two further possible 3D gaze vectors are determined for a further set of image data (from a further perspective); one 3D gaze vector from the first set of image data corresponds to one 3D gaze vector from the further set of image data, and this is the actual 3D gaze vector.
  • Alternatively, the first set of image data can be analyzed, e.g. with respect to how many pixels of the eye's sclera depicted in the first image are crossed by the projections of the two possible 3D gaze vectors (starting at the pupil midpoint).
  • In this case, the 3D gaze vector that crosses fewer pixels of the sclera is selected. Instead of the analysis of the sclera, it would also be possible to select the 3D gaze vector along whose projection into the image (starting from the pupil midpoint) the smaller distance between the pupil midpoint and the edge of the eye opening results.
  • Furthermore, statistically determined relations, such as the distance between two facial features (e.g. nose and eye), can be used to calculate the 3D position of a point in the pattern (e.g. the pupil or iris center).
  • The determination of the above-described 3D position of a point in the pattern is not limited to the use of statistically determined values. It can also be based on the results of an upstream calculator, which provides the 3D positions of facial features (e.g. nose, eye) or a 3D position of the above-mentioned pattern.
  • The selection of the actual 3D gaze vector from the possible 3D gaze vectors can also be based on the 3D position of the pattern (e.g. the pupil or iris center) and on the above-mentioned 3D positions of the facial features (e.g. the eye's edge, the mouth's edge).
  • The alignment calculation takes place in such a way that, for the first image, a first virtual projection plane is calculated by rotating the actual first projection plane (including the optics) around the principal point of the optics, so that a first virtual optical axis, defined as the perpendicular to the first virtual projection plane, extends through the midpoint of the recognized pattern.
  • Analogously, a second virtual projection plane is calculated for the further image by rotating the actual second projection plane (including the optics) around the principal point of the optics, so that a second virtual optical axis, defined as the perpendicular to the second virtual projection plane, extends through the midpoint of the recognized pattern.
  • The 3D gaze vector can be described by a set of equations, where every equation describes a geometric relation of the respective axes and the respective virtual projection plane with respect to the 3D gaze vector.
  • On the basis of the image data of the first set, a first equation for the 3D gaze vector can be set up, for which two solutions are possible.
  • A second equation on the basis of the image data of the second set leads to two (further) solutions for the 3D gaze vector, referring to the second virtual projection plane.
  • The actual 3D gaze vector can be calculated by averaging respectively one solution vector of the first equation and one solution vector of the second equation.
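Assuming both candidate pairs have been transformed into a common eye-tracker coordinate system, the selection of corresponding candidates and their averaging might be sketched as follows (function and variable names are illustrative, not from the patent):

```python
import itertools
import numpy as np

def select_gaze_vector(cands_cam1, cands_cam2):
    """Pick the pair of candidate 3D gaze vectors (one per camera) that
    agree best, and return their normalized mean as the actual vector.

    cands_cam1, cands_cam2: two unit vectors each, expressed in a
    common eye-tracker coordinate system.
    """
    best = max(
        itertools.product(cands_cam1, cands_cam2),
        key=lambda p: np.dot(p[0], p[1]),  # largest dot = smallest angle
    )
    mean = best[0] + best[1]
    return mean / np.linalg.norm(mean)
```

With noise-free input, the matching candidates are identical and the averaging returns them unchanged; with real measurements, the averaging smooths the residual disagreement between the two cameras.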
  • The 3D image analyzer can be implemented in a processing unit comprising e.g. a selective-adaptive data processor.
  • The 3D image analyzer can be part of an image analyzing system for tracking a pupil.
  • Such an image analyzing system typically comprises at least one Hough path for at least one camera or, advantageously, two Hough paths for at least two cameras.
  • Every Hough path can comprise one pre-processor as well as one Hough transformation unit.
  • In addition to this Hough transformation unit, a unit for analyzing the collected patterns and for outputting a set of image data can be included.
  • A method for determining a gaze direction or a line of sight comprises the step of receiving at least one first set of image data, which is determined on the basis of a first image, and a further set of information, which is determined on the basis of the first image or of a further image, wherein the first image maps a pattern of a three-dimensional object from a first perspective into a first image plane, and wherein the further set either contains a further image with a pattern resulting from the depiction of the same three-dimensional object from a further perspective into a further image plane, or comprises information which describes a relation between at least one point of the three-dimensional object and the first image plane.
  • The method further comprises the step of calculating a position of the pattern in three-dimensional space based on the first set, a further set determined on the basis of a further image, and a geometric relation between the perspectives of the first and the further image; or of calculating the position of the pattern in three-dimensional space based on the first set and a statistically determined relation of at least two characteristic features to one another in the first image; or of calculating the position of the pattern in three-dimensional space based on the first set and a position relation between at least one point of the three-dimensional object and the first image plane.
  • Subsequently, a 3D gaze vector is calculated according to which the pattern is aligned in three-dimensional space, wherein the calculation is based on the first set of image data, the further set of information, and the calculated position of the pattern.
  • This method can be performed by a computer.
  • A further embodiment relates to a computer-readable digital storage medium with a program code for performing the above method.
  • FIG. 1 a schematic block diagram of a 3D image analyzer according to an embodiment
  • FIG. 2 a a schematic block diagram of a Hough processor with a pre-processor and a Hough transformation unit according to an embodiment
  • FIG. 2 b a schematic block diagram of a pre-processor according to an embodiment
  • FIG. 2 c a schematic illustration of Hough cores for the detection of straight lines (line segments);
  • FIG. 3 a a schematic block diagram of a possible implementation of a Hough transformation unit according to an embodiment
  • FIG. 3 b a single cell of a deceleration matrix according to an embodiment
  • FIG. 4 a - d a schematic block diagram of a further implementation of a Hough transformation unit according to an embodiment
  • FIG. 5 a a schematic block diagram of a stereoscopic camera assembly with two image processors and a post-processing unit, whereby each of the image processors comprises one Hough processor according to embodiments;
  • FIG. 5 b an exemplary picture of an eye for the illustration of a point of view detection, which is feasible with the unit from FIG. 5 a and for explanation of the point of view detection in the monoscopic case;
  • FIG. 6-7 c further illustrations for explanation of additional embodiments and/or aspects
  • FIG. 8 a - c schematic illustrations of optical systems with associated projection planes.
  • FIG. 8 d a schematic illustration of an ellipse with the parameters mentioned in the description thereto;
  • FIG. 8 e a schematic illustration of the depiction of a circle in 3D space as an ellipse in a plane, for explanation of the calculation of the alignment of the circle in 3D space based on the parameters of the ellipse; and
  • FIG. 9 a -9 i further illustrations for explanation of background knowledge for the Hough transformation unit.
  • FIG. 1 shows a 3D image analyzer 400 with a position calculator 404 and an alignment calculator 408 .
  • The 3D image analyzer is configured to determine a gaze direction in 3D space (thus, a 3D gaze direction) on the basis of at least one set of image data, but advantageously on the basis of a first set and a second set of image data. Together with a likewise determined point on the line of sight (e.g. the pupil or iris center in 3D space), the 3D line of sight results from this point and the above-mentioned gaze direction; it can also be used as the basis for the calculation of the 3D point of view.
  • The fundamental method for the determination comprises three basic steps. The first is the receipt of at least the one first set of image data, which is determined on the basis of a first image 802 a (cf. FIG. 8 a ), and of a further set of information, which is determined on the basis of the first image 802 a or a further image 802 b .
  • The first image 802 a maps a pattern 804 a of a three-dimensional object 806 a (cf. FIG. 8 b ) from a first perspective into a first image plane.
  • the further set typically comprises the further image 802 b.
  • The further set can alternatively also contain one or more of the following pieces of information (instead of concrete image data): a position relation between a point P MP of the three-dimensional object 806 a and the first image plane 802 a ; position relations between several characteristic points in the face or eye relative to one another; position relations of characteristic points in the face or eye with respect to the sensor; or the position and alignment of the face.
  • In the second step, the position of the pattern 804 a in three-dimensional space is calculated based on the first set, the further set, and a geometric relation between the perspectives of the first and the further image 802 a and 802 b .
  • Alternatively, the position of the pattern 804 a in three-dimensional space can be calculated based on the first set and a statistically evaluated relation of at least two characteristic features to one another in the first image 802 a .
  • The last step of this unit operation relates to the calculation of the 3D gaze vector according to which the pattern 804 a and 804 b is aligned in three-dimensional space. The calculation occurs based on the first set and the further set.
  • On the image sensor of each camera, an elliptical pupil projection respectively arises (cf. FIG. 8 a ).
  • The center of the pupil is depicted on both sensors 802 a and 802 b , and thus also in the respective camera images, as the midpoint E MP K1 and E MP K2 of the ellipse. Therefore, by stereoscopic rear projection of these two ellipse midpoints E MP K1 and E MP K2 , the 3D pupil midpoint can be determined by means of the objective lens model.
  • A prerequisite for this is an ideally time-synchronous capture, so that the scenes depicted by both cameras are identical and, thus, the pupil midpoint is collected at the same position.
  • For this, the rear projection beam RS of the ellipse midpoint has to be calculated for each camera; it runs along the intersection beam between the object and the object-side principal point (H1) of the optical system ( FIG. 8 a ).
  • RS(t) = RS_0 + t · RS_n⃗ (A1)
  • This rear projection beam is defined by equation (A1). It consists of a starting point RS_0 and a normalized direction vector RS_n⃗, which, in the objective lens model used ( FIG. 8 b ), result from equations (A2) and (A3), from the two principal points H 1 and H 2 of the objective as well as from the ellipse center E MP in the sensor plane. For this, all three points (H 1 , H 2 and E MP ) have to be available in the eye tracker's coordinate system.
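Equations (A2) and (A3) are not reproduced in this excerpt; the sketch below therefore assumes the usual thick-lens objective model, in which a ray from the sensor point E MP towards the image-side principal point H 2 leaves the objective at the object-side principal point H 1 with the same direction:

```python
import numpy as np

def rear_projection_beam(H1, H2, E_MP):
    """Rear projection beam RS(t) = RS_0 + t * RS_n (cf. equation (A1)).

    Assumption: thick-lens model, so the beam starts at the object-side
    principal point H1 and runs parallel to the ray from the sensor
    point E_MP to the image-side principal point H2.
    """
    H1, H2, E_MP = map(np.asarray, (H1, H2, E_MP))
    RS_0 = H1                      # starting point on the object side
    d = H2 - E_MP
    RS_n = d / np.linalg.norm(d)   # normalized direction vector
    return RS_0, RS_n
```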
  • The 3D ellipse center in the camera coordinate system can be calculated from the previously determined ellipse center parameters x m and y m by means of an equation in which:
  • P image is the resolution of the camera image in pixels
  • S offset is the position on the sensor, at which it is started to read out the image
  • S res is the resolution of the sensor
  • S PxGr is the pixel size of the sensor.
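The conversion equation itself is not reproduced in this excerpt. The following is a plausible sketch based on the listed parameters, assuming the camera image is a (possibly offset, possibly scaled) read-out window of the sensor and that sensor coordinates are taken relative to the sensor center; the concrete mapping is an assumption, not the patent's verbatim equation:

```python
def pixel_to_sensor(x_m, y_m, P_image, S_offset, S_res, S_PxGr):
    """Convert ellipse-center pixel coordinates (x_m, y_m) into metric
    coordinates on the sensor plane, relative to the sensor center.

    Assumed mapping (hypothetical, not the patent's equation):
      sensor pixel = image pixel * (S_res / P_image) + S_offset
      metric coord = (sensor pixel - S_res / 2) * S_PxGr
    """
    def axis(p, p_img, off, res):
        return (p * res / p_img + off - res / 2.0) * S_PxGr
    return (axis(x_m, P_image[0], S_offset[0], S_res[0]),
            axis(y_m, P_image[1], S_offset[1], S_res[1]))
```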
  • The searched pupil midpoint is, in the ideal case, the point of intersection of the two rear projection beams RS K1 and RS K2 .
  • However, with even minimal measurement errors in the model parameters and ellipse midpoints, the straight lines no longer intersect in 3D space.
  • Two straight lines in this constellation, which neither intersect nor run parallel, are designated in geometry as skew lines.
  • In the case of the rear projection, the two skew lines respectively pass the pupil midpoint very closely. The pupil midpoint thereby lies at the position of their smallest distance to each other, halfway along the connecting line between the two straight lines.
  • The shortest distance between two skew lines is indicated by a connecting line that is perpendicular to both straight lines.
  • The direction vector n⃗_St of the line standing perpendicularly on both rear projection beams can be calculated according to equation (A4) as the cross product of their direction vectors.
  • n⃗ St = RS⃗ n,K1 × RS⃗ n,K2  (A4)
  • The position of the shortest connecting line between the rear projection beams is defined by equation (A5).
  • By inserting RS K1 (s), RS K2 (t) and n⃗ St into equation (A5), an equation system results, from which s, t and u can be calculated.
  • The distance d RS between the two rear projection beams results as d RS = |u| · ‖n⃗ St ‖.
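The skew-line construction above can be sketched in a few lines of linear algebra. The following is a minimal illustration (function and variable names are my own, not from the patent): the two rear projection beams have the form of equation (A1), the perpendicular direction follows equation (A4), and the pupil midpoint is taken halfway along the shortest connecting line.

```python
import numpy as np

def pupil_midpoint(rs0_k1, n_k1, rs0_k2, n_k2):
    """Midpoint of the shortest connecting line between two skew beams.

    Each beam has the form RS(t) = RS0 + t * n (cf. equation (A1));
    the names are illustrative, not the patent's.
    """
    n_st = np.cross(n_k1, n_k2)                       # cf. equation (A4)
    # Solve RS0_K1 + s*n_K1 + u*n_St = RS0_K2 + t*n_K2 for s, t, u
    A = np.column_stack((n_k1, -n_k2, n_st))
    s, t, u = np.linalg.solve(A, rs0_k2 - rs0_k1)
    p1 = rs0_k1 + s * n_k1                            # foot point on beam 1
    p2 = rs0_k2 + t * n_k2                            # foot point on beam 2
    return 0.5 * (p1 + p2), np.linalg.norm(p2 - p1)   # midpoint, distance d_RS
```

For exactly parallel beams the linear system becomes singular; a practical implementation would catch that case separately.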
  • the calculated pupil midpoint is one of the two parameters which determine the line of sight to be determined by the eye-tracker. Moreover, it is needed for the calculation of the gaze direction vector P⃗ n , which is described in the following.
  • the gaze direction vector P⃗ n to be determined corresponds to the normal vector of the circular pupil surface and is thus specified by the alignment of the pupil in 3D space.
  • the position and alignment of the pupil can be determined.
  • the lengths of the two half-axes as well as the rotation angles of the projected ellipses are characteristic for the alignment of the pupil and/or the gaze direction relative to the camera position.
  • the distance between pupil and camera, at several hundred millimeters, is very large compared to the pupil radius, which is between 2 mm and 8 mm. Therefore, the deviation of the pupil projection from an ideal ellipse form, which occurs with the inclination of the pupil relative to the optical axis, is very small and can be neglected.
  • the influence of the angle δ on the ellipse parameters has to be eliminated so that the form of the pupil projection is influenced solely by the alignment of the pupil. This is given if the pupil midpoint P MP lies directly in the optical axis of the camera system. Therefore, the influence of the angle δ can be removed by calculating the pupil projection on the sensor of a virtual camera system vK, the optical axis of which passes directly through the previously calculated pupil midpoint P MP , as shown in FIG. 8 c.
  • the position and alignment of such a virtual camera system 804 a ′ (vK in FIG. 8 c ) can be calculated from the parameters of the original camera system 804 a (K in FIG. 8 b ) by rotation about its object-side main point H 1 .
  • this corresponds simultaneously to the object-side main point vH 1 of the virtual camera system 804 a ′. Therefore, the direction vectors of the intersection beams of the depicted objects in front of and behind the virtual optical system 808 c ′ are identical to those in the original camera system. All further calculations for determining the gaze direction vector take place in the eye-tracker coordinate system.
  • the vectors vK⃗ x and vK⃗ y can be calculated, which indicate the x- and y-axis of the virtual sensor in the eye-tracker coordinate system.
  • vK 0 = vH 1 − (d + b) · vK⃗ n  (A9)
  • the distance d between the main points, required for this purpose, as well as the distance b between the main plane 2 and the sensor plane have to be known or, e.g., determined by an experimental setup.
  • edge points RP 3D of the previously determined ellipse on the sensor in its original position are required.
  • E a is the short half-axis of the ellipse
  • E b is the long half-axis of the ellipse
  • E y m is the midpoint coordinate of the ellipse
  • E α is the rotation angle of the ellipse.
  • the position of one point RP 3D in the eye-tracker coordinate system can be calculated by the equations (A11) to (A14) from the parameters of the ellipse E, the sensor S and the camera K, wherein ω indicates the position of an edge point RP 2D according to FIG. 8 d on the ellipse circumference.
  • [x′, y′]ᵀ = [E a · cos(ω), E b · sin(ω)]ᵀ  (A11)
  • RP 2D = [x′ · cos(E α ) + y′ · sin(E α ) + E x m , −x′ · sin(E α ) + y′ · cos(E α ) + E y m ]ᵀ  (A12)
  • [s 1 , t 1 ]ᵀ = (RP 2D − ½ · S res − S offset ) · S PxGr  (A13)
  • RP 3D = K 0 + s 1 · K⃗ x + t 1 · K⃗ y  (A14)
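Equations (A11) to (A14) chain a point on the ellipse circumference through the rotation by E α, the pixel-to-metric conversion on the sensor, and the camera basis vectors. A minimal sketch (parameter names and the exact sign convention of the pixel-to-metric step (A13) are assumptions on my part):

```python
import numpy as np

def edge_point_3d(omega, E_a, E_b, E_alpha, E_xm, E_ym,
                  S_res, S_offset, S_px, K_0, K_x, K_y):
    # (A11): point on the axis-aligned ellipse
    x_, y_ = E_a * np.cos(omega), E_b * np.sin(omega)
    # (A12): rotate by E_alpha and shift to the ellipse midpoint
    rp2d = np.array([ x_ * np.cos(E_alpha) + y_ * np.sin(E_alpha) + E_xm,
                     -x_ * np.sin(E_alpha) + y_ * np.cos(E_alpha) + E_ym])
    # (A13): pixel coordinates -> metric sensor coordinates
    s1, t1 = (rp2d - 0.5 * S_res - S_offset) * S_px
    # (A14): position in the eye-tracker coordinate system
    return K_0 + s1 * K_x + t1 * K_y
```

Evaluating this for several angles omega yields the edge points needed later for the fitting of the virtual ellipse.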
  • intersection beam KS in the original camera system, which displays a pupil edge point as ellipse edge point RP 3D on the sensor
  • the intersection beams of the ellipse edge points in FIG. 8 b and FIG. 8 c demonstrate this aspect.
  • the two beams KS and vKS have the same direction vector, which results from equation (A15).
  • the parameters of the virtual ellipse vE can be calculated by means of ellipse fitting, e.g. with the “direct least squares fitting of ellipses” algorithm according to Fitzgibbon et al.
  • at least six virtual edge points vRP 2D are required, which can be calculated by using several angles ω in equation (A11) along the above described path.
  • the form of the virtual ellipse vE determined this way only depends on the alignment of the pupil. Furthermore, its midpoint is in the center of the virtual sensor and, together with the sensor normal, which corresponds to the camera normal vK⃗ n , it forms a straight line running along the optical axis through the pupil midpoint P MP .
  • the requirements are fulfilled to subsequently calculate the gaze direction based on the approach presented in the patent specification DE 10 2004 046 617 A1. With this approach, by using the above described virtual camera system, it is now also possible to determine the gaze direction if the pupil midpoint lies outside the optical axis of the real camera system, which is frequently the case in real applications.
  • the previously calculated virtual ellipse vE is now assumed to lie in the virtual main plane 1 .
  • the 3D ellipse midpoint vE′ MP corresponds to the virtual main point 1 .
  • it is the foot of the perpendicular dropped from the pupil midpoint P MP onto the virtual main plane 1 .
  • only the axis ratio and the rotation angle of the ellipse vE are used.
  • Every picture of the pupil 806 a in a camera image can arise from two different alignments of the pupil.
  • two virtual intersections vS of the two possible lines of sight with the virtual main plane 1 arise from the results of every camera.
  • the two possible gaze directions P ⁇ right arrow over (n) ⁇ ,1 and P ⁇ right arrow over (n) ⁇ ,2 can be determined as follows.
  • both virtual intersections vS 1 and vS 2 can be determined and, therefrom, the possible gaze directions P⃗ n,1 and P⃗ n,2 .
  • vS 1 = vH 1 + r · r⃗ n,1  (A22)
  • vS 2 = vH 1 + r · r⃗ n,2  (A23)
  • P⃗ n,1 = (vS 1 − P MP ) / ‖vS 1 − P MP ‖  (A24)
  • P⃗ n,2 = (vS 2 − P MP ) / ‖vS 2 − P MP ‖  (A25)
  • the possible gaze directions of camera 1 are P⃗ n,1 K1 as well as P⃗ n,2 K1 ,
  • those of camera 2 are P⃗ n,1 K2 as well as P⃗ n,2 K2 .
  • From these four vectors, respectively one of each camera indicates the actual gaze direction, whereby these two normalized vectors are ideally identical.
  • the differences of the respectively selected possible gaze direction vectors are formed, each from one vector of one camera and one vector of the other camera. The combination which has the smallest difference contains the searched vectors.
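The selection of the actual gaze direction from the four candidate vectors can be sketched as follows (a minimal illustration with assumed names): all camera-1/camera-2 combinations are formed, the pair with the smallest difference is kept, and the two vectors are averaged and normalized.

```python
import numpy as np
from itertools import product

def select_gaze(cands_k1, cands_k2):
    """cands_k1/cands_k2: the two normalized candidate vectors per camera."""
    # Pick the camera-1/camera-2 pair with the smallest vector difference
    best = min(product(cands_k1, cands_k2),
               key=lambda pair: np.linalg.norm(pair[0] - pair[1]))
    mean = best[0] + best[1]
    return mean / np.linalg.norm(mean)    # averaged gaze direction vector
```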
  • the angle w diff between the two averaged vectors P ⁇ right arrow over (n) ⁇ K1 and P ⁇ right arrow over (n) ⁇ K2 can be calculated.
  • the corresponding angles can be added to the determined viewing angles θ BW and φ BW .
  • the implementation of the above introduced method is platform-independent so that the above introduced method can be performed on different hardware platforms, as e.g. a PC.
  • FIG. 2 a shows a Hough processor 100 with a pre-processor 102 and a Hough transformation unit 104 .
  • the pre-processor 102 constitutes the first signal processing stage and is informationally linked to the Hough transformation unit 104 .
  • the Hough transformation unit 104 has a delay filter 106 , which can comprise at least one, however, advantageously a plurality of delay elements 108 a , 108 b , 108 c , 110 a , 110 b , and 110 c .
  • the delay elements 108 a to 108 c and 110 a to 110 c of the delay filter 106 are typically arranged as a matrix, thus, in columns 108 and 110 and lines a to c, and are signal-linked to each other.
  • At least one of the delay elements 108 a to 108 c and/or 110 a to 110 c has an adjustable delay time, here symbolized by means of the “+/ ⁇ ” symbols.
  • a separate control logic and/or control register (not shown) can be provided for activating the delay elements 108 a to 108 c and 110 a to 110 c and/or for controlling the same.
  • This control logic controls the delay time of the individual delay elements 108 a to 108 c and/or 110 a to 110 c via optional switchable elements 109 a to 109 c and/or 111 a to 111 c , which e.g. can comprise a multiplexer and a bypass.
  • the Hough transformation unit 104 can comprise an additional configuration register (not shown) for the initial configuration of the individual delay elements 108 a to 108 c and 110 a to 110 c.
  • the pre-processor 102 has the objective to process the individual samples 112 a , 112 b , and 112 c in a way that they can be efficiently processed by the Hough transformation unit 104 .
  • the pre-processor 102 receives the image data and/or the plurality of samples 112 a , 112 b , and 112 c and performs a pre-processing, e.g. in form of a rotation and/or in form of a reflection, in order to output the several versions (cf. 112 a and 112 a ′) to the Hough transformation unit 104 .
  • the outputting can occur serially, if the Hough transformation unit 104 has a Hough core 106 , or also parallel, if several Hough cores are provided.
  • the n versions of the image are either entirely parallel, semi-parallel (thus, only partly parallel) or serially outputted and processed.
  • the pre-processing in the pre-processor 102 , which serves the purpose to detect several similar patterns (rising and falling straight line) with one search pattern or Hough core configuration, is explained in the following by means of the first sample 112 a.
  • This sample can, e.g., be rotated by 90° in order to obtain the rotated version 112 a ′.
  • This procedure of the rotation has reference sign 114 .
  • the rotation can occur either by 90°, but also by 180° or 270°, or generally by 360°/n, whereby it should be noted that depending on the downstream Hough transformation (cf. Hough transformation unit 104 ), it may be very efficient to carry out only a 90° rotation.
  • the reflecting 116 corresponds to a rearward read-out of the memory. Based on the reflected version 112 a ′′ as well as on the rotated version 112 a ′, a fourth version, a rotated and reflected version 112 a ′′′, can be obtained by carrying out the procedure 114 or 116 . On the basis of the reflection 116 , two similar patterns (e.g. a rightwards opened semicircle and a leftwards opened semicircle) are then detected with the same Hough core configuration, as subsequently described.
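A minimal software sketch of this pre-processing step, using NumPy as a stand-in for the FPGA data path (names are illustrative): from one binary edge image, the rotated, the reflected and the rotated-and-reflected versions are produced.

```python
import numpy as np

def preprocess(img):
    """Produce the four versions 112a, 112a', 112a'', 112a''' (illustrative)."""
    rot = np.rot90(img)                  # rotation by 90 degrees (procedure 114)
    refl = img[:, ::-1]                  # reflection = rearward read-out (116)
    rot_refl = np.rot90(img)[:, ::-1]    # rotated and reflected version
    return img, rot, refl, rot_refl
```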
  • the Hough transformation unit 104 is configured in order to detect, in the versions 112 a or 112 a ′ (or 112 a ′′ or 112 a ′′′) provided by the pre-processor 102 , a predetermined searched pattern, as e.g. an ellipse or a segment of an ellipse, a circle or a segment of a circle, or a straight line or a straight line segment.
  • the filter arrangement is configured corresponding to the searched predetermined pattern.
  • some of the delay elements 108 a to 108 c or 110 a to 110 c are activated or bypassed.
  • the column sum is outputted via the column sum output 108 x or 110 x , whereby here, optionally, an addition element (not shown) for establishing the column sum of each column 108 or 110 can be provided.
  • from a maximum of one of the column sums, a presence of a searched image structure or of a segment of the searched image structure, or at least the associated degree of accordance with the searched structure, can be assumed.
  • with every processing step, the film strip is moved further by one pixel or by one column 108 or 110 so that, by means of the resulting histogram, it is recognizable whether one of the searched structures is detected or not, or whether the probability for the presence of the searched structure is correspondingly high.
  • exceeding a threshold value of the respective column sum of column 108 or 110 shows the detection of a segment of the searched image structure, whereby every column 108 or 110 is associated with a searched pattern or a characteristic of a searched pattern (e.g. angle of a straight line or radius of a circle).
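A simplified software model of this column-wise evaluation (the representation is an assumption, not the patent's FPGA implementation): each Hough core column is described by one delay/offset per line, the column sum counts the edge pixels matching that pattern characteristic at the current position of the "film strip", and exceeding a threshold marks a detection.

```python
def column_sums(edge_rows, column_offsets, x):
    """edge_rows: binary lists, one per image line; x: current strip position.

    column_offsets: one offset list per Hough core column; each offset models
    the delay of that line's delay element (illustrative representation).
    """
    sums = []
    for offsets in column_offsets:            # one entry per Hough core column
        s = 0
        for row, off in zip(edge_rows, offsets):
            xi = x - off                      # delayed sampling position
            if 0 <= xi < len(row):
                s += row[xi]
        sums.append(s)
    return sums

def detected(sums, threshold):
    """A column sum reaching the threshold signals a recognized segment."""
    return max(sums) >= threshold
```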
  • the searched characteristic (thus, e.g. the radius or the slope) can be adjusted during ongoing operation.
  • during adjustment of the delay time of one of the delay elements 108 a to 108 c or 110 a to 110 c , a change of the entire filter characteristic of the filter 106 occurs.
  • Due to the flexible adjustment of the filter characteristic of the filter 106 of the Hough transformation unit 104 , it is possible to adjust the transformation core 106 during runtime so that e.g. dynamic image contents, as e.g. small and large pupils, can be collected and tracked with the same Hough core 106 .
  • all delay elements 108 a , 108 b , 108 c , 110 a , 110 b and/or 110 c are implemented with a variable or discretely switchable delay time so that, during ongoing operation, it is possible to switch between the different patterns to be detected or between the different characteristics of the patterns to be detected.
  • the size of the shown Hough core 104 is configurable (either during operation or previously) so that, thus, additional Hough cells can be activated or deactivated.
  • the transformation unit 104 can be connected to means for adjusting the same or, to be precise, for adjusting the individual delay elements 108 a to 108 c and 110 a to 110 c , as e.g. with a controller (not shown).
  • the controller is e.g. arranged in a downstream processing unit and is configured in order to adjust the delay characteristic of the filter 106 if a pattern cannot be recognized or if the recognition is not sufficiently good (low accordance of the image content with the searched pattern). This controller is referred to with reference to FIG. 5 a .
  • the above mentioned embodiment has the advantage that it is easy and flexible to realize and that it is particularly able to be implemented on an FPGA (Field Programmable Gate Array).
  • the background hereto is that the above described parallel Hough transformation manages without regression and is, so to speak, entirely parallelized. Therefore, the further embodiments relate to FPGAs, which at least have the Hough transformation unit 104 and/or the pre-processor 102 .
  • 60 FPS at a resolution of 640×480 could be achieved by using a clock frequency of 96 MHz, as due to the above described structure 104 with a plurality of columns 108 and 110 , a parallel processing or a so-called parallel Hough transformation is possible.
  • By gaze direction, primarily the optical axis of the eye is meant.
  • This optical axis of the eye is to be distinguished from the visual axis of the eye; the optical axis, however, can serve as an estimate for the visual axis, as these axes typically depend on each other.
  • a direction or a direction vector can be calculated, which is even a clearly better estimate of the alignment of the actual visual axis of the eye.
  • FIGS. 2 a and 2 b show the pre-processor 102 , which serves the pre-processing of the video data stream 112 with the frames 112 a , 112 b , and 112 c .
  • the pre-processor 102 is configured in order to receive the samples 112 as binary edge images or even as gradient images and to carry out on the basis of the same the rotation 114 or the reflection 116 , in order to obtain the four versions 112 a , 112 a ′, 112 a ′′, and 112 a ′′′.
  • the background is that typically the parallel Hough transformation, as carried out by the Hough transformation unit, is based on two or four respectively pre-processed, e.g. rotated and/or reflected, versions of the image.
  • the pre-processor has, in the corresponding embodiments, an internal or external storage, which serves for buffering the received image data 112 .
  • the processing of rotating 114 and/or reflecting 116 of the pre-processor 102 depends on the downstream Hough transformation, the number of the parallel Hough cores (parallelizing degree) and the configuration of the same, as it is described in particular with reference to FIG. 2 c .
  • the pre-processor 102 can be configured in order to output the pre-processed video stream according to the parallelizing degree of the downstream Hough transformation unit 104 corresponding to one of the three following constellations via the output 126 :
  • the pre-processor 102 can be configured in order to carry out further image processing steps, as e.g. an up-sampling. Additionally, it would also be possible that the pre-processor creates the gradient image. For the case that the gradient image creation will be part of the image pre-processing, the grey-value image (initial image) could be rotated in the FPGA.
  • FIG. 2 c shows two Hough core configurations 128 and 130 , e.g. for two parallel 31 ⁇ 31 Hough cores, configured in order to recognize a straight line or a straight section. Furthermore, a unit circle 132 is applied in order to illustrate in which angle segment, the detection is possible. It should be noted at this point that the Hough core configuration 128 and 130 is to be respectively seen in a way that the white dots illustrate the delay elements.
  • the Hough core configuration 128 corresponds to a so-called type 1 Hough core, whereas the Hough core configuration 130 corresponds to a so-called type 2 Hough core.
  • the one constitutes the inverse of the other one.
  • With the first Hough core configuration 128 , a straight line in segment 1 between 3π/4 and π/2 can be detected, whereas a straight line in the segment between 3π/2 and 5π/4 (segment 2) is detectable by means of the Hough core configuration 130 .
  • the Hough core configurations 128 and 130 are applied to the rotated version of the respective image. Consequently, by means of the Hough core configuration 128 , the segment 1r between π/4 and zero, and by means of the Hough core configuration 130 , the segment 2r between π and 3π/4 can be collected.
  • a rotation of the image once by 90°, once by 180° and once by 270° can be useful in order to collect the above described variants of the straight line alignment.
  • only one Hough core type can be used, which is reconfigured during ongoing operation, or regarding which the individual delay elements can be switched on or off in a way that the Hough core corresponds to the inverted type.
  • the respective Hough core configuration or the selection of the Hough core type depends on the pre-processing, which is carried out by the pre-processor 102 .
  • FIG. 3 a shows a Hough core 104 with m columns 108 , 110 , 138 , 140 , 141 , and 143 and n lines a, b, c, d, e, and f so that m ⁇ n cells are formed.
  • every column 108 , 110 , 138 , 140 , 141 , and 143 of the filter represents a specific characteristic of the searched structure, e.g. a specific curvature or a specific straight-line slope.
  • Every cell comprises a delay element, which is adjustable with respect to the delay time; in this embodiment, the adjustment mechanism is realized by providing, for each cell, a switchable delay element with a bypass.
  • the cell ( 108 a ) from FIG. 3 b comprises the delay element 142 , a remotely controllable switch 144 , as e.g. a multiplexer, and a bypass 146 .
  • the line signal can either be transferred via the delay element 142 or be led undelayed to the intersection 148 .
  • the intersection 148 is, on the one hand, connected to the summing element 150 of the column (e.g. 108 ); on the other hand, via this intersection 148 , also the next cell (e.g. 110 a ) is connected.
  • the multiplexer 144 is configured via a so-called configuration register 160 (cf. FIG. 3 a ). It should be noted at this point that the reference sign 160 shown here only relates to a part of the configuration register 160 , which is directly coupled to the multiplexer 144 .
  • the element of the configuration register 160 is configured in order to control the multiplexer 144 and, for this, receives via a first information input 160 a a configuration information, which originates e.g. from a configuration matrix stored in the FPGA-internal BRAM 163 .
  • This configuration information can be a column-by-column bit string and relates to the configuration of several of the delay cells ( 142 + 144 ), also during transformation.
  • the configuration information can be furthermore transmitted via the output 160 b .
  • the configuration register 160 or the cell of the configuration register 160 receives a so-called enabler signal via a further signal input 160 c , by means of which the reconfiguration is started.
  • the reconfiguration of the Hough core needs a certain time, which depends on the number of delay elements or, in particular, on the size of a column. Thereby, one clock cycle is associated with every column element, and a latency of a few clock cycles occurs due to the BRAM 163 or the configuration logic 160 .
  • the total latency for the reconfiguration is typically negligible for video-based image processing.
  • the video data streams recorded with a CMOS sensor have a horizontal and vertical blanking, whereby the horizontal blanking or the horizontal blanking time can be used for the reconfiguration.
  • the size of the Hough core structure implemented in the FPGA predetermines the maximum size for the Hough core configuration. If e.g. smaller configurations are used, these are vertically centered and aligned in horizontal direction to column 1 of the Hough core structure. Non-used elements of the Hough core structure are all occupied with activated delay elements.
  • the evaluation of the data streams processed in this way with the individual delay elements ( 142 + 144 ) occurs column-by-column. For this, the summation is carried out column-by-column, in order to detect a local sum maximum, which displays a recognized searched structure.
  • the summation per column 108 , 110 , 138 , 140 , 141 , and 143 serves to determine a value, which is representative for the degree of accordance with the searched structure for the characteristic of the structure assigned to the respective column.
  • for this, comparers 108 v , 110 v , 138 v , 140 v , 141 v , or 143 v are provided, which are connected to the respective summing elements 150 .
  • besides the comparers 108 v , 110 v , 138 v , 140 v , 141 v , 143 v of the different columns 108 , 110 , 138 , 140 , 141 , or 143 , also further delay elements 153 can be provided, which serve to compare the column sums of adjacent columns.
  • the column 108 , 110 , 138 , or 140 with the highest degree of accordance for a characteristic of the searched pattern is picked out of the filter.
  • the result comprises a so-called multi-dimensional Hough space, which comprises all relevant parameters of the searched structure, as e.g. the kind of the pattern (e.g. straight line or curve).
  • the Hough core cell from FIG. 3 b can have an optional pipeline delay element 162 (pipeline delay), which e.g. is arranged at the output of the cell and is configured to delay both the signal delayed by means of the delay element 142 and the signal passed non-delayed by means of the bypass 146 .
  • such a cell can also have a delay element with a variable delay time or a plurality of switched and bypassed delay elements so that the delay time is adjustable in several stages.
  • FIG. 5 a shows an FPGA implemented image processor 10 a with a pre-processor 102 and a Hough transformation unit 104 .
  • an input stage 12 may be implemented in the image processor 10 a , which is configured in order to receive image data or image samples from a camera 14 a .
  • the input stage 12 may e.g. comprise an image takeover interface 12 a , a segmentation and edge detector 12 b and means for the camera control 12 c .
  • the means for the camera control 12 c are connected to the image interface 12 a and the camera 14 a and serve to control factors like gain and/or illumination.
  • the image processor 10 a further comprises a so-called Hough feature extractor 16 , which is configured in order to analyze the multi-dimensional Hough space outputted by the Hough transformation unit 104 , which includes all relevant information for the pattern recognition, and to output, on the basis of the analysis results, a compilation of all Hough features.
  • a smoothing of the Hough feature space occurs here, i.e. a spatial smoothing by means of a local filter, or a thinning of the Hough space (rejection of information being irrelevant for the pattern recognition). This thinning is carried out under consideration of the kind of the pattern and the characteristic of the structure so that non-maxima in the Hough probability space are faded out.
  • threshold values can be defined so that e.g. minimally or maximally admissible characteristics of a structure, as e.g. a minimal or a maximal curvature or a smallest or greatest slope, can be previously determined.
  • besides the threshold-based rejection, also a noise suppression in the Hough probability space may occur.
  • the analytical retransformation of the parameters of all remaining points into the original image segment yields e.g. the following Hough features: for the curved structure, position (x- and y-coordinates), appearance probability, radius and the angle, which indicates to which direction the arc is opened, can be transmitted.
  • For a straight line, parameters such as position (x- and y-coordinates), appearance probability, the angle, which indicates the slope of the straight line, and the length of the representative straight segment can be determined.
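The Hough features described above can be illustrated as a small record type together with a threshold-based thinning step (all field and function names here are assumptions, not the patent's):

```python
from dataclasses import dataclass

@dataclass
class HoughFeature:
    kind: str           # e.g. "curve" or "line"
    x: int              # position in the original image segment
    y: int
    probability: float  # appearance probability
    angle: float        # opening direction of an arc / slope of a line
    size: float         # radius of a curve / length of a line segment

def thin(features, min_prob, size_range):
    """Reject features below the probability threshold or outside the
    admissible characteristic range (minimal/maximal radius or length)."""
    lo, hi = size_range
    return [f for f in features
            if f.probability >= min_prob and lo <= f.size <= hi]
```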
  • This thinned Hough space is outputted by the Hough feature extractor 16 or, generally, by the image processor 10 a for processing at a post-processing unit 18 .
  • a further embodiment comprises the use of a 3D image analyzer 400 ( FIG. 5 a ) within an image processing system together with an upstream image processor 10 a ( FIG. 5 a ) or an upstream Hough processor, whereby the Hough processor and in particular the components of the post-processing unit 18 are adjusted for the detection of pupils or irises, which appear as ellipses.
  • the post-processing unit of the Hough processor may e.g. be realized as embedded processor and according to its application, may comprise different sub-units, which are exemplarily explained in the following.
  • the post-processing unit 18 ( FIG. 5 a ) may comprise a Hough feature to geometry converter 202 .
  • This geometry converter 202 is configured in order to analyze one or more predefined searched patterns, which are outputted by the Hough feature extractor, and to output the parameters explaining the geometry.
  • the geometry converter 202 may e.g. be configured in order to output, on the basis of the detected Hough features, geometry parameters, as e.g. first diameter, second diameter, shift and position of the midpoint regarding an ellipse (pupil) or a circle.
  • the geometry converter 202 serves to detect and select a pupil by means of 3 to 4 Hough features (e.g. curves).
  • criteria, as e.g. the degree of accordance with the searched structure or the Hough features, the curvature of the Hough features or of the predetermined pattern to be detected, and the position and orientation of the Hough features, are included.
  • the selected Hough feature combinations are sorted, whereby primarily the sorting occurs according to the number of the obtained Hough features and, secondarily, according to the degree of accordance with the searched structure. After the sorting, the Hough feature combination ranked first is selected and therefrom, the ellipse is fitted, which most likely represents the pupil within the camera image.
  • the post-processing unit 18 ( FIG. 5 a ) comprises an optional controller 204 , which is formed to return a control signal to the image processor 10 a (cf. control channel 206 ) or, to be precise, return to the Hough transformation unit 104 , on the basis of which the filter characteristic of the filter 106 is adjustable.
  • the controller 204 typically is connected to the geometry converter 202 in order to analyze the geometry parameters of the recognized geometry and in order to track the Hough core within defined borders in a way that a more precise recognition of the geometry is possible. This procedure is a successive one, which e.g.
  • the controller can adjust the ellipsis size, which e.g. depends on the distance between the object to be recorded and the camera 14 a , if the person belonging thereto approaches the camera 14 a .
  • the control of the filter characteristic hereby occurs on the basis of the last adjustments and on the basis of the geometry parameters of the ellipse.
  • the post-processing unit 18 may have a selective-adaptive data processor 300 .
  • the data processor has the purpose of post-processing outliers and dropouts within a data series, in order to e.g. carry out a smoothing of the data series. Therefore, the selective-adaptive data processor 300 is configured in order to receive several sets of values, which are outputted by the geometry converter 202 , whereby every set is assigned to a respective sample.
  • the filter processor of the data processor 300 carries out a selection of values on the basis of the several sets in a way that the data values of implausible sets (e.g. outliers or dropouts) are replaced by internally determined data values (replacement values) and the data values of the remaining sets are used further unchanged.
  • the data values of plausible sets are transmitted and the data values of implausible sets (containing outliers or dropouts) are replaced by the data values of a plausible set, e.g. the previous data value, or by an average from several previous data values.
  • the resulting data series of transmitted values and, possibly, replacement values is thereby continuously smoothed.
  • an adaptive time smoothing of the data series (e.g. of a determined ellipse midpoint coordinate)
  • dropouts and outliers of the data series to be smoothed do not lead to fluctuations of the smoothed data.
  • the data processor may smooth over the data value of the newly received set if it does not fall within the following criteria:
  • the previous value is outputted or at least consulted for smoothing the current value.
  • the current values are rated more strongly than past values.
  • the smoothing coefficient is within defined borders dynamically adjusted to the tendency of the data to be smoothened, e.g. reduction of the rather constant value developments or increase regarding inclining or falling value developments. If in a long-term a greater leap occurs regarding the geometry parameters to be smoothened (ellipsis parameters), the data processor and, thus, the smoothened value development adjust to the new value.
  • the selective adaptive data processor 300 can also be configured by means of parameters, e.g. during initializing, whereby via these parameters, the smoothing behavior, e.g. the maximum duration of dropouts or the maximum smoothing factor, is determined.
  • the selective adaptive data processor 300 or, generally, the post-processing unit 18 may output plausible values of the position and geometry of a pattern to be recognized with high accuracy.
  • the post-processing unit has an interface 18 a , via which optionally also external control commands may be received. If further data series shall be smoothed, it is also conceivable to use a separate selective adaptive data processor for every data series or to adjust the selective adaptive data processor in such a way that different data series can be processed per set of data values.
  • the data processor 300 e.g. may have two or more inputs as well as one output.
  • One of the inputs receives the data values of the data series to be processed.
  • the output is a smoothed series based on selected data. For the selection, further inputs (via which additional values for the more precise assessment of the data values are received) and/or the data series itself are consulted.
  • a change of the data series occurs, whereby a distinction is made between the treatment of outliers and the treatment of dropouts within the data series.
  • outliers (within the data series to be processed) are sorted out and replaced by other (internally determined) values.
  • Dropouts: For the assessment of the quality of the data series to be processed, one or more further input signals (additional values) are consulted. The assessment occurs by means of one or more threshold values, whereby the data are divided into data of “high” and of “low” quality. Data with a low quality are assessed as dropouts and replaced by other (internally determined) values.
  • a smoothing of the data series occurs (e.g. exponential smoothing of a time series).
  • the data series, which has been adjusted for dropouts and outliers, is consulted.
  • the smoothing may occur by a variable (adaptive) coefficient.
  • the smoothing coefficient is adjusted to the difference of the level of the data to be processed.
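The selection and smoothing behavior described in the bullets above can be sketched as follows (a minimal Python sketch, not the patented implementation; the quality flag, the jump threshold and the coefficient limits are illustrative assumptions):

```python
def smooth_series(samples, alpha_min=0.1, alpha_max=0.9, max_jump=50.0):
    """Selective adaptive smoothing: implausible values (outliers, dropouts)
    are replaced by the previous plausible value; the remaining series is
    exponentially smoothed with a coefficient adjusted to the data tendency."""
    smoothed = []
    prev = None
    for value, quality_ok in samples:
        # dropouts (low quality) and outliers (too large a jump) are
        # replaced by an internally determined value (here: the last one)
        if prev is not None and (not quality_ok or abs(value - prev) > max_jump):
            value = prev
        if prev is None:
            out = value
        else:
            # adaptive coefficient: weight current values more strongly
            # when the level changes, less for constant developments
            alpha = min(alpha_max, max(alpha_min, abs(value - prev) / max_jump))
            out = alpha * value + (1 - alpha) * smoothed[-1]
        smoothed.append(out)
        prev = value
    return smoothed
```

A series with one outlier, e.g. `[(10.0, True), (10.0, True), (500.0, True), (12.0, True)]`, is smoothed without the fluctuation the outlier would otherwise cause.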
  • the post-processing unit 18 comprises an image analyzer, such as e.g. a 3D image analyzer 400 .
  • a further image collecting unit consisting of an image processor 10 b and a camera 14 can be provided.
  • two cameras 14 a and 14 b as well as the image processors 10 a and 10 b establish a stereoscopic camera arrangement, whereby advantageously the image processor 10 b is identical with the image processor 10 a.
  • the 3D image analyzer 400 is, corresponding to a basic embodiment, configured to receive at least one first set of image data, which is determined on the basis of a first image (cf. camera 14 a ), and a second set of image data, which is determined on the basis of a second image (cf. camera 14 b ), whereby the first and the second image display a pattern from different perspectives, and to calculate on this basis a point of view or a 3D gaze vector.
  • the 3D image analyzer 400 comprises a position calculator 404 and an alignment calculator 408 .
  • the position calculator 404 is configured to calculate a position of the pattern within three-dimensional space based on the first set, the second set and a geometric relation between the perspectives or between the first and the second camera 14 a and 14 b .
  • the alignment calculator 408 is configured to calculate a 3D gaze vector, e.g. a gaze direction, according to which the recognized pattern is aligned within three-dimensional space, whereby the calculation is based on the first set, the second set and the calculated position (cf. position calculator 404 ).
  • Further embodiments may also operate with the image data of a camera and a further set of information (e.g. relative or absolute position of characteristic points in the face or the eye), which serves for the calculation of the position of the pattern (e.g. pupils or iris midpoints) and for the selection of the actual gaze direction vector.
  • a 3D camera system model, which e.g. has all model parameters, such as position parameters and optical parameters (cf. camera 14 a and 14 b ), stored in a configuration file.
  • the model stored or loaded in the 3D image analyzer 400 comprises data regarding the camera unit, i.e. regarding the camera sensor (e.g. pixel size, sensor size, and resolution) and the objective lenses used (e.g. focal length and objective lens distortion), data or characteristics of the object to be recognized (e.g. characteristics of an eye) and data regarding further relevant objects (e.g. a display in case the system 1000 is used as input device).
  • the 3D position calculator 404 calculates the eye position or the pupil midpoint by triangulation on the basis of the two or even several camera images (cf. 14 a and 14 b ). For this, it is provided with the 2D coordinates of a point in the two camera images (cf. 14 a and 14 b ) via the process chain of image processors 10 a and 10 b , geometry converter 202 and selective adaptive data processor 300 . From the delivered 2D coordinates, the rays of light which have displayed the 3D point as a 2D point on the sensor are calculated for both cameras 14 a and 14 b by means of the 3D camera model, in particular under consideration of the optical parameters.
  • the point of the two straight lines with the lowest distance to each other (in the ideal case, the intersection of the straight lines) is assumed to be the position of the searched 3D point.
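The closest point between the two reconstructed rays can be computed with the standard closest-point formula (a minimal sketch; the ray origins and directions would come from the 3D camera model):

```python
def closest_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two rays p1+t*d1 and p2+s*d2
    (in the ideal case the rays intersect and the midpoint is the intersection)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    w0 = [u - v for u, v in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only for parallel rays
    t = (b * e - c * d) / denom  # parameter on ray 1
    s = (a * e - b * d) / denom  # parameter on ray 2
    q1 = [p + t * x for p, x in zip(p1, d1)]
    q2 = [p + s * x for p, x in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]
```

For two rays that actually intersect, e.g. from camera origins (0,0,0) and (2,0,0) towards the point (1,2,5), the result is that point itself.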
  • This 3D position, together with an error measure describing the accuracy of the delivered 2D coordinates in connection with the model parameters, is either outputted as the result via the interface 18 a or transmitted to the gaze direction calculator 408 .
  • the gaze direction calculator 408 can determine the gaze direction from two ellipse-shaped projections of the pupil onto the camera sensors without calibration and without knowing the distance between the eyes and the camera system. For this, the gaze direction calculator 408 uses, besides the 3D position parameters of the image sensors, the ellipse parameters which have been determined by means of the geometry analyzer 202 and the position determined by means of the position calculator 404 . From the 3D position of the pupil midpoint and the position of the image sensors, virtual camera units, the optical axes of which pass through the 3D pupil midpoint, are calculated by rotation of the real camera units.
  • projections of the pupil on the virtual sensors are calculated so that two virtual ellipses arise.
  • two points of view of the eye on an arbitrary plane parallel to the respective virtual sensor plane may be calculated.
  • four gaze direction vectors can be calculated, i.e. two vectors per camera. From these four possible gaze direction vectors, exactly one of the one camera is nearly identical to one of the other camera. Both identical vectors indicate the searched gaze direction of the eye, which is then outputted by the gaze direction calculator 408 via the interface 18 a.
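Selecting the nearly identical pair from the four candidate vectors can be sketched as follows (an illustrative sketch, assuming the comparison is done by cosine similarity of the normalized vectors and that the mean of the matching pair is returned; these details are not prescribed by the text):

```python
def select_matching_pair(cam1_vectors, cam2_vectors):
    """From the two candidate gaze vectors of each camera, select the pair
    that is nearly identical (maximum cosine similarity); its mean is taken
    as the searched gaze direction."""
    def norm(v):
        n = sum(x * x for x in v) ** 0.5
        return [x / n for x in v]
    # try all 2x2 combinations and keep the most similar pair
    best = max(((a, b) for a in cam1_vectors for b in cam2_vectors),
               key=lambda p: sum(x * y for x, y in zip(norm(p[0]), norm(p[1]))))
    a, b = (norm(v) for v in best)
    return [(x + y) / 2 for x, y in zip(a, b)]
```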
  • a particular advantage of this 3D calculation is that a contactless and entirely calibration-free determination of the 3D eye position, the 3D gaze direction and the pupil size is possible, which does not depend on knowledge of the position of the eye relative to the cameras.
  • An analytic determination of the 3D eye position and the 3D gaze direction under consideration of a 3D space model enables an arbitrary number of cameras (greater than 1) and arbitrary camera positions in 3D space.
  • a short latency time with a simultaneously high frame rate enables real-time capability of the described system 1000 .
  • the so-called time regimes may be fixed so that the time differences between successive results are constant. This is e.g. advantageous in security-critical applications, in which the results have to be available within fixed time periods; this may be achieved by using FPGAs for the calculation.
  • a gaze direction determination with only one camera. For this, on the one hand the 3D pupil midpoint has to be calculated based on the image data of one camera and possibly on one set of additional information, and on the other hand, the actual gaze direction vector has to be selected from the two possible gaze direction vectors which may be calculated per camera, as explained later on with reference to FIG. 5 b.
  • a straight line is calculated, which passes through the 3D pupil midpoint, whereby, however, it is not yet known where on this straight line the searched pupil midpoint is to be found.
  • the distance between the camera (more precisely, the main point 1 of the camera, H 1 K1 in FIG. 8 a ) and the 3D pupil midpoint is needed as additional information.
  • This information can be estimated if at least two characteristic features (e.g. the pupil midpoints) are determined in the first camera image and their distance to each other is known as a statistically evaluated value, e.g. across a large group of persons.
  • the distance between camera and 3D pupil midpoint can be estimated by relating the determined distance (e.g. in pixels) between the characteristic features to the distance (e.g. in pixels) that the features, according to the statistical value, would have at a known distance to the camera.
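This relation is the usual pinhole-camera proportionality; a minimal sketch (the parameter names are illustrative):

```python
def estimate_distance(pixel_distance, reference_pixel_distance, reference_distance_mm):
    """Estimate the camera-to-eye distance by relating the measured pixel
    distance between two characteristic features (e.g. the pupil midpoints)
    to the pixel distance the features show, according to the statistical
    value, at a known reference distance (simple pinhole proportionality)."""
    return reference_distance_mm * reference_pixel_distance / pixel_distance
```

E.g. if the features are 100 px apart at a known reference distance of 500 mm and are measured 50 px apart, the estimated distance is 1000 mm.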
  • a further variation to obtain the 3D pupil midpoint is that its position or its distance to the camera is provided to the 3D image analyzer within a second set of information (e.g. by an upstream module for 3D face detection, by which the positions of characteristic facial points or of the eye area are determined in 3D space).
  • In order to determine the actual gaze direction vector, the previous description regarding the “3D image analyzer”, which includes the method for the calibration-free eye-tracking, has so far required at least two camera images from different perspectives.
  • in the calculation of the gaze direction, there is a point at which exactly two possible gaze direction vectors are determined per camera image, whereby the second vector respectively corresponds to a reflection of the first vector at the line between the virtual camera sensor center and the 3D pupil midpoint. From the two vectors resulting from the other camera image, exactly one vector nearly corresponds to a calculated vector from the first camera image. These corresponding vectors indicate the gaze direction to be determined.
  • the actual gaze direction vector (in the following “vb”) has to be selected from the two possible gaze direction vectors (in the following “v 1 ” and “v 2 ”), which are determined from the camera image.
  • FIG. 5 b shows an illustration of the visible part of the eyeball (framed in green) with the pupil and the two possible gaze directions v 1 and v 2 projected into the image.
  • the selection of the correct 3D gaze vector occurs from two possible 3D gaze vectors, whereby e.g. according to an embodiment, only one single camera image (+additional information) is used.
  • Some of these possibilities are explained in the following, whereby it is assumed that v 1 and v 2 (cf. FIG. 5 a ) have already been determined at the point in time of this selection:
  • an evaluation based on the sclera may occur in the camera image.
  • 2 beams are defined (starting at the pupil midpoint and being infinitely long), one in the direction of v 1 and one in the direction of v 2 . Both beams are projected into the camera image of the eye and run there from the pupil midpoint to the image edge, respectively.
  • the beam crossing fewer pixels belonging to the sclera belongs to the actual gaze direction vector vb.
  • the pixels of the sclera differ by their grey value from those of the adjacent iris and from those of the eyelids. This method reaches its limits if the face belonging to the captured eye is averted too far from the camera (thus, if the angle between the optical axis of the camera and the vector standing perpendicularly on the facial plane becomes too large).
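This sclera-based selection can be sketched as follows (an illustrative Python sketch, assuming a grayscale image in which sclera pixels are distinguished by a simple brightness threshold; the threshold and the pixel traversal are simplifications):

```python
def count_sclera_pixels(img, start, direction, threshold=200):
    """Walk from `start` along `direction` to the image edge and count
    pixels brighter than `threshold` (assumed to belong to the sclera)."""
    h, w = len(img), len(img[0])
    dx, dy = direction
    step = max(abs(dx), abs(dy))  # direction must be non-zero
    dx, dy = dx / step, dy / step
    x, y = start
    count = 0
    while 0 <= round(x) < w and 0 <= round(y) < h:
        if img[round(y)][round(x)] >= threshold:
            count += 1
        x, y = x + dx, y + dy
    return count


def select_gaze_vector(img, pupil, v1, v2, threshold=200):
    """The beam crossing fewer sclera pixels belongs to the actual
    gaze direction vector (possibility described above)."""
    c1 = count_sclera_pixels(img, pupil, v1, threshold)
    c2 = count_sclera_pixels(img, pupil, v2, threshold)
    return v1 if c1 <= c2 else v2
```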
  • an evaluation of the position of the pupil midpoint within the eye opening may occur.
  • the position of the pupil midpoint within the visible part of the eyeball or within the eye opening may be used for the selection of the actual gaze direction vector.
  • One possibility thereto is to define two beams (starting at the pupil midpoint and being infinitely long), one in the direction of v 1 and one in the direction of v 2 . Both beams are projected into the camera image of the eye and run there from the pupil midpoint to the image edge, respectively. Along both beams in the camera image, the distance between the pupil midpoint and the edge of the eye opening (marked green in FIG. 5 b ) is respectively determined.
  • the beam for which the shorter distance arises belongs to the actual gaze direction vector. This method reaches its limits if the face belonging to the captured eye is averted too far from the camera (thus, if the angle between the optical axis of the camera and the vector standing perpendicularly on the facial plane becomes too large).
  • an evaluation of the position of the pupil midpoint relative to a reference pupil midpoint may occur.
  • the position of the pupil midpoint determined in the camera image within the visible part of the eyeball or within the eye opening may be used together with a reference pupil midpoint for selecting the actual gaze direction vector.
  • One possibility for this is to define 2 beams (starting at the pupil midpoint and being infinitely long), one in direction of v 1 and one in direction of v 2 . Both beams are projected into the camera image of the eye and run there from the pupil midpoint to the edge of the image, respectively.
  • the reference pupil midpoint within the eye opening corresponds to the pupil midpoint in that moment in which the eye looks directly in the direction of the camera used for the image recording (more precisely, in the direction of the first main point of the camera).
  • the beam projected into the camera image which has the greater distance to the reference pupil midpoint in the image belongs to the actual gaze direction vector.
  • Possibility 1 (specific case of application): The reference pupil midpoint arises from the determined pupil midpoint in the case in which the eye looks directly in the direction of the camera sensor center. This is given if the pupil contour on the virtual sensor plane (cf. description regarding the gaze direction calculation) describes a circle.
  • Possibility 2: As a rough estimate of the position of the reference pupil midpoint, the centroid of the surface of the eye opening may be used. This method of estimation reaches its limits if the plane in which the face lies is not parallel to the sensor plane of the camera. This limitation may be compensated if the inclination of the facial plane towards the camera sensor plane is known (e.g. by a previously performed determination of the head position and alignment) and is used for correcting the position of the estimated reference pupil midpoint. This method moreover necessitates that the distance between the 3D pupil midpoint and the optical axis of the virtual sensor is much lower than the distance between the 3D pupil midpoint and the camera.
  • Possibility 3 (general case of application): If the 3D position of the eye midpoint is available, a straight line between the 3D eye midpoint and the virtual sensor midpoint can be determined as well as the intersection of this straight line with the surface of the eyeball. The reference pupil midpoint arises from the position of this intersection converted into the camera image.
  • an ASIC (application-specific chip)
  • the Hough processor used here, or the method carried out on the Hough processor, remains very robust and not susceptible to failures. It should be noted at this point that the Hough processor 100 as shown in FIG. 2 a can be used in various combinations with the different features presented, in particular, regarding FIG. 5 a.
  • Applications of the Hough processor according to FIG. 2 a are e.g. warning systems for momentary nodding off or fatigue detectors as driving assistance systems in the automobile sector (or generally for security-relevant man-machine-interfaces). Thereby, specific fatigue patterns can be detected by evaluation of the eyes (e.g. covering of the pupil as a measure for the blink degree) and under consideration of the points of view and the focus. Further, the Hough processor can be used regarding input devices or input interfaces for technical devices, whereby the eye position and the gaze direction then serve as input parameters. A precise application would be the analysis or support of the user when viewing screen contents, e.g. with highlighting of specific focused areas. Such applications are of particular interest in the field of assisted living, computer games, optimization of 3D visualization by including the gaze direction, market and media research, or ophthalmological diagnostics and therapies.
  • a further embodiment relates to a method for Hough processing with the steps of: processing a majority of samples, which respectively have an image, by using a pre-processor, whereby the image of the respective sample is rotated and/or reflected so that a majority of versions of the image of the respective sample is outputted for each sample; and collecting predetermined patterns in the majority of samples on the basis of the majority of versions by using a Hough transformation unit, which has a delay filter with a filter characteristic dependent on the selected predetermined set of patterns.
  • the adjustable characteristic may also relate to the post-processing characteristic (curve or distortion characteristic) regarding a fast 2D correlation. This implementation is explained with reference to FIG. 4 a to FIG. 4 d.
  • FIG. 4 a shows a processing chain 1000 of a fast 2D correlation.
  • the processing chain of the 2D correlation comprises at least the function blocks 1105 for the 2D curve and 1110 for the merging.
  • the procedure regarding the 2D curve is illustrated in FIG. 4 b .
  • FIG. 4 b shows the exemplary compilation of templates.
  • FIG. 4 c exemplarily shows the pixel-wise correlation with n templates (in this case e.g. for straight lines with different slope) for the recognition of the ellipse 1115 .
  • FIG. 4 d shows the result of the pixel-wise correlation, whereby typically a maximum search is still performed over the n result images. Every result image contains one Hough feature per pixel.
  • this Hough processing is described in the overall context.
  • the delay filter is replaced by a fast 2D correlation.
  • depending on its size, the previous delay filter represents n characteristics of a specific pattern. These n characteristics are stored as templates in the storage.
  • the pre-processed image (e.g. binary edge image or gradient image) is passed through pixel-wise.
  • at every pixel position, all stored templates are compared with the subjacent image content (i.e. the environment of the pixel position, in the size of the templates, is evaluated).
  • This procedure is referred to as correlation in the digital image processing.
  • for every template, a correlation value, i.e. a measure for the accordance with the subjacent image content, is obtained.
  • the latter correspond to the column sums of the previous delay filter.
  • a decision is made (per pixel) for the template with the highest correlation value and its template number is memorized (the template number describes the characteristic of the searched structure, e.g. the slope of the straight line segment).
  • the correlation of the individual templates with the image content may be carried out in the spatial domain as well as in the frequency domain.
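The pixel-wise correlation with n templates and the subsequent per-pixel maximum search can be sketched in the spatial domain as follows (a minimal sketch; the template contents and sizes are illustrative):

```python
def correlate_templates(image, templates):
    """For every pixel position, correlate the surrounding patch with each
    template and keep the number and value of the best-matching template
    (the per-pixel maximum search described above)."""
    h, w = len(image), len(image[0])
    th, tw = len(templates[0]), len(templates[0][0])
    best = [[(0, 0)] * w for _ in range(h)]  # (template number, correlation)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            scores = []
            for n, tpl in enumerate(templates):
                # correlation value: sum of products of template and patch
                s = sum(tpl[j][i] * image[y + j][x + i]
                        for j in range(th) for i in range(tw))
                scores.append((s, n))
            s, n = max(scores)  # decision for the highest correlation value
            best[y][x] = (n, s)
    return best
```

E.g. with a vertical-line template and a horizontal-line template, a vertical edge in the image yields the vertical template's number at the edge pixels.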
  • the warning system for momentary nodding off is a system consisting at least of an image collecting unit, an illumination unit, a processing unit and an acoustic and/or optical signaling unit.
  • the system can e.g. be developed in such a form that a CMOS image sensor is used and the scene is illuminated in the infrared range. This has the advantage that the device works independently of the environmental light and, in particular, does not blind the user.
  • as processing unit, an embedded processor system is used, which executes a software code on a subjacent operating system.
  • the signaling unit can e.g. consist of a multi-frequency buzzer and an RGB-LED.
  • the evaluation of the recorded image can occur in such a form that in a first processing stage, a face detection and an eye detection as well as an eye analysis are performed with a classifier.
  • This processing stage provides first indications of the alignment of the face, the eye position and the degree of the blink reflex.
  • An eye model used therefor can e.g. consist of: a pupil and/or iris position, a pupil and/or iris size, a description of the eyelids and of the eye edge points. Thereby, it is sufficient if at every point in time some of these components are found and evaluated. The individual components may also be tracked over several images so that they do not have to be completely searched again in every image.
  • Hough features can be used in order to carry out the face detection or the eye detection or the eye analysis or the precise eye analysis.
  • a 2D image analyzer can be used for the face detection or the eye detection or the eye analysis.
  • the described adaptive selective data processor can be used for the smoothing of the determined result values or intermediate results or value developments during the face detection or eye detection or eye analysis.
  • a chronological evaluation of the degree of the blink reflex and/or of the results of the precise eye analysis can be used for determining the momentary nodding off or the fatigue or distraction of the user.
  • the calibration-free gaze direction determination as described in connection with the 3D image analyzer can be used in order to obtain better results for the determination of the momentary nodding off or the fatigue or distraction of the user.
  • the selective adaptive data processor can be used.
  • the Hough processor can, in the initial image stage, comprise a unit for the camera control.
  • a so-called point of view (intersection of the line of sight with a further plane) can be determined, e.g. for controlling a PC.
  • the implementation of the above outlined methods is independent of the platform, so that the above presented methods can also be carried out on other hardware platforms, e.g. a PC.
  • embodiments of the invention may be implemented in hardware or in software.
  • the implementation may be carried out by using a digital storage medium, as e.g. a Floppy Disc, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM, or a FLASH memory, a hard disc or any other magnetic or optical storage medium, on which electronically readable control signals are stored, which collaborate with a programmable computer system in such a way that the respective method is carried out. Therefore, the digital storage medium may be computer readable.
  • Some embodiments according to the invention thus, comprise a data carrier having electronically readable control signals, which are able to collaborate with a programmable computer system in a way that one of the herein described methods is carried out.
  • embodiments of the present invention can be implemented as computer program product with a program code, whereby the program code is effective in order to carry out one of the methods, if the computer program product runs on a computer.
  • the program code may e.g. be stored on a machine-readable carrier.
  • one embodiment of the method according to the invention is a computer program having a program code for the execution of one of the methods defined herein, if the computer program runs on a computer.
  • a further embodiment of the method according to the invention is a data carrier (or a digital storage medium or a computer-readable medium), on which the computer program for execution of one of the methods defined herein is recorded.
  • a further embodiment of the method according to the invention is a data stream or a sequence of signals, which constitute the computer program for carrying out one of the herein defined methods.
  • the data stream or the sequence of signals can e.g. be configured in order to be transferred via a data communication connection, e.g. via the Internet.
  • a further embodiment comprises a processing unit, e.g. a computer or a programmable logic component, which is configured or adjusted in order to carry out one of the herein defined methods.
  • a further embodiment comprises a computer, on which the computer program for executing one of the herein defined methods is installed.
  • a further embodiment according to the invention comprises a device or a system, which is designed to transmit a computer program for executing at least one of the herein defined methods to a recipient.
  • the transmission may e.g. occur electronically or optically.
  • the recipient may be a computer, a mobile device, a storage device, or a similar device.
  • the device or the system can e.g. comprise a file server for the transmission of the computer program to the recipient.
  • a programmable logic component (e.g. a field programmable gate array, an FPGA) can collaborate with a microprocessor in order to execute one of the herein defined methods.
  • the methods are executed by an arbitrary hardware device. This can be universally applicable hardware such as a computer processor (CPU) or hardware specific to the method, such as e.g. an ASIC.
  • the integrated eye-tracker comprises a compilation of FPGA-optimized algorithms, which are suitable to extract (ellipse) features (Hough features) from a camera live image by means of a parallel Hough transformation and to calculate a gaze direction therefrom.
  • the pupil ellipse can be determined.
  • the 3D position of the pupil midpoint as well as the 3D gaze direction and the pupil diameter can be determined.
  • the position and form of the ellipses in the camera images are consulted. A calibration of the system for the respective user is not required, nor is knowledge of the distance between the cameras and the analyzed eye.
  • the used image processing algorithms are in particular characterized in that they are optimized for the processing on an FPGA (field programmable gate array).
  • the algorithms enable a very fast image processing with a constant refresh rate, minimum latency periods and minimum resource consumption in the FPGA.
  • these modules are predestined for time-, latency, and security-critical applications (e.g. driving assistance systems), medical diagnostic systems (e.g. perimeters) as well as application for human machine interfaces (e.g. mobile devices), which necessitate a small construction volume.
  • the overall system determines from two or more camera images, in which the same eye is displayed, respectively a list of multi-dimensional Hough features and calculates on their basis respectively the position and form of the pupil ellipse. From the parameters of these two ellipses as well as solely from the position and alignment of the cameras to each other, the 3D position of the pupil midpoint as well as the 3D gaze direction and the pupil diameter can be determined entirely calibration-free.
  • a combination of at least two image sensors, FPGA and/or downstream microprocessor system is used (without the mandatory need of a PC).
  • “Hough preprocessing”, “Parallel Hough transform”, “Hough feature extractor”, “Hough feature to ellipse converter”, “Core-size control”, “Temporal smart smoothing filter”, “3D camera system model”, “3D position calculation” and “3D gaze direction calculation” relate to individual function modules of the integrated eye tracker. They fall in line with the image processing chain of the integrated eye-tracker as follows:
  • FIG. 6 shows a block diagram of the individual function modules in the integrated eye-tracker.
  • the block diagram shows the individual processing stages of the integrated eye-tracker. In the following, a detailed description of the modules is presented.
  • One aspect of the invention relates to an autonomous (PC-independent) system, which in particular uses FPGA-optimized algorithms and which is suitable to detect a face in a camera live image and its (spatial) position.
  • the used algorithms are in particular characterized in that they are optimized for the processing on an FPGA (field programmable gate array) and compared to the existing methods, get along without recursion in the processing.
  • the algorithms allow a very fast image processing with constant frame rate, minimum latency periods and minimum resource consumption in the FPGA.
  • these modules are predestined for a time-/latency-/security-critical application (e.g. driving assistance systems) or applications as human machine interfaces (e.g. for mobile devices), which necessitate a small construction volume.
  • the spatial position of the user for specific points in the image may be determined highly accurately, calibration-free and contactless.
  • the overall system determines from a camera image (in which only one face is displayed) the face position and, by using this position, determines the positions of the pupil midpoints of the left and the right eye. If two or more cameras with a known alignment to each other are used, these two points can be indicated in three-dimensional space. Both determined eye positions may be further processed in systems which use the “integrated eye-tracker”.
  • the “parallel image scaler”, “parallel face finder”, “parallel eye analyzer”, “parallel pupil analyzer”, “temporal smart smoothing filter”, “3D camera system model” and “3D position calculation” relate to individual function modules of the overall system (FPGA face tracker). They fall in line with the image processing chain of the FPGA face tracker as follows:
  • FIG. 7 a shows a block diagram of the individual function modules in the FPGA face tracker.
  • the function modules “3D camera system model” and “3D position calculation” are not mandatorily necessitated for the face tracking; however, they are used when a stereoscopic camera system is used and suitable points of both cameras are calculated for the determination of spatial positions (e.g. for determining the 3D head position during calculation of the 2D face midpoints in both camera images).
  • the module "feature extraction (classification)" of the FPGA face tracker is based on the feature extraction and classification of Küblbeck/Ernst of Fraunhofer IIS (Erlangen, Germany) and uses an adjusted variant of its classification on the basis of census features.
  • the block diagram shows the individual processing stages of the FPGA face tracking system. In the following, a detailed description of the modules is presented.
  • FIG. 7 b shows the initial image (original image) and the result (downscaled image) of the parallel image scaler.
  • the result of the classification (on the right) constitutes the input for the parallel face finder.
  • the objective of the subsequent embodiments is to develop, on the basis of the parallel Hough transformation, a robust method for feature extraction.
  • the Hough core is revised and a method for the feature extraction is presented, which reduces the results of the transformation and breaks them down to a few “feature vectors” per image.
  • the newly developed method is implemented in a MATLAB toolbox and is tested.
  • an FPGA implementation of the new method is presented.
  • the parallel Hough transformation uses Hough cores of different sizes, which have to be configured by means of configuration matrices for the respective application.
  • the mathematical relationships and methods for establishing such configuration matrices are presented in the following.
  • the MATLAB script alc_config_lines_curvatures.m makes use of these methods and establishes configuration matrices for straight lines and half circles of different sizes.
  • the arrays of curves can be generated by variation of the increase m.
  • the straight-line increase from 0° to 45° is broken down into intervals of equal size.
  • the number of intervals depends on the Hough core size and corresponds to the number of Hough core lines.
  • the increase may be tuned via the control variable y_core, which runs from 0 to core_height.
  • the function values of the arrays of curves are calculated by variation of the control variable (replaced in (B3) by x_core), whose values run from 0 to core_width.
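As a sketch of the straight-line case described above, the following Python snippet generates such an array of curves; the function name, the equal-angle sampling of the 0° to 45° range and the per-column evaluation y = m·x_core are illustrative assumptions, not the patent's exact procedure.

```python
import math

def line_curve_family(core_width, core_height):
    """Sample the family of straight-line curves y = m * x_core for a
    Hough core. The 0..45 degree range of the line increase is split
    into equal intervals, one per Hough core line; x_core runs from
    0 to core_width - 1. Sampling details are illustrative assumptions.
    """
    curves = []
    for y_core in range(core_height):
        # equal-size angle intervals over 0..45 degrees
        angle = 45.0 * y_core / (core_height - 1) if core_height > 1 else 0.0
        m = math.tan(math.radians(angle))
        curves.append([m * x_core for x_core in range(core_width)])
    return curves

family = line_curve_family(core_width=8, core_height=4)
# the first curve is the horizontal line y = 0, the last has increase tan(45°) = 1
```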
  • y_M = h − r (B6)
  • r² = y_M² + (core_width/2)² (B7)
  • y_M = √(r² − (core_width/2)²) · (−1) (B8)
  • for the half-circle configurations, h has to be varied from 0 to core_height/2. This happens via the control variable y_core, which runs from 0 to core_height.
  • Configuration matrices may be occupied either by zeros or ones. A one thereby represents a used delay element in the Hough core. Initially, the configuration matrix is initialized in the dimensions of the Hough core with zero values. Thereafter, the following steps are carried out:
  • the configurations for circles represent circle arcs around the vertex of the half circle. Only the highest y-index number of the arrays of curves (smallest radius) represents a complete half circle.
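The half-circle relationships (B6) to (B8) can be sketched in code: solving (B6) and (B7) together for r gives r = (h² + (core_width/2)²)/(2h), so each vertex height h yields one curve of the family. The discretization into a binary configuration matrix shown here is only one plausible reading of the steps above, not the patent's exact algorithm.

```python
import math

def half_circle_family(core_width, core_height):
    """Curves of half circles whose vertex rises from the core's lower
    edge, derived from y_M = h - r (B6) and r^2 = y_M^2 + (w/2)^2 (B7);
    eliminating y_M gives r = (h^2 + (w/2)^2) / (2h)."""
    w2 = core_width / 2.0
    curves = []
    steps = core_height  # one curve per Hough core line (assumption)
    for k in range(1, steps + 1):
        h = (core_height / 2.0) * k / steps   # vertex height, up to core_height/2
        r = (h * h + w2 * w2) / (2.0 * h)     # circle radius
        y_m = h - r                           # circle midpoint (negative, per B8)
        row = []
        for x in range(core_width):
            dx = (x + 0.5) - w2               # sample at column centres
            row.append(y_m + math.sqrt(max(r * r - dx * dx, 0.0)))
        curves.append(row)
    return curves

def config_matrix(curves, core_width, core_height):
    """Binary configuration matrix: a one marks a used delay element
    wherever the rounded curve passes through a core cell (an
    illustrative discretization)."""
    m = [[0] * core_width for _ in range(core_height)]
    for row in curves:
        for x, y in enumerate(row):
            yi = min(core_height - 1, max(0, round(y)))
            m[yi][x] = 1
    return m
```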
  • the developed configurations can be used for the new Hough core.
  • a decisive disadvantage of the FPGA implementation of Holland-Nell is the rigid configuration of the Hough cores.
  • the delay lines have to be parameterized prior to the synthesis and are afterwards permanently fixed in the hardware structures (Holland-Nell, p. 48-49). Changes during runtime (e.g. of the Hough core size) are no longer possible.
  • the new method is to become more flexible at this point.
  • the new Hough core shall be completely reconfigurable in the FPGA, even during runtime. This has several advantages. On the one hand, two Hough cores (type 1 and type 2) do not have to be kept in parallel; on the other hand, different configurations for straight lines and half circles may be used. Furthermore, the Hough core size can be flexibly changed during runtime.
  • Previous Hough core structures consist of a delay and a bypass and prior to the FPGA synthesis, it is determined, which path is to be used.
  • this structure is extended by a multiplexer, a further register for the configuration of the delay elements (switching the multiplexers) and by a pipeline delay.
  • the configuration register may be modified during runtime. This way, different configuration matrices can be brought into the Hough core.
  • the synthesis tool in the FPGA has more freedom during the implementation of the Hough core design, and higher clock rates can be achieved.
  • Pipeline delays break through time-critical paths within the FPGA structures. In FIG. 9 d , the new design of the delay elements is demonstrated.
  • the delay elements of the new Hough core have a somewhat more complex structure.
  • an additional register is necessitated and the multiplexer occupies further logic resources (implemented in the FPGA in an LUT).
  • the pipeline delay is optional.
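A small behavioural software model may clarify the extended delay element described above; the exact wiring of multiplexer, configuration bit and optional pipeline register in the actual FPGA design is an assumption here.

```python
class DelayElement:
    """Behavioural sketch of the runtime-configurable delay element:
    a register (the delay), a multiplexer choosing the delayed path or
    the bypass via a configuration bit, and an optional pipeline
    register. The wiring is illustrative, not the patent's netlist."""

    def __init__(self, use_delay, use_pipeline=False):
        self.use_delay = use_delay        # configuration register bit
        self.use_pipeline = use_pipeline  # optional pipeline delay
        self.delay_reg = 0
        self.pipe_reg = 0

    def clock(self, value):
        # multiplexer: delayed path or bypass
        out = self.delay_reg if self.use_delay else value
        self.delay_reg = value
        if self.use_pipeline:
            # pipeline register adds one clock cycle of latency
            out, self.pipe_reg = self.pipe_reg, out
        return out

bypass = DelayElement(use_delay=False)   # passes the sample through immediately
delayed = DelayElement(use_delay=True)   # shifts the sample by one clock
```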
  • modifications of the design of the Hough core have been carried out.
  • the new Hough core is demonstrated in FIG. 9 e.
  • the "line amounts", originally referred to as signals of the initial histogram, are from now on referred to as "column amounts". Every column of the Hough core thus represents a curve of the arrays of curves.
  • the new Hough core furthermore can be impinged with new configuration matrices during runtime.
  • the configuration matrices are filed in the FPGA-internal BRAM and are loaded by a configuration logic. This loads the configurations as column-by-column bit string in the chained configuration register (cf. FIG. 9 d ).
  • the reconfiguration of the Hough core necessitates a certain time period and depends on the length of the columns (or the number of delay lines). Every column element thereby necessitates a clock cycle, and a latency of a few clock cycles caused by the BRAM and the configuration logic is added. Although the overall latency of the reconfiguration is disadvantageous, it can be accepted for video-based image processing. Normally, video data streams recorded with a CMOS sensor have a horizontal and a vertical blanking. The reconfiguration can thus occur without problems in the horizontal blanking time.
  • the size of the Hough core structure implemented in the FPGA also pre-determines the maximally possible size of the Hough core configuration.
  • the Hough core is, as before, fed with a binary edge image that passes through the configured delay lines. With each processing step, the column amounts are calculated over the entire Hough core and are respectively compared with the amount signal of the previous column. If a column provides a higher total value, the total value of the previous column is overwritten. As output signal, the new Hough core provides a column total value and the associated column number. On the basis of these values, a statement can later be made on which structure was found (represented by the column number) and with which appearance probability it was detected (represented by the total value).
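The column-amount comparison described above can be sketched as follows for a single Hough core position; the data layout (a binary configuration matrix matched against a binary image window of the same size) is an illustrative assumption.

```python
def hough_core_output(core, window):
    """Sketch of the new Hough core's output stage: sum each configured
    column over the binary image window and keep the column with the
    highest total. The column number encodes which structure was found,
    the total its appearance probability. `core` is a binary
    configuration matrix (1 = used delay element); layout is assumed."""
    best_sum, best_col = -1, -1
    n_cols = len(core[0])
    for col in range(n_cols):
        # column amount: overlap of configured elements with edge pixels
        s = sum(core[row][col] & window[row][col] for row in range(len(core)))
        if s > best_sum:  # a higher total overwrites the previous column's value
            best_sum, best_col = s, col
    return best_sum, best_col
```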
  • the output signal of the Hough core can also be referred to as Hough space or accumulator space. In contrast to the usual Hough transformation, for the parallel Hough transformation the Hough space is available in the image coordinate system.
  • the x-coordinate is delayed according to the length of the Hough core structure; a precise correction of the x-coordinate can, however, take place.
  • the new Hough core structure produces significantly more output data. As such a data quantity is difficult to handle, a method for feature extraction is presented that clearly reduces the result data quantity.
  • in the embodiments regarding the parallel Hough transformation, the necessity of the image rotation and the peculiarities of type 2 Hough cores were already introduced.
  • the initial image has to pass the Hough core four times. This is necessary so that straight lines and half circles can be detected in different angle positions. If only a type 1 Hough core were used, the image would have to be processed in the initial position and rotated by 90°, 180° and 270°. By including the type 2 Hough core, the rotations by 180° and 270° are omitted. If the non-rotated initial image is processed with a type 2 Hough core, this corresponds to processing the initial image rotated by 180° with a type 1 Hough core.
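The stated equivalence rests on the identity that matching a 180°-rotated image against a core gives the same result as matching the original image against the 180°-rotated core. A toy overlap score illustrates this; the score function is a stand-in for one Hough core position, not the actual parallel Hough transformation.

```python
def rot180(m):
    """Rotate a 2D binary matrix by 180 degrees."""
    return [row[::-1] for row in m[::-1]]

def match_score(img, core):
    """Toy stand-in for the transformation: overlap of a binary image
    window with a core of the same size."""
    return sum(i & c for ri, rc in zip(img, core) for i, c in zip(ri, rc))

img  = [[1, 0, 0],
        [0, 1, 0],
        [1, 1, 0]]
core = [[0, 0, 1],
        [0, 1, 0],
        [0, 1, 1]]

# processing the 180°-rotated image with the original core gives the same
# result as processing the original image with the 180°-rotated core
assert match_score(rot180(img), core) == match_score(img, rot180(core))
```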
  • a quadruple data rate thus occurs in the Hough core.
  • the pixel data rate amounts to 24 MHz.
  • the Hough core would have to be operated at 96 MHz, which already constitutes a high clock rate for an FPGA of the Spartan 3 generation. In order to optimize the design, pipeline delays should be used more intensively within the Hough core structure.
  • the feature extraction operates on the data sets from the previous table. These data sets can be summarized in a feature vector (B16).
  • the feature vector is in the following referred to as a Hough feature.
  • MV = [MV_X, MV_Y, MV_O, MV_KS, MV_H, MV_G-1, MV_A] (B16)
  • a feature vector consists of an x- and y-coordinate for the detected feature (MV_x and MV_y), the orientation MV_O, the curve intensity MV_KS, the frequency MV_H, the Hough core size MV_G-1 and the kind of the detected structure MV_A.
  • the detailed meaning and the value range of the single elements of the feature vector can be derived from the following table.
  • MV_x and MV_y: both coordinates respectively run up to the size of the initial image.
  • MV_O: the orientation represents the alignment of the Hough core. It is composed of the image rotation and the used Hough core type and can be divided into four sections. The conversion of the four sections into their respective orientation is demonstrated in the following table.
  • MV_KS: the curve intensity maximally runs up to the size of the Hough core and corresponds to the Hough core column with the highest column amount (or frequency MV_H). By way of illustration, refer to FIG. 9e in combination with the above table.
  • with a straight-line configuration of the Hough core, the Hough core column represents the increase or the angle of the straight line. If half-circle configurations are used, the Hough core column represents the radius of the half circle.
  • MV_H: the frequency is a measure of the correlation of the image content with the searched structure. It corresponds to the column amount (cf. FIG. 9e and the above table) and can maximally reach the size of the Hough core (more precisely, the size of a Hough core column for non-square Hough cores).
  • for straight lines, the coordinates are to represent the midpoint; for half circles or curves, the vertex.
  • the y-coordinate may be corrected corresponding to the implemented Hough core structure and does not depend on the size of the configuration used for the transformation (cf. FIG. 9 f ). Similar to a local filter, the y-coordinate is indicated vertically centered.
  • a relationship is established via the Hough core column that provided the hit (in the feature vector, the Hough core column is stored under the designation MV_KS).
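The feature vector (B16) can be mirrored in a small record type; the field names below are direct transliterations of the table entries above, and the concrete Python representation is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class HoughFeature:
    """Hough feature vector MV per (B16); fields mirror the elements
    MV_x .. MV_A described in the text (representation is assumed)."""
    x: int            # MV_x: x-coordinate of the detected feature
    y: int            # MV_y: y-coordinate (vertically centered, cf. FIG. 9f)
    orientation: int  # MV_O: image rotation + Hough core type (4 sections)
    curve: int        # MV_KS: Hough core column with the highest amount
    frequency: int    # MV_H: column amount, correlation with the structure
    core_size: int    # MV_G-1: Hough core size
    kind: int         # MV_A: kind of detected structure (line / half circle)

feature = HoughFeature(x=12, y=7, orientation=0, curve=3,
                       frequency=10, core_size=16, kind=1)
```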
  • depending on the Hough core type and the image rotation, calculation rules for three different cases can be indicated.
  • for a Hough core of type 1, formula (B17) applies to both the non-rotated and the rotated initial image. If a Hough core of type 2 is used, formula (B18) or formula (B19) has to be applied, depending on the image rotation.
  • MV_x,corrected = MV_x,detected + floor((MV_KS + 1) / 2) (B17)
  • MV_x,corrected = imagewidth_non-rotated − (MV_x,detected + floor((MV_KS + 1) / 2)) (B18)
  • MV_x,corrected = imagewidth_rotated − (MV_x,detected + floor((MV_KS + 1) / 2)) (B19)
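Formulas (B17) to (B19) share the term floor((MV_KS + 1)/2) and differ only in whether the shifted coordinate is subtracted from an image width. A compact sketch follows; parameter names are assumptions, and the caller is assumed to pass the image width matching the processed orientation (non-rotated for (B18), rotated for (B19)).

```python
def correct_x(mv_x_detected, mv_ks, core_type, image_width=None):
    """x-coordinate correction per (B17)-(B19). Type 1 cores use (B17)
    for both the non-rotated and rotated initial image; type 2 cores
    subtract the shifted coordinate from the image width of the
    non-rotated (B18) or rotated (B19) image, which the caller passes."""
    shifted = mv_x_detected + (mv_ks + 1) // 2   # floor((MV_KS + 1) / 2)
    if core_type == 1:
        return shifted                           # (B17)
    return image_width - shifted                 # (B18) / (B19)
```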
  • the non-maximum suppression operator differs for straight lines and half circles. Via the threshold values, a minimum curve intensity MV_KS,min and a maximum curve intensity MV_KS,max are given, and a minimum frequency MV_H,min is determined.
  • the non-maximum suppression operator can be seen as being a local operator of the size 3 ⁇ 3 (cf. FIG. 9 h ). A valid feature for half circles (or curves) arises exactly if the condition of the non-maximum suppression operator (nms-operator) in (B23) is fulfilled and the thresholds according to formulas (B20) to (B22) are exceeded.
  • MV_nms2,2,KS ≥ MV_KS,min (B20)
  • MV_nms2,2,KS ≤ MV_KS,max (B21)
  • MV_nms2,2,H ≥ MV_H,min (B22)
  • Hough features that do not constitute local maxima in the frequency space of the feature vectors are suppressed. In this way, Hough features that do not contribute to the searched structure and are irrelevant for the post-processing are discarded.
  • the feature extraction is parameterized via only three thresholds, which can be sensibly adjusted beforehand. A detailed explanation of the thresholds can be derived from the following table.
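The thresholded 3×3 non-maximum suppression for curves can be sketched as follows; the grid layout of candidate features and the strict-maximum handling of ties are assumptions, not taken from the patent.

```python
def nms_curves(features, ks_min, ks_max, h_min):
    """3x3 non-maximum suppression for half-circle Hough features:
    a feature at the centre position nms_2,2 is kept only if its
    frequency MV_H is the strict maximum of its 3x3 neighbourhood and
    the thresholds (B20)-(B22) hold. `features` is a 2D grid of
    (mv_ks, mv_h) tuples or None; the layout is an assumption."""
    kept = []
    rows, cols = len(features), len(features[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            f = features[r][c]
            if f is None:
                continue
            ks, h = f
            # thresholds per (B20)-(B22)
            if not (ks_min <= ks <= ks_max and h >= h_min):
                continue
            neigh = [features[r + dr][c + dc]
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)]
            # keep only local maxima in the frequency space
            if all(n is None or n[1] < h for n in neigh):
                kept.append((r, c, ks, h))
    return kept
```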
  • a non-maximum suppression operator of the size 3×3 (cf. FIG. 9 h ) can likewise be deduced for straight lines. Thereby, some peculiarities have to be considered. Unlike the curves, the searched structures of straight line segments are not detected as single maxima; rather, several maxima occur continuously along the binary edge course. The non-maximum suppression can thus be based on the method used in the Canny edge detection algorithm. According to the Hough core type and the detected angle area, three cases can be distinguished (cf. FIG. 9 i in combination with the above table). The case distinction is valid for rotated as well as for non-rotated initial images, as the back-transformation of rotated coordinates only takes place after the non-maximum suppression.
  • the angle area provided by a Hough core with a configuration for straight lines is divided by the angle area bisection.
  • the angle area bisection can be indicated as a (decimally fractional) Hough core column MV_KS,half.
  • the mathematical relationship depending on the Hough core size is described by formula (B24). In which angle area a Hough feature lies follows from the Hough core column that delivered the hit (MV_KS), which can be directly compared to the bisecting Hough core column.
  • MV_KS,half = tan((45/2) · π/180) · Houghcore_size (B24)
  • the conditions of the respective nms-operator can be checked similarly to the non-maximum suppression for curves (formulas (B25) to (B27)). If all conditions are fulfilled and if additionally the threshold values according to formulas (B20) to (B22) are exceeded, the Hough feature at position nms_2,2 can be adopted.
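For the straight-line case, the bisecting column (B24) and the resulting angle-area decision can be sketched as follows; the two-area split of the 0° to 45° range shown here is an illustrative reading of the text.

```python
import math

def ks_half(core_size):
    """Angle-area bisection as a (decimally fractional) Hough core
    column per (B24): tan((45/2) degrees) * Hough core size."""
    return math.tan(math.radians(45.0 / 2.0)) * core_size

def angle_area(mv_ks, core_size):
    """Decide which half of the angle area a line feature lies in by
    comparing its hit column MV_KS against the bisection column."""
    return 1 if mv_ks < ks_half(core_size) else 2

# for a 32-column core the bisection lies near column tan(22.5°) * 32
```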

US15/221,847 2014-02-04 2016-07-28 3D image analyzer for determining the gaze direction Expired - Fee Related US10192135B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
DE102014201997 2014-02-04
DE102014201997 2014-02-04
PCT/EP2015/052004 WO2015117905A1 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/052004 Continuation WO2015117905A1 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung

Publications (2)

Publication Number Publication Date
US20160335475A1 US20160335475A1 (en) 2016-11-17
US10192135B2 true US10192135B2 (en) 2019-01-29

Family

ID=52434840

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/221,847 Expired - Fee Related US10192135B2 (en) 2014-02-04 2016-07-28 3D image analyzer for determining the gaze direction
US15/228,826 Expired - Fee Related US10592768B2 (en) 2014-02-04 2016-08-04 Hough processor
US15/228,844 Active US10074031B2 (en) 2014-02-04 2016-08-04 2D image analyzer

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/228,826 Expired - Fee Related US10592768B2 (en) 2014-02-04 2016-08-04 Hough processor
US15/228,844 Active US10074031B2 (en) 2014-02-04 2016-08-04 2D image analyzer

Country Status (6)

Country Link
US (3) US10192135B2 (zh)
EP (4) EP3103059A1 (zh)
JP (3) JP6248208B2 (zh)
KR (2) KR101991496B1 (zh)
CN (3) CN106258010B (zh)
WO (4) WO2015117906A1 (zh)



US20150243036A1 (en) * 2012-09-17 2015-08-27 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik Mbh Method and an apparatus for determining a gaze point on a three-dimensional object
US20160079538A1 (en) 2013-05-08 2016-03-17 Konica Minolta, Inc. Method for producing organic electroluminescent element having light-emitting pattern
US9323325B2 (en) * 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US20160335475A1 (en) * 2014-02-04 2016-11-17 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. 3d image analyzer for determining the gaze direction
US9619884B2 (en) 2013-10-03 2017-04-11 Amlogic Co., Limited 2D to 3D image conversion device and method
US9648307B2 (en) * 2013-07-10 2017-05-09 Samsung Electronics Co., Ltd. Display apparatus and display method thereof
US20170172675A1 (en) * 2014-03-19 2017-06-22 Intuitive Surgical Operations, Inc. Medical devices, systems, and methods using eye gaze tracking
US20170200304A1 (en) 2015-06-30 2017-07-13 Ariadne's Thread (Usa), Inc. (Dba Immerex) Variable resolution virtual reality display system

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2586213Y (zh) * 2002-12-24 2003-11-12 合肥工业大学 实时实现Hough变换的光学装置
CA2622365A1 (en) * 2005-09-16 2007-09-13 Imotions-Emotion Technology A/S System and method for determining human emotion by analyzing eye properties
JP2013024910A (ja) * 2011-07-15 2013-02-04 Canon Inc 観察用光学機器
CN103297767B (zh) * 2012-02-28 2016-03-16 三星电子(中国)研发中心 一种适用于多核嵌入式平台的jpeg图像解码方法及解码器
CN102662476B (zh) * 2012-04-20 2015-01-21 天津大学 一种视线估计方法
US11093702B2 (en) * 2012-06-22 2021-08-17 Microsoft Technology Licensing, Llc Checking and/or completion for data grids
CN103019507B (zh) * 2012-11-16 2015-03-25 福州瑞芯微电子有限公司 一种基于人脸跟踪改变视点角度显示三维图形的方法
CN103136525B (zh) * 2013-02-28 2016-01-20 中国科学院光电技术研究所 一种利用广义Hough变换的异型扩展目标高精度定位方法

Patent Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3069654A (en) 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
JPH07244738A (ja) 1994-03-07 1995-09-19 Nippon Telegr & Teleph Corp <Ntt> 直線抽出ハフ変換画像処理装置
US5832138A (en) 1994-03-07 1998-11-03 Nippon Telegraph And Telephone Corporation Image processing method and apparatus for extracting lines from an image by using the Hough transform
JP2002288670A (ja) 2001-03-22 2002-10-04 Honda Motor Co Ltd 顔画像を使用した個人認証装置
JP2003157408A (ja) 2001-11-19 2003-05-30 Glory Ltd 歪み画像の対応付け方法、装置およびプログラム
JP2003223630A (ja) 2002-01-30 2003-08-08 Hitachi Ltd パターン検査方法及びパターン検査装置
US20030179921A1 (en) 2002-01-30 2003-09-25 Kaoru Sakai Pattern inspection method and its apparatus
US7164807B2 (en) 2003-04-24 2007-01-16 Eastman Kodak Company Method and system for automatically reducing aliasing artifacts
JP2005038121A (ja) 2003-07-18 2005-02-10 Fuji Heavy Ind Ltd 画像処理装置および画像処理方法
JP2005230049A (ja) 2004-02-17 2005-09-02 National Univ Corp Shizuoka Univ 距離イメージセンサを用いた視線検出装置
US20070014552A1 (en) 2004-02-17 2007-01-18 Yoshinobu Ebisawa Eyeshot detection device using distance image sensor
DE102004046617A1 (de) 2004-09-22 2006-04-06 Eldith Gmbh Vorrichtung und Verfahren zur berührungslosen Bestimmung der Blickrichtung
JP2008513168A (ja) 2004-09-22 2008-05-01 エルディート ゲゼルシャフト ミット ベシュレンクテル ハフツング 視線方向を非接触で特定する為の装置及び方法
WO2006032253A1 (de) 2004-09-22 2006-03-30 Eldith Gmbh Vorrichtung und verfahren zur berührungslosen bestimmung der blickrichtung
JP2006285531A (ja) 2005-03-31 2006-10-19 Advanced Telecommunication Research Institute International 視線方向の検出装置、視線方向の検出方法およびコンピュータに当該視線方向の視線方法を実行させるためのプログラム
US20060274973A1 (en) 2005-06-02 2006-12-07 Mohamed Magdi A Method and system for parallel processing of Hough transform computations
JP2008546088A (ja) 2005-06-02 2008-12-18 モトローラ・インコーポレイテッド ハフ変換計算の並列処理のための方法およびシステム
DE102005047160B4 (de) 2005-09-30 2007-06-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und Computerprogramm zum Ermitteln einer Information über eine Form und/oder eine Lage einer Ellipse in einem graphischen Bild
US20080012860A1 (en) 2005-09-30 2008-01-17 Frank Klefenz Apparatus, method and computer program for determining information about shape and/or location of an ellipse in a graphical image
JP2009510571A (ja) 2005-09-30 2009-03-12 フラウンホッファー−ゲゼルシャフト ツァ フェルダールング デァ アンゲヴァンテン フォアシュンク エー.ファオ 図形画像内の楕円の形状および/または位置に関する情報を決定するための装置、方法およびコンピュータプログラム
US8032842B2 (en) * 2006-07-25 2011-10-04 Korea Institute Of Science & Technology System and method for three-dimensional interaction based on gaze and system and method for tracking three-dimensional gaze
US20080310730A1 (en) 2007-06-06 2008-12-18 Makoto Hayasaki Image processing apparatus, image forming apparatus, image processing system, and image processing method
JP2011112398A (ja) 2009-11-24 2011-06-09 N Tech:Kk 画像形成状態検査方法、画像形成状態検査装置及び画像形成状態検査用プログラム
US20120106790A1 (en) 2010-10-26 2012-05-03 DigitalOptics Corporation Europe Limited Face or Other Object Detection Including Template Matching
US20120274734A1 (en) * 2011-04-28 2012-11-01 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US9323325B2 (en) * 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US20130083999A1 (en) 2011-09-30 2013-04-04 Anurag Bhardwaj Extraction of image feature data from images
KR20140066789A (ko) 2011-09-30 2014-06-02 이베이 인크. 이미지 특징 데이터 추출 및 사용
US20130267317A1 (en) * 2012-04-10 2013-10-10 Wms Gaming, Inc. Controlling three-dimensional presentation of wagering game content
US20150243036A1 (en) * 2012-09-17 2015-08-27 Sensomotoric Instruments Gesellschaft Fur Innovative Sensorik Mbh Method and an apparatus for determining a gaze point on a three-dimensional object
US20160079538A1 (en) 2013-05-08 2016-03-17 Konica Minolta, Inc. Method for producing organic electroluminescent element having light-emitting pattern
US9648307B2 (en) * 2013-07-10 2017-05-09 Samsung Electronics Co., Ltd. Display apparatus and display method thereof
US9619884B2 (en) 2013-10-03 2017-04-11 Amlogic Co., Limited 2D to 3D image conversion device and method
US20160335475A1 (en) * 2014-02-04 2016-11-17 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. 3d image analyzer for determining the gaze direction
US20170032214A1 (en) 2014-02-04 2017-02-02 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. 2D Image Analyzer
US20170172675A1 (en) * 2014-03-19 2017-06-22 Intuitive Surgical Operations, Inc. Medical devices, systems, and methods using eye gaze tracking
US20170200304A1 (en) 2015-06-30 2017-07-13 Ariadne's Thread (Usa), Inc. (Dba Immerex) Variable resolution virtual reality display system

Non-Patent Citations (19)

* Cited by examiner, † Cited by third party
Title
Chen, et al., "Quantization-free parameter space reduction in ellipse detection", ESA, 2011.
Crowley, James L., "A Representation for Visual Information", Pittsburgh, Pennsylvania, URL: http://www-prima.imag.fr/jlc/papers/Crowley-Thesis81.pdf, Nov. 1981.
Ebisawa, Y. et al., "Remote Eye-gaze Tracking System by One-Point Gaze Calibration", Official Journal of the Institute of Image Information and Television Engineers, vol. 65, no. 12, pp. 1768-1775, the Institute of Image Information and Television Engineers, Japan, Dec. 1, 2011.
Fitzgibbon, A. et al., "Direct least square fitting of ellipses", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, 1999, pp. 476-480.
Hezel, S. et al., "FPGA-Based Template Matching Using Distance Transforms", Field-Programmable Custom Computing Machines, Proceedings of the 10th Annual IEEE Symposium on, Piscataway, NJ, Apr. 22-24, 2002, pp. 89-97.
Husar, Peter et al., "Autonomes, Kalibrationsfreies und Echtzeitfähiges System zur Blickrichtungsverfolgung eines Fahrers", VDE-Kongress 2010 E-Mobility: Technologien, Infrastruktur, Märkte, Nov. 8-9, 2010, Leipzig, Deutschland, Jan. 1, 2010, pp. 1-4. (With English Abstract).
Klefenz, F. et al., "Real-time calibration-free autonomous eye tracker", Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, IEEE, Piscataway, NJ, USA, Mar. 14, 2010, pp. 762-766.
Kohlbecher, S. , "Calibration-free eye tracking by reconstruction of the pupil ellipse in 3D space", ETRA '08 Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, Jan. 1, 2008, pp. 135-138.
Küblbeck, Christian , "Face detection and tracking in video sequences using the modified census transformation", 2006, pp. 564-572.
Liang, Xuejun et al., "Data Buffering and Allocation in Mapping Generalized Template Matching on Reconfigurable Systems", The Journal of Supercomputing, Kluwer Academic Publishers, May 1, 2001, pp. 77-91.
Lischka, T. , "Untersuchung eines Eye Tracker Prototypen zur automatischen Operationsmikroskopsteuerung", Doktorarbeit, Universität Hamburg, 2007, 75 pages. (With English Translation by Machine).
Safaee-Rad, Reza et al., "Three-Dimensional Location Estimation of Circular Features for Machine Vision", IEEE Transactions on Robotics and Automation, IEEE Inc, New York, US, vol. 8, No. 5, Oct. 1, 1992, pp. 624-640.
Schreiber, K., "Erstellung und Optimierung von Algorithmen zur Messung von Augenbewegungen mittels Video-Okulographie-Methoden", Diplomarbeit, Universität Tübingen, Online verfügbar unter: http://www.genista.de/manches/diplom/diplom.html (zuletzt geprüft am: Oct. 24, 2011), 1999, 135 pages. (With English Translation by Machine).
Schreiber, Kai , "Creation and Optimization of Algorithms for Measuring Eye Movements by Means of Video Oculography Methods", English Translation by Machine, Jan. 22, 1999, 1-275.
Sheng-Wen, Shih et al., "A Novel Approach to 3-D Gaze Tracking Using Stereo Cameras", IEEE Transactions on Systems, Man and Cybernetics. Part B: Cybernetics, IEEE Service Center, Piscataway, NJ, US, vol. 34, No. 1, Feb. 1, 2004, pp. 234-245.
Spindler, Fabien et al., "Gaze Control Using Human Eye Movements", Proceedings of the 1997 IEEE International Conference on Robotics and Automation [online], Internet URL: http://ieeexplore.ieee.org/document/619297, Apr. 20, 1997, pp. 2258-2263.
Stockman, G. C. et al., "Equivalence of Hough Curve Detection to Template Matching", Communications of the ACM [online], Internet URL: https://dl.acm.org/citation.cfm?id=359882, vol. 20, no. 11, Nov. 30, 1977, pp. 820-822.
Viola, Paul et al., "Robust Real-time Object Detection", Second International Workshop on Statistical and Computational Theories of Vision-Modeling, Learning, Computing, and Sampling, Vancouver, Canada, Jul. 13, 2001., 25 pages.

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11195252B2 (en) * 2016-12-06 2021-12-07 SZ DJI Technology Co., Ltd. System and method for rectifying a wide-angle image
US20200228787A1 (en) * 2017-02-23 2020-07-16 Karl Storz Se & Co. Kg Apparatus for Capturing a Stereo Image
US10791322B2 (en) * 2017-02-23 2020-09-29 Karl Storz Se & Co. Kg Apparatus for capturing a stereo image
US11546573B2 (en) * 2017-02-23 2023-01-03 Karl Storz Se & Co. Kg Apparatus for capturing a stereo image
US10924725B2 (en) * 2017-03-21 2021-02-16 Mopic Co., Ltd. Method of reducing alignment error between user device and lenticular lenses to view glass-free stereoscopic image and user device performing the same
US20220070368A1 (en) * 2019-03-27 2022-03-03 Schölly Fiberoptic GmbH Method for commissioning a camera control unit (ccu)
US20230259966A1 (en) * 2022-02-14 2023-08-17 Korea Advanced Institute Of Science And Technology Method and apparatus for providing advertisement disclosure for identifying advertisements in 3-dimensional space
US11887151B2 (en) * 2022-02-14 2024-01-30 Korea Advanced Institute Of Science And Technology Method and apparatus for providing advertisement disclosure for identifying advertisements in 3-dimensional space

Also Published As

Publication number Publication date
JP2017514193A (ja) 2017-06-01
JP6268303B2 (ja) 2018-01-24
WO2015117904A1 (de) 2015-08-13
EP3103058A1 (de) 2016-12-14
US20160335475A1 (en) 2016-11-17
KR20160119176A (ko) 2016-10-12
WO2015117905A1 (de) 2015-08-13
JP6483715B2 (ja) 2019-03-13
EP3103059A1 (de) 2016-12-14
CN106133750B (zh) 2020-08-28
CN106258010A (zh) 2016-12-28
KR101858491B1 (ko) 2018-05-16
US20160342856A1 (en) 2016-11-24
WO2015117906A1 (de) 2015-08-13
US10592768B2 (en) 2020-03-17
WO2015117907A3 (de) 2015-10-01
EP3103060A1 (de) 2016-12-14
US20170032214A1 (en) 2017-02-02
WO2015117907A2 (de) 2015-08-13
CN106258010B (zh) 2019-11-22
KR20160119146A (ko) 2016-10-12
JP6248208B2 (ja) 2017-12-13
CN106104573A (zh) 2016-11-09
JP2017508207A (ja) 2017-03-23
US10074031B2 (en) 2018-09-11
KR101991496B1 (ko) 2019-06-20
EP3968288A2 (de) 2022-03-16
JP2017509967A (ja) 2017-04-06
CN106133750A (zh) 2016-11-16

Similar Documents

Publication Publication Date Title
US10192135B2 (en) 3D image analyzer for determining the gaze direction
CN109716268B (zh) 眼部和头部跟踪
CN110543871B (zh) 基于点云的3d比对测量方法
US20140313308A1 (en) Apparatus and method for tracking gaze based on camera array
KR20140125713A (ko) 카메라 어레이에 기반하여 시선을 추적하는 방법 및 장치
US10281264B2 (en) Three-dimensional measurement apparatus and control method for the same
US20150341618A1 (en) Calibration of multi-camera devices using reflections thereof
CN107209849A (zh) 眼睛跟踪
CN111160136B (zh) 一种标准化3d信息采集测量方法及系统
CN111028205B (zh) 一种基于双目测距的眼睛瞳孔定位方法及装置
CN113780201B (zh) 手部图像的处理方法及装置、设备和介质
US11054659B2 (en) Head mounted display apparatus and distance measurement device thereof
CN110213491B (zh) 一种对焦方法、装置及存储介质
US20170069108A1 (en) Optimal 3d depth scanning and post processing
KR101961266B1 (ko) 시선 추적 장치 및 이의 시선 추적 방법
US20180199810A1 (en) Systems and methods for pupillary distance estimation from digital facial images
JP6244960B2 (ja) 物体認識装置、物体認識方法及び物体認識プログラム
US10609311B2 (en) Method and device for increasing resolution of an image sensor
KR101348903B1 (ko) 시선 추적용 기하광학에 기반한 각막 반경 측정 알고리즘을 이용한 각막 반경 측정 장치 및 방법
Zhao Stereo imaging and obstacle detection methods for vehicle guidance
CN118799819A (zh) 一种基于双目摄像头的对象跟踪方法以及装置
Kılıç Performance improvement of a 3d reconstruction algorithm using single camera images

Legal Events

Date Code Title Description
AS Assignment

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KRENZER, DANIEL;HESS, ALBRECHT;KATAI, ANDRAS;SIGNING DATES FROM 20161102 TO 20170221;REEL/FRAME:042142/0853

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230129