EP3103059A1 - 3d-bildanalysator zur blickrichtungsbestimmung - Google Patents

3d-bildanalysator zur blickrichtungsbestimmung

Info

Publication number
EP3103059A1
Authority
EP
European Patent Office
Prior art keywords
image
pattern
hough
pupil
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15701823.5A
Other languages
German (de)
English (en)
French (fr)
Inventor
Daniel KRENZER
Albrecht HESS
András KÁTAI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV filed Critical Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority to EP21203252.8A (published as EP3968288A2)
Publication of EP3103059A1

Classifications

    • G06V 10/48: Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06F 17/145: Square transforms, e.g. Hadamard, Walsh, Haar, Hough, Slant transforms
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/60: Rotation of whole images or parts thereof
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/13: Edge detection
    • G06T 7/337: Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T 7/60: Analysis of geometric attributes
    • G06T 7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/77: Determining position or orientation of objects or cameras using statistical methods
    • G06V 10/955: Hardware or software architectures specially adapted for image or video understanding using specific electronic processors
    • G06V 40/18: Eye characteristics, e.g. of the iris
    • G06V 40/19: Sensors therefor
    • G06V 40/193: Preprocessing; Feature extraction
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06T 2207/10012: Stereo images
    • G06T 2207/20008: Globally adaptive image processing
    • G06T 2207/20061: Hough transform
    • G06T 2207/30201: Face

Definitions

  • Exemplary embodiments of the present invention relate to a 3D image analyzer for determining a viewing direction (or a direction vector) or a line of sight (consisting of a position vector and a direction vector) in a 3D space without requiring a calibration by the user whose viewing direction is to be determined. Further exemplary embodiments relate to an image analysis system with a 3D image analyzer for detecting an alignment or viewing direction and to a corresponding method for detecting the alignment or viewing direction.
  • a very widespread category are video-based systems that record the person's eyes with one or more cameras and evaluate these video recordings online or offline in order to determine the viewing direction.
  • Systems for video-based viewing direction determination usually require a calibration procedure for each user at the beginning of use, and in some cases also during use (e.g. when leaving the camera detection area or when the position between the user and the system changes), in order to be able to determine the user's viewing direction. Furthermore, some of these systems require a very specific and defined arrangement of the camera(s) and the lighting relative to each other, or a very special arrangement of the camera(s) relative to the user and prior knowledge of the position of the user (as in patent DE 10 2004 046 617 A1), in order to be able to determine the viewing direction.
  • An object is to provide efficient and reliable direction detection, e.g. to enable viewing direction recognition.
  • Exemplary embodiments of the present invention provide a 3D image analyzer for determining a viewing direction or a line of sight (comprising, for example, a viewing direction vector and a location vector which indicates, e.g., the pupil center and at which the viewing direction vector is attached) or a viewpoint, wherein the 3D image analyzer is adapted to receive at least a first set of image data determined based on a first image and a further set of information determined based on the first image or another image, the first image containing a pattern resulting from the imaging of a three-dimensional object (e.g. a pupil, iris, or ellipse pattern) from a first perspective into a first image plane, and wherein the further set either contains an image with a pattern resulting from the imaging of the same three-dimensional object from a further perspective into a further image plane, or contains information describing a (relative) relationship between at least one point of the three-dimensional object and the first image plane.
  • the 3D image analyzer includes a position calculator and an orientation calculator.
  • The position calculator is configured to calculate a position of the pattern in a three-dimensional space based on the first set, a further set determined based on the further image, and a geometric relationship between the perspectives of the first and the further image; or to calculate the position of the pattern in the three-dimensional space based on the first set and a statistically determined relationship between at least two characteristic features relative to each other in the first image; or to calculate the position of the pattern in the three-dimensional space based on the first set and a positional relationship between at least one point of the three-dimensional object and the first image plane.
  • The orientation calculator is configured to compute two possible 3D view vectors per image and to determine, from these possible 3D view vectors, the 3D view vector according to which the pattern is aligned in the three-dimensional space, wherein the calculation and the determination are based on the first set, the further set and the calculated position of the pattern.
  • The core of the present invention thus lies in the recognition that, based on the position of the pattern determined by the above-mentioned position calculator, an orientation of an object in space, such as an alignment of a pupil in space (i.e. the viewing direction), and/or a line of sight (consisting of a viewing direction vector and a location vector which indicates, for example, the pupil center and at which the viewing direction vector is attached) can be determined based on at least one set of image data, e.g. from a first perspective, and additional information or a further set of image data (from a further perspective).
  • a position calculation device is used, which determines the position of the pattern in a first step.
  • The orientation calculator then determines, based on the further set of image information or based on additional information that can also be obtained from the first set of image information, which of the theoretically possible tilt angles or 3D view vectors corresponds to reality, i.e. to the actual line of sight.
  • Advantageously, the viewing direction vector and/or the line of sight (consisting of the location vector of the sought pattern and the direction vector) can thus be determined without prior knowledge of the distance between pupil and camera and without exact positioning of the optical axes of the cameras (e.g. through the pupil center).
  • The determination or selection of the appropriate 3D view vector is carried out by determining two further possible 3D view vectors for a further set of image data (from a further perspective), wherein one 3D view vector from the first image data set coincides with one 3D view vector from the further image data set; this is then the actual 3D view vector.
  • Alternatively, the first image data set can be analyzed, e.g. in terms of how many pixels of the sclera of the eye imaged in the first image are swept by the two possible 3D view vectors (starting at the pupil center). In this case, the 3D view vector that passes over fewer pixels of the sclera is selected.
  • A statistically determined relationship, such as a distance between two characteristic facial features (e.g. nose, eye), can also be used to calculate the 3D position of a point of the pattern (e.g. the pupil or iris center).
  • The determination of the above-described 3D position of a point of the pattern is not limited to the use of statistically determined values. It may also be based on the results of an upstream calculator providing the 3D positions of characteristic facial features (e.g. nose, eye) or a 3D position of the above-mentioned pattern.
  • The selection of the actual 3D view vector from the possible 3D view vectors may also be based on the 3D position of the pattern (e.g. the pupil or iris center) and the above-mentioned 3D positions of characteristic facial features (e.g. corners of the eye, corners of the mouth).
  • The orientation calculation is carried out by calculating, for the first image, a first virtual projection plane by rotation of the actual first projection plane including the optics around the nodal point of the optics, such that a first virtual optical axis, defined as perpendicular to the first virtual projection plane, extends through the center of the recognized pattern.
  • Likewise, a second virtual projection plane is calculated for the further image by rotation of the actual second projection plane including the optics around the nodal point of the optics, so that a second virtual optical axis, defined as perpendicular to the second virtual projection plane, extends through the center of the recognized pattern.
  • The 3D view vector may be described by a set of equations, each equation describing a geometric relationship between the respective axes, the respective virtual projection plane and the 3D view vector.
  • A first equation based on the image data of the first set can be used to describe the 3D view vector, two solutions of the first equation being possible.
  • a second equation based on the image data of the second set results in two (further) solutions for the 3D view vector with respect to the second virtual projection plane.
  • The actual 3D view vector can be calculated by a weighted averaging of one solution vector of the first equation and one solution vector of the second equation.
  • the 3D image analyzer may be implemented in a processing unit comprising, for example, a selective-adaptive data processor.
  • the 3D image analyzer may be part of an image analysis system for tracking a pupil.
  • an image analysis system typically comprises at least one Hough path for at least one camera or preferably two Hough paths for at least two cameras.
  • Each Hough path may comprise a pre-processor as well as a Hough transformation device.
  • In addition to the Hough transformation device, each Hough path may also include means for analyzing the detected pattern and outputting a set of image data.
  • A method for determining a line of sight or a straight line of sight comprises the steps of receiving at least a first set of image data determined based on a first image and a further set of information determined based on the first image or another image, the first image containing a pattern resulting from the imaging of a three-dimensional object from a first perspective into a first image plane, and the further set either containing a further image with a pattern resulting from the imaging of the same three-dimensional object from a further perspective into another image plane, or comprising information describing a relationship between at least one point of the three-dimensional object and the first image plane.
  • The method further includes the step of calculating a position of the pattern in a three-dimensional space based on the first set, a further set determined based on the further image, and a geometric relationship between the perspectives of the first and the further image; or calculating the position of the pattern in the three-dimensional space based on the first set and a statistically determined relationship between at least two characteristic features relative to each other in the first image; or calculating the position of the pattern in the three-dimensional space based on the first set and a positional relationship between at least one point of the three-dimensional object and the first image plane.
  • Finally, a 3D view vector is calculated according to which the pattern is aligned in three-dimensional space, based on the first set of image data, the further set of information and the calculated position of the pattern.
  • this method may be performed by a computer.
  • another embodiment relates to a computer-readable digital storage medium having a program code for carrying out the above method.
  • FIG. 1 is a schematic block diagram of a 3D image analyzer according to an embodiment
  • FIG. 2a shows a schematic block diagram of a Hough processor with a pre-processor and a Hough transformation device according to an exemplary embodiment;
  • FIG. 2b shows a schematic block diagram of a pre-processor according to an exemplary embodiment
  • FIG. 2c is a schematic representation of Hough cores for the detection of straight lines (sections);
  • FIG. 3a is a schematic block diagram of a possible implementation of a Hough transformation device according to an exemplary embodiment;
  • 3b shows a single cell of a delay matrix according to an embodiment
  • FIG. 4a-d is a schematic block diagram of a further implementation of a Hough transformation device according to an exemplary embodiment
  • FIG. 5a is a schematic block diagram of a stereoscopic camera arrangement with two image processors and a post-processing device, wherein each of the image processors has a Hough processor according to embodiments;
  • FIG. 5b shows an exemplary recording of an eye to illustrate the viewing angle detection that can be carried out with the device of FIG. 5a and to explain the viewing angle detection in the monoscopic case;
  • FIG. 6-7 further illustrations for explaining additional embodiments or aspects
  • FIG. 8e shows a schematic representation of the imaging of a circle in 3D space as an ellipse in the image plane;
  • 9a-9i further illustrations for explaining background knowledge for the Hough transformation device.
  • The 3D image analyzer is configured to determine a viewing direction in a 3D space (i.e. a 3D viewing direction) based on at least one set of image data, but preferably based on a first set and a second set of image data. Together with a likewise determined point on the line of sight (e.g. the pupil or iris center in 3D space), the 3D line of sight results from this point and the viewing direction mentioned above; it also serves as the basis for the calculation of the 3D viewpoint.
  • the basic method of determination includes the three basic steps of receiving at least the first set of image data determined based on a first image 802a (see Fig.
  • the first image 802a maps a pattern 804a of a three-dimensional object 806a (see FIG. 8b) from a first perspective into a first image plane.
  • the further set typically includes the further image 802b.
  • The further set may alternatively (instead of concrete image data) also contain one or more of the following pieces of information: a positional relationship between a point PP of the three-dimensional object 806a and the first image plane, positional relationships between a plurality of characteristic points relative to each other in the face or eye, positional relationships of characteristic points of the face or eye with respect to the sensor, or the position and orientation of the face.
  • The position of the pattern 806a in the three-dimensional space is calculated based on the first set, the further set and a geometric relationship between the perspectives of the first and second images 802a and 802b.
  • Alternatively, the position of the pattern 806a in the three-dimensional space may be calculated based on the first set and a statistically determined relationship between at least two characteristic features relative to each other in the first image.
  • The last step of this basic method relates to computing the 3D view vector according to which the pattern 804a or 804b is aligned in three-dimensional space. The calculation is based on the first set and the second set.
  • First, the back projection beam RS of the ellipse center has to be calculated, which extends along the node beam between the object and the object-side nodal point (H1) of the optical system (FIG. 8a). This back projection beam is defined by equation (A1). It consists of a starting point RS0 and a normalized direction vector, which in the lens model (Fig. 8b) result via equations (A2) and (A3) from the two principal points H1 and H2 of the lens and the ellipse center E_MP in the sensor plane. All three points (H1, H2 and E_MP) must be given in the eye tracker coordinate system.
  • P_SBild is the resolution of the camera image in pixels;
  • S_Offset is the position on the sensor at which read-out of the image is started;
  • S_Res is the resolution of the sensor;
  • S_PxGr is the pixel size of the sensor.
  • The desired pupil center is ideally the intersection of the two back projection beams RS_K1 and RS_K2.
  • Two straight lines which, as in this constellation, neither intersect nor run parallel are called skew lines in geometry.
  • the two skewed straight lines each run very close to the pupil center.
  • the pupil center is at the point of their smallest distance to each other halfway between the two lines.
  • the shortest distance between two skewed straight lines is indicated by a connecting line perpendicular to both straight lines.
  • The direction perpendicular to both back projection beams can be calculated according to equation (A4) as the cross product of their direction vectors.
  • The location of the shortest connecting line between the back projection beams is defined by equation (A5).
  • Equating RS_K1(s), RS_K2(t) and n_St yields a system of equations from which s, t and u can be calculated.
  • The sought pupil center P_MP, which lies halfway between the back projection beams, thus results from equation (A6) after inserting the values calculated for s and u.
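  • The closest-point construction used in equations (A4) to (A6) can be illustrated with a short numerical sketch. The snippet below is a minimal illustration, not the patent's implementation; the beam origins and directions are placeholder values, and the standard closest-point formulas for two skew lines are used.

      import numpy as np

      def pupil_center_from_rays(rs1_o, rs1_d, rs2_o, rs2_d):
          """Midpoint of the shortest connecting line between the two (generally
          skew) back projection beams RS_K1 and RS_K2 (cf. equations (A4)-(A6))."""
          d1 = rs1_d / np.linalg.norm(rs1_d)
          d2 = rs2_d / np.linalg.norm(rs2_d)
          w0 = rs1_o - rs2_o
          a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
          d, e = d1 @ w0, d2 @ w0
          denom = a * c - b * b            # zero only for parallel beams
          s = (b * e - c * d) / denom      # parameter on beam 1
          t = (a * e - b * d) / denom      # parameter on beam 2
          p1 = rs1_o + s * d1              # closest point on beam 1
          p2 = rs2_o + t * d2              # closest point on beam 2
          return 0.5 * (p1 + p2)           # pupil center P_MP

      # placeholder beams of two cameras (illustrative values only)
      p_mp = pupil_center_from_rays(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                                    np.array([60.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))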
  • The calculated pupil center is one of the two parameters that determine the line of sight of the eye. In addition, it is needed to calculate the viewing direction vector, which will be described below.
  • The advantage of this method for calculating the pupil center is that the distances between the cameras and the eye need not be stored permanently in the system. This is, e.g., required in the method described in patent DE 10 2004 046 617 A1.
  • The viewing direction vector to be determined corresponds to the normal vector of the circular pupil surface and is thus defined by the orientation of the pupil in 3D space.
  • the position and orientation of the pupil can be determined from the ellipse parameters which can be determined for each of the two ellipse-shaped projections of the pupil on the camera sensors.
  • the lengths of the two half-axes and the rotation angle of the projected ellipses are characteristic of the orientation of the pupil or the viewing direction relative to the camera positions.
  • The calculation of the pupil center can thus be carried out without the distances of the cameras to the eye having to be known in advance, which is one of the significant innovations over the above-mentioned patent.
  • Due to the perspective projection however, the shape of the pupil ellipse imaged on the sensor does not result solely from the inclination of the pupil relative to the sensor surface, in contrast to the parallel projection.
  • The deflection of the pupil center from the optical axis of the camera lens, as sketched in FIG. 8b, likewise has an influence on the shape of the pupil projection and thus on the ellipse parameters determined therefrom.
  • the distance between pupil and camera with several hundred millimeters is very large compared to the pupil radius, which is between 2 mm and 8 mm. Therefore, the deviation of the pupil projection from an ideal ellipse shape, which arises at an inclination of the pupil with respect to the optical axis, becomes very small and can be neglected.
  • The influence of this deflection angle on the ellipse parameters must be eliminated, so that the shape of the pupil projection is influenced solely by the orientation of the pupil. This is always the case when the pupil center P_MP lies directly on the optical axis of the camera system. Therefore, the influence of the deflection angle can be eliminated by calculating the pupil projection on the sensor of a virtual camera system vK whose optical axis passes directly through the previously calculated pupil center P_MP, as shown in Fig. 8c.
  • The position and orientation of such a virtual camera system 804a' (vK in Fig. 8c) can be calculated from the parameters of the original camera system 804a (K in Fig. 8b) by rotation about its object-side principal point H1, which at the same time corresponds to the object-side principal point vH1 of the virtual camera system 804a'.
  • The direction vectors of the node beams of the imaged objects in front of and behind the virtual optical system 808c' are identical to those in the original camera system. All further calculations for determining the viewing direction vector take place in the eye tracker coordinate system.
  • The normalized normal vector vK_n of the virtual camera vK is as follows: vK_n = (P_MP - H1) / |P_MP - H1|.
  • By rotating the unit vectors of the x- and y-direction of the eye tracker coordinate system by the angles describing the orientation of vK_n, the vectors vK_x and vK_y can be calculated, which define the x- and y-axes of the virtual sensor plane.
  • Furthermore, the required distance d between the principal points and the distance b between principal plane 2 and the sensor plane must be known or, e.g., be determined experimentally with a test setup.
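  • As a sketch of how the virtual camera system vK could be derived in software, the snippet below rotates the real camera axes about the object-side principal point H1 so that the new optical axis passes through the previously calculated pupil center. It is only an illustration under the assumption of orthonormal camera axes; the Rodrigues construction used here is one possible way to realize this rotation, and all variable names are chosen for the example.

      import numpy as np

      def rotation_aligning(a, b):
          """Rotation matrix mapping unit vector a onto unit vector b
          (Rodrigues formula; undefined for a == -b)."""
          v = np.cross(a, b)
          c = float(a @ b)
          vx = np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])
          return np.eye(3) + vx + vx @ vx / (1.0 + c)

      def virtual_camera_axes(h1, p_mp, k_n, k_x, k_y):
          """Rotate the real camera axes about the principal point H1 so that the
          new optical axis vK_n points from H1 to the pupil center P_MP."""
          vk_n = (p_mp - h1) / np.linalg.norm(p_mp - h1)   # normalized normal vector of vK
          r = rotation_aligning(k_n, vk_n)
          return vk_n, r @ k_x, r @ k_y                    # vK_n, vK_x, vK_y

      # illustrative values only
      vk_n, vk_x, vk_y = virtual_camera_axes(
          h1=np.zeros(3), p_mp=np.array([30.0, 5.0, 300.0]),
          k_n=np.array([0.0, 0.0, 1.0]),
          k_x=np.array([1.0, 0.0, 0.0]), k_y=np.array([0.0, 1.0, 0.0]))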
  • First, the edge points RP of the previously determined ellipse on the sensor in its original position are required.
  • E_a is the short half-axis of the ellipse;
  • E_b is the long half-axis of the ellipse;
  • E_x and E_y are the center coordinates of the ellipse, and E_α is the rotation angle of the ellipse.
  • The position of a point RP_3D in the eye tracker coordinate system can be calculated by equations (A11) to (A14) from the parameters of the ellipse E, the sensor S and the camera K, where a circumferential angle indicates the location of the corresponding edge point RP on the ellipse circumference according to Fig. 8d.
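  • As an illustration, edge points RP on the detected sensor ellipse can be sampled from the ellipse parameters as sketched below; the conversion of these pixel positions into the eye tracker coordinate system then follows via the sensor parameters as in equations (A11) to (A14). The function and parameter names are assumptions chosen for this example.

      import numpy as np

      def ellipse_edge_points(e_x, e_y, e_a, e_b, e_alpha, n=8):
          """Sample n edge points RP on the ellipse circumference (sensor pixels)."""
          omega = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)  # position on the circumference
          ca, sa = np.cos(e_alpha), np.sin(e_alpha)
          x = e_x + e_a * np.cos(omega) * ca - e_b * np.sin(omega) * sa
          y = e_y + e_a * np.cos(omega) * sa + e_b * np.sin(omega) * ca
          return np.stack([x, y], axis=1)

      # ellipse with half-axes 8 px and 12 px, rotated by 30 degrees (illustrative values)
      rp = ellipse_edge_points(320.0, 240.0, 8.0, 12.0, np.deg2rad(30.0))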
  • The direction of a node beam KS in the original camera system, which images a pupil edge point as ellipse edge point RP_3D on the sensor, is equal to the direction of the node beam vKS in the virtual camera system, which images the same pupil edge point as ellipse edge point on the virtual sensor.
  • the node beams of the ellipse edge points in Figs. 8b and 8c illustrate this.
  • The two beams KS and vKS have the same direction vector, which results from equation (A15).
  • For the starting point of the virtual node beam, vKS_0 = vH1 always holds. Equating the parametric form of the virtual node beam, vKS_0 + r · vKS, with the parametric representation of the virtual sensor plane yields the position of the corresponding virtual ellipse edge point.
  • the shape of the virtual ellipse vE thus determined depends only on the orientation of the pupil.
  • Its center always lies in the center of the virtual sensor and, together with the sensor normal (which corresponds to the camera normal vK_n), defines a line along the optical axis through the pupil center P_MP.
  • Thus, the prerequisites are met to calculate the line of sight building on the approach presented in patent DE 10 2004 046 617 A1. With this approach, by using the virtual camera system described above, it is now also possible to determine the viewing direction when the pupil center lies outside the optical axis of the real camera system, which is almost always the case in real applications.
  • The previously calculated virtual ellipse vE is now assumed to lie in the virtual principal plane 1. Since the center of vE lies in the center of the virtual sensor and thus on the optical axis, the 3D ellipse center vE'_MP corresponds to the virtual principal point 1. At the same time, it is the foot of the perpendicular from the pupil center P_MP onto the virtual principal plane 1. Subsequently, only the axis ratio and the rotation angle of the ellipse vE are used. These shape parameters of vE can also be used unchanged with reference to principal plane 1, since the orientations of the x- and y-axes of the 2D sensor plane to which they relate correspond to the orientation of the 3D sensor plane and thus also to the orientation of principal plane 1.
  • Each image of the pupil 806a in a camera image can be formed by two different orientations of the pupil.
  • Two virtual intersection points vS of the two possible viewing-direction straight lines with the virtual principal plane 1 thus result for each camera.
  • The two possible viewing directions are determined as follows.
  • For this, the distance A between the known pupil center and the ellipse center vE'_MP is required.
  • Furthermore, the possible viewing directions of camera 1 and camera 2 are required. Of these four vectors, one from each camera indicates the actual viewing direction, and these two normalized vectors are ideally identical. In order to identify them, for all four possible combinations of one vector of one camera and one vector of the other camera, the differences of the respectively selected possible viewing direction vectors are formed. The combination which gives the smallest difference contains the sought vectors; averaging them yields the viewing direction vector P_n to be determined. For the averaging, an almost simultaneous image acquisition must be assumed, so that the same pupil position as well as the same orientation and thus the same line of sight were captured by both cameras.
  • As a measure of accuracy, the angle between the two vectors P_K1 and P_K2 to be averaged, which indicate the actual viewing direction, can be calculated.
  • The smaller this angle, the more accurate were the model parameters and ellipse centers used for the previous calculations.
  • The viewing angles with respect to the normal position of the pupil (viewing direction vector parallel to the z-axis of the eye tracker coordinate system) can be determined by the corresponding equations.
  • The line of sight is thus given by LoS(t) = P_MP + t · P_n.
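  • The selection of the actual viewing direction from the four candidate vectors and the resulting line of sight can be sketched as follows; this is a schematic illustration with placeholder vectors, not the patent's exact procedure.

      import itertools
      import numpy as np

      def select_gaze(candidates_cam1, candidates_cam2):
          """Pick the pair (one candidate per camera) with the smallest difference
          and return their normalized mean as the viewing direction P_n."""
          best = min(itertools.product(candidates_cam1, candidates_cam2),
                     key=lambda pair: np.linalg.norm(pair[0] - pair[1]))
          p_n = best[0] + best[1]
          return p_n / np.linalg.norm(p_n)

      def line_of_sight(p_mp, p_n, t):
          """LoS(t) = P_MP + t * P_n (position vector plus scaled direction vector)."""
          return p_mp + t * p_n

      # illustrative, already normalized candidate vectors; one pair nearly coincides
      cam1 = [np.array([0.10, 0.00, -1.0]) / np.linalg.norm([0.10, 0.00, -1.0]),
              np.array([0.50, 0.20, -1.0]) / np.linalg.norm([0.50, 0.20, -1.0])]
      cam2 = [np.array([0.11, 0.01, -1.0]) / np.linalg.norm([0.11, 0.01, -1.0]),
              np.array([-0.40, 0.30, -1.0]) / np.linalg.norm([-0.40, 0.30, -1.0])]
      p_n = select_gaze(cam1, cam2)
      point_on_los = line_of_sight(np.array([30.0, 5.0, 300.0]), p_n, 100.0)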
  • The implementation of the above-presented method is platform-independent, so the above-presented method can be executed on various hardware platforms, e.g. a PC.
  • FIG. 2a shows a Hough processor 100 with a pre-processor 102 and a Hough transformation device 104.
  • The pre-processor 102 represents the first signal processing stage and is informationally coupled to the Hough transformation device 104.
  • the Hough transformer 104 has a delay filter 106 which may comprise at least one but preferably a plurality of delay elements 108a, 108b, 108c, 110a, 110b and 110c.
  • The delay elements 108a to 108c and 110a to 110c of the delay filter 106 are typically arranged as a matrix, that is to say in columns 108 and 110 and rows a to c, and are interconnected in terms of signal flow.
  • At least one of the delay elements 108a to 108c or 110a to 110c has an adjustable delay time, symbolized here by the "+/-" symbol. For driving the delay elements 108a to 108c and 110a to 110c, a separate control logic or control register may be provided. This control logic controls the delay time of the individual delay elements 108a to 108c or 110a to 110c via optional switchable elements 109a to 109c and 111a to 111c, respectively.
  • The Hough transformation device 104 may include an additional configuration register (not shown) for initially configuring the individual delay elements 108a to 108c and 110a to 110c.
  • The purpose of the pre-processor 102 is to prepare the individual samples 112a, 112b and 112c so that they can be processed efficiently by the Hough transformation device 104.
  • For this purpose, the pre-processor 102 receives the image data or the multiple samples 112a, 112b and 112c and performs pre-processing, for example in the form of a rotation and/or a mirroring, in order to output the multiple versions (cf. 112a and 112a') to the Hough transformation device 104.
  • The output may be serial if the Hough transformation device 104 has one Hough core 106, or parallel if multiple Hough cores are provided.
  • the n versions of the image are either completely parallel, semi-parallel (ie only partially parallel) or serially output and processed.
  • The pre-processing in the pre-processor 102, which serves the purpose of detecting several similar patterns (rising and falling straight line) with one search pattern or one Hough core configuration, is explained below with reference to the first sample 112a.
  • This sample can be rotated, e.g. by 90°, to obtain the rotated version 112a'.
  • This process of rotation is provided with the reference numeral 114.
  • The rotation can take place by 90°, but also by 180° or 270°, it being noted that, depending on the downstream Hough transformation (see Hough transformation device 104), it can be very efficient to carry out only a 90° rotation.
  • The image 112a may also be mirrored to obtain the mirrored version 112a''.
  • the mirroring operation is designated by the reference numeral 116.
  • The mirroring 116 corresponds to reading the memory backwards.
  • A fourth, rotated and mirrored version 112a''' can be obtained by additionally performing the process 114 or 116 on the mirrored or rotated version, respectively.
  • The mirroring allows two similar patterns (e.g. a semicircle opened to the right and a semicircle opened to the left) to be detected with the same Hough core configuration.
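  • In software, the four image versions produced by the pre-processor 102 (original, rotated, mirrored, rotated and mirrored) can be modelled as below; a minimal sketch assuming a binary edge image as a numpy array, not the FPGA implementation.

      import numpy as np

      def preprocess_versions(edge_image):
          """Return the four versions 112a, 112a', 112a'', 112a''':
          original, rotated by 90 degrees, mirrored, rotated and mirrored."""
          rotated = np.rot90(edge_image)        # corresponds to process 114
          mirrored = edge_image[:, ::-1]        # corresponds to process 116 (reading memory backwards)
          rotated_mirrored = rotated[:, ::-1]
          return edge_image, rotated, mirrored, rotated_mirrored

      versions = preprocess_versions(np.random.randint(0, 2, size=(480, 640)))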
  • The Hough transformation device 104 is designed to detect a predetermined sought pattern in the versions 112a or 112a' provided by the pre-processor 102.
  • For this, the filter arrangement is configured according to the sought predetermined pattern.
  • some of the delay elements 108a to 108c and 110a to 110c are activated or bypassed, respectively.
  • some pixels are selectively delayed by the delay elements 108a to 108c, which corresponds to an intermediate memory, and others are forwarded directly to the next column 110.
  • This process "straightens" curved or oblique geometries, so that, when the applied image content matches the sought structure, high column sums occur in one of the columns 108 or 110, while the column sums in other columns remain lower.
  • The column sum is output via the column sum output 108x or 110x; optionally, an addition element (not shown) for forming the column sum may be provided per column 108 or 110.
  • A maximum of one of the column sums indicates the presence of a searched image structure or of a segment of the searched image structure, or at least the corresponding degree of agreement with the sought structure.
  • With each processing step, the image strip is shifted by one pixel or by one column 108 or 110, so that with each processing step it can be detected from the resulting histogram whether one of the sought structures is present or whether the probability of the presence of the sought structure is correspondingly high.
  • Exceeding a threshold value of the respective column sum of the column 108 or 110 indicates the detection of a segment of the searched image structure, each column 108 or 110 being assigned to a characteristic of the sought pattern (e.g. the angle of a straight line or the radius of a circle).
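  • The column-sum principle can be modelled in software as a correlation with per-row offsets, which the hardware realizes by delay elements; the sketch below is a simplified illustration with assumed offset tables, not the FPGA design.

      import numpy as np

      def column_sums(edge_strip, column_offsets):
          """For each filter column, count how many rows of the image strip carry an
          edge pixel at the per-row offset configured for that column; a maximum
          column sum indicates a detected segment of the sought structure."""
          rows, width = edge_strip.shape
          sums = []
          for offsets in column_offsets:               # one column per characteristic (e.g. slope)
              max_off = max(offsets)
              s = np.zeros(width - max_off, dtype=int)
              for r, off in enumerate(offsets):
                  s += edge_strip[r, off:off + (width - max_off)]
              sums.append(s)
          return sums

      # three columns modelling straight-line segments of different slopes
      offsets_per_column = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 4, 6]]
      strip = np.zeros((4, 32), dtype=int)
      strip[np.arange(4), 5 + np.arange(4)] = 1        # an edge line of slope 1 starting at x = 5
      sums = column_sums(strip, offsets_per_column)    # the second column peaks at value 4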
  • By means of the variable delay elements 108a to 108c or 110a to 110c (delay cells), the sought characteristic, that is to say, for example, the radius or the slope, can be adjusted during operation.
  • Since the individual columns 108 and 110 are coupled to each other, adjusting the delay time of one of the delay elements 108a to 108c or 110a to 110c results in a change in the overall filter characteristic of the filter 106.
  • the size of the illustrated Hough core 104 is configurable (either in operation or in advance) so that additional Hough cells may be enabled or disabled.
  • the transformation means 104 may be provided with means for adjusting the same or, more precisely, for adjusting the individual delay elements 108a-108c and 110a-110c, such as with a controller (not shown).
  • The controller is arranged, for example, in a downstream processing device and adapted to adjust the delay characteristic of the filter 106 when no pattern can be detected or when the recognition is not sufficiently good (poor matching of the image content with the sought pattern). This controller will be discussed with reference to FIG. 5a.
  • Because of the structure 104 described above with the plurality of columns 108 and 110, parallel processing or a so-called parallel Hough transformation is possible, so that a very high frame rate of, for example, 60 FPS at a resolution of 640x480 could be achieved using a clock frequency of 96 MHz. It should be noted that in the above and following exemplary embodiments, "line of sight" or "eye vector" primarily refers to the optical axis of the eye. This optical axis of the eye is to be distinguished from the visual axis of the eye, but the optical axis of the eye can serve as an estimate for the visual axis, as these axes are typically interdependent.
  • a direction or a direction vector can be calculated which is still a significantly better estimate of the orientation of the actual visual axis of the eye.
  • The pre-processor 102 is configured to receive the samples 112 as binary edge images or also as gradient images and to carry out the rotation 114 or the mirroring 116 on this basis in order to produce the four versions 112a, 112a', 112a'' and 112a'''.
  • The parallel Hough transform, as performed by the Hough transformation device, is built on two or four respectively pre-processed (e.g. rotated by 90°) versions of an image 112a. A 90° rotation (112a to 112a') first takes place before the two versions 112a and 112a' are mirrored horizontally (compare 112a to 112a'' and 112a' to 112a''').
  • In corresponding embodiments, the pre-processor has an internal or external memory, which serves to hold the received image files 112.
  • The processing (rotation 114 and/or mirroring 116) of the pre-processor 102 depends on the downstream Hough transformation device, the number of parallel Hough cores (degree of parallelization) and their configuration, as will be described in particular with reference to FIG. 2c. In this respect, the pre-processor 102 may be configured to output the pre-processed video stream via the output 126 in accordance with one of the following three constellations, depending on the degree of parallelization of the downstream Hough transformation device 104:
  • the pre-processor 102 may be configured to also perform further image processing steps, such as up-sampling.
  • the halftone image (output image) in the FPGA could be rotated.
  • Figure 2c shows two Hough core configurations 128 and 130, eg for two parallel 31x31 Hough cores, configured to detect a straight section.
  • a unit circle 132 is plotted to illustrate in which angular ranges the detection is possible. It should be noted at this point that the Hough core configurations 128 and 130 are each shown so that the white dots illustrate the delay elements.
  • the Hough core configuration 128 corresponds to a so-called Type 1 Hough kernel
  • The Hough core configuration 130 corresponds to a so-called Type 2 Hough core.
  • The Hough core configurations 128 and 130 are applied to the rotated version of the respective image. Consequently, by means of the Hough core configuration 128, the range 1r between π/4 and zero and, by means of the Hough core configuration 130, the range 2r between π and 3π/4 can be detected.
  • Alternatively, only one Hough core type (e.g. a Type 1 Hough core) can be used, which is reconfigured during operation or in which the individual delay elements can be switched on or off, so that the Hough core corresponds to the inverted type.
  • With the pre-processor 102 in the 50% parallelization mode and the configurable Hough transformation device 104 with only one Hough core and only one image rotation, the complete functionality can be mapped which would otherwise only be covered by means of two parallel Hough cores.
  • In each case, the respective Hough core configuration or the choice of the Hough core type depends on the pre-processing performed by the pre-processor 102.
  • Fig. 3a shows a Hough core 104 with m columns 108, 110, 138, 140, 141 and 143 and n rows a, b, c, d, e and f, such that m x n cells are formed.
  • Each column 108, 110, 138, 140, 141 and 143 of the filter stands for a specific characteristic of the sought structure, for example for a specific curvature or a specific straight-line slope.
  • Each cell comprises a delay element with adjustable delay time, wherein in this exemplary embodiment the adjustment mechanism is realized by providing a switchable delay element with a bypass in each case.
  • the structure of all cells will be explained by way of example with reference to FIG. 3b.
  • The cell (108a) of Fig. 3b comprises the delay element 142, a remotely operable switch 144, such as a multiplexer, and a bypass 146. By means of the remotely operable switch 144, the line signal may either be passed through the delay element 142 or routed to the node 148 without delay.
  • The node 148 is connected on the one hand to the summation element 150 for the column (e.g. 108); on the other hand, the next cell (e.g. 110a) is connected via this node 148.
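  • The behaviour of a single configurable delay cell (delay element 142, multiplexer 144, bypass 146) can be modelled as below; this is a conceptual software sketch of the hardware cell, not HDL, and the class and attribute names are chosen for the example.

      class DelayCell:
          """Software model of one Hough core cell: a one-clock delay register
          that can be bypassed via the multiplexer configuration bit."""
          def __init__(self, use_delay=True):
              self.use_delay = use_delay   # configuration bit from the configuration register
              self._register = 0           # state of the delay element 142

          def clock(self, value):
              """One clock cycle: return either the delayed or the bypassed value;
              the output feeds the column summation element and the next cell."""
              if self.use_delay:
                  out, self._register = self._register, value   # via delay element 142
              else:
                  out = value                                    # via bypass 146
              return out

      # a small chain of cells forming one filter column
      column = [DelayCell(True), DelayCell(False), DelayCell(True)]
      for value in [1, 0, 1, 1, 0]:
          signal = value
          for cell in column:
              signal = cell.clock(signal)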
  • the multiplexer 144 is configured via a so-called configuration register 160 (see Fig. 3a). It should be noted at this point that the reference numeral 160 shown here refers only to a portion of the configuration register 160 that is coupled directly to the multiplexer 144.
  • the element of the configuration register 160 is designed to control the multiplexer 144 and, via a first information input 160a, receives configuration information that originates, for example, from a configuration matrix stored in the FPGA-internal BRAM 163.
  • This configuration information can be a column-wise bit string and refers to the configuration of several of the (also during the transformation) configurable delay cells (142 + 144). Therefore, the configuration information can be further forwarded via the output 160b.
  • the configuration register 160 or the cell of the configuration register 160 receives a so-called enabler signal via a further signal input 160c, by means of which the reconfiguration is initiated.
  • The reconfiguration of the Hough core requires a certain amount of time, which depends on the number of delay elements or, in particular, on the size of a column. One clock cycle is required per delay element, plus a latency of a few clock cycles caused by the BRAM 163 or the configuration logic 160. The overall latency for the reconfiguration is typically negligible for video-based image processing.
  • the video data streams recorded with a CMOS sensor have horizontal and vertical blanking, and the horizontal blanking time can be used for reconfiguration.
  • The size of the Hough core structure implemented in the FPGA dictates the maximum possible size of the Hough core configurations. If smaller configurations are used, they are centered vertically and aligned in the horizontal direction on column 1 of the Hough core structure. Unused elements of the Hough core structure are all populated with activated delay elements.
  • the evaluation of the data streams thus processed with the individual delay cells (142 + 144) takes place column by column.
  • The summation is carried out column-wise in order to detect a local sum maximum, which indicates a recognized sought structure.
  • the summation per column 108, 110, 138, 140, 141 and 143 serves to determine a value representative of the degree of agreement with the searched structure for a characteristic of the structure associated with the respective column.
  • For this purpose, comparators 108v, 110v, 138v, 140v, 141v and 143v are provided per column, which are connected to the respective summation elements 150. Optionally, additional delay elements 153 can also be provided between the individual comparators 108v, 110v, 138v, 140v, 141v and 143v of the different columns 108, 110, 138, 140, 141 and 143, which serve to compare the column sums of adjacent columns.
  • For a sought pattern, the column 108, 110, 138 or 140 with the greatest degree of agreement is always passed out of the filter.
  • Upon detection of a local maximum of a column sum (comparison with the previous and subsequent column), the presence of a sought structure can be concluded.
  • The result comprises a so-called multi-dimensional Hough space, which contains all the relevant parameters of the sought structure, such as:
  • the type of pattern (e.g. straight line or semicircle),
  • the measure of pattern match,
  • the shape of the structure (curvature of curve segments or slope and length of straight-line segments),
  • the position and orientation of the searched pattern. For each point in the Hough space, for example, the gray values of the corresponding structures in the image area are added up.
  • maxima are formed by means of which the sought-after structure in Hough space can be easily localized and returned to the image area.
  • The Hough core cell of FIG. 3b may include an optional pipeline delay element 162, arranged, for example, at the output of the cell and configured to delay the signal output by the cell, whether it was delayed by means of the delay element 142 or forwarded without delay via the bypass 146.
  • Such a cell may also comprise a delay element with adjustable delay or a multiplicity of interconnected and bypassable delay elements, so that the delay time can be set in several stages.
  • Further implementations would alternatively be conceivable beyond the implementation of the Hough core cell shown in FIG. 3b.
  • FIG. 5a shows an FPGA-implemented image processor 10a with a pre-processor 102 and a Hough transformation device 104.
  • Upstream of the pre-processor 102, an input stage 12 implemented in the image processor 10a may further be provided, which is configured to receive image data or image samples from a camera 14a.
  • the input stage 12 may comprise, for example, an image transfer interface 12a, a segmentation and edge detector 12b and means for camera control 12c.
  • The camera control means 12c are connected to the image interface 12a and the camera 14a and serve to control factors such as gain and/or exposure.
  • The image processor 10a further comprises a so-called Hough feature extractor 16, adapted to analyze the multi-dimensional Hough space output by the Hough transformation device 104, which includes all relevant information for the pattern recognition, and to output a collection of all Hough features based on this analysis.
  • In the feature extraction, a smoothing of the Hough feature space takes place, i.e. a spatial smoothing by means of a local filter, or a thinning of the Hough space (suppression of information irrelevant for the pattern recognition). This thinning is done taking into account the type of pattern and the characteristic of the structure, so that non-maxima in the Hough probability space are masked out.
  • Threshold values may also be defined for the thinning, so that, for example, minimum or maximum permissible characteristics of a structure, such as a smallest or largest curvature or a smallest or largest slope, can be determined in advance.
  • noise suppression can also be carried out in the Hough probability space.
  • The analytic inverse transformation of the parameters of all remaining points into the original image area yields, for example, the following Hough features: for the bent structure, position (x- and y-coordinates), probability of occurrence, radius and the angle indicating in which direction the arc is opened can be forwarded. For a straight line, parameters such as position (x- and y-coordinates), probability of occurrence, the angle indicating the slope of the straight line, and the length of the representative straight section can be determined.
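  • The Hough features listed above can be represented, for example, by a small record type as sketched below; the field names are assumptions chosen for illustration.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class HoughFeature:
          kind: str                       # "curve" (bent structure) or "line"
          x: float                        # position in the image area
          y: float
          probability: float              # probability of occurrence
          radius: Optional[float] = None  # curves: radius
          angle: Optional[float] = None   # curves: opening direction / lines: slope angle
          length: Optional[float] = None  # lines: length of the representative segment

      feature = HoughFeature(kind="curve", x=321.0, y=243.5, probability=0.92,
                             radius=11.5, angle=1.57)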
  • This thinned Hough space is output by the Hough feature extractor 16 or generally by the image processor 10a to a post-processing device 18 for further processing.
  • A further exemplary embodiment comprises the use of a 3D image analyzer 400 (FIG. 5a) within an image processing system together with an upstream image processor 10a (FIG. 5a) or upstream Hough processor, wherein the Hough processor and in particular the components of the post-processing device 18 are adapted for the detection of a pupil or iris depicted as an ellipse.
  • the post-processing device of the Hough processor can, for example, be realized as an embedded processor and, depending on the application, have different subunits, which are explained below by way of example.
  • Postprocessing device 18 may include a Hough feature-to-geometry converter 202.
  • This geometry converter 202 is configured to analyze one or more predefined searched patterns output by the Hough feature extractor and to output parameters describing the geometry per sample.
  • The geometry converter 202 may be configured to output, based on the detected Hough features, geometry parameters such as, for example, first diameter, second diameter, tilt and position of the center of an ellipse (pupil) or of a circle.
  • The geometry converter 202 is operable to detect and select a pupil based on 3 to 4 Hough features (e.g. curvatures). Criteria for this are, for example, the degree of conformity with the sought structure or the Hough features, the curvature of the Hough features or of the predetermined pattern to be detected, and the position and orientation of the Hough features.
  • The selected Hough feature combinations are sorted, firstly according to the number of Hough features obtained and secondly according to the degree of agreement with the searched structure. After sorting, the Hough feature combination in first place is selected and the ellipse which best represents the pupil in the camera image is fitted from it.
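  • The two-stage sorting of the candidate Hough feature combinations (first by the number of features, then by the degree of agreement) could look like this; representing a combination as a list of (feature, score) pairs is an assumption for the example.

      def best_combination(combinations):
          """Sort primarily by the number of Hough features, secondarily by the summed
          agreement score, and return the first-placed combination for the ellipse fit."""
          return max(combinations,
                     key=lambda combo: (len(combo), sum(score for _, score in combo)))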
  • The post-processing device 18 (FIG. 5a) comprises an optional controller 204, which is designed to output a control signal back to the image processor 10a (see control channel 206) or, more precisely, back to the Hough transformation device 104, on the basis of which the filter characteristic of the filter 106 is adaptable.
  • The controller 204 is typically coupled to the geometry converter 202 in order to analyze the geometry parameters of the detected geometry and to track the Hough core within defined limits such that a more accurate recognition of the geometry is possible. This is a successive process which starts, for example, with the last Hough core configuration (last used Hough core size) and is tracked as soon as the detection by the geometry converter 202 yields insufficient results.
  • The controller 204 can thus adapt the ellipse size, which depends, for example, on the distance between the object to be recorded and the camera 14a, when the associated person approaches the camera 14a.
  • the filter characteristic is controlled on the basis of the last settings and on the basis of the geometry parameters of the ellipse.
  • the post-processing device 18 may comprise a selective-adaptive data processor 300.
  • the purpose of the data processor is to rework outliers and dropouts within the data series in order to smooth the data series, for example. Therefore, the selective adaptive data processor 300 is configured to receive a plurality of sets of values output by the geometry converter 202, each set being assigned to a respective sample.
  • The filter processor of the data processor 300 performs a selection of values based on the multiple sets such that the data values of implausible sets (e.g. outliers or dropouts) are replaced by internally determined data values (substitute values) and the data values of the remaining sets continue to be used unchanged.
  • In other words, the data values of plausible sets are forwarded, and the data values of implausible sets (containing outliers or dropouts) are replaced by data values of a plausible set, e.g. the previous data value or an average of several previous data values.
  • the resulting data series of forwarded values and, if necessary, substitute values is continuously smoothed.
  • In this way, an adaptive temporal smoothing of the data series (e.g. of a determined ellipse center coordinate) is achieved; dropouts or outliers (e.g. as a result of false detections in the pupil detection) are suppressed.
  • The data processor may, for example, use the data value of a newly received set for the smoothing if it does not fall under one of the following criteria:
  • according to a first criterion, it is a dropout in the data series if the sought structure could not be found in the current sample;
  • according to a further criterion, the associated size parameter or geometry parameter is a dropout if, e.g., the size of the current object deviates too strongly from the size of the previous object.
  • An illustrative example of an outlier is when, e.g., the current position coordinate (data value of the set) of an object deviates too strongly from the position coordinate previously determined by the selective adaptive data processor.
  • the previous value is still output or at least used to smooth the current value.
  • In the smoothing, the current values are optionally weighted more heavily than past values. For example, using exponential smoothing, the current value can be determined by the following formula: smoothed(t) = α · value(t) + (1 - α) · smoothed(t-1), where α is the smoothing coefficient.
  • The smoothing coefficient is dynamically adjusted, within defined limits, to the trend of the data to be smoothed, e.g. reduced for rather constant value curves or increased for ascending or descending value curves. If, in the long term, there is a larger jump in the geometry parameters (ellipse parameters) to be smoothed, the data processor and thus also the smoothed value curve adapt to the new value.
  • The selective adaptive data processor 300 may also be controlled by parameters, e.g. initialized during initialization, whereby the smoothing behaviour, e.g. the maximum duration of a dropout or the maximum smoothing factor, is determined.
  • The selective adaptive data processor 300 can thus output plausible values that describe the position and geometry of a pattern to be recognized with high accuracy.
  • the post-processing device has an interface 18a via which control commands can optionally also be received externally. If several data series are to be smoothed, it is conceivable to use a separate selectively adaptive data processor for each data series or to adapt the selectively adaptive data processor so that data sets of different data series can be processed per data set.
  • In the following, the characteristics of the selective adaptive data processor 300 explained above with reference to a concrete embodiment are described in general terms.
  • The data processor 300 may, e.g., have two or more inputs and one output. One of the inputs receives the data value and is intended for the data series to be processed. The output is a smoothed series based on selected data. For the selection, the further inputs (which receive additional values for a more accurate assessment of the data value) and/or the data series itself are used. During the processing within the data processor 300, the data series is changed, a distinction being made between the treatment of outliers and the treatment of dropouts within the data series.
  • Outliers: during the selection, outliers (within the data series to be processed) are sorted out and replaced by other (internally determined) values.
  • Dropouts: one or more additional input signals (additional values) are used to assess the quality of the data series to be processed. The assessment is based on one or more thresholds that divide the data into "high" and "low" quality. Low-quality data are evaluated as dropouts and replaced by other (internally determined) values.
  • a smoothing of the data series takes place (for example exponential smoothing of a time series).
  • the data series cleaned of dropouts and outliers is used.
  • the smoothing can be done by a variable (adaptive) coefficient.
  • the smoothing coefficient is adjusted to the difference in the level of the data to be processed.
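  • The selection step described in the items above (outliers sorted out within the data series, dropouts detected via thresholded additional values and replaced by internally determined values) can be sketched as follows; the quality threshold, the jump limit and the substitution by the previous value are illustrative assumptions.

```python
def select_value(current, previous, quality, quality_threshold=0.5, max_jump=20.0):
    """Return a plausible value for the data series before smoothing.

    A sample is treated as a dropout when its quality score (additional
    input value) is below the threshold, and as an outlier when it jumps
    too far from the previously accepted value. In both cases the previous
    value is substituted.
    """
    if previous is None:
        return current
    if quality < quality_threshold:          # dropout: low-grade additional signal
        return previous
    if abs(current - previous) > max_jump:   # outlier within the data series
        return previous
    return current

# e.g. a low-quality sample is replaced by the previous accepted value
print(select_value(current=42.0, previous=40.5, quality=0.2))   # -> 40.5
```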
  • the post-processing device 18 includes an image analyzer, such as a 3D image analyzer 400.
  • the post-processing device 18 can also be provided with a further image capture device consisting of image processor 10b and camera 14b.
  • the two cameras 14a and 14b and the image processors 10a and 10b form a stereoscopic camera arrangement, preferably the image processor 10b is identical to the image processor 10a.
  • the 3D image analyzer 400 is configured, in accordance with a basic embodiment, to receive at least a first set of image data determined on the basis of a first image (see camera 14a) and a second set of image data determined on the basis of a second image (see camera 14b), wherein the first and second images map a pattern from different perspectives, and to calculate therefrom a 3D view vector.
  • the 3D image analyzer 400 includes a position calculator 404 and an alignment calculator 408.
  • the position calculator 404 is configured to calculate a position of the pattern in a three-dimensional space based on the first set, the second set, and a geometric relationship between the perspectives or the first and the second camera 14a and 14b.
  • the alignment calculator 408 is configured to calculate a 3D view vector, e.g. a line of sight, according to which the recognized pattern is aligned in three-dimensional space, the calculation being based on the first set, the second set and the calculated position (see position calculator 404).
  • Other embodiments may also work with the image data of one camera and a further set of information (e.g. relative or absolute positions of characteristic points on the face or eye) that is used to calculate the position of the pattern (e.g. pupil or iris center) and to select the actual line-of-sight vector.
  • a so-called 3D camera system model can be consulted which, for example, has all model parameters, such as position parameters and optical parameters (see cameras 14a and 14b), stored in a configuration file.
  • the model stored or read in the 3D image analyzer 400 includes data regarding the camera unit, i.e. regarding the camera sensor (e.g. pixel size, sensor size and resolution) and the lenses used (e.g. focal length and lens distortion), data or characteristics of the object to be detected (e.g. characteristics of an eye) and data regarding other relevant objects (e.g. a display, in case the system 1000 is used as an input device).
  • the 3D position calculator 404 calculates the eye position or the pupil center on the basis of the two or more camera images (see cameras 14a and 14b) by triangulation. For this purpose, it receives 2D coordinates of a point in the two camera images (see cameras 14a and 14b) via the process chain consisting of image processors 10a and 10b, geometry converter 202 and selectively adaptive data processor 300. From the transmitted 2D coordinates, with the aid of the 3D camera system model and taking into account the optical parameters, the light beams which have mapped the 3D point as a 2D point onto the sensor are calculated for both cameras 14a and 14b.
  • the point at which the two straight lines have the shortest distance to each other is assumed to be the position of the sought 3D point (a sketch of this triangulation follows below).
  • This 3D position, along with an error measure describing the accuracy of the passed 2D coordinates in connection with the model parameters, is either output via the interface 18a or passed on to the viewing direction calculator 408.
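  • The triangulation just described (two light beams modelled as straight lines, the sought 3D point taken at their point of closest approach, with the remaining gap as error measure) can be sketched as follows; ray origins and directions are assumed to be already given in a common world coordinate system by the 3D camera system model.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Closest point between two viewing rays p1 + t*d1 and p2 + s*d2.

    Returns the midpoint of the shortest connecting segment (taken as the
    3D pupil position) and the segment length as an error measure.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:                 # rays are (almost) parallel
        t, s = 0.0, e / c
    else:
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
    q1, q2 = p1 + t * d1, p2 + s * d2
    return (q1 + q2) / 2.0, float(np.linalg.norm(q1 - q2))

# placeholder rays for two cameras 60 mm apart, both looking roughly along +z
point, err = triangulate_midpoint(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                                  np.array([60.0, 0.0, 0.0]), np.array([-0.1, 0.0, 1.0]))
```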
  • the viewing direction calculator 408 can determine the viewing direction from two elliptical projections of the pupil onto the camera sensors without calibration and without knowledge of the distance between the eyes and the camera system.
  • the viewing direction calculator 408 uses, in addition to the 3D position parameters of the image sensors, the ellipse parameters which have been determined by means of the geometry analyzer 202 and the position determined by means of the position calculator 404.
  • virtual camera units are calculated by rotating the real camera units such that their optical axes pass through the 3D pupil center (a sketch of the rotation step follows below).
  • projections of the pupil on the virtual sensors are respectively calculated from the projections of the pupil on the real sensors, so that two virtual ellipses are created.
  • two viewpoints of the eye can be calculated for each image sensor on any plane parallel to the respective virtual sensor plane.
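  • A minimal sketch of the rotation towards a virtual camera unit whose optical axis passes through the 3D pupil centre is given below; it only shows how a rotation matrix aligning the real optical axis with the direction to the pupil centre can be built, and the camera position, optical axis and pupil centre used are placeholder values.

```python
import numpy as np

def rotation_aligning(a, b):
    """Rotation matrix R with R @ a_normalized == b_normalized (Rodrigues formula)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(a @ b)
    if np.isclose(c, -1.0):               # opposite directions: rotate 180 deg about an orthogonal axis
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# rotate the real optical axis onto the ray from the camera centre to the 3D pupil centre
camera_pos = np.array([0.0, 0.0, 0.0])
optical_axis = np.array([0.0, 0.0, 1.0])
pupil_center = np.array([50.0, 20.0, 600.0])
R_virtual = rotation_aligning(optical_axis, pupil_center - camera_pos)
```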
  • An analytical determination of the 3D eye position and 3D viewing direction using a 3D spatial model allows any number of cameras (greater than 1) and any camera position in 3D space.
  • the short latency with the simultaneously high frame rate enables a real-time capability of the system 1000 described.
  • the so-called time regimes may also be fixed, so that the time differences between successive results are constant.
  • There are several possibilities for determining the 3D pupil center point. One of them is based on the evaluation of relations between characteristic points in the first camera image. In this case, starting from the pupil center point in the first camera image and taking into account the optical system of the camera, a straight line is calculated, as described above, which leads through the 3D pupil center; however, it is not yet known where on this straight line the sought pupil center is located. For this purpose, the distance between the camera (more precisely, the first principal point H1 of the camera, see Fig. 8a) and the pupil center is required. This information can be estimated if at least two characteristic features are determined in the first camera image (e.g. the pupil centers) and their distance from each other is known as a statistically determined value, e.g. over a large group of people. Then, the distance between the camera and the 3D pupil center can be estimated by ratioing the determined distance (e.g. in pixels) between the characteristic features to the distance (e.g. in pixels) of the same features, known as a statistical quantity, at a known distance to the camera (a sketch follows below).
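  • The ratio-based distance estimate described in the previous item reduces to a single expression; the pixel distances and the reference distance below are placeholder numbers for illustration.

```python
def estimate_distance(measured_px, reference_px, reference_distance_mm):
    """Distance estimate by ratioing pixel distances of two characteristic features.

    reference_px is the pixel distance of the same features measured at the
    known reference_distance_mm; under a pinhole model the image distance
    scales inversely with the object distance.
    """
    return reference_distance_mm * reference_px / measured_px

# e.g. pupil centres 85 px apart now, 120 px apart at a known 500 mm reference distance
print(estimate_distance(85.0, 120.0, 500.0))   # ~705.9 mm
```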
  • Another variant for obtaining the 3D pupil center point is that its position, or its distance from the camera, is provided to the 3D image analyzer within the second set of information (e.g. from an upstream 3D face detection module in which the positions of characteristic points of the face or eye area are determined in 3D space).
  • Fig. 5b shows an image of the visible part of the eyeball (bordered in green) with the pupil and the two possible viewing directions v1 and v2 projected into the image.
  • an evaluation based on the sclera (the white of the eye surrounding the iris) can take place in the camera image. Two rays are defined (starting at the pupil center and being infinitely long), one in the direction of v1 and one in the direction of v2. The two rays are projected into the camera image of the eye and run there from the pupil center to the edge of the image. The ray that sweeps over fewer pixels belonging to the sclera belongs to the actual line-of-sight vector vb. The pixels of the sclera differ by their gray value from those of the adjoining iris and those of the eyelids.
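  • A minimal sketch of this sclera-based selection follows, assuming a grayscale eye image, the projected 2D directions of v1 and v2 and a simple brightness threshold for sclera pixels; the threshold and step size are illustrative assumptions.

```python
import numpy as np

def count_sclera_pixels(gray, start, direction, sclera_threshold=180, step=1.0):
    """Walk from the pupil centre towards the image border and count bright (sclera) pixels."""
    h, w = gray.shape
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(start, float)
    count = 0
    while 0 <= pos[0] < w and 0 <= pos[1] < h:
        if gray[int(pos[1]), int(pos[0])] >= sclera_threshold:
            count += 1
        pos += step * direction
    return count

def choose_gaze_vector(gray, pupil_px, v1_2d, v2_2d, v1_3d, v2_3d):
    """The candidate whose projected ray crosses fewer sclera pixels is the actual gaze vector."""
    n1 = count_sclera_pixels(gray, pupil_px, v1_2d)
    n2 = count_sclera_pixels(gray, pupil_px, v2_2d)
    return v1_3d if n1 <= n2 else v2_3d
```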
  • This method reaches its limits if the face belonging to the recorded eye is turned too far away from the camera (i.e. the angle between the optical axis of the camera and the normal vector of the face plane becomes too large).
  • an evaluation of the position of the pupil center can be made within the eye opening.
  • the position of the pupil center within the visible part of the eyeball or within the eye opening can be used to select the actual line of sight vector.
  • One way to do this is to define two rays (starting at the pupil center and being infinitely long), one in the direction of v1 and one in the direction of v2.
  • the two beams are projected into the camera image of the eye and run there from the pupil center to the edge of the image.
  • the distance between the pupil center and the edge of the eye opening (shown in green in FIG. 5b) is determined along both beams in the camera image.
  • the ray that results in the shorter distance belongs to the actual line-of-sight vector. This method reaches its limits when the face belonging to the recorded eye is turned too far away from the camera (i.e. the angle between the optical axis of the camera and the normal vector of the face plane becomes too large).
  • an evaluation of the position of the pupil center can be made to a reference pupil center.
  • the position of the pupil center determined in the camera image within the visible part of the eyeball or within the eye opening can be used together with a reference pupil center to select the actual line of sight vector.
  • One way to do this is to define two rays (starting at the pupil center and being infinitely long), one in the direction of v1 and one in the direction of v2.
  • the two beams are projected into the camera image of the eye and run there from the pupil center to the edge of the image.
  • the reference pupil center point within the eye opening corresponds to the pupil center point at the instant that the eye looks directly towards the camera used for image recording (more precisely in the direction of the first main point of the camera).
  • the ray projected into the camera image that has the greater distance to the reference pupil center in the image belongs to the actual line-of-sight vector.
  • Possibility 1 (special application): The reference pupil center results from the determined pupil center, in the case where the eye looks directly in the direction of the camera sensor center point. This is the case if the pupil contour on the virtual sensor plane (see description for viewing direction calculation) describes a circle.
  • Possibility 2: as a rough estimate of the position of the reference pupil center, the centroid of the area of the eye opening can be used. This method of estimation reaches its limits when the plane in which the face lies is not parallel to the sensor plane of the camera. This limitation can be compensated if the inclination of the face plane relative to the camera sensor plane is known (e.g. from a previously performed determination of the head position and orientation) and is used to correct the position of the estimated reference pupil center. This method also requires that the distance between the pupil center and the optical axis of the virtual sensor be much smaller than the distance between the pupil center and the camera.
  • Possibility 3 (general application): if the 3D position of the eye center is available, a straight line through the 3D eye center point and the virtual sensor center point can be determined, as well as the intersection of this line with the surface of the eyeball. The reference pupil center results from the position of this intersection converted into the camera image.
  • an ASIC (application-specific chip)
  • the Hough processor used here, or the method executed on the Hough processor, remains very robust and insusceptible to interference. It should be noted at this point that the Hough processor 100 explained in Fig. 2a can be used in different combinations with the features presented above, in particular with regard to Fig. 5.
  • Fields of application for Hough processors are, for example, microsleep detectors or fatigue detectors as driver assistance systems in the automotive sector (or, in general, safety-relevant human-machine interfaces). By evaluating the eyes (e.g. the covering of the pupil as a measure of the degree of opening) and taking into account the view points and the focus, a specific fatigue pattern can be detected. Furthermore, the Hough processor can be used in input devices or input interfaces for technical devices, where the eye position and viewing direction serve as input parameters. Concrete applications would be the analysis or support of the user when viewing screen contents, e.g. the highlighting of certain focused areas. Such applications are particularly interesting in the area of assisted living, in computer games, in the optimization of 3D visualizations by evaluating the viewing direction, in market and media research, and in ophthalmological diagnostics and therapy.
  • another embodiment relates to a method for Hough processing, comprising the steps of: processing a plurality of samples, each having an image, using a pre-processor, wherein the image of the respective sample is rotated and/or mirrored, so that a plurality of versions of the image of the respective sample is output for each sample; and detecting a predetermined pattern in the plurality of samples on the basis of the plurality of versions using a Hough transformer that has a delay filter with a filter characteristic, wherein the filter characteristic is set depending on the selected predetermined pattern.
  • the adaptive characteristic may also refer to the post-processing characteristic (bending or distortion characteristic) in a fast 2D correlation. This implementation will be explained with reference to FIGS. 4a to 4d.
  • Fig. 4a shows a processing chain 1000 of a fast 2D correlation.
  • the processing chain of the 2D correlation comprises at least the functional blocks 1105 for the 2D bending and 1110 for the merging.
  • the procedure for 2D bending is illustrated in FIG. 4b.
  • Fig. 4b shows an exemplary compilation of templates. How a Hough feature can be extracted on the basis of this processing chain 1000 becomes clear with reference to Fig. 4c together with Fig. 4d.
  • Fig. 4c illustrates the pixel-by-pixel correlation with n templates (in this case, for example, for straight lines of different slope) for recognizing the ellipse 1115, while Fig. 4d shows the result of the pixel-by-pixel correlation, typically with a maximum search over the n result images.
  • Each result image contains a Hough feature per pixel.
  • In the following, this Hough processing is explained in the overall context.
  • the delay filter is replaced by fast 2D correlation.
  • the previous delay filter is able to map n characteristics of a specific pattern. These n values are stored as templates in the memory. Subsequently, the preprocessed image (eg binary edge image or gradient image) is traversed pixel by pixel.
  • the correlation of the individual templates with the image content can be carried out both in the spatial domain and in the frequency domain (a sketch follows below).
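  • As an illustration of the pixel-by-pixel correlation with n templates and the subsequent maximum search over the n result images, a minimal sketch follows; the use of SciPy's FFT-based convolution and the placeholder templates are assumptions for illustration, not part of the FPGA implementation described here.

```python
import numpy as np
from scipy.signal import fftconvolve

def hough_by_correlation(edge_image, templates):
    """Correlate a binary edge image with n templates and keep, per pixel,
    the best correlation value and the index of the winning template."""
    responses = []
    for t in templates:
        kernel = t[::-1, ::-1]                        # correlation = convolution with flipped template
        responses.append(fftconvolve(edge_image, kernel, mode="same"))
    stack = np.stack(responses)                       # shape: (n, H, W)
    best_value = stack.max(axis=0)                    # occurrence probability per pixel
    best_index = stack.argmax(axis=0)                 # which template (e.g. line slope / curvature) won
    return best_value, best_index

edge = (np.random.rand(64, 64) > 0.9).astype(float)
templates = [np.eye(9), np.fliplr(np.eye(9))]         # placeholder "straight line" templates
value_img, index_img = hough_by_correlation(edge, templates)
```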
  • the microsleep detector is a system that consists of at least one image capture device, a lighting unit, a processing unit and an acoustic and/or visual signaling device.
  • By evaluating an image recorded of the user, the device is capable of recognizing the onset of microsleep, tiredness or exhaustion of the user and of warning the user.
  • the system can, for example, be configured in such a way that a CMOS image sensor is used and the scene is illuminated in the infrared range. This has the advantage that the device works independently of the ambient light and, in particular, does not dazzle the user.
  • the processing unit used is an embedded processor system that executes a software code on an underlying operating system.
  • the signaling device can, for example, consist of a multi-frequency buzzer and an RGB LED.
  • the evaluation of the recorded image can take the form that in a first processing stage, face and eye detection and eye analysis are performed with a classifier. This level of processing provides initial clues for the orientation of the face, the eye positions, and the degree of lid closure.
  • An eye model used for this purpose can, for example, consist of: a pupil and/or iris position, a pupil and/or iris size, a description of the eyelids and of the eye corners. It is sufficient if at any time only some of these components are found and evaluated. The individual components can also be tracked over several images, so that they do not have to be completely searched for anew in each image. Hough features can be used to perform the face detection, the eye detection, the eye analysis or the eye fine analysis.
  • a 2D image analyzer can be used for face detection or for eye detection or eye analysis. For smoothing the result values or intermediate results or value profiles which have been determined during face detection or eye detection or eye analysis or eye fine analysis, the described adaptive data processor can be used.
  • a temporal evaluation of the lid closure degree and/or of the results of the eye fine analysis can be used to determine the microsleep or the fatigue or distraction of the user (a sketch follows below).
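  • As an illustration of such a temporal evaluation, the sketch below accumulates the lid closure degree over a sliding window of frames; the window length and the decision thresholds are purely illustrative assumptions and not values specified in this text.

```python
from collections import deque

class ClosureMonitor:
    """Track the fraction of recent frames in which the eye was mostly closed."""

    def __init__(self, window_frames=300, closed_threshold=0.8, alarm_fraction=0.15):
        self.window = deque(maxlen=window_frames)
        self.closed_threshold = closed_threshold   # lid closure degree counted as "closed"
        self.alarm_fraction = alarm_fraction       # fraction of closed frames that triggers a warning

    def update(self, closure_degree):
        """closure_degree: 0.0 = fully open, 1.0 = fully closed (per analysed frame)."""
        self.window.append(closure_degree >= self.closed_threshold)
        fraction_closed = sum(self.window) / len(self.window)
        # in practice one would additionally wait until the window is sufficiently filled
        return fraction_closed >= self.alarm_fraction

monitor = ClosureMonitor()
alarm = monitor.update(0.95)
```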
  • the calibration-free viewing direction determination described in connection with the 3D image analyzer can also be used to obtain better results in the determination of the microsleep or the fatigue or distraction of the user.
  • the selective adaptive data processor can also be used.
  • the Hough processor in the image input stage may comprise a camera control device.
  • a so-called view point, i.e. the intersection of the line of sight with another plane, can be determined, e.g. in order to control a PC.
  • the implementation of the methods presented above is platform-independent, so that the methods presented above can also be executed on other hardware platforms, e.g. a PC.
  • Although aspects have been described in the context of a device, it is understood that these aspects also constitute a description of the corresponding method, so that a block or component of a device is also to be understood as a corresponding method step or as a feature of a method step. Similarly, aspects described in connection with or as a method step also represent a description of a corresponding block, detail or feature of a corresponding device.
  • Some or all of the method steps may be performed by an apparatus (using a hardware apparatus) such as a processor eg a microprocessor, a programmable computer or an electronic circuit. In some embodiments, some or more of the important method steps may be performed by such an apparatus. Depending on particular implementation requirements, embodiments of the invention may be implemented in hardware or in software.
  • the implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk or another magnetic or optical memory, on which electronically readable control signals are stored which can cooperate, or do cooperate, with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium can be computer readable.
  • some embodiments according to the invention include a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is performed.
  • embodiments of the present invention may be implemented as a computer program product having a program code, wherein the program code is operable to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored, for example, on a machine-readable carrier.
  • Other embodiments include the computer program for performing any of the methods described herein, wherein the computer program is stored on a machine-readable medium.
  • an embodiment of the method according to the invention is thus a computer program which has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • a further embodiment of the inventive method is thus a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program is recorded for performing one of the methods described herein.
  • a further exemplary embodiment of the method according to the invention is thus a data stream or a sequence of signals which represents the computer program for performing one of the methods described herein.
  • the data stream or the sequence of signals may be configured, for example, to be transferred via a data communication connection, for example via the Internet.
  • Another embodiment includes a processing device, such as a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
  • Another embodiment includes a computer on which the computer program is installed to perform one of the methods described herein.
  • a further embodiment according to the invention comprises a device or a system adapted to transmit a computer program for performing at least one of the methods described herein to a receiver. The transmission can take place, for example, electronically or optically.
  • the receiver may be, for example, a computer, a mobile device, a storage device or a similar device.
  • the device or system may include a file server for transmitting the computer program to the recipient.
  • in some embodiments, a programmable logic device (e.g. a field programmable gate array, an FPGA) may be used to perform some or all of the functionalities of the methods described herein.
  • a field programmable gate array may cooperate with a microprocessor to perform one of the methods described herein.
  • the methods are performed by any hardware device. This may be a universal hardware such as a computer processor (CPU) or hardware specific to the process, such as an ASIC.
  • the "Integrated Eyetracker” comprises a compilation of FPGA-optimized algorithms that are suitable for extracting (elliptical) features (Hough features) from a camera live image by means of a parallel Hough transformation and calculating a viewing direction from them.
  • the pupillary ellipse can be determined.
  • the 3D position of the pupil center as well as the 3D viewing direction and the pupil diameter can be determined.
  • the position and shape of the ellipses in the camera images are used for the calculation; there is no need to calibrate the system for each user, and no knowledge of the distance between the cameras and the analyzed eye is required.
  • the image processing algorithms used are in particular characterized in that they are optimized for processing on an FPGA (field programmable gate array).
  • the algorithms allow very fast image processing with a constant refresh rate, minimal latency and minimal resource consumption in the FPGA.
  • These modules are therefore predestined for time-, latency- and safety-critical applications (e.g. driver assistance systems), medical diagnostic systems (e.g. perimeters) as well as applications such as human-machine interfaces (e.g. for mobile devices) that require a small construction volume.
  • the overall system determines a list of multi-dimensional Hough features from two or more camera images in which the same eye is depicted, and in each case calculates the position and shape of the pupillary ellipse on the basis thereof. From the parameters of these two ellipses as well as solely from the position and orientation of the cameras to one another, the 3D position of the pupil center as well as the 3D viewing direction and the pupil diameter can be determined completely without calibration.
  • As a hardware platform, a combination of at least two image sensors, an FPGA and/or a downstream microprocessor system is used (without a PC being absolutely necessary).
  • FIG. 6 shows a block diagram of the individual functional modules in the Integrated Eyetracker.
  • the block diagram shows the individual processing stages of the Integrated Eyetracker. The following is a detailed description of the modules.
  • Filter core with variable size, consisting of delay elements. For adaptive adaptation of the filter to the sought patterns, delay elements can be switched on and off during runtime.
  • Each column of the filter stands for a certain characteristic of the sought structure (curvature or straight line increase)
  • For each image pixel, the filter returns a point in the Hough space containing the following information:
  • type of pattern (e.g. straight line or semicircle)
  • the first Hough feature combination is selected and the ellipse which most likely represents the pupil in the camera image is fitted
  • the Hough core size is tracked within defined limits in order to increase the accuracy of the Hough transformation results in the detection of the ellipse extreme points
  • the current input value is used for smoothing if it does not fall into one of the following categories:
  • the smoothing coefficient is dynamically adjusted within defined limits to the trend of the data to be smoothed: o Reduction in the case of more or less constant value progression of the data series
  • the model includes the following elements at the moment:
  • the viewpoint of a viewer can be calculated on another object in the 3D model (eg on a display) as well as the focused area of the viewer
  • An error measure describes the accuracy of the passed 2D coordinates in conjunction with the model parameters. Detailed description:
  • the light beams which have mapped the 3D point as 2D points on the sensors are calculated for both cameras using the "3D camera system model" (especially taking into account the optical parameters).
  • One aspect of the invention relates to an autonomous (PC-independent) system, which in particular uses FPGA-optimized algorithms, and is suitable for detecting a face in a camera live image and determining its (spatial) position.
  • the algorithms used are characterized in particular by the fact that they are optimized for processing on an FPGA (field programmable gate array) and, in contrast to existing methods, manage without recursions in the processing.
  • the algorithms enable very fast image processing with a constant frame rate, minimal latency and minimal resource consumption in the FPGA.
  • these modules are predestined for time / latency / security critical applications (e.g., driver assistance systems) or applications such as human machine interfaces (e.g., for mobile devices) that require a low volume of construction.
  • the spatial position of the user for specific points in the image
  • the input image is successively brought into different scaling levels (until the smallest scaling level is reached) and each searched with a multi-level classifier for faces
  • Based on the detected face position, classifiers only deliver inaccurate eye positions (the position of the eyes, in particular the pupil center, is not determined analytically (or measured) and is therefore subject to high inaccuracy).
  • the determined face and eye positions are only available in 2D image coordinates, not in 3D
  • the overall system determines the face position from a camera image (in which a face is shown) and, using this position, determines the positions of the pupil centers of the left and right eye. If two or more cameras with a known orientation to each other are used, these two points can be specified for three-dimensional space.
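  • The successive downscaling and per-level classifier search mentioned above can be sketched as follows; the window size, step width, nearest-neighbour scaler and the classify_window stub are illustrative assumptions standing in for the parallel image scaler and the census-based classifier.

```python
import numpy as np

def multiscale_face_search(image, classify_window, window=24, scale_step=0.8, min_size=24):
    """Search a grayscale image for faces over successively downscaled versions.

    classify_window(patch) is a stand-in for the multi-stage classifier and
    must return True for face-like patches; detections are mapped back to
    the coordinate system of the original image.
    """
    detections = []
    scale = 1.0
    current = image
    while min(current.shape[:2]) >= min_size:
        h, w = current.shape[:2]
        for y in range(0, h - window + 1, 4):
            for x in range(0, w - window + 1, 4):
                if classify_window(current[y:y + window, x:x + window]):
                    detections.append((int(x / scale), int(y / scale), int(window / scale)))
        scale *= scale_step
        new_h, new_w = int(image.shape[0] * scale), int(image.shape[1] * scale)
        if new_h < min_size or new_w < min_size:
            break
        # nearest-neighbour downscaling as a placeholder for the parallel image scaler
        ys = (np.arange(new_h) / scale).astype(int)
        xs = (np.arange(new_w) / scale).astype(int)
        current = image[np.ix_(ys, xs)]
    return detections

found = multiscale_face_search(np.random.rand(120, 160), lambda patch: patch.mean() > 0.95)
```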
  • the two determined eye positions can be further processed in systems that use the "Integrated Eyetracker".
  • the "Parallel image scaler”, "Parallel face scanner”, “Parallel eye analyzer”, “Parallel pupil analyzer”, “Temporal smart smoothing filter”, “3D camera system model” and “3D position calculation” relate to individual functional modules of the overall system ( FPGA Facetracker). They are arranged in the image processing chain of the FPGA Facetracker as follows:
  • Fig. 7a shows a block diagram of the individual functional modules in the FPGA Facetracker.
  • the function modules "3D camera system model" and "3D position calculation" are not absolutely necessary for face tracking, but are used to determine spatial positions when a stereoscopic camera system is used and suitable points from both cameras are combined (for example, to determine the 3D head position by combining the 2D face centers of both camera images).
  • the module "Feature extraction (Classification)" of the FPGA Facetracker builds on the feature extraction and classification of Küblbeck/Ernst from Fraunhofer IIS (Erlangen) and uses an adapted version of its classification based on census features.
  • the block diagram shows the individual processing stages of the FPGA Facetracking System. The following is a detailed description of the modules.
  • Fig. 7b shows the output image (original image) and result (downscaling image) of the parallel image scaler.
  • the image coordinates of the respective scaling stage are transformed into the image coordinate system of the target matrix on the basis of various criteria:
  • Detects a face from classification results of several scaling levels, which are arranged together in a matrix
  • the result of the classification (right) represents the input for the parallel face finder.
  • the eye search described below for each eye is performed in a defined area (eye area) within the face region provided by the "Parallel face finder":
  • Detects the position of the pupil centers within the detected eyes from a previously determined eye position (this increases the accuracy of the eye position, which is important for surveying or subsequent evaluation of the pupil).
  • the pupil center is detected separately in horizontal and vertical direction as explained below:
  • a set of filter parameters can be used to set its behavior when the filter is initialized
  • the current input value is used for smoothing if it does not fall into one of the following categories:
  • the corresponding downscaling level is a nonsensical value (value found in a downscaling level that is too far away)
  • the smoothing coefficient is dynamically adjusted within defined limits to the trend of the data to be smoothed: o Reduction in the case of more or less constant value progression of the data series
  • the viewpoint of an observer on another object in the 3D model can also be calculated as well as the focused area of the observer
  • An error measure describes the accuracy of the passed 2D coordinates in conjunction with the model parameters.
  • the light rays which have mapped the 3D point as 2D points on the sensors are calculated for both cameras using the "3D camera system model" (especially taking into account the optical parameters). These rays of light are described as straight lines in the 3D space of the model.
  • Head and eye parameters (inter alia position)
  • The aim of the following is to develop, on the basis of the parallel Hough transformation, a robust method for feature extraction. For this purpose, the Houghcore is revised and a feature extraction method is introduced that reduces the results of the transformation and breaks them down to a few "feature vectors" per image. The newly developed method is then implemented and tested in a Matlab toolbox, and finally transferred to an FPGA implementation.
  • the parallel Hough transformation uses Houghcores of different sizes, which must be configured using configuration matrices for each application.
  • the mathematical relationships and methods for creating such configuration matrices are shown below.
  • the Matlab script alc_config_lines_curvatures.m uses these methods and creates configuration matrices for straight lines and semicircles in various sizes.
  • To create the configuration matrices it is first necessary to calculate a set of curves in discrete representation and for different Houghcore sizes.
  • the requirements (formation rules) on the family of curves have already been presented. Taking these formation rules into account, straight lines and semicircles are particularly suitable for configuring the Houghcores.
  • Houghcores with semi-circle (or curvature) configurations are used for line of sight determination.
  • the configurations for straight lines (or straight line sections) are also derived here.
  • the mathematical relationships for determining the curves for straight lines are illustrated.
  • the families of curves can be generated by varying the slope m. For this, the straight-line slope from 0° to 45° is split into equal intervals. The number of intervals depends on the Houghcore size and corresponds to the number of Houghcore lines. The slope can be tuned via the control variable y_core from 0 to core height.
  • the function values of the curves are calculated by varying the control variable (replaced in (B3) by x_core), whose values run from 0 to core width.
  • the radius is still missing; it is obtained by inserting (B6) into (B7) and by further transformations.
  • variable h must be from 0 to
  • the entries of the configuration matrices are either zeros or ones.
  • a one stands for a used delay element in the Houghcore.
  • the circle configurations always represent circular arcs around the vertex of the semicircle. Only the largest y-index number of the family of curves (smallest radius) represents a complete semicircle.
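  • As an illustration of how a family of curves for straight-line configurations can be generated, a minimal sketch follows; turning such a family into the final 0/1 configuration matrix (ones marking used delay elements) follows formulas (B3) to (B8), which are not reproduced in this excerpt, so the discretization below is only a plausible assumption.

```python
import numpy as np

def straight_line_curve_family(core_height, core_width):
    """Family of discretized straight-line curves for one Houghcore size.

    The slope is split into equal intervals between 0 and 45 degrees, one
    interval per Houghcore line; row k holds the rounded function values of
    the k-th curve for x_core = 0 .. core_width - 1.
    """
    x = np.arange(core_width)
    family = np.empty((core_height, core_width), dtype=int)
    for k in range(core_height):
        m = np.tan(np.deg2rad(45.0 * k / max(core_height - 1, 1)))   # slope of curve k
        family[k] = np.rint(m * x).astype(int)
    return family

print(straight_line_curve_family(8, 8))
```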
  • the developed configurations can be used for the new Houghcore.
  • A major disadvantage of Holland-Neil's FPGA implementation is the rigid configuration of the Houghcores.
  • the delay lines must be parameterized before the synthesis and are then stored permanently in the hardware structures (Holland-Neil, p. 48-49). Changes during runtime (eg Houghcore size) are no longer possible. The new procedure should become more flexible at this point.
  • the new Houghcore should also be completely reconfigurable during the runtime in the FPGA. This has several advantages. On the one hand, not two Houghcores (Type 1 and Type 2) need to be stored in parallel, and on the other hand, different configurations for straight lines and semicircles can be used. In addition, the Houghcore size can be changed flexibly during runtime.
  • In the previous Houghcore structure, each delay element consists of a delay and a bypass, and it is determined before the FPGA synthesis which path is to be used.
  • this structure is extended by a multiplexer, a further register for configuring the delay element (switching of the multiplexer) and a pipeline delay.
  • the configuration registers can be modified during runtime. In this way different configuration matrices can be imported into the Houghcore.
  • the synthesis tool in the FPGA has more freedom in implementing the Houghcore design, and higher clock rates can be achieved.
  • Pipeline delays break up time-critical paths within the FPGA structures. In Fig. 9d, the new design of the delay elements is illustrated.
  • the delay elements of the new Houghcore have a somewhat more complex structure.
  • An additional register is needed for the flexible configuration of the delay element, and the multiplexer occupies additional logic resources (it must be implemented in the FPGA in a LUT).
  • the pipeline delay is optional.
  • modifications to the design of the Houghcore were made.
  • the new Houghcore is illustrated in Figure 9e.
  • Each column element requires one clock cycle and there is a latency of a few clock cycles through the BRAM and configuration logic. Although overall latency for reconfiguration is disadvantageous, it can be accepted for video-based image processing. Normally, the video data streams recorded with a CMOS sensor have horizontal and vertical blanking. The reconfiguration can thus take place without problems in the horizontal blanking time.
  • the size of the Houghcore structure implemented in the FPGA also dictates the maximum size possible for Houghcore configurations. When small configurations are used, they are vertically centered and aligned in the horizontal direction on column 1 of the Houghcore structure (see Fig. 9f). Unused elements of the Houghcore structure are all filled with delays. The correct alignment of smaller configurations is important for the correction of the x-coordinates (see Formulas (B17) to (B19)).
  • the Houghcore is fed as before with a binary edge image that passes through the configured delay lines.
  • the column sums are calculated over the entire Houghcore, and each is compared with the sum signal of the previous column; if a column returns a higher sum value, the previously stored sum value is overwritten (a sketch of this maximum search follows after the next item).
  • the new Houghcore returns a column sum value and its associated column number. On the basis of these values, a statement can later be made as to which structure was found (represented by the column number) and with which occurrence probability it was detected (represented by the summation value).
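  • The column-wise maximum search described in the two previous items can be sketched as follows; the column sums are taken as already computed, whereas in the FPGA they are compared while the data is streamed through.

```python
def best_column(column_sums):
    """Return the highest column sum of the Houghcore and its column number."""
    best_value, best_index = column_sums[0], 0
    for index, value in enumerate(column_sums[1:], start=1):
        if value > best_value:            # a later column with a higher sum overwrites the result
            best_value, best_index = value, index
    return best_value, best_index

print(best_column([3, 7, 12, 9, 5]))      # -> (12, 2): structure of column 2 found with frequency 12
```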
  • the output signal of the Houghcore can also be referred to as the Hough space or accumulator space.
  • the Hough space of the parallel Hough transformation is present in the image coordinate system.
  • the feature extraction works on the records from the previous table. These data sets can be combined in a feature vector (B16).
  • the feature vector can also be referred to below as a Hough feature.
  • MV = [MV_X, MV_Y, MV_O, MV_KS, MV_H, MV_G-1, MV_A]
  • the two elements MV_O and MV_KS have different meanings for straight lines and semicircles.
  • the combination of orientation and curvature strength forms the position angle of the detected straight line section at an angle of 0 ° to 180 °.
  • the orientation addresses an angular range and the curvature strength stands for a concrete angle within this range.
  • the orientation is the position angle or the orientation of the semicircle. Semicircles can only be detected in four directions due to the principle.
  • in semicircle configurations, the curvature strength stands for the radius.
  • In addition to the orientation MV_O and the curvature strength MV_KS, a further special feature must be taken into account for the coordinates (MV_X and MV_Y) (see Fig. 9g). In the case of straight lines, the coordinates should always represent the midpoint and, in the case of semicircles or curvatures, always the vertex.
  • the y-coordinate can be corrected according to the implemented Houghcore structure and is independent of the size of the configuration used for the transformation (see Fig. 9f). Similar to a local filter, the y-coordinate is specified vertically centered. For the x-coordinate, a relation is established via the Houghcore column that provided the hit (in the feature vector, the Houghcore column is stored under the label MV_KS).
  • the non-maximum-suppression operator differs for straight lines and semicircles. Via the threshold values, a minimum and a maximum curvature strength MV_KS are specified, and a minimum frequency MV_H is determined.
  • the non-maximum-suppression operator can be considered a 3x3 local operator (see Fig. 9h). A valid semicircle (or curvature) feature arises whenever the condition of the nms operator in (B23) is satisfied and the thresholds according to formulas (B20) to (B22) are exceeded.
  • Non-maximum suppression suppresses Hough features that are not local maxima in the frequency space of the feature vectors. In this way, Hough features that do not contribute to the sought structure and are irrelevant for post-processing are discarded (a sketch follows below).
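  • A minimal sketch of the 3x3 non-maximum suppression over the frequency values of the Hough features follows; the additional thresholds on the curvature strength from formulas (B20) to (B22) are omitted here, and the minimum frequency is a placeholder value.

```python
import numpy as np

def non_maximum_suppression(frequency, min_frequency=10):
    """Keep only Hough features whose frequency MV_H is a local maximum in a
    3x3 neighbourhood and lies above a minimum threshold; other positions are zeroed."""
    h, w = frequency.shape
    kept = np.zeros_like(frequency)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = frequency[y, x]
            if centre < min_frequency:
                continue
            if centre >= frequency[y - 1:y + 2, x - 1:x + 2].max():
                kept[y, x] = centre
    return kept

kept = non_maximum_suppression(np.random.randint(0, 30, size=(16, 16)))
```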
  • the feature extraction is parameterized via only three thresholds, which can be sensibly set in advance. A detailed explanation of the threshold values is given in the following table (columns: threshold, description, comparable parameter of the Katzmann method).
  • Threshold for a minimum frequency, i.e. a column Hough sum value that must not be undershot.
  • a non-maximum suppression operator of size 3x3 (see Fig. 9h) can also be derived.
  • the non-maximum suppression can thus be based on the method in the Canny edge detection algorithm.
  • three cases can be distinguished (see Fig. 9i in combination with the above table). The case distinction applies to both rotated and non-rotated output images, since the inverse transformation of rotated coordinates only takes place after non-maximum suppression. Which nms operator to use depends on the Houghcore type as well as the angle range.
  • the angular range that a Houghcore provides with straight line configurations is divided by the angular range bisector.
  • the angle bisector can be specified as a Houghcore column (as a decimal fraction).
  • the mathematical relationship depending on the Houghcore size is described by formula (B24).
  • the angular range in which the Hough feature lies depends on the Houghcore column that delivered the hit (MV_KS), which can be compared directly with the Houghcore column bisecting the angular range.
  • the condition can be queried via the respective nms operator, similarly to the non-maximum suppression for curvatures (formulas (B25) to (B27)). If all conditions are met and the threshold values according to formulas (B20) to (B22) are exceeded, the Hough feature at position nms_2,2 can be adopted.

EP15701823.5A 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung Withdrawn EP3103059A1 (de)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP21203252.8A EP3968288A2 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014201997 2014-02-04
PCT/EP2015/052004 WO2015117905A1 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung

Related Child Applications (1)

Application Number Title Priority Date Filing Date
EP21203252.8A Division EP3968288A2 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung

Publications (1)

Publication Number Publication Date
EP3103059A1 true EP3103059A1 (de) 2016-12-14

Family

ID=52434840

Family Applications (4)

Application Number Title Priority Date Filing Date
EP21203252.8A Withdrawn EP3968288A2 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung
EP15702739.2A Withdrawn EP3103060A1 (de) 2014-02-04 2015-01-30 2d-bildanalysator
EP15701823.5A Withdrawn EP3103059A1 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung
EP15701822.7A Ceased EP3103058A1 (de) 2014-02-04 2015-01-30 Hough-prozessor

Family Applications Before (2)

Application Number Title Priority Date Filing Date
EP21203252.8A Withdrawn EP3968288A2 (de) 2014-02-04 2015-01-30 3d-bildanalysator zur blickrichtungsbestimmung
EP15702739.2A Withdrawn EP3103060A1 (de) 2014-02-04 2015-01-30 2d-bildanalysator

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP15701822.7A Ceased EP3103058A1 (de) 2014-02-04 2015-01-30 Hough-prozessor

Country Status (6)

Country Link
US (3) US10192135B2 (zh)
EP (4) EP3968288A2 (zh)
JP (3) JP6483715B2 (zh)
KR (2) KR101858491B1 (zh)
CN (3) CN106258010B (zh)
WO (4) WO2015117906A1 (zh)

Families Citing this family (122)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013109869A1 (en) 2012-01-20 2013-07-25 Magna Electronics, Inc. Vehicle vision system with free positional virtual panoramic view
WO2013173728A1 (en) 2012-05-17 2013-11-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display
CN104715227B (zh) * 2013-12-13 2020-04-03 北京三星通信技术研究有限公司 人脸关键点的定位方法和装置
JP6483715B2 (ja) * 2014-02-04 2019-03-13 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン ハフプロセッサ
DE102015202846B4 (de) 2014-02-19 2020-06-25 Magna Electronics, Inc. Fahrzeugsichtsystem mit Anzeige
US10445573B2 (en) * 2014-06-27 2019-10-15 Fove, Inc. Gaze detection device
US10318067B2 (en) * 2014-07-11 2019-06-11 Hewlett-Packard Development Company, L.P. Corner generation in a projector display area
US11049476B2 (en) 2014-11-04 2021-06-29 The University Of North Carolina At Chapel Hill Minimal-latency tracking and display for matching real and virtual worlds in head-worn displays
KR20160094190A (ko) * 2015-01-30 2016-08-09 한국전자통신연구원 시선 추적 장치 및 방법
JP6444233B2 (ja) * 2015-03-24 2018-12-26 キヤノン株式会社 距離計測装置、距離計測方法、およびプログラム
US20160363995A1 (en) * 2015-06-12 2016-12-15 Seeing Machines Limited Circular light element for illumination of cornea in head mounted eye-tracking
CN105511093B (zh) * 2015-06-18 2018-02-09 广州优视网络科技有限公司 3d成像方法及装置
US9798950B2 (en) * 2015-07-09 2017-10-24 Olympus Corporation Feature amount generation device, feature amount generation method, and non-transitory medium saving program
WO2017015580A1 (en) 2015-07-23 2017-01-26 Artilux Corporation High efficiency wide spectrum sensor
US10861888B2 (en) 2015-08-04 2020-12-08 Artilux, Inc. Silicon germanium imager with photodiode in trench
US10761599B2 (en) 2015-08-04 2020-09-01 Artilux, Inc. Eye gesture tracking
US10707260B2 (en) 2015-08-04 2020-07-07 Artilux, Inc. Circuit for operating a multi-gate VIS/IR photodiode
TW202335281A (zh) 2015-08-04 2023-09-01 光程研創股份有限公司 光感測系統
US10616149B2 (en) * 2015-08-10 2020-04-07 The Rocket Science Group Llc Optimizing evaluation of effectiveness for multiple versions of electronic messages
EP3783656B1 (en) 2015-08-27 2023-08-23 Artilux Inc. Wide spectrum optical sensor
JP6634765B2 (ja) * 2015-09-30 2020-01-22 株式会社ニデック 眼科装置、および眼科装置制御プログラム
EP3360023A4 (en) * 2015-10-09 2018-10-10 SZ DJI Technology Co., Ltd. Salient feature based vehicle positioning
US10739443B2 (en) 2015-11-06 2020-08-11 Artilux, Inc. High-speed light sensing apparatus II
US10886309B2 (en) 2015-11-06 2021-01-05 Artilux, Inc. High-speed light sensing apparatus II
US10418407B2 (en) 2015-11-06 2019-09-17 Artilux, Inc. High-speed light sensing apparatus III
US10741598B2 (en) 2015-11-06 2020-08-11 Atrilux, Inc. High-speed light sensing apparatus II
US10254389B2 (en) 2015-11-06 2019-04-09 Artilux Corporation High-speed light sensing apparatus
CN106200905B (zh) * 2016-06-27 2019-03-29 联想(北京)有限公司 信息处理方法及电子设备
JP2019531560A (ja) 2016-07-05 2019-10-31 ナウト, インコーポレイテッドNauto, Inc. 自動運転者識別システムおよび方法
WO2018016209A1 (ja) * 2016-07-20 2018-01-25 富士フイルム株式会社 注目位置認識装置、撮像装置、表示装置、注目位置認識方法及びプログラム
CN105954992B (zh) * 2016-07-22 2018-10-30 京东方科技集团股份有限公司 显示系统和显示方法
GB2552511A (en) * 2016-07-26 2018-01-31 Canon Kk Dynamic parametrization of video content analytics systems
US10417495B1 (en) * 2016-08-08 2019-09-17 Google Llc Systems and methods for determining biometric information
JP2019527832A (ja) 2016-08-09 2019-10-03 ナウト, インコーポレイテッドNauto, Inc. 正確な位置特定およびマッピングのためのシステムおよび方法
US10733460B2 (en) 2016-09-14 2020-08-04 Nauto, Inc. Systems and methods for safe route determination
JP6587254B2 (ja) * 2016-09-16 2019-10-09 株式会社東海理化電機製作所 輝度制御装置、輝度制御システム及び輝度制御方法
EP3305176A1 (en) 2016-10-04 2018-04-11 Essilor International Method for determining a geometrical parameter of an eye of a subject
US11361003B2 (en) * 2016-10-26 2022-06-14 salesforcecom, inc. Data clustering and visualization with determined group number
EP3535646A4 (en) 2016-11-07 2020-08-12 Nauto, Inc. SYSTEM AND METHOD FOR DETERMINING DRIVER DISTRACTION
WO2018097831A1 (en) * 2016-11-24 2018-05-31 Smith Joshua R Light field capture and rendering for head-mounted displays
JP6900609B2 (ja) * 2016-12-06 2021-07-07 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 広角画像を修正するシステム及び方法
DE102016224886B3 (de) * 2016-12-13 2018-05-30 Deutsches Zentrum für Luft- und Raumfahrt e.V. Verfahren und Vorrichtung zur Ermittlung der Schnittkanten von zwei sich überlappenden Bildaufnahmen einer Oberfläche
US20200125167A1 (en) * 2016-12-30 2020-04-23 Tobii Ab Eye/Gaze Tracking System and Method
US10282592B2 (en) * 2017-01-12 2019-05-07 Icatch Technology Inc. Face detecting method and face detecting system
DE102017103721B4 (de) * 2017-02-23 2022-07-21 Karl Storz Se & Co. Kg Vorrichtung zur Erfassung eines Stereobilds mit einer rotierbaren Blickrichtungseinrichtung
KR101880751B1 (ko) * 2017-03-21 2018-07-20 주식회사 모픽 무안경 입체영상시청을 위해 사용자 단말과 렌티큘러 렌즈 간 정렬 오차를 줄이기 위한 방법 및 이를 수행하는 사용자 단말
JP7003455B2 (ja) * 2017-06-15 2022-01-20 オムロン株式会社 テンプレート作成装置、物体認識処理装置、テンプレート作成方法及びプログラム
US10430695B2 (en) 2017-06-16 2019-10-01 Nauto, Inc. System and method for contextualized vehicle operation determination
US10453150B2 (en) 2017-06-16 2019-10-22 Nauto, Inc. System and method for adverse vehicle event determination
EP3420887A1 (en) 2017-06-30 2019-01-02 Essilor International Method for determining the position of the eye rotation center of the eye of a subject, and associated device
JP2019017800A (ja) * 2017-07-19 2019-02-07 富士通株式会社 コンピュータプログラム、情報処理装置及び情報処理方法
EP3430973A1 (en) * 2017-07-19 2019-01-23 Sony Corporation Mobile system and method
KR101963392B1 (ko) * 2017-08-16 2019-03-28 한국과학기술연구원 무안경식 3차원 영상표시장치의 동적 최대 시역 형성 방법
US11250589B2 (en) * 2017-08-25 2022-02-15 Chris Hsinlai Liu General monocular machine vision system and method for identifying locations of target elements
US10460458B1 (en) * 2017-09-14 2019-10-29 United States Of America As Represented By The Secretary Of The Air Force Method for registration of partially-overlapped aerial imagery using a reduced search space methodology with hybrid similarity measures
CN107818305B (zh) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
EP3486834A1 (en) * 2017-11-16 2019-05-22 Smart Eye AB Detection of a pose of an eye
CN108024056B (zh) * 2017-11-30 2019-10-29 Oppo广东移动通信有限公司 基于双摄像头的成像方法和装置
KR102444666B1 (ko) * 2017-12-20 2022-09-19 현대자동차주식회사 차량용 3차원 입체 영상의 제어 방법 및 장치
CN108334810B (zh) * 2017-12-25 2020-12-11 北京七鑫易维信息技术有限公司 视线追踪设备中确定参数的方法和装置
CN108875526B (zh) * 2018-01-05 2020-12-25 北京旷视科技有限公司 视线检测的方法、装置、系统及计算机存储介质
JP7109193B2 (ja) * 2018-01-05 2022-07-29 ラピスセミコンダクタ株式会社 操作判定装置及び操作判定方法
US10853674B2 (en) 2018-01-23 2020-12-01 Toyota Research Institute, Inc. Vehicle systems and methods for determining a gaze target based on a virtual eye position
US10817068B2 (en) * 2018-01-23 2020-10-27 Toyota Research Institute, Inc. Vehicle systems and methods for determining target based on selecting a virtual eye position or a pointing direction
US10706300B2 (en) * 2018-01-23 2020-07-07 Toyota Research Institute, Inc. Vehicle systems and methods for determining a target based on a virtual eye position and a pointing direction
US11105928B2 (en) 2018-02-23 2021-08-31 Artilux, Inc. Light-sensing apparatus and light-sensing method thereof
JP6975341B2 (ja) 2018-02-23 2021-12-01 アーティラックス・インコーポレイテッド 光検出装置およびその光検出方法
US11392131B2 (en) 2018-02-27 2022-07-19 Nauto, Inc. Method for determining driving policy
US11675428B2 (en) * 2018-03-29 2023-06-13 Tobii Ab Determining a gaze direction using depth information
WO2019199691A1 (en) 2018-04-08 2019-10-17 Artilux, Inc. Photo-detecting apparatus
CN108667686B (zh) * 2018-04-11 2021-10-22 国电南瑞科技股份有限公司 一种网络报文时延测量的可信度评估方法
KR20190118965A (ko) * 2018-04-11 2019-10-21 주식회사 비주얼캠프 시선 추적 시스템 및 방법
WO2019199035A1 (ko) * 2018-04-11 2019-10-17 주식회사 비주얼캠프 시선 추적 시스템 및 방법
US10854770B2 (en) 2018-05-07 2020-12-01 Artilux, Inc. Avalanche photo-transistor
US10969877B2 (en) 2018-05-08 2021-04-06 Artilux, Inc. Display apparatus
CN108876733B (zh) * 2018-05-30 2021-11-09 上海联影医疗科技股份有限公司 一种图像增强方法、装置、设备和存储介质
US10410372B1 (en) * 2018-06-14 2019-09-10 The University Of North Carolina At Chapel Hill Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration
US10803618B2 (en) * 2018-06-28 2020-10-13 Intel Corporation Multiple subject attention tracking
CN109213031A (zh) * 2018-08-13 2019-01-15 祝爱莲 窗体加固控制平台
KR102521408B1 (ko) * 2018-08-27 2023-04-14 삼성전자주식회사 인포그래픽을 제공하기 위한 전자 장치 및 그에 관한 방법
AU2019327554A1 (en) * 2018-08-30 2021-03-18 Splashlight Holding Llc Technologies for enabling analytics of computing events based on augmented canonicalization of classified images
CN109376595B (zh) * 2018-09-14 2023-06-23 杭州宇泛智能科技有限公司 基于人眼注意力的单目rgb摄像头活体检测方法及系统
JP6934001B2 (ja) * 2018-09-27 2021-09-08 富士フイルム株式会社 画像処理装置、画像処理方法、プログラムおよび記録媒体
JP7099925B2 (ja) * 2018-09-27 2022-07-12 富士フイルム株式会社 画像処理装置、画像処理方法、プログラムおよび記録媒体
CN110966923B (zh) * 2018-09-29 2021-08-31 深圳市掌网科技股份有限公司 室内三维扫描与危险排除系统
US11144779B2 (en) * 2018-10-16 2021-10-12 International Business Machines Corporation Real-time micro air-quality indexing
CN109492120B (zh) * 2018-10-31 2020-07-03 四川大学 模型训练方法、检索方法、装置、电子设备及存储介质
JP7001042B2 (ja) * 2018-11-08 2022-01-19 日本電信電話株式会社 眼情報推定装置、眼情報推定方法、プログラム
CN111479104A (zh) * 2018-12-21 2020-07-31 托比股份公司 用于计算视线会聚距离的方法
US11113842B2 (en) 2018-12-24 2021-09-07 Samsung Electronics Co., Ltd. Method and apparatus with gaze estimation
CN109784226B (zh) * 2018-12-28 2020-12-15 深圳云天励飞技术有限公司 人脸抓拍方法及相关装置
US11049289B2 (en) * 2019-01-10 2021-06-29 General Electric Company Systems and methods to semi-automatically segment a 3D medical image using a real-time edge-aware brush
US10825137B2 (en) * 2019-01-15 2020-11-03 Datalogic IP Tech, S.r.l. Systems and methods for pre-localization of regions of interest for optical character recognition, and devices therefor
KR102653252B1 (ko) * 2019-02-21 2024-04-01 삼성전자 주식회사 외부 객체의 정보에 기반하여 시각화된 인공 지능 서비스를 제공하는 전자 장치 및 전자 장치의 동작 방법
US11068052B2 (en) * 2019-03-15 2021-07-20 Microsoft Technology Licensing, Llc Holographic image generated based on eye position
US11644897B2 (en) 2019-04-01 2023-05-09 Evolution Optiks Limited User tracking system using user feature location and method, and digital display device and digital image rendering system and method using same
WO2020201999A2 (en) 2019-04-01 2020-10-08 Evolution Optiks Limited Pupil tracking system and method, and digital display device and digital image rendering system and method using same
US20210011550A1 (en) * 2019-06-14 2021-01-14 Tobii Ab Machine learning based gaze estimation with confidence
CN110718067A (zh) * 2019-09-23 2020-01-21 浙江大华技术股份有限公司 违规行为告警方法及相关装置
US11080892B2 (en) * 2019-10-07 2021-08-03 The Boeing Company Computer-implemented methods and system for localizing an object
US11688199B2 (en) * 2019-11-13 2023-06-27 Samsung Electronics Co., Ltd. Method and apparatus for face detection using adaptive threshold
CN113208591B (zh) * 2020-01-21 2023-01-06 魔门塔(苏州)科技有限公司 一种眼睛开闭距离的确定方法及装置
CN113448428B (zh) * 2020-03-24 2023-04-25 中移(成都)信息通信科技有限公司 一种视线焦点的预测方法、装置、设备及计算机存储介质
CN111768433B (zh) * 2020-06-30 2024-05-24 杭州海康威视数字技术股份有限公司 一种移动目标追踪的实现方法、装置及电子设备
US11676255B2 (en) * 2020-08-14 2023-06-13 Optos Plc Image correction for ophthalmic images
CN111985384A (zh) * 2020-08-14 2020-11-24 深圳地平线机器人科技有限公司 获取脸部关键点的3d坐标及3d脸部模型的方法和装置
US10909167B1 (en) * 2020-09-17 2021-02-02 Pure Memories Ltd Systems and methods for organizing an image gallery
CN112633313B (zh) * 2020-10-13 2021-12-03 北京匠数科技有限公司 Method for identifying undesirable information on a network terminal, and local area network terminal device
CN112255882A (zh) * 2020-10-23 2021-01-22 泉芯集成电路制造(济南)有限公司 Integrated circuit layout shrinking method
CN112650461B (zh) * 2020-12-15 2021-07-13 广州舒勇五金制品有限公司 Display system based on relative position
US11417024B2 (en) 2021-01-14 2022-08-16 Momentick Ltd. Systems and methods for hue based encoding of a digital image
KR20220115001A (ko) * 2021-02-09 2022-08-17 현대모비스 주식회사 Vehicle control apparatus using smart device swivel, and method therefor
US20220270116A1 (en) * 2021-02-24 2022-08-25 Neil Fleischer Methods to identify critical customer experience incidents using remotely captured eye-tracking recording combined with automatic facial emotion detection via mobile phone or webcams.
JP2022189536A (ja) * 2021-06-11 2022-12-22 キヤノン株式会社 Imaging apparatus and method
JPWO2022259499A1 (zh) * 2021-06-11 2022-12-15
US11914915B2 (en) * 2021-07-30 2024-02-27 Taiwan Semiconductor Manufacturing Company, Ltd. Near eye display apparatus
CN114387442A (zh) * 2022-01-12 2022-04-22 南京农业大学 Fast detection method for lines, planes, and hyperplanes in multi-dimensional space
US11887151B2 (en) * 2022-02-14 2024-01-30 Korea Advanced Institute Of Science And Technology Method and apparatus for providing advertisement disclosure for identifying advertisements in 3-dimensional space
CN114794992B (zh) * 2022-06-07 2024-01-09 深圳甲壳虫智能有限公司 Charging dock, robot recharging method, and sweeping robot
CN115936037B (zh) * 2023-02-22 2023-05-30 青岛创新奇智科技集团股份有限公司 Two-dimensional code decoding method and apparatus
CN116523831B (zh) * 2023-03-13 2023-09-19 深圳市柯达科电子科技有限公司 Process control method for assembling and forming a curved backlight source
CN116109643B (zh) * 2023-04-13 2023-08-04 深圳市明源云科技有限公司 Market layout data collection method, device, and computer-readable storage medium

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3069654A (en) 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
JP3163215B2 (ja) * 1994-03-07 2001-05-08 日本電信電話株式会社 Straight-line extraction Hough transform image processing apparatus
JP4675492B2 (ja) * 2001-03-22 2011-04-20 本田技研工業株式会社 Personal authentication device using face images
JP4128001B2 (ja) * 2001-11-19 2008-07-30 グローリー株式会社 Method, apparatus, and program for matching distorted images
JP4275345B2 (ja) * 2002-01-30 2009-06-10 株式会社日立製作所 Pattern inspection method and pattern inspection apparatus
CN2586213Y (zh) * 2002-12-24 2003-11-12 合肥工业大学 Optical device for real-time implementation of the Hough transform
US7164807B2 (en) 2003-04-24 2007-01-16 Eastman Kodak Company Method and system for automatically reducing aliasing artifacts
JP4324417B2 (ja) * 2003-07-18 2009-09-02 富士重工業株式会社 Image processing apparatus and image processing method
JP4604190B2 (ja) * 2004-02-17 2010-12-22 国立大学法人静岡大学 Gaze detection device using a range image sensor
DE102004046617A1 (de) * 2004-09-22 2006-04-06 Eldith Gmbh Device and method for contactless determination of gaze direction
US8995715B2 (en) * 2010-10-26 2015-03-31 Fotonation Limited Face or other object detection including template matching
JP4682372B2 (ja) * 2005-03-31 2011-05-11 株式会社国際電気通信基礎技術研究所 Gaze direction detection device, gaze direction detection method, and program for causing a computer to execute the gaze direction detection method
US7406212B2 (en) 2005-06-02 2008-07-29 Motorola, Inc. Method and system for parallel processing of Hough transform computations
JP2009508553A (ja) * 2005-09-16 2009-03-05 アイモーションズ−エモーション テクノロジー エー/エス System and method for determining human emotion by analyzing eye properties
DE102005047160B4 (de) * 2005-09-30 2007-06-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device, method, and computer program for determining information about a shape and/or a position of an ellipse in a graphical image
KR100820639B1 (ko) * 2006-07-25 2008-04-10 한국과학기술연구원 Gaze-based three-dimensional interaction system and method, and three-dimensional gaze tracking system and method
US8180159B2 (en) * 2007-06-06 2012-05-15 Sharp Kabushiki Kaisha Image processing apparatus, image forming apparatus, image processing system, and image processing method
JP5558081B2 (ja) * 2009-11-24 2014-07-23 株式会社エヌテック Image formation state inspection method, image formation state inspection apparatus, and image formation state inspection program
US8670019B2 (en) * 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
JP2013024910A (ja) * 2011-07-15 2013-02-04 Canon Inc Observation optical instrument
US9323325B2 (en) * 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US8737728B2 (en) 2011-09-30 2014-05-27 Ebay Inc. Complementary item recommendations using image feature data
CN103297767B (zh) * 2012-02-28 2016-03-16 三星电子(中国)研发中心 JPEG image decoding method and decoder suitable for multi-core embedded platforms
US9308439B2 (en) * 2012-04-10 2016-04-12 Bally Gaming, Inc. Controlling three-dimensional presentation of wagering game content
CN102662476B (zh) * 2012-04-20 2015-01-21 天津大学 Gaze estimation method
US11093702B2 (en) * 2012-06-22 2021-08-17 Microsoft Technology Licensing, Llc Checking and/or completion for data grids
EP2709060B1 (en) * 2012-09-17 2020-02-26 Apple Inc. Method and an apparatus for determining a gaze point on a three-dimensional object
CN103019507B (zh) * 2012-11-16 2015-03-25 福州瑞芯微电子有限公司 Method for displaying three-dimensional graphics with the viewpoint angle changed based on face tracking
CN103136525B (zh) * 2013-02-28 2016-01-20 中国科学院光电技术研究所 High-precision localization method for irregularly shaped extended targets using the generalized Hough transform
JP6269662B2 (ja) 2013-05-08 2018-01-31 コニカミノルタ株式会社 Method for manufacturing an organic electroluminescence element having a light-emitting pattern
KR20150006993A (ko) * 2013-07-10 2015-01-20 삼성전자주식회사 Display apparatus and display method thereof
US9619884B2 (en) 2013-10-03 2017-04-11 Amlogic Co., Limited 2D to 3D image conversion device and method
JP6483715B2 (ja) * 2014-02-04 2019-03-13 フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン Hough processor
WO2015143067A1 (en) * 2014-03-19 2015-09-24 Intuitive Surgical Operations, Inc. Medical devices, systems, and methods using eye gaze tracking
US9607428B2 (en) 2015-06-30 2017-03-28 Ariadne's Thread (Usa), Inc. Variable resolution virtual reality display system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
None *
See also references of WO2015117905A1 *

Also Published As

Publication number Publication date
WO2015117904A1 (de) 2015-08-13
KR20160119176A (ko) 2016-10-12
WO2015117907A2 (de) 2015-08-13
JP2017514193A (ja) 2017-06-01
EP3968288A2 (de) 2022-03-16
JP2017509967A (ja) 2017-04-06
JP6483715B2 (ja) 2019-03-13
CN106104573A (zh) 2016-11-09
KR101991496B1 (ko) 2019-06-20
WO2015117905A1 (de) 2015-08-13
EP3103058A1 (de) 2016-12-14
US10074031B2 (en) 2018-09-11
JP6268303B2 (ja) 2018-01-24
US20160335475A1 (en) 2016-11-17
JP6248208B2 (ja) 2017-12-13
KR101858491B1 (ko) 2018-05-16
KR20160119146A (ko) 2016-10-12
EP3103060A1 (de) 2016-12-14
JP2017508207A (ja) 2017-03-23
WO2015117906A1 (de) 2015-08-13
US10192135B2 (en) 2019-01-29
CN106133750B (zh) 2020-08-28
CN106258010A (zh) 2016-12-28
CN106258010B (zh) 2019-11-22
US10592768B2 (en) 2020-03-17
CN106133750A (zh) 2016-11-16
US20160342856A1 (en) 2016-11-24
WO2015117907A3 (de) 2015-10-01
US20170032214A1 (en) 2017-02-02

Similar Documents

Publication Publication Date Title
EP3103059A1 (de) 3D image analyzer for gaze direction determination
DE19953835C1 (de) Computer-aided method for contactless, video-based gaze direction determination of a user's eye for eye-guided human-computer interaction, and device for carrying out the method
EP3542211B1 (de) Method, device, and computer program for determining a representation of a spectacle lens edge
EP2101867B1 (de) Vision aid with three-dimensional image capture
CN110363116B (zh) Irregular face rectification method, system, and medium based on GLD-GAN
DE102007056528B3 (de) Method and device for finding and tracking pairs of eyes
DE102015010214A1 (de) Generation of depth maps
DE102010001520A1 (de) Iris capture system and method assisted by an aircraft sensor
DE102004049676A1 (de) Method for computer-aided motion estimation in a plurality of temporally successive digital images, arrangement for computer-aided motion estimation, computer program element, and computer-readable storage medium
WO2017174525A1 (de) Method and device for determining parameters for spectacle fitting
EP2886043A1 (de) Method for continuing recordings to capture three-dimensional geometries of objects
DE112009000094T5 (de) Refinement of three-dimensional models
EP3332284A1 (de) Method and device for acquiring and evaluating environmental data
DE112016006066T5 (de) Analysis of ambient light for gaze tracking
DE102019104310A1 (de) System and method for simultaneously considering edges and normals in image features with a vision system
DE102018100909A1 (de) Method for reconstructing images of a scene captured by a multifocal camera system
EP3635478A1 (de) Method, devices, and computer program for determining a near-vision point
EP3959497B1 (de) Method and device for measuring the local refractive power and/or the refractive power distribution of a spectacle lens
EP4143628A1 (de) Computer-implemented method for determining centering parameters for mobile terminals, mobile terminal, and computer program
CN105488780A (zh) Monocular vision ranging and tracking device for industrial production lines and tracking method thereof
DE102014113686A1 (de) Display device that can be placed on a user's head, and method for controlling such a display device
DE102019102423A1 (de) Method for live annotation of sensor data
DE112019002126T5 (de) Position estimation device, position estimation method, and program therefor
DE102010054168B4 (de) Method, device, and program for determining the torsional component of the eye position
DE102012209664B4 (de) Device and method for calibrating tracking systems

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20160720

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180620

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20211019