EP3103060A1 - 2D image analyzer - Google Patents

2D image analyzer (Analyseur d'image 2d)

Info

Publication number
EP3103060A1
EP3103060A1 (application EP15702739.2A)
Authority
EP
European Patent Office
Prior art keywords
image
pattern
hough
scaled
overview
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15702739.2A
Other languages
German (de)
English (en)
Inventor
Daniel KRENZER
Albrecht HESS
András KÁTAI
Christian WIEDE
Andreas Ernst
Tobias Ruf
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Original Assignee
Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fraunhofer Gesellschaft zur Forderung der Angewandten Forschung eV
Publication of EP3103060A1
Legal status: Withdrawn

Classifications

    • G06F 17/145 Square transforms, e.g. Hadamard, Walsh, Haar, Hough, Slant transforms
    • G06V 10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/60 Rotation of whole images or parts thereof
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/13 Edge detection
    • G06T 7/337 Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T 7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06V 10/955 Hardware or software architectures specially adapted for image or video understanding, using specific electronic processors
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor (eye characteristics)
    • G06V 40/193 Preprocessing; Feature extraction (eye characteristics)
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06T 2207/10012 Stereo images
    • G06T 2207/20008 Globally adaptive (adaptive image processing)
    • G06T 2207/20061 Hough transform
    • G06T 2207/30201 Face

Definitions

  • Exemplary embodiments of the present invention relate to a 2D image analyzer and to a corresponding method.
  • digital image processing uses multiple scaled versions of the input image for pattern recognition.
  • An example of such a pattern recognizer is the classifier by Viola-Jones, which is trained on a specific model size.
  • the fixed model size of the classifier therefore has to be recorded in several scaling levels
  • the scaling levels are usually searched in ascending or descending order (see picture pyramid). This sequnetial execution is very poorly suited, in particular, for parallel architectures (eg FPGA).
  • the task is to enable an efficient and reliable recognition of a pattern.
  • Embodiments provide a 2D image analyzer having an image scaler, an image generator, and a pattern finder.
  • The image scaler is configured to receive an image having a searched pattern and to scale the received image according to a scaling factor.
  • The image generator is configured to generate an overview image having a plurality of copies of the received and scaled image, each copy being scaled by a different scaling factor.
  • The pattern finder is configured to compare a predetermined pattern with the plurality of received or scaled images within the overview image and to output information regarding a position at which a match between the searched pattern and the predetermined pattern is at a maximum, the position relating to a respective copy of the received and scaled image.
  • Embodiments of the present invention provide a 2D image analyzer with a pattern finder, wherein the pattern finder is applied to an overview image that includes the image in which the searched pattern is contained at different scaling levels.
  • Where the largest match with the searched pattern is present, it is clear, on the one hand, which scaling level delivered this match (position of the scaled image within the overview image) and, on the other hand, at which position (x, y coordinate in the image) it occurred (position within the scaled image in the overview image, corrected by the scaling factor).
  • The bottom line is that this offers the advantage that, especially on parallel architectures such as FPGAs, the searched pattern can be detected much more efficiently in at least one of the scaling levels by only one pattern recognition on the overview image. By knowing the scaling, the position of the searched pattern can then also be calculated in the absolute image.
  • each scaled image according to the respective scaling factor is associated with a respective position in the overview image.
  • the respective position can be calculated by an algorithm which takes into account a spacing between the scaled images in the overview image, a spacing of the scaled images from one or more of the boundaries of the overview image and / or other predefined conditions.
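  • As an illustration of the overview-image construction described above, the following Python sketch (a minimal example, not the patented implementation; the function name, the row layout and the fixed margin are assumptions) scales an input image by several factors, packs the copies into one overview image and records the offset of each copy so that a match position can later be mapped back to original-image coordinates.

```python
import numpy as np
import cv2  # OpenCV is assumed here for resizing; a grayscale input is assumed


def build_overview_image(image, scale_factors, margin=8):
    """Scale `image` by each factor and pack the copies into one overview image.

    Returns the overview image and, per scale factor, the (x, y) offset of the
    corresponding copy inside the overview image.
    """
    h, w = image.shape[:2]
    copies = [cv2.resize(image, (max(1, int(w * s)), max(1, int(h * s))))
              for s in scale_factors]

    # Simple row layout: copies side by side, separated by `margin` pixels.
    total_w = sum(c.shape[1] for c in copies) + margin * (len(copies) + 1)
    total_h = max(c.shape[0] for c in copies) + 2 * margin
    overview = np.zeros((total_h, total_w), dtype=image.dtype)

    offsets = {}
    x = margin
    for s, c in zip(scale_factors, copies):
        overview[margin:margin + c.shape[0], x:x + c.shape[1]] = c
        offsets[s] = (x, margin)
        x += c.shape[1] + margin
    return overview, offsets


# A match found at (mx, my) inside the copy for scale s maps back to the
# original image at roughly ((mx - ox) / s, (my - oy) / s), with (ox, oy)
# being the recorded offset of that copy.
```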
  • The pattern finder prepared in this way is configured to identify one or more local maxima in the census-transformed version of the overview image, in the version of the overview image transferred to the Hough feature space, in the gradient-image version of the overview image or, if desired, in a version of the overview image transferred to some other feature space, the position of a local maximum indicating the position of the identified predetermined pattern in the respective copy of the received and scaled image.
  • the pattern finder comprises a classification and a post-processing.
  • The classification is applied to the overview image transferred to the feature space and provides high values (local maxima) at locations where the image content matches the searched pattern.
  • Conventional methods according to the prior art can be used for the classification (eg, according to Viola-Jones).
  • the classified overview image can now be subjected to post-processing in accordance with further embodiments.
  • The classified overview image is first smoothed with a local sum filter, whereby the positions of the local maxima are locally corrected.
  • Subsequently, a local maximum filter is applied to the score (a score is in most cases the result of a classification, i.e. a measure of the match of the image content with the searched pattern) in order to obtain the local maxima corrected via the sum filter.
  • The result is again a classified overview image with the scores of the classifier, but now with locally corrected local maxima.
  • Each position of a local maximum in the overview image is assigned a corresponding scaling level according to the previous embodiments (except at the points where the scaling levels are spaced apart from one another and to the boundaries of the overview image).
  • Each position in the overview image is therefore assigned an absolute position in original-image coordinates (except where the scaling levels are spaced from one another and from the boundaries of the overview image). If a maximum has been extracted within a scaling stage, its position can be corrected once more by using the corresponding local maxima from the adjacent scaling stages and averaging over the adjacent scaling stages.
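  • The post-processing just described can be sketched as follows (a non-authoritative Python illustration using SciPy; the filter sizes and the score threshold are assumptions): the classified score image is first smoothed with a local sum filter, then a local maximum filter keeps only positions whose smoothed score is a local maximum.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter


def corrected_local_maxima(score_image, sum_size=5, max_size=11, threshold=0.5):
    """Smooth classifier scores with a local sum filter, then keep local maxima."""
    # uniform_filter yields the local mean; the local sum is just a scaled
    # version of it, which does not change the positions of the maxima.
    smoothed = uniform_filter(score_image.astype(np.float32), size=sum_size)

    # A pixel is a local maximum if it equals the maximum of its neighbourhood
    # and exceeds a minimum score threshold.
    is_max = smoothed == maximum_filter(smoothed, size=max_size)
    ys, xs = np.nonzero(is_max & (smoothed > threshold))
    return list(zip(xs, ys)), smoothed
```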
  • One embodiment includes the method of analyzing a 2D image comprising the steps of: scaling a received image having a searched pattern according to a scaling factor and producing an overview image having a plurality of copies of the received and scaled image, each copy being scaled by a different scaling factor.
  • The overview image is converted into a feature space (e.g. the Hough feature space) and a classification is then performed to determine a measure of the match with a predetermined pattern (such as an eye).
  • A search is then made for maxima in the plurality of received and scaled images within the overview image in order to output information regarding a position at which a match between the searched pattern and the predetermined pattern is at a maximum, the position relating to a respective copy of the received and scaled image. If necessary, the position can be corrected using a combination of local sum and maximum filters and averaged over adjacent scaling stages.
  • The 2D image analyzer may be extended to a 2D image analysis system, the analysis system then having an evaluation device that evaluates the predetermined pattern and, in particular, determines the pupil (or, more generally, the eye region or eye of a user).
  • the 2D image analyzer may be connected to a further processing means comprising a selectively adaptive data processor adapted to exchange a plurality of sets of data with the image analyzer and to process those data sets.
  • the data records are processed in such a way that only plausible data records are passed on, whereby implausible parts of the data record are replaced by plausible parts.
  • The 2D image analyzer can also be connected to a 3D image analyzer, which determines an orientation of an object in space (e.g., a viewing angle) based on at least a first set of image data in combination with additional information, and which comprises two main units, namely a position calculator for determining the position of the pattern in three-dimensional space and an orientation calculator for determining the orientation of the pattern.
  • The 2D image analyzer may also be connected to a Hough processor, which in turn is subdivided into the following two subunits: a pre-processor configured to receive a plurality of samples, each comprising one image, to rotate and/or mirror the image of the respective sample and to output a plurality of versions of the image for each sample; and a Hough transformation device adapted to detect a predetermined searched pattern in the plurality of samples based on the plurality of versions, wherein a characteristic of the Hough transformation device that is dependent on the searched pattern is adaptable.
  • The processing means for post-processing the results of the Hough transformation is adapted to analyze the detected patterns and to output a set of geometry parameters describing a position and/or a geometry of the pattern for each sample.
  • FIG. 1 is a schematic block diagram of a 2D image analyzer according to an embodiment;
  • FIG. 2a is a schematic block diagram of a Hough processor with a pre-processor and a Hough transformation device according to an embodiment;
  • FIG. 2b shows a schematic block diagram of a pre-processor according to an embodiment;
  • FIG. 2c is a schematic representation of Hough cores for the detection of straight lines (sections);
  • FIG. 3a is a schematic block diagram of a possible implementation of a Hough transformation device;
  • FIG. 3b shows a single cell of a delay matrix according to an embodiment;
  • FIGS. 4a-d show a schematic block diagram of a further implementation of a Hough transformation device according to an embodiment;
  • FIG. 5a shows a schematic block diagram of a stereoscopic camera arrangement with two image processors and a post-processing device, wherein each of the image processors has a Hough processor according to exemplary embodiments;
  • FIG. 5b shows an exemplary recording of an eye to illustrate the viewing-direction determination with the 3D image analyzer;
  • FIGS. 6-7 show further illustrations for explaining additional embodiments or aspects;
  • FIGS. 8a-e are schematic representations of optical systems; and
  • FIGS. 9a-9i show further illustrations for explaining background knowledge for the Hough transformation device.
  • FIG. 1 shows a 2D image analyzer 700 with an image scaler 702, an image generator 704 and a pattern finder 706.
  • The image scaler 702 is configured to receive an image 710 (see Fig. 7b) having a searched pattern 711.
  • the received image 710 is now scaled with different scaling factors.
  • By this scaling based on the image 710, a plurality of copies of the received and scaled image 710', 710'', 710''' to 710'''''' arise, it being noted that the number of scaling stages is not limited.
  • The respective position of a scaled image within the overview image can be calculated by an algorithm that takes into account a spacing between the scaled images in the overview image, a spacing of the scaled images to one or more of the boundaries of the overview image and/or other predefined conditions.
  • The desired pattern 711 can now be detected by means of the pattern finder 706.
  • the pattern recognition is applied only once to the overview image, whereby the respective scaling stage is "coded" as location information within the overview image
  • This type of processing is advantageous in particular on FPGA architectures: otherwise the different scaled images would each have to be held in memory, processed separately and the results then merged, whereas here the overview image is generated once and processed in one step, so that FPGA architectures are optimally utilized.
  • This overview image 710 is converted into a feature image 710a, a census transformation typically being used here.
  • Based on this census-transformed image 710a, it would also be conceivable to perform a classification in order to determine the false-color image 710b (see Fig. 7c).
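  • The census transformation mentioned above can be sketched, for example, as a simplified 3x3 variant (a hedged illustration; the exact transform used is not specified beyond its name): each pixel is replaced by a bit string encoding whether each neighbour is brighter than the centre pixel.

```python
import numpy as np


def census_transform_3x3(gray):
    """Simplified 3x3 census transform: one 8-bit signature per inner pixel."""
    gray = gray.astype(np.int32)
    h, w = gray.shape
    centre = gray[1:h - 1, 1:w - 1]
    census = np.zeros((h - 2, w - 2), dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            census |= (neighbour > centre).astype(np.uint8) << bit
            bit += 1
    return census
```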
  • The Hough transformation device 104 includes a delay filter 106 which may include at least one, but preferably a plurality, of delay elements 108a, 108b, 108c, 110a, 110b and 110c.
  • The delay elements 108a to 108c and 110a to 110c of the delay filter 106 are typically arranged as a matrix, that is, in columns 108 and 110 and rows a to c, and are signal-coupled with one another.
  • According to the exemplary embodiment, at least one of the delay elements 108a to 108c or 110a to 110c has an adjustable delay time, here symbolized by the "+/-" symbol; for driving or controlling the delay elements 108a to 108c and 110a to 110c, a separate control logic or a control register (not shown) may be provided.
  • This control logic controls the delay time of the individual delay elements 108a to 108c and 110a to 110c via optional switchable elements 109a to 109c.
  • the Hough transform means 104 may comprise an additional configuration register (not shown) for initially configuring the individual delay elements 108a-108c and 110a-110c.
  • The purpose of the pre-processor 102 is to prepare the individual samples 112a, 112b and 112c so that they can be efficiently processed by the Hough transformation device 104.
  • The pre-processor 102 receives the image file(s) 112a, 112b and 112c and performs pre-processing, e.g. in the form of a rotation and/or in the form of a reflection, in order to output the multiple versions (cf. 112a and 112a') to the Hough transformation device 104.
  • the output may be serial if the Hough transformer 104 has a Hough core 106, or parallel if more than one Hough core is provided.
  • The pre-processing in the pre-processor 102, which serves the purpose of detecting a plurality of similar patterns (rising and falling straight line) with one search pattern or Hough-core configuration, is explained below on the basis of the first sample 112a.
  • This sample can be rotated, e.g. rotated by 90°, to obtain the rotated version 112a'.
  • This process of rotation is provided with the reference numeral 114.
  • The rotation can take place either by 90°, but also by 180° or 270° or generally by 360°/n, it being noted that, depending on the downstream Hough transformation (see Hough transformation device 104), it can be very efficient to perform only a 90° rotation.
  • The image 112a can also be mirrored to obtain the mirrored version 112a''. The process of mirroring is designated by the reference numeral 116.
  • The mirroring 116 corresponds to reading back the memory from the rear. Starting either from the mirrored version 112a'' or from the rotated version 112a', a fourth, rotated and mirrored version 112a''' can be obtained by carrying out the respective other process 114 or 116.
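  • The four image versions produced by the pre-processor (non-rotated, rotated by 90°, and the mirrored variant of each) can be sketched in Python as follows; representing the mirroring as a horizontal flip is an assumption consistent with the description above.

```python
import numpy as np


def preprocess_versions(image):
    """Return the four versions used for the parallel Hough transform:
    the original (112a), the 90 degree rotation (112a'), the mirrored
    original (112a'') and the rotated and mirrored version (112a''')."""
    rotated = np.rot90(image)
    mirrored = np.fliplr(image)
    rotated_mirrored = np.fliplr(rotated)
    return image, rotated, mirrored, rotated_mirrored
```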
  • On the basis of this mirroring, two similar patterns (e.g., a half-circle opened to the right and a half-circle opened to the left) can be detected with the same Hough-core configuration, as described below.
  • The Hough transform means 104 is adapted to detect, in the versions 112a or 112a' (or 112a'' or 112a''') provided by the pre-processor 102, a predetermined searched pattern, such as an ellipse or a segment of an ellipse, a circle or a segment of a circle, or a straight line or a straight-line segment.
  • For this purpose, the filter arrangement is configured according to the sought predetermined pattern.
  • some of the delay elements 108a to 108c and 110a to 110c are activated or bypassed, respectively.
  • Depending on the desired pattern (e.g. the angle of a straight line or the radius of a circle) and on the desired characteristic (that is to say, for example, the radius or the slope), individual variable delay elements 108a to 108c or 110a to 110c are set accordingly.
  • Changing the delay time of one of the delay elements 108a to 108c or 110a to 110c changes the overall filter characteristic of the filter 106.
  • Due to the flexible adaptation of the filter characteristic of the filter 106 of the Hough transformation device 104, it is possible to adapt the transformation core 106 during runtime, so that, for example, dynamic image content, such as small and large pupils, can be detected and tracked with the same Hough core 106. The exact implementation of how the delay time can be adjusted is discussed with reference to Fig. 3c.
  • All delay elements 108a, 108b, 108c, 110a, 110b and/or 110c are preferably provided with a variable or discretely switchable delay time, so that during operation it is possible to actively switch between different patterns to be detected or between different forms of the patterns to be detected.
  • the size of the illustrated Hough core 104 is configurable (either in operation or in advance) so that additional Hough cells may be enabled or disabled.
  • the transformation means 104 may be provided with means for adjusting the same or, to be specific, for adjusting the individual delay elements 108a-108c and 110a-110c, such as with a controller (not shown).
  • The controller is arranged, for example, in a downstream processing device and is designed to adapt the delay characteristic of the filter 106 if no pattern can be detected or if the recognition is not sufficiently good (i.e., a low degree of match of the image content with the searched pattern). This controller will be discussed with reference to FIG. 5a.
  • A very high frame rate of, for example, 60 FPS at a resolution of 640x480 could be achieved using a 96 MHz clock frequency because, with the structure 104 described above with the plurality of columns 108 and 110, parallel processing, or a so-called parallel Hough transformation, is possible.
  • The pre-processor 102 is designed to receive the samples 112 as binary edge images or also as gradient images and to carry out the rotation 114 or the reflection 116 on the basis of these in order to produce the four versions 112a, 112a', 112a'' and 112a'''.
  • The background to this is that the parallel Hough transform, as performed by the Hough transform means, typically builds on two or four preprocessed versions of an image 112a, e.g. offset by 90°.
  • A 90° rotation (112a to 112a') first takes place before the two versions 112a and 112a' are mirrored horizontally (compare 112a to 112a'' and 112a' to 112a''').
  • The pre-processor, in corresponding exemplary embodiments, has an internal or external memory which serves to hold the received image files 112.
  • The processing (rotation 114 and/or mirroring 116) of the pre-processor 102 depends on the downstream Hough transformation device, the number of parallel Hough cores (degree of parallelization) and the configuration thereof, in particular as described with reference to FIG. 2c.
  • The pre-processor 102 may be configured to output the pre-processed video stream via the output 126 according to one of the three following constellations, depending on the degree of parallelization of the downstream Hough transformation unit 104:
  • 100% parallelization: simultaneous output of four video streams, namely a non-rotated and non-mirrored version 112a, a 90° rotated version 112a', and the respective mirrored versions 112a'' and 112a'''.
  • 50% parallelization: output of two video data streams, namely the non-rotated version 112a and the 90° rotated version 112a' in a first step, and output of the respective mirrored variants 112a'' and 112a''' in a second step.
  • The pre-processor 102 may be configured to perform other image processing steps, such as up-sampling.
  • It would also be possible for the pre-processor to generate the gradient image. If the gradient image generation became part of the image pre-processing, the gray-value image (original image) could be rotated in the FPGA.
  • Figure 2c shows two Hough core configurations 128 and 130, e.g. for two parallel 31x31 Hough cores, configured to recognize a straight line or a straight-line section. Furthermore, a unit circle 132 is plotted to illustrate in which angular ranges the detection is possible. It should be noted at this point that the Hough core configurations 128 and 130 are each to be read such that the white dots illustrate the delay elements.
  • The Hough core configuration 128 corresponds to a so-called Type 1 Hough core, while the Hough core configuration 130 corresponds to a so-called Type 2 Hough core. As can be seen from the comparison of the two Hough core configurations 128 and 130, one represents the inverse of the other.
  • With the first Hough core configuration 128, a straight line in region 1 between 3π/4 and π/2 can be detected, while a straight line in the range between 3π/2 and 5π/4 (region 2) is detectable by means of the Hough core configuration 130.
  • If the Hough core configurations 128 and 130 are applied to the rotated version of the respective image, the range 1r between π/4 and zero can consequently be detected by means of the Hough core configuration 128 and the range 2r between π and 3π/4 by means of the Hough core configuration 130.
  • Instead of using two Hough core types, only one Hough core type (e.g., a Type 1 Hough core) can be used, which is reconfigured during operation, or in which the individual delay elements can be switched on or off, so that the Hough core corresponds to the inverted type.
  • Fig. 3a shows a Hough core 104 with m columns 108, 110, 138, 140, 141 and 143 and n rows a, b, c, d, e and f, so that m x n cells are formed.
  • Each column 108, 110, 138, 140, 141 and 143 of the filter represents a particular characteristic of the sought structure, e.g. a certain curvature or a certain straight-line slope.
  • Each cell comprises a delay element which can be set with respect to delay time, wherein in this exemplary embodiment the adjustment mechanism is realized by providing in each case a switchable delay element with a bypass.
  • the cell (108a) of Fig. 3b comprises the delay element 142, a remotely operable switch 144, such as e.g. a multiplexer, and a bypass 146.
  • By means of the remote-controllable switch 144, either the line signal can be routed via the delay element 142 or fed to the node 148 without delay.
  • The node 148 is connected, on the one hand, to the summing element 150 for the column (for example 108); on the other hand, the next cell (for example 110a) is also connected via this node 148.
  • The multiplexer 144 is configured via a so-called configuration register 160 (see Fig. 3a). It should be noted at this point that the reference numeral 160 shown here refers only to a portion of the configuration register 160 which is coupled directly to the multiplexer 144.
  • the element of the configuration register 160 is configured to control the multiplexer 144 and receives, via a first information input 160a, configuration information that originates, for example, from a configuration matrix stored in the FPGA-internal BRAM 163.
  • This configuration information can be a column-wise bit string and refers to the configuration of several of the delay cells (142 + 144) that are configurable (also during the transformation). Therefore, the configuration information can be forwarded further via the output 160b.
  • The configuration register 160, or the cell of the configuration register 160, receives a so-called enable signal via a further signal input 160c, by means of which the reconfiguration is initiated.
  • The background to this is that the reconfiguration of the Hough core requires a certain amount of time, which depends on the number of delay elements or, in particular, on the size of a column. In this case, one clock cycle is required for each column element, and a latency of a few clock cycles is added by the BRAM 163 or the configuration logic 160.
  • the overall latency for the reconfiguration is typically negligible for video-based image processing.
  • the video data streams recorded with a CMOS sensor have horizontal and vertical blanking, and the horizontal blanking or horizontal blanking time can be used for reconfiguration.
  • the size of the Hough core structure implemented in the FPGA dictates the maximum size possible for Hough core configurations. For example, if smaller configurations are used, they are vertically centered and aligned horizontally on column 1 of the Hough core structure. Unused elements of the Hough core structure are all populated with activated delay elements.
  • the evaluation of the data streams thus processed with the individual delay cells (142 + 144) takes place column by column.
  • the sum is added by columns in order to detect a local sum maximum, which indicates a recognized desired structure.
  • the summation per column 108, 110, 138, 140, 141 and 143 serves to determine a value that is representative of the degree of agreement with the sought structure for an expression of the structure assigned to the respective column.
  • For this purpose, comparators 108v, 110v, 138v, 140v, 141v and 143v are provided per column, which are connected to the respective summing elements 150.
  • In addition to the comparators 108v, 110v, 138v, 140v, 141v and 143v of the different columns 108, 110, 138, 140, 141 and 143, further delay elements 153 may be provided, which serve to compare the column sums of adjacent columns.
  • In operation, the column 108, 110, 138 or 140 with the greatest degree of agreement for a characteristic of the sought pattern is always passed out of the filter.
  • The result comprises a so-called multi-dimensional Hough space, which contains all relevant parameters of the sought structure, such as the type of pattern (e.g. straight line or semicircle), the degree of match of the pattern, the extent of the structure (curvature of curve segments or slope and length of straight-line segments) and the location or orientation of the sought pattern.
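  • As a purely software model of the column-wise evaluation described above (not the FPGA delay-line implementation; the window-based formulation and the binary configuration matrix are assumptions), the following sketch compares a binary image window against a Hough-core configuration column by column and returns the column with the highest sum, i.e. the characteristic with the greatest degree of agreement.

```python
import numpy as np


def evaluate_hough_core(window, core_config):
    """Column-wise evaluation of a Hough core.

    window      : binary image patch (n rows x m columns).
    core_config : binary matrix of the same shape; 1 marks an active
                  delay element of the Hough-core configuration.
    Returns (best_column, column_sums): the column with the highest sum
    stands for the characteristic (e.g. curvature or slope) that agrees
    best with the image content; the sum is the degree of agreement.
    """
    column_sums = (window.astype(np.int32) * core_config).sum(axis=0)
    best_column = int(np.argmax(column_sums))
    return best_column, column_sums
```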
  • The Hough-core cell of FIG. 3b may include an optional pipeline delay element 162, arranged, for example, at the output of the cell and configured to receive both the signal delayed by means of the delay element 142 and the undelayed signal passed via the bypass 146.
  • FIG. 5a shows an FPGA-implemented image processor 10a with a pre-processor 102 and a Hough transformation device 104. Furthermore, an input stage 12 may be implemented in the image processor 10a upstream of the pre-processor 102, configured to receive image data or image samples from a camera 14a.
  • the input stage 12 may comprise, for example, an image transfer interface 12a, a segmentation and edge detector 12b and means for camera control 12c.
  • The camera control means 12c is connected to the image interface 12a and the camera 14a and serves to control factors such as gain and/or exposure.
  • The image processor 10a further includes a so-called Hough feature extractor 16, which is adapted to receive the multi-dimensional Hough space output by the Hough transform means 104, which contains all relevant information for the pattern recognition, to analyze it and to output a compilation of all Hough features based on the analysis result.
  • a smoothing of the Hough feature spaces takes place here, ie a spatial smoothing by means of a local filter or a thinning out of the Hough space (suppression of non-relevant information for the pattern recognition). This thinning is done taking into account the nature of the pattern and the nature of the structure so that non-maxima in the Hough probability space are masked out.
  • threshold values can also be defined for the thinning, so that, for example, minimum or maximum permissible characteristics of a structure, such as a minimum or maximum curvature or a smallest or largest increase, can be determined in advance.
  • Noise suppression in the Hough probability space can also be carried out.
  • The analytic inverse transformation of the parameters of all remaining points into the original image area yields, e.g., the following Hough features: for a curved structure, position (x and y coordinates), probability of occurrence, radius and an angle, which indicates in which direction the arc is opened, can be forwarded. For a straight line, parameters such as position (x and y coordinates), probability of occurrence, angle indicating the slope of the straight line, and length of the representative straight-line section can be determined.
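  • The Hough features listed above can be represented, for example, by a small data structure like the following (a sketch only; the field names are assumptions, not the patent's nomenclature).

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class HoughFeature:
    """One Hough feature per image position, as output by the feature extractor."""
    x: int                           # position in the original image
    y: int
    kind: str                        # "curve" or "line"
    score: float                     # probability of occurrence / degree of match
    radius: Optional[float] = None   # curved structures: radius of curvature
    angle: Optional[float] = None    # curves: opening direction; lines: slope angle
    length: Optional[float] = None   # lines: length of the representative segment
```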
  • This thinned-out Hough space is output by the Hough feature extractor 16, or generally by the image processor 10a, to a post-processing device 18 for further processing.
  • The post-processing device 18 may include a Hough feature-to-geometry converter 202.
  • This geometry converter 202 is configured to analyze the Hough features output by the Hough feature extractor with respect to one or more predefined searched patterns and to output parameters describing the geometry per sample.
  • The geometry converter 202 may be configured to output geometry parameters such as first diameter, second diameter, tilt and position of the center for an ellipse (pupil) or a circle based on the detected Hough features.
  • the geometry converter 202 is operable to detect and select a pupil based on 2 to 3 Hough features (eg, curvatures).
  • Criteria for this are, for example, the degree of agreement with the sought structure or of the Hough features, the curvature of the Hough features or of the predetermined pattern to be detected, and the location and orientation of the Hough features.
  • The selected Hough feature combinations are sorted, where the primary sorting is according to the number of Hough features obtained and the secondary sorting is according to the degree of match with the searched structure. After sorting, the first Hough feature combination is selected and the ellipse that most closely represents the pupil in the camera image is fitted.
  • The post-processing device 18 comprises an optional controller 204, which is adapted to output a control signal back to the image processor 10a (see control channel 206) or, in other words, back to the Hough transform means 104, on the basis of which the filter characteristic of the filter 106 is adjustable.
  • The controller 204 is typically coupled to the geometry converter 202 in order to analyze the geometry parameters of the detected geometry and to track the Hough core within defined limits such that more accurate geometry recognition is possible. This is a successive process that starts, for example, with the last Hough core configuration (size of the last used Hough core) and is updated as soon as the recognition 202 yields insufficient results.
  • In this way, the controller 204 can adjust the ellipse size, which depends, e.g., on the distance between the object to be recorded and the camera 14a, when the associated person approaches the camera 14a.
  • the filter characteristic is controlled on the basis of the last settings and on the basis of the geometry parameters of the ellipse.
  • the post-processing device 18 may comprise a selective-adaptive data processor 300.
  • The purpose of the data processor is to rework outliers and dropouts within the data series in order, for example, to smooth the data series. The selective-adaptive data processor 300 is therefore configured to receive a plurality of sets of values output by the geometry converter 202, each set being assigned to a respective sample.
  • For this purpose, the data processor 300 performs a selection of values based on the multiple sets such that the data values of implausible sets (e.g., containing outliers or dropouts) are replaced by internally determined data values (substitute values) and the data values of the remaining sets continue to be used unchanged.
  • The data values of plausible sets (containing no outliers or dropouts) are forwarded, and the data values of implausible sets (containing outliers or dropouts) are replaced by data values of a plausible set, e.g., the previous data value or an average of several previous data values.
  • The data processor can, for example, sort out the data value of a newly received set during smoothing if it does not meet one of the following criteria:
  • According to a first criterion, the associated size or geometry parameter is a dropout if, for example, the size of the current object deviates too strongly from the previously determined size.
  • According to a further criterion, the current data value (e.g. the current position value) is an outlier if it deviates too strongly from the previously determined value; an illustrative example of this is when the current position coordinate (data value of the set) of an object deviates too much from the position coordinate previously determined by the selectively adaptive data processor.
  • the previous value is still output or at least used to smooth the current value.
  • Within the smoothing, the current values are optionally weighted more heavily than the past values. For example, using exponential smoothing, the current value can be determined by the following formula: smoothed value = current value x smoothing coefficient + last smoothed value x (1 - smoothing coefficient).
  • The smoothing coefficient is dynamically adjusted within defined limits to the trend of the data to be smoothed, e.g. reduced for rather constant value curves or increased for ascending or descending value curves. If, in the long term, there is a major jump in the geometry parameters (ellipse parameters) to be smoothed, the data processor, and thus also the smoothed value curve, adapts to the new value.
  • The selective-adaptive data processor 300 can also be configured by means of parameters, for example during initialization, such as the maximum duration of a dropout or the maximum smoothing factor.
  • In this way, the selective-adaptive data processor 300, or generally the post-processing device 18, can output plausible values describing the position and geometry of a pattern to be recognized with high accuracy.
  • According to further embodiments, the post-processing device has an interface 18a via which optional control commands can be received externally. If several data series are to be smoothed, it is conceivable to use a separate selectively adaptive data processor for each data series or to adapt the selectively adaptive data processor so that data sets of different data series can be processed per set.
  • The data processor 300 may, for example, have two or more inputs and one output. One of the inputs (which receives the data value) is for the data series to be processed. The output is a smoothed series based on selected data. For the selection, the further inputs (which receive additional values for a more accurate assessment of the data value) and/or the data series itself are used. During processing within the data processor 300, a distinction is made between the treatment of outliers and the handling of dropouts within the data series.
  • Outliers: In the selection, outliers (within the data series to be processed) are sorted out and replaced by other (internally determined) values.
  • Dropouts: One or more additional input signals (additional values) are used to assess the quality of the data series to be processed. The assessment is based on one or more threshold values, which subdivide the data into "high" and "low" quality. Low-quality data are rated as dropouts and replaced by other (internally determined) values.
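  • A minimal sketch of such a selective-adaptive smoothing stage is given below (Python; the class name, the thresholds, the dropout criterion and the way the smoothing coefficient is adapted are assumptions chosen to illustrate the behaviour described above, not the patented parameterization).

```python
class SelectiveAdaptiveSmoother:
    """Replaces outliers/dropouts by the last plausible value and applies
    exponential smoothing with a coefficient adapted to the data trend."""

    def __init__(self, outlier_jump=50.0, quality_threshold=0.5,
                 min_alpha=0.1, max_alpha=0.9):
        self.outlier_jump = outlier_jump          # maximum plausible jump per sample
        self.quality_threshold = quality_threshold
        self.min_alpha = min_alpha
        self.max_alpha = max_alpha
        self.smoothed = None

    def update(self, value, quality=1.0):
        if self.smoothed is None:
            self.smoothed = float(value)
            return self.smoothed

        # Selection: dropouts (low quality) and outliers (implausible jump)
        # are replaced by an internally determined value (here: last output).
        if quality < self.quality_threshold or abs(value - self.smoothed) > self.outlier_jump:
            value = self.smoothed

        # Adaptive coefficient: small for nearly constant value curves, larger
        # for ascending/descending curves, clamped to the configured limits.
        trend = abs(value - self.smoothed) / (self.outlier_jump + 1e-9)
        alpha = min(self.max_alpha, max(self.min_alpha, trend))

        # Exponential smoothing: the current value is weighted by alpha.
        self.smoothed = alpha * value + (1.0 - alpha) * self.smoothed
        return self.smoothed
```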
  • According to further embodiments, the post-processing device 18 has an image analyzer, such as a 3D image analyzer 400.
  • the post-processing device 18 can also be provided with a further image capture device consisting of image processor 10b and camera 14b.
  • The 3D image analyzer 400 is configured to receive at least a first set of image data, which is determined on the basis of a first image (see camera 14a), and a second set of image data, which is determined on the basis of a second image (cf. camera 14b), wherein the first and second images map a pattern from different perspectives, and to calculate a viewing angle or a 3D view vector, respectively.
  • the 3D image analyzer 400 includes a position calculator 404 and an alignment calculator 408.
  • The position calculator 404 is configured to calculate a position of the pattern in three-dimensional space based on the first set, the second set, and a geometric relationship between the perspectives of the first and the second camera 14a and 14b.
  • The alignment calculator 408 is configured to calculate a 3D view vector, e.g. a viewing direction, according to which the recognized pattern is aligned in three-dimensional space, the calculation being based on the first set, the second set and the calculated position (see position calculator 404).
  • For this purpose, a so-called 3D camera system model can be consulted, which has stored, for example in a configuration file, all model parameters, such as position parameters and optical parameters (cf. cameras 14a and 14b).
  • The model stored or read in the 3D image analyzer 400 includes data with regard to the camera unit, i.e. with respect to the camera sensor (e.g. pixel size, sensor size and resolution) and the lenses used (e.g. focal length and objective properties), data or characteristics of the object to be recognized (e.g. characteristics of an eye) and data relating to other relevant objects (e.g. a display in the case of using the system 1000 as an input device).
  • The 3D position calculator 404 calculates the eye position, or the pupil center point, on the basis of the two or more camera images (cf. cameras 14a and 14b) by triangulation.
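  • The triangulation of the pupil center from two camera rays can be sketched with the midpoint method (a generic illustration assuming known camera positions and ray directions; it is not necessarily the exact procedure of the position calculator 404).

```python
import numpy as np


def triangulate_midpoint(p1, d1, p2, d2):
    """Return the 3D point closest to the two rays p1 + t*d1 and p2 + s*d2.

    p1, p2 : 3D positions of the camera projection centres.
    d1, d2 : 3D direction vectors towards the pupil centre (need not be unit length).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = p2 - p1
    d1d2 = float(np.dot(d1, d2))
    denom = 1.0 - d1d2 ** 2
    if abs(denom) < 1e-9:                         # rays are (almost) parallel
        raise ValueError("rays are parallel, triangulation is ill-posed")
    t = (np.dot(b, d1) - np.dot(b, d2) * d1d2) / denom
    s = (np.dot(b, d1) * d1d2 - np.dot(b, d2)) / denom
    closest1 = p1 + t * d1                        # closest point on ray 1
    closest2 = p2 + s * d2                        # closest point on ray 2
    return 0.5 * (closest1 + closest2)            # midpoint of the shortest connection
```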
  • the viewing angle calculator 408 can determine the viewing direction from two elliptical projections of the pupil to the camera sensors without calibration and without knowledge of the distance between the eyes and the camera system.
  • For this purpose, the viewing direction calculator 408 uses, in addition to the 3D position parameters of the image sensors, the ellipse parameters determined by means of the geometry analyzer 202 and the position determined by means of the position calculator 404. From the 3D position of the pupil center and the positions of the image sensors, virtual camera units are calculated by rotating the real camera units such that their optical axes extend through the 3D pupil center.
  • projections of the pupil on the virtual sensors are respectively calculated from the projections of the pupil on the real sensors, so that two virtual ellipses are created.
  • From the two virtual sensors, two possible points of view of the eye can each be calculated on any plane parallel to the respective virtual sensor plane.
  • In this way, four viewing-direction vectors can be calculated, i.e. two vectors per camera.
  • Of these, exactly one vector of the one camera is always approximately identical to one vector of the other camera.
  • The two identical vectors indicate the searched eye direction (gaze direction), which is then output by the viewing-direction calculator 408 via the interface 18a.
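  • Selecting the two approximately identical vectors out of the four candidates can be sketched as follows (a hedged illustration; measuring agreement by the dot product is an assumption).

```python
import numpy as np


def select_gaze_direction(cam1_candidates, cam2_candidates):
    """Pick the pair of candidate vectors (one per camera) that agree best.

    cam1_candidates, cam2_candidates: each a sequence of two normalized 3D vectors.
    Returns the averaged direction of the best-matching pair.
    """
    best_pair, best_score = None, -np.inf
    for v1 in cam1_candidates:
        for v2 in cam2_candidates:
            score = float(np.dot(v1, v2))   # 1.0 means identical directions
            if score > best_score:
                best_score, best_pair = score, (v1, v2)
    mean = best_pair[0] + best_pair[1]
    return mean / np.linalg.norm(mean)
```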
  • A particular advantage of this 3D calculation lies in the fact that a non-contact and completely calibration-free determination of the 3D eye position and the pupil size is possible independently of the knowledge of the position of the eye relative to the camera.
  • An analytical determination of the 3D eye position and 3D viewing direction, including a 3D space model, allows any number of cameras (greater than 1) and any camera position in 3D space.
  • the short latency with the simultaneously high frame rate enables a real-time capability of the described system 1000.
  • the so-called time regimes are also fixed, so that the time differences between successive results are constant.
  • the "3D image analyzer" which includes the method for calibration-free eye tracking, at least two camera images from different perspectives have been assumed so far.At the calculation of the viewing direction, there is a point at which exactly two possible pixels per camera are used In each case, the second vector corresponds to a mirroring of the first vector at the connecting line between the camera and the pupil center point .Thus of the two vectors which result from the other camera image, exactly one vector agrees with almost one calculated from the first camera image Vector match These matching vectors indicate the direction of view to be determined.
  • Otherwise, the actual sight line vector (hereinafter referred to as "vb") must be selected from the two possible viewing directions.
  • Fig. 5b shows the visible part of the eyeball (bordered in green) with the pupil and the two possible viewing directions v1 and v2.
  • One possibility for this is to evaluate the sclera, the white dermis around the iris: two rays (starting at the pupil center), one in the direction of v1 and one in the direction of v2, are defined.
  • the two beams are projected into the camera image of the eye and run there from the pupil center to the edge of the image.
  • The ray which passes over fewer pixels belonging to the sclera belongs to the actual line of sight vector vb.
  • the pixels of the sclera differ by their gray value from those of the iris bordering on them and those of the eyelids.
  • This method reaches its limits if the face belonging to the recorded eye is too far away from the camera (ie the angle between the optical axis of the camera and the vector perpendicular to the face plane becomes too large).
  • an evaluation of the position of the pupil center can be made within the eye opening.
  • the position of the pupil center within the visible part of the eyeball or within the eye opening can be used to select the actual line of sight vector.
  • One way to do this is to define 2 rays (starting at the pupil center and infinitely long), one in the direction of v1 and one in the direction of v2.
  • the two beams are projected into the camera image of the eye and run there from the pupil center to the edge of the image.
  • the distance between the pupil center and the edge of the eye opening (shown in green in FIG. 5b) is determined along both beams in the camera image.
  • the ray that results in the shorter distance belongs to the actual line of sight vector.
  • This method reaches its limits if the face belonging to the recorded eye is too far away from the camera (ie the angle between the optical axis of the camera and the vector perpendicular to the face plane becomes too large).
  • an evaluation of the position of the pupil center can be made to a reference pupil center.
  • For this, the position of the pupil center point determined in the camera image within the visible part of the eyeball, or within the eye opening, can be used together with a reference pupil center to select the actual line of sight vector.
  • One way to do this is to define 2 rays (starting at the pupil center and infinitely long), one in the direction of v1 and one in the direction of v2.
  • the two beams are projected into the camera image of the eye and run there from the pupil center to the edge of the image.
  • the reference pupil center within the eye opening corresponds to the pupil center when the eye is looking directly toward the camera sensor center of the camera used for imaging.
  • the beam projected into the camera image, which has the smallest distance to the reference pupil center in the image belongs to the actual sight line vector.
  • To determine the reference pupil center, there are several possibilities, some of which are described below:
  • Possibility 1: The reference pupil center results from the determined pupil center in the case where the eye looks directly toward the center of the camera sensor. This is the case if the pupil contour on the virtual sensor plane (see description of the viewing direction calculation) describes a circle.
  • Possibility 2: The center of gravity of the area of the eye opening could be used as an estimate of the position of the reference pupil center. This method of estimation reaches its limits when the plane in which the face lies is not parallel to the sensor plane of the camera. This constraint can be compensated if the inclination of the face plane to the camera sensor plane is known (e.g., from a previously determined head position and orientation) and is used to correct the position of the estimated reference pupil center.
  • Possibility 3 (general applicability): If the 3D position of the eye center is available, a straight line between the 3D eye center and the virtual sensor center can be determined, as well as its intersection with the surface of the eyeball. The reference pupil center results from the position of this intersection converted into the camera image.
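  • Possibility 3 can be illustrated with the following sketch (assumptions: the eyeball is modelled as a sphere with known centre and radius, and a projection function into the camera image is available as `project_to_image`; neither is specified in this form in the text above).

```python
import numpy as np


def reference_pupil_center(eye_center, eye_radius, sensor_center, project_to_image):
    """Possibility 3: intersect the line from the 3D eye centre towards the virtual
    sensor centre with the eyeball surface (modelled as a sphere) and project the
    intersection point into the camera image."""
    direction = sensor_center - eye_center
    direction = direction / np.linalg.norm(direction)
    # The intersection towards the sensor lies one eyeball radius along the line.
    intersection_3d = eye_center + eye_radius * direction
    return project_to_image(intersection_3d)
```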
  • According to further embodiments, an ASIC (application-specific integrated circuit) can be used instead of the FPGAs 10a and 10b, which can be realized very cost-effectively, particularly in the case of high quantities.
  • The Hough processor 100 may be used in different combinations with the features presented above, in particular with those presented with respect to FIG. 5.
  • Applications of the Hough processor are, for example, microsleep detectors or fatigue detectors as driver assistance systems in the automotive sector (or, in general, for safety-relevant human-machine interfaces), whereby, by evaluating the eyes (e.g. covering of the pupil as a measure of the degree of opening) and taking into account the viewpoints and the focus, a specific fatigue pattern can be detected.
  • the Hough processor can be used on input devices or input interfaces for technical devices; Here the eye position and viewing direction are used as input parameters.
  • a concrete application here would be the support of the user when viewing screen contents, eg when highlighting certain focused areas.
  • Such applications are particularly interesting in the field of assisted living, in computer games, in the optimization of 3D visualization by including the line of vision, in market and media research or in ophthalmological diagnostics and therapies.
  • the implementation of the method presented above is platform-independent, so that the method presented above can also be executed on other units, such as a PC.
  • Another embodiment relates to a method for Hough processing comprising the steps of: processing a plurality of samples, each having an image, using a pre-processor, wherein the image of the respective sample is rotated and/or mirrored such that a plurality of versions of the image of the respective sample is output for each sample; and detecting predetermined patterns in the plurality of samples based on the plurality of versions using a Hough transform means having a delay filter with a filter characteristic, the filter characteristic being set depending on the selected predetermined pattern.
  • Fig. 4a shows a processing chain 1000 of a fast 2D correlation.
  • The processing chain of the 2D correlation comprises at least the functional blocks 1105 for the 2D convolution and 1110 for the merging.
  • The procedure for the 2D convolution is illustrated in FIG. 4b.
  • FIG. 4b also shows an exemplary compilation of templates.
  • How a Hough feature can be extracted will become apparent from Fig. 4c together with Fig. 4d.
  • FIG. 4c illustrates the pixel-by-pixel correlation with n templates (in this case, for example, for straight lines of different slope) for recognizing the ellipse 1115, while FIG. 4d shows the result of the pixel-by-pixel correlation, typically with a maximum search over the n result images.
  • Each result image contains a hough feature per pixel. Hough processing in the overall context is explained below.
  • In this case, the delay filter is replaced by a fast 2D correlation.
  • The previous delay filter is able to map n characteristics of a specific pattern. These n values are stored as templates in the memory.
  • The preprocessed image (e.g. binary edge image or gradient image) is traversed pixel by pixel.
  • At each pixel position, all stored templates (each corresponding to a characteristic of the pattern) are compared with the underlying image content; the neighborhood of the pixel position (in terms of the size of the template, so to speak) is evaluated.
  • This procedure is also referred to as correlation in digital image processing.
  • For each template, one therefore obtains a correlation value, that is, a measure of the match with the underlying image content. These correspond, as it were, to the column sums of the previous delay filter. One then decides (per pixel) for the template with the highest correlation value and remembers its template number (the template number describes the characteristic of the searched structure, e.g. the slope of the straight-line section).
  • The correlation of the individual templates with the image content can be carried out both in the spatial domain and in the frequency domain. This means that the input image is first correlated with all n templates, yielding n result images. If one stacks these result images (like a cuboid), one searches per pixel for the highest correlation value (over all levels), where the individual levels within the cuboid stand for the individual templates. The result is again a single image, which then contains a correlation measure and a template number per pixel, that is, one Hough feature per pixel.
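  • The fast 2D correlation described above can be sketched as follows (Python/SciPy; a simplified spatial-domain illustration in which the merging is the per-pixel maximum search over the n result images; template design and normalization are left out).

```python
import numpy as np
from scipy.ndimage import correlate


def hough_features_by_correlation(preprocessed, templates):
    """Correlate the preprocessed image with n templates and fuse the results.

    preprocessed : binary edge image or gradient image (2D array).
    templates    : list of n 2D arrays, each encoding one characteristic
                   (e.g. one straight-line slope) of the searched structure.
    Returns, per pixel, the best correlation value and the winning template
    number, i.e. one Hough feature per pixel.
    """
    img = preprocessed.astype(np.float32)
    results = np.stack([correlate(img, t.astype(np.float32), mode="constant")
                        for t in templates])      # n result images
    best_value = results.max(axis=0)              # degree of match per pixel
    best_template = results.argmax(axis=0)        # characteristic (template number)
    return best_value, best_template
```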
  • The microsleep detector is a system that consists of at least an image acquisition device, a lighting unit, a processing unit, and an acoustic and/or visual signaling device. By evaluating an image recorded of the user, the device is capable of detecting incipient microsleep or fatigue or distraction of the user and of warning the user.
  • the processing unit used is an embedded processor system that executes a software code on an underlying operating system.
  • the signaling device currently consists of a multi-frequency buzzer and an RGB LED.
  • the evaluation of the recorded image can take the form that in a first processing stage, face and eye detection and eye analysis are performed with a classifier. This level of processing provides initial clues for the orientation of the face, the eye positions, and the degree of lid closure.
  • An eye model used for this purpose can, for example, consist of: a pupil and/or iris position, a pupil and/or iris size, a description of the eyelids and the corners of the eyes. It is sufficient if at any time some of these components are found and evaluated. The individual components can also be tracked across multiple images so that they do not have to be completely re-searched in each image.
  • the previously described Hough features can be used to perform face detection or eye detection or eye analysis or eye-contour analysis.
  • the 2D image analyzer described above can be used for face detection or for eye detection or eye analysis.
  • the described adaptive-selective data processor can be used for smoothing the result values or intermediate results or value profiles determined during face detection or eye detection or eye analysis or eye fine analysis.
  • a temporal evaluation of the degree of lid closure and/or of the results of the eye fine analysis can be used to determine the microsleep or the tiredness or distraction of the user.
  • the calibration-free determination of the viewing direction described in connection with the 3D image analyzer can also be used to obtain better results in the determination of the microsleep or the fatigue or distraction of the user.
  • the adaptive-selective data processor can also be used.
  • the procedure for determining the eye position described in the exemplary embodiment "microsleep detector" can also be used to determine any other defined 2D position, such as a nose position or nose root position, in an image.
  • the Hough processor in the image input stage may include camera control equipment.
  • aspects described in connection with or as a method step also represent a description of a corresponding block or detail or feature of a corresponding apparatus.
  • Some or all of the method steps may be performed by an apparatus (using a hardware apparatus), such as a microprocessor, a programmable computer, or an electronic circuit. In some embodiments, some or several of the most important method steps may be performed by such an apparatus.
  • Depending on particular implementation requirements, embodiments of the invention may be implemented in hardware or in software.
  • the implementation may be performed using a digital storage medium, such as a floppy disk, a DVD, a Blu-ray Disc, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, a hard disk, or another magnetic or optical memory, on which electronically readable control signals are stored that can cooperate, or do cooperate, with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium can be computer readable.
  • some embodiments according to the invention include a data carrier having electronically readable control signals capable of interacting with a programmable computer system such that one of the methods described herein is performed.
  • embodiments of the present invention may be implemented as a computer program product having a program code, wherein the program code is operable to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored, for example, on a machine-readable carrier.
  • Other embodiments include the computer program for performing any of the methods described herein, wherein the computer program is stored on a machine-readable medium.
  • an exemplary embodiment of the method according to the invention is thus a computer program which has a program code for carrying out one of the methods described here when the computer program runs on a computer.
  • a further embodiment of the inventive method is therefore a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for performing one of the methods described herein is recorded.
  • a further embodiment of the inventive method is thus a data stream or sequence of signals representing the computer program for performing any of the methods described herein.
  • the data stream or the sequence of signals may be configured, for example, to be transferred via a data communication connection, for example via the Internet.
  • a further exemplary embodiment comprises a processing means, for example a computer or a programmable logic device, that is configured or adapted to perform one of the methods described herein.
  • Another embodiment includes a computer on which the computer program is installed to perform one of the methods described herein.
  • Another embodiment according to the invention comprises a device or system adapted to transmit a computer program for performing at least one of the methods described herein to a receiver.
  • the receiver may be, for example, a computer, a mobile device, a storage device or a similar device.
  • the device or system may include a file server for transmitting the computer program to the recipient.
  • a programmable logic device (e.g., a field programmable gate array, an FPGA) may be used to perform some or all of the functionalities of the methods described herein.
  • a field programmable gate array may cooperate with a microprocessor to perform any of the methods described herein.
  • the methods are performed by any hardware device. This may be universally applicable hardware, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.
  • the "Integrated Eyetracker” includes a collection of FPGA-optimized algorithms that are capable of extracting (elliptical) features (Hough features) from a live camera image using a parallel flough transformation
  • the calculation uses the position and shape of the ellipses in the camera images. No calibration of the system for the respective user and no knowledge of the distance between the cameras and the analyzed eye are required.
  • the image processing algorithms used are in particular characterized in that they are optimized for processing on an FPGA (field programmable gate array).
  • the algorithms enable very fast image processing with a constant frame rate, minimal latency and minimal resource consumption in the FPGA.
  • these modules are predestined for time/latency/safety critical applications (e.g., driver assistance systems), medical diagnostic systems (e.g., perimeters) as well as applications such as human-machine interfaces (e.g., for mobile devices) that require a low construction volume.
  • the overall system determines a list of multi-dimensional Hough features from two or more camera images in which the same eye is depicted, and calculates the position and shape of the pupil ellipse on the basis thereof. From the parameters of these two ellipses as well as solely from the position and orientation of the cameras relative to one another, the 3D position of the pupil center as well as the 3D viewing direction and the pupil diameter can be determined completely without calibration.
  • as a hardware platform, a combination of at least two image sensors, FPGA and/or a downstream microprocessor system is used (without a PC being absolutely necessary).
  • FIG. 6 shows a block diagram of the individual function modules in the Integrated Eyetracker.
  • the block diagram shows the individual processing stages of the Integrated Eyetracker. The following is a detailed description of the modules.
  • one or more video data streams with prepared pixel data from the input are provided.
  • the parallel Hough transformation can be applied to the image content from four main directions offset by 90 ° each
  • delay elements can be switched on and off during runtime.
  • Each column of the filter stands for a certain characteristic of the sought structure (curvature or straight-line slope).
  • For each image pixel, the filter provides a point in the Hough space containing the following information:
  • type of pattern (e.g., straight line or semicircle)
  • Hough feature to ellipse converter: the Hough core size is tracked within defined limits around the calculated ellipse.
  • the smoothing coefficient is dynamically adjusted within defined limits to the trend of the data to be smoothed: o reduction in the case of a rather constant value progression of the data series
  • This model can be used, among other things, to calculate the 3D line of sight (consisting of the center of the pupil and the directional vector (corrected according to the biology and physiology of the human eye))
  • the light beams that have depicted the 3D point as 2D points on the sensors are calculated from the transferred 2D coordinates for both cameras
  • the 3D position of the pupil center and the position of the image sensors are used to calculate virtual camera units whose optical axis passes through the 3D pupil center by rotating the real camera units
  • bitfiles can be bound to an FPGA ID -> copying would then only be possible if FPGAs with the same ID are used. Proof of patent infringement by "disassembling" the FPGA bitfile/netlist
  • microsleep detector or fatigue detector as a driver assistance system in the automotive sector, by evaluating the eyes (eg, covering the pupil as a measure of the degree of opening) and taking into account the viewpoints and the focus
  • One aspect of the invention relates to an autonomous (PC-independent) system, which in particular uses FPGA-optimized algorithms, and is suitable for detecting a face in a live camera image and determining its (spatial) position.
  • the algorithms used are characterized in particular by the fact that they are optimized for processing on an FPGA (field programmable gate array) and, in comparison to existing methods, can do without processing recursions.
  • the algorithms enable very fast image processing with a constant frame rate, minimal latency and minimal resource consumption in the FPGA.
  • modules are therefore predestined for time-critical applications (e.g., driver assistance systems) or applications such as human-machine interfaces (e.g., for mobile devices), which require a low construction volume.
  • the spatial position of the user for specific points in the image can be determined with high precision, without calibration and without contact.
  • Based on the detected face position, classifiers only deliver inaccurate eye positions (the position of the eyes - in particular the pupil center - is not analytically determined (or measured) and is therefore subject to high inaccuracy).
  • the determined face and eye positions are only available in 2D image coordinates, not in 3D
  • the overall system determines the facial position from a camera image (in which a face is depicted) and, using this position, determines the positions of the pupil centers of the left and right eye. If two or more cameras with a known orientation to one another are used, these two points can be specified in 3-dimensional space.
  • the two determined eye positions can be further processed in systems that use the "Integrated Eyetracker".
  • FIG. 7a shows a block diagram of the individual function modules in the FPGA Facetracker 800.
  • the function modules "3D camera system model" 802 and "3D position calculation" 804 are not absolutely necessary for facetracking, but they are used when a stereoscopic camera system is employed and suitable points from both camera images are offset against each other to determine spatial positions (for example, to determine the 3D head position from the 2D facial centers in both camera images).
  • the module "Feature extraction (Classification)" 806 of the FPGA Facetracker builds on the feature extraction and classification of Küblbeck/Ernst from Fraunhofer IIS (Erlangen) and uses an adapted version of its classification based on census features.
  • the block diagram shows the individual processing stages of the FPGA Facetracking System. The following is a detailed description of the modules.
  • FIG. 7b shows the output image 710 (original image) and the result 712 (downscaled image) of the parallel image scaler.
  • the image coordinates of the respective scaling stage are transformed into the image coordinate system of the target matrix on the basis of various criteria:
  • Detects a face from classification results of several scaling levels, which are arranged together in a matrix.
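  • The generation of an overview image from several differently scaled copies of an input image, with a spacing between the scaled copies and to the image borders, could be sketched in Python roughly as follows (scaling method, gap size and placement strategy are assumptions for illustration only):

      import numpy as np

      def nearest_neighbor_scale(image, factor):
          """Downscale a 2D image by a factor in (0, 1] using nearest neighbour."""
          h, w = image.shape
          rows = (np.arange(int(h * factor)) / factor).astype(int)
          cols = (np.arange(int(w * factor)) / factor).astype(int)
          return image[rows][:, cols]

      def build_overview_image(image, factors, gap=8):
          """Place several scaled copies of one image side by side in a single
          overview image, separated by a gap from each other and from the border."""
          copies = [nearest_neighbor_scale(image, f) for f in factors]
          height = max(c.shape[0] for c in copies) + 2 * gap
          width = sum(c.shape[1] for c in copies) + gap * (len(copies) + 1)
          overview = np.zeros((height, width), dtype=image.dtype)
          x = gap
          for c in copies:
              overview[gap:gap + c.shape[0], x:x + c.shape[1]] = c
              x += c.shape[1] + gap
          return overview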
  • the parallel face finder 808 is similar to the finder 706 of FIG. 1, with the finder 706 including a generalized functionality (recognition of other patterns, such as pupil recognition). As shown in Fig. 7c, the result of the classification (right) represents the input for the parallel face finder.
  • the eye search described below for each eye is performed in a defined area (eye area) within the face region provided by the "Parallel Face Finder":
  • Within the eye area, probabilities for the presence of an eye are detected by means of filters (the eye is described in this image area, in simplified terms, as a small dark area with a bright environment).
  • the exact eye position including its probability results from a maximum search in the previously calculated probability map ("parallel pupil analyzer" 812).
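  • A rough Python sketch of this eye search, modelling the eye as a small dark area with a brighter environment and taking the maximum of the resulting probability map, might look as follows (filter sizes and names are assumptions):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def find_eye_position(gray, inner=5, outer=21):
          """Rough eye localisation inside an eye region.

          The probability map is the difference between a large-window mean
          (bright environment) and a small-window mean (dark centre); the exact
          position is the maximum of this map.
          """
          img = gray.astype(float)
          prob = uniform_filter(img, size=outer) - uniform_filter(img, size=inner)
          y, x = np.unravel_index(np.argmax(prob), prob.shape)
          return (x, y), prob[y, x]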
  • a set of filter parameters can be used when initializing the
  • the current input value is used for smoothing if it does not fall into one of the following categories:
  • the corresponding downscaling level is a nonsensical value (value found in a downscaling level that is too far away)
  • the smoothing coefficient is dynamically adjusted within defined limits to the trend of the data to be smoothed: o Reduction in the case of more or less constant value progression of the data series
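  • An adaptively smoothed data series in the sense described above could be sketched in Python as follows (thresholds, the outlier rule and the way the coefficient is adapted are illustrative assumptions):

      def adaptive_smooth(series, k_min=0.05, k_max=0.6, jump_thresh=10.0):
          """Adaptively smoothed copy of a 1-D data series.

          The smoothing coefficient is kept small while the series is roughly
          constant and enlarged when a consistent trend appears; values that
          jump implausibly far from the current estimate are treated as
          outliers and do not update the estimate.
          """
          smoothed, estimate, k = [], None, k_min
          for value in series:
              if estimate is None:
                  estimate = value
              elif abs(value - estimate) > jump_thresh:
                  pass                          # nonsensical value: keep old estimate
              else:
                  trend = abs(value - estimate)
                  # reduce k for an almost constant progression, raise it otherwise
                  k = max(k_min, min(k_max, trend / jump_thresh * k_max))
                  estimate = (1.0 - k) * estimate + k * value
              smoothed.append(estimate)
          return smoothed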
  • Configuration file containing the model parameters (attitude parameters, optical parameters, etc.) of all elements of the model
  • the model includes the following elements at the present time:
  • the viewpoint of a viewer on another object in the 3D model can be calculated as well as the focused area of the viewer
  • o Function: calculation of the spatial position (3D coordinates) of a point captured by two or more cameras (e.g., the pupil center)
  • o Error measure: describes the accuracy of the passed 2D coordinates in conjunction with the model parameters
  • From the transferred 2D coordinates, the light beams that have reproduced the 3D point as 2D points on the sensors are calculated for both cameras. Light beams are described as straight lines in the 3D space of the model.
  • bitfiles can be bound to an FPGA ID - copying would then only be possible if FPGAs with the same ID are used
  • microsleep detector in the automotive sector by evaluating the eyes (degree of opening) and the eye and head movement o human-machine communication
  • Head and eye parameters (inter alia position)
  • an elliptical pupil projection is produced on the image sensors 802a and 802b (see FIG.
  • the center of the pupil is always imaged on both sensors 802a and 802b and thus also in the corresponding camera images as the ellipse centers E_MP_K1 and E_MP_K2.
  • Therefore, the 3D pupil center can be determined by stereoscopic back projection of these two ellipse centers E_MP_K1 and E_MP_K2 using the lens model.
  • An optional prerequisite for this is an ideally time-synchronized image acquisition so that the scenes reproduced by both cameras are identical and thus the pupil center has been recorded at the same position.
  • the back projection beam RS of the ellipse center has to be calculated, which runs along the node beam between the object and the object-side node (H1) of the optical system (Fig. 8a).
  • RS(t) = RS_0 + t · RS_n (A1)
  • This back projection beam is defined by equation (A1). It consists of a starting point RS_0 and a normalized direction vector RS_n, which are determined as described below.
  • the 3D ellipse center in the camera coordinate system can be determined from the previously determined ellipse center parameters x_m and y_m, which are present in image coordinates, by means of an equation in which:
  • P_Bild is the resolution of the camera image in pixels
  • S_Offset is the position on the sensor at which the image is read out
  • S_res is the resolution of the sensor
  • S_PxGr is the pixel size of the sensor.
  • the desired pupil center is ideally the intersection of the two back projection beams RS_K1 and RS_K2.
  • Two straight lines in this constellation which neither intersect nor run parallel, are called skewed straight lines in geometry.
  • in practice, however, the two skew straight lines each pass very close to the pupil center.
  • the pupil center is assumed to be at the position of their smallest distance from each other, halfway between the two lines.
  • the shortest distance between two skewed straight lines is indicated by a connecting line perpendicular to both straight lines.
  • the connecting vector perpendicular to both back projection beams can be calculated according to equation (A4) as the cross product of their direction vectors.
  • The location of the shortest connecting line between the back projection beams is defined by equation (A5).
  • By equating this with RS_K1(s) and RS_K2(t), a system of equations results from which s, t and u can be calculated.
  • the sought pupil center P_MP, which lies halfway between the back projection beams, thus results from equation (A6) after inserting the values calculated for s and u.
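  • The geometric construction of equations (A4) to (A6), i.e. the midpoint of the shortest connection between the two skew back projection beams, could be sketched in Python as follows (variable names are illustrative; the beams are assumed not to be parallel):

      import numpy as np

      def pupil_center_from_back_projections(rs0_k1, rsn_k1, rs0_k2, rsn_k2):
          """Midpoint of the shortest connection between two skew beams
          RS_K1(s) = rs0_k1 + s*rsn_k1 and RS_K2(t) = rs0_k2 + t*rsn_k2."""
          n = np.cross(rsn_k1, rsn_k2)                # perpendicular to both beams
          # solve rs0_k1 + s*rsn_k1 + u*n = rs0_k2 + t*rsn_k2 for s, t, u
          A = np.column_stack((rsn_k1, -rsn_k2, n))
          s, t, u = np.linalg.solve(A, rs0_k2 - rs0_k1)
          p1 = rs0_k1 + s * rsn_k1                    # closest point on beam 1
          p2 = rs0_k2 + t * rsn_k2                    # closest point on beam 2
          return 0.5 * (p1 + p2)                      # pupil center halfway between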
  • the calculated pupil center is one of the two parameters that determine the line of sight of the eye. It is also needed to calculate the line-of-sight vector, which is described below.
  • the line-of-sight vector P_n to be determined corresponds to the normal vector of the circular pupil surface and is thus determined by the orientation of the pupil in 3D space.
  • from the projections of the pupil onto the sensors, in addition to the position, the orientation of the pupil can also be determined.
  • the lengths of the two half-axes and the rotation angle of the projected ellipses are characteristic of the orientation of the pupil or the viewing direction relative to the camera positions.
  • An approach for calculating the viewing direction from the ellipse parameters and fixed distances between the cameras and the eye in the eye tracking system is described, for example, in patent DE 10 2004 046 617 A1.
  • this approach is based on a parallel projection, wherein the straight line defined by the sensor normal and the center point of the pupil projected onto the sensor runs through the pupil center.
  • the distances between the cameras and the eye must be known in advance and stored firmly in the eye-tracking system.
  • the model of the camera objective used in the approach presented here, which describes the imaging behavior of a real objective, assumes a perspective projection of the object onto the image sensor. As a result, the calculation of the pupil center can be made as described above, and the distances of the cameras to the eye need not be known in advance, which is one of the significant innovations over the above-mentioned patent.
  • with the perspective projection, the shape of the pupil ellipse imaged on the sensor does not result solely from the inclination of the pupil with respect to the sensor surface, unlike with a parallel projection.
  • the deflection of the pupil center from the optical axis of the camera objective, as shown in Fig. 8b, also has an influence on the shape of the pupil projection and thus on the ellipse parameters determined therefrom.
  • the distance between pupil and camera, at several hundred millimeters, is very large compared to the pupil radius, which is between 2 mm and 8 mm. Therefore, the deviation of the pupil projection from an ideal ellipse shape, which arises at a tilt of the pupil with respect to the optical axis, becomes very small and can be neglected.
  • the influence of the deflection angle on the ellipse parameters, however, has to be eliminated, so that the shape of the pupil projection is influenced solely by the orientation of the pupil. This is always the case when the pupil center P_MP lies directly in the optical axis of the camera system. Therefore, the influence of the deflection angle can be eliminated by calculating the pupil projection on the sensor of a virtual camera system vK whose optical axis passes directly through the previously calculated pupil center P_MP, as shown in Fig. 8c.
  • the position and orientation of such a virtual camera system 804a' (vK in Fig. 8c) can be calculated from the parameters of the original camera system 804a (K in Fig. 8c).
  • the normalized normal vector vK_n of the virtual camera vK is as follows:
  • by rotating the unit vectors of the eye tracker coordinate system by the corresponding angles, the vectors vK_x and vK_y, which form the x- and y-axes of the virtual camera, can be calculated.
  • the required distance d between the main points and the distance b between main plane 2 and the sensor plane must be known or, e.g., be determined experimentally with a test setup.
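  • A simplified Python sketch of how the axes of such a virtual camera could be obtained, with the optical axis running from the object-side main point H1 through the previously calculated pupil center, is given below (the patent derives these axes via rotation angles; the orthonormal completion with an assumed 'up' direction is a simplification, and the pupil center must not lie on that 'up' axis):

      import numpy as np

      def virtual_camera_axes(h1, pupil_center, up_hint=np.array([0.0, 1.0, 0.0])):
          """Axes of a virtual camera whose optical axis runs from the main
          point H1 through the pupil center (illustrative sketch)."""
          vk_n = pupil_center - h1
          vk_n = vk_n / np.linalg.norm(vk_n)          # normalised optical axis
          vk_x = np.cross(up_hint, vk_n)
          vk_x = vk_x / np.linalg.norm(vk_x)          # virtual x-axis
          vk_y = np.cross(vk_n, vk_x)                 # virtual y-axis
          return vk_x, vk_y, vk_n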
  • first, edge points RP_2D of the previously determined ellipse on the sensor in the original position are required.
  • these are edge points of the ellipse E in the camera image, corresponding to Fig. 8d, where E_a is the short half-axis of the ellipse
  • E_b is the long half-axis of the ellipse
  • E_xm and E_ym are the center coordinates of the ellipse
  • E_α is the rotation angle of the ellipse. The position of a point RP_3D in the eye tracker coordinate system can be calculated by equations (A11) to (A14) from the parameters of the ellipse E, the sensor S and the camera K, where ω indicates the position of an edge point RP_2D on the ellipse according to Fig. 8d.
  • the virtual node point beam and the virtual sensor plane, which corresponds to the x-y plane of the virtual camera vK, are set equal in equation (A16), and the parameters of their intersection result from resolving the system of equations. With these, the ellipse edge point can be calculated by equation (A17) in pixel coordinates in the image of the virtual camera.
  • the parameters of the virtual ellipse vE shown in Figure 8c can be computed using at least six virtual edge points vRP_2D, which can be calculated by the route described above by substituting different ω into equation (A11).
  • the shape of the virtual ellipse vE thus determined depends only on the orientation of the pupil. Moreover, its center is always in the center of the virtual sensor and, together with the sensor normal, which corresponds to the camera normal vK_n, forms a straight line running along the optical axis through the pupil center P_MP. Thus, the prerequisites are fulfilled for subsequently calculating the line of sight based on the approach presented in the patent DE 10 2004 046 617 A1. With this approach, by using the virtual camera system described above, it is now also possible to determine the viewing direction when the pupil center is located outside the optical axis of the real camera system, which is almost always the case in real applications.
  • the previously calculated virtual ellipse vE is now assumed to lie in the virtual main plane 1. Since the center of vE is at the center of the virtual sensor and thus on the optical axis, the 3D ellipse center vE'_MP corresponds to the virtual main point 1. It is also the foot of the perpendicular from the pupil center P_MP onto the virtual main plane 1. Subsequently, only the axis ratios and the rotation angle of the ellipse vE are used.
  • these form parameters of vE can also be used unchanged with respect to main plane 1, since the orientations of the x- and y-axes of the 2D sensor plane to which they relate correspond to the orientation of the 3D sensor plane and thus also to the orientation of main plane 1.
  • Each image of the pupil 806a in a camera image can be formed by two different orientations of the pupil.
  • two virtual intersection points vS of the two possible straight lines with the virtual main plane 1 thus result for each camera.
  • the two possible viewing directions can be determined as follows.
  • the distance A between the known pupil center and the ellipse center vE'_MP is:
  • the angle w_diff between the two vectors determined for camera K1 and camera K2, which indicate the actual viewing direction, can be calculated.
  • the smaller w_diff is, the more accurate were the model parameters and ellipse centers used for the previous calculations.
  • the viewing angles with respect to the normal position of the pupil (P_n lies parallel to the z-axis of the eye tracker coordinate system) can be calculated using the equations
  • LoS(t) = P_MP + t · P_n
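  • The selection of the physically correct viewing direction from the two candidate normals per camera, together with the line of sight LoS(t) = P_MP + t · P_n, could be sketched in Python as follows (candidate vectors are assumed to be normalized; names are illustrative):

      import numpy as np

      def select_gaze_direction(cands_k1, cands_k2, pupil_center):
          """Pick the candidate pair with the smallest angle w_diff between the
          two cameras, average it to P_n and return the line of sight."""
          best = None
          for v1 in cands_k1:
              for v2 in cands_k2:
                  w_diff = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
                  if best is None or w_diff < best[0]:
                      best = (w_diff, v1, v2)
          p_n = best[1] + best[2]
          p_n = p_n / np.linalg.norm(p_n)             # averaged gaze vector P_n
          line_of_sight = lambda t: pupil_center + t * p_n
          return p_n, line_of_sight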
  • the implementation of the method presented above is platform-independent, so that the method presented above can also be executed on various hardware platforms, e.g., a PC.
  • the aim of the following is to develop, on the basis of the parallel Hough transformation, a robust method for feature extraction. This is done by revising the Houghcore and introducing a feature extraction method that reduces the results of the transformation and breaks them down to a few "feature vectors" per image. The newly developed method is then implemented and tested in a Matlab toolbox and finally transferred into an FPGA implementation of the new procedure.
  • the parallel Hough transformation uses Houghcores of different sizes, which must be configured using configuration matrices for each application.
  • the mathematical relationships and methods for creating such configuration matrices are shown below.
  • the Matlab script calc_config_lines_curvatures.m uses these methods to create straight-line and semicircle configuration matrices of various sizes.
  • To create the configuration matrices it is first necessary to calculate a set of curves in discrete representation and for different Houghcore sizes.
  • the requirements (formation rules) on the family of curves have already been shown. Taking these formation rules into account, straight lines and semicircles in particular are suitable for configuring the Houghcores. For the viewing direction determination
  • the slope can be tuned via the variable y_core, which runs from 0 to core_height.
  • only the radius is still missing, which is obtained by inserting (B6) into (B7) and by further transformations.
  • variable h must be from 0 to
  • the variable y_core, which runs from 0 to core_height
  • the circle configurations always represent circular arcs around the vertex of the semicircle. Only the largest y-index of the family of curves (smallest radius) represents a complete semicircle.
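  • A Python sketch of such formation rules, producing discrete straight-line and circular-arc configuration curves for a given Houghcore size, could look as follows (the exact matrices produced by the Matlab script may differ; this only illustrates the principle):

      import numpy as np

      def line_configurations(core_width, core_height):
          """Binary configuration matrices for straight lines whose slope is
          tuned via y_core from 0 to core_height (illustrative sketch)."""
          configs = []
          for y_core in range(core_height + 1):
              m = np.zeros((core_height + 1, core_width), dtype=np.uint8)
              x = np.arange(core_width)
              y = np.round(x * y_core / max(core_width - 1, 1)).astype(int)
              m[y, x] = 1                              # discrete straight line
              configs.append(m)
          return configs

      def semicircle_configurations(core_width, core_height):
          """Circular arcs around the vertex of a semicircle; only the largest
          y index (smallest radius) yields a complete semicircle."""
          configs, half = [], (core_width - 1) / 2.0
          for y_core in range(1, core_height + 1):
              r = (half ** 2 + y_core ** 2) / (2.0 * y_core)  # radius through vertex
              m = np.zeros((core_height + 1, core_width), dtype=np.uint8)
              x = np.arange(core_width)
              y = np.round(r - np.sqrt(r ** 2 - (x - half) ** 2)).astype(int)
              m[y, x] = 1                              # discrete circular arc
              configs.append(m)
          return configs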
  • the developed configurations can be used for the new Houghcore.
  • a major disadvantage of Holland-Neil's FPGA implementation is the rigid configuration of the Houghcores.
  • the delay lines must be parameterized before the synthesis and are then permanently stored in the hardware structures (Holland-Neil, pp. 48-49). Changes during runtime (e.g., the Houghcore size) are no longer possible. The new method is intended to be more flexible at this point. The new Houghcore should be completely reconfigurable during runtime in the FPGA as well.
  • delay elements of the previous Houghcore structure consist of a delay and a bypass, and before the FPGA synthesis it is determined which path is to be used.
  • this structure is extended by one multiplexer, another register for configuring the delay element (switching of the multiplexer) and a pipeline delay.
  • the configuration registers can be modified during runtime. In this way, different configuration matrices can be imported into the Houghcore.
  • the synthesis tool in the FPGA has more freedom in implementing the Houghcore design, and higher clock rates can be achieved.
  • Pipeline delays break through time-critical paths within the FPGA structures. In Fig. 9d, the new design of the delay elements is illustrated.
  • the delay elements of the new Houghcore have a somewhat more complex structure.
  • An additional register is required for flexible configuration of the delay element and the multiplexer occupies additional logic resources (must be implemented in the FPGA in a LUT).
  • the pipeline delay is optional.
  • modifications were also made to the design of the Houghcore.
  • the new Houghcore is illustrated in Fig. 9e.
  • Each column element requires one clock cycle and there is a latency of a few clock cycles through the BRAM and configuration logic.
  • the overall latency for reconfiguration is disadvantageous, but can be accepted for video-based image processing. Usually, video streams recorded with a CMOS sensor have horizontal and vertical blanking; the reconfiguration can then easily be done in the horizontal blanking time.
  • the size of the Houghcore structure implemented in the FPGA also dictates the maximum size possible for Houghcore configurations. When small configurations are used, they are vertically centered and aligned horizontally at column 1 of the Houghcore structure (see Fig. 9f). Unused elements of the Houghcore structure are all filled with delays. The correct alignment of smaller configurations is important for the correction of the x-coordinates (see formulas (B17) to (B19)).
  • the Houghcore is fed as before with a binary edge image that passes through the configured delay lines.
  • the column sums are calculated over the entire Houghcore and compared, in each case, with the sum signal of the previous column. If a column returns a higher total, the sum value of the previous column is overwritten.
  • the new Houghcore returns a column sum value and its associated column number. On the basis of these values, a statement can later be made about which structure was found (represented by the column number) and with which probability of occurrence it was detected (represented by the sum value).
  • the output signal of the Houghcore can also be referred to as the Hough space or accumulator space.
  • the Hough space of the parallel Hough transformation is present in the image coordinate system.
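  • The column-wise evaluation described above, returning the highest column sum together with its column number for the current core contents, could be sketched in Python as follows (the binary core-state matrix is an assumed software stand-in for the configured delay lines):

      import numpy as np

      def houghcore_output(core_state):
          """Column sums of a Houghcore state, compared one after another with
          the previous best value; returns the highest sum (probability of
          occurrence) and its column number (found characteristic)."""
          column_sums = core_state.sum(axis=0)
          best_sum, best_col = -1, -1
          for col, s in enumerate(column_sums):
              if s > best_sum:                         # higher total overwrites
                  best_sum, best_col = int(s), col
          return best_sum, best_col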
  • the feature extraction works on the records from the previous table. These data sets can be summarized in a feature vector (B16).
  • the feature vector can also be referred to below as a Hough feature.
  • MV = [MV_x, MV_y, MV_o, MV_KS, MV_H, MV_G-1, MV_A] (B16)
  • a feature vector consists of the x- and y-coordinates of the found feature (MV_x and MV_y), the orientation MV_o, the curvature strength MV_KS, the frequency MV_H, the Houghcore size MV_G-1 and the type of the found structure MV_A.
  • the detailed meaning and the value range of the individual elements of the feature vector can be seen in the following table.
  • the two elements MV_o and MV_KS have different meanings for straight lines and semicircles.
  • for straight lines, the combination of orientation and curvature strength forms the position angle of the detected straight line section in the angular range from 0° to 180°.
  • the orientation addresses an angular range and the curvature strength stands for a concrete angle within this range. The larger the Houghcore (the more Houghcore columns are present), the finer is the angular resolution.
  • for semicircles, the orientation is the position angle or the alignment of the semicircle. In principle, semicircles can be detected in only four orientations.
  • in semicircle configurations, the curvature strength stands for the radius.
  • the floor statement rounds down the fractional rational number; in the FPGA this corresponds to simply cutting off the binary decimal places.
  • the actual feature extraction can take place.
  • three thresholds are used in combination with a non-maximum suppression operator.
  • the non-maximum-suppression operator differs in straight lines and semicircles.
  • a minimum and a maximum curvature strength MV_KS are specified via the threshold values, and a minimum frequency MV_H is set.
  • the non-maximum suppression operator can be considered a 3x3 local operator (see Figure 9h).
  • a valid semicircle feature (or curvature) always arises when the condition of the nms operator in (B23) is satisfied and the thresholds are exceeded according to formulas (B20) to (B22).
  • Non-maximum suppression suppresses Hough features that do not represent local maxima in the frequency space of the feature vectors. In this way, Hough features that do not contribute to the sought structure and are irrelevant for post-processing are suppressed.
  • the feature extraction is thus parameterized only via three thresholds that can be sensibly adjusted in advance. A detailed explanation of the threshold values can be found in the following table.
  • Threshold for a minimum frequency, i.e., a Hough column sum value that must not be undershot.
  • behaves like the threshold for the minimum curvature strength, only for a maximum curvature strength.
  • Which nms operator to use depends on the Houghcore type as well as the angle range.
  • the angular range that a Houghcore covers with straight-line configurations is divided by the angular range bisector.
  • the angular range bisector can be given as a (decimally fractional) Houghcore column.
  • the mathematical relationship depending on the Houghcore size is described by formula (B24).
  • the angular range of the Hough feature is determined based on the Houghcore column that delivered the hit (MV_KS), which can be directly compared with the angular-range-bisecting Houghcore column.
  • the condition can be queried via the respective nms operator, similar to the non-maximum suppression for curvatures (formulas (B25) to (B27)). If all conditions are fulfilled and if, additionally, the threshold values according to formulas (B20) to (B22) are exceeded, the Hough feature at position nms_2,2 can be adopted.
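  • A Python sketch of this feature extraction for curvatures, combining the three thresholds of formulas (B20) to (B22) with a 3x3 non-maximum suppression in the spirit of (B23), could look as follows (parameter names and the tie-breaking rule are assumptions):

      import numpy as np

      def extract_hough_features(freq, ks, h_min, ks_min, ks_max):
          """Per-pixel frequency (MV_H) and curvature strength (MV_KS) maps are
          thresholded, and only pixels that are the local maximum of their 3x3
          neighbourhood are accepted as Hough features."""
          features = []
          h, w = freq.shape
          for y in range(1, h - 1):
              for x in range(1, w - 1):
                  window = freq[y - 1:y + 2, x - 1:x + 2]
                  if (freq[y, x] >= h_min and ks_min <= ks[y, x] <= ks_max
                          and freq[y, x] == window.max()
                          and np.argmax(window) == 4):   # centre nms_2,2 wins ties
                      features.append((x, y, float(freq[y, x]), float(ks[y, x])))
          return features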

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Eye Examination Apparatus (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Complex Calculations (AREA)

Abstract

The invention relates to a 2D image analyzer comprising an image scaler, an image generator and a pattern finder. The image scaler is designed to scale a received image according to a scaling factor. The image generator is designed to generate an overview image that comprises a plurality of copies of the received and scaled image, each copy being scaled by a different scaling factor. The respective position can be calculated by an algorithm that takes into account a spacing between the scaled images in the overview image, a spacing of the scaled images from one or more borders of the overview image and/or other predefined conditions. The pattern finder is designed to perform a feature transformation and a classification of the overview image and to output a position at which the match between the searched pattern and the predefined pattern is at a maximum. Optionally, the 2D image analyzer may also comprise a post-processing device for smoothing and position correction of local maxima in the classified overview image.
EP15702739.2A 2014-02-04 2015-01-30 Analyseur d'image 2d Withdrawn EP3103060A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014201997 2014-02-04
PCT/EP2015/052009 WO2015117906A1 (fr) 2014-02-04 2015-01-30 Analyseur d'image 2d

Publications (1)

Publication Number Publication Date
EP3103060A1 true EP3103060A1 (fr) 2016-12-14

Family

ID=52434840

Family Applications (4)

Application Number Title Priority Date Filing Date
EP21203252.8A Withdrawn EP3968288A2 (fr) 2014-02-04 2015-01-30 Analyseur d'image 3d pour déterminer une direction du regard
EP15701823.5A Withdrawn EP3103059A1 (fr) 2014-02-04 2015-01-30 Analyseur d'image 3d pour déterminer une direction du regard
EP15701822.7A Ceased EP3103058A1 (fr) 2014-02-04 2015-01-30 Processeur hough
EP15702739.2A Withdrawn EP3103060A1 (fr) 2014-02-04 2015-01-30 Analyseur d'image 2d

Family Applications Before (3)

Application Number Title Priority Date Filing Date
EP21203252.8A Withdrawn EP3968288A2 (fr) 2014-02-04 2015-01-30 Analyseur d'image 3d pour déterminer une direction du regard
EP15701823.5A Withdrawn EP3103059A1 (fr) 2014-02-04 2015-01-30 Analyseur d'image 3d pour déterminer une direction du regard
EP15701822.7A Ceased EP3103058A1 (fr) 2014-02-04 2015-01-30 Processeur hough

Country Status (6)

Country Link
US (3) US10192135B2 (fr)
EP (4) EP3968288A2 (fr)
JP (3) JP6268303B2 (fr)
KR (2) KR101991496B1 (fr)
CN (3) CN106104573A (fr)
WO (4) WO2015117905A1 (fr)

Families Citing this family (131)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022664A1 (en) 2012-01-20 2015-01-22 Magna Electronics Inc. Vehicle vision system with positionable virtual viewpoint
WO2013173728A1 (fr) 2012-05-17 2013-11-21 The University Of North Carolina At Chapel Hill Procédés, systèmes, et support lisible par ordinateur pour une acquisition de scène et un suivi de pose unifiés dans un dispositif d'affichage portable
CN104715227B (zh) * 2013-12-13 2020-04-03 北京三星通信技术研究有限公司 人脸关键点的定位方法和装置
WO2015117905A1 (fr) * 2014-02-04 2015-08-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Analyseur d'image 3d pour déterminer une direction du regard
DE102015202846B4 (de) 2014-02-19 2020-06-25 Magna Electronics, Inc. Fahrzeugsichtsystem mit Anzeige
WO2015198477A1 (fr) 2014-06-27 2015-12-30 株式会社Fove Dispositif de détection de regard
US10318067B2 (en) * 2014-07-11 2019-06-11 Hewlett-Packard Development Company, L.P. Corner generation in a projector display area
US20180227735A1 (en) * 2014-08-25 2018-08-09 Phyziio, Inc. Proximity-Based Attribution of Rewards
US11049476B2 (en) 2014-11-04 2021-06-29 The University Of North Carolina At Chapel Hill Minimal-latency tracking and display for matching real and virtual worlds in head-worn displays
KR20160094190A (ko) * 2015-01-30 2016-08-09 한국전자통신연구원 시선 추적 장치 및 방법
JP6444233B2 (ja) * 2015-03-24 2018-12-26 キヤノン株式会社 距離計測装置、距離計測方法、およびプログラム
US20160363995A1 (en) * 2015-06-12 2016-12-15 Seeing Machines Limited Circular light element for illumination of cornea in head mounted eye-tracking
CN105511093B (zh) * 2015-06-18 2018-02-09 广州优视网络科技有限公司 3d成像方法及装置
US9798950B2 (en) * 2015-07-09 2017-10-24 Olympus Corporation Feature amount generation device, feature amount generation method, and non-transitory medium saving program
US9786715B2 (en) 2015-07-23 2017-10-10 Artilux Corporation High efficiency wide spectrum sensor
US10707260B2 (en) 2015-08-04 2020-07-07 Artilux, Inc. Circuit for operating a multi-gate VIS/IR photodiode
US10861888B2 (en) 2015-08-04 2020-12-08 Artilux, Inc. Silicon germanium imager with photodiode in trench
US10761599B2 (en) 2015-08-04 2020-09-01 Artilux, Inc. Eye gesture tracking
WO2017024121A1 (fr) 2015-08-04 2017-02-09 Artilux Corporation Appareil de détection de lumière à base de germanium-silicium
US10616149B2 (en) * 2015-08-10 2020-04-07 The Rocket Science Group Llc Optimizing evaluation of effectiveness for multiple versions of electronic messages
CN115824395B (zh) 2015-08-27 2023-08-15 光程研创股份有限公司 宽频谱光学传感器
JP6634765B2 (ja) * 2015-09-30 2020-01-22 株式会社ニデック 眼科装置、および眼科装置制御プログラム
WO2017059581A1 (fr) * 2015-10-09 2017-04-13 SZ DJI Technology Co., Ltd. Positionnement de véhicule basé sur des particularités
US10254389B2 (en) 2015-11-06 2019-04-09 Artilux Corporation High-speed light sensing apparatus
US10418407B2 (en) 2015-11-06 2019-09-17 Artilux, Inc. High-speed light sensing apparatus III
US10886309B2 (en) 2015-11-06 2021-01-05 Artilux, Inc. High-speed light sensing apparatus II
US10741598B2 (en) 2015-11-06 2020-08-11 Atrilux, Inc. High-speed light sensing apparatus II
US10739443B2 (en) 2015-11-06 2020-08-11 Artilux, Inc. High-speed light sensing apparatus II
CN106200905B (zh) * 2016-06-27 2019-03-29 联想(北京)有限公司 信息处理方法及电子设备
JP2019531560A (ja) 2016-07-05 2019-10-31 ナウト, インコーポレイテッドNauto, Inc. 自動運転者識別システムおよび方法
JP6799063B2 (ja) * 2016-07-20 2020-12-09 富士フイルム株式会社 注目位置認識装置、撮像装置、表示装置、注目位置認識方法及びプログラム
CN105954992B (zh) * 2016-07-22 2018-10-30 京东方科技集团股份有限公司 显示系统和显示方法
GB2552511A (en) * 2016-07-26 2018-01-31 Canon Kk Dynamic parametrization of video content analytics systems
US10417495B1 (en) * 2016-08-08 2019-09-17 Google Llc Systems and methods for determining biometric information
US10209081B2 (en) 2016-08-09 2019-02-19 Nauto, Inc. System and method for precision localization and mapping
US10733460B2 (en) 2016-09-14 2020-08-04 Nauto, Inc. Systems and methods for safe route determination
JP6587254B2 (ja) * 2016-09-16 2019-10-09 株式会社東海理化電機製作所 輝度制御装置、輝度制御システム及び輝度制御方法
EP3305176A1 (fr) * 2016-10-04 2018-04-11 Essilor International Procédé de détermination d'un paramètre géométrique d'un il d'un sujet
US11361003B2 (en) * 2016-10-26 2022-06-14 salesforcecom, inc. Data clustering and visualization with determined group number
US10246014B2 (en) 2016-11-07 2019-04-02 Nauto, Inc. System and method for driver distraction determination
CN110192390A (zh) * 2016-11-24 2019-08-30 华盛顿大学 头戴式显示器的光场捕获和渲染
EP3523777A4 (fr) * 2016-12-06 2019-11-13 SZ DJI Technology Co., Ltd. Système et procédé de correction d'une image grand angle
DE102016224886B3 (de) * 2016-12-13 2018-05-30 Deutsches Zentrum für Luft- und Raumfahrt e.V. Verfahren und Vorrichtung zur Ermittlung der Schnittkanten von zwei sich überlappenden Bildaufnahmen einer Oberfläche
CN110121689A (zh) * 2016-12-30 2019-08-13 托比股份公司 眼睛/注视追踪系统和方法
US10282592B2 (en) * 2017-01-12 2019-05-07 Icatch Technology Inc. Face detecting method and face detecting system
DE102017103721B4 (de) * 2017-02-23 2022-07-21 Karl Storz Se & Co. Kg Vorrichtung zur Erfassung eines Stereobilds mit einer rotierbaren Blickrichtungseinrichtung
KR101880751B1 (ko) * 2017-03-21 2018-07-20 주식회사 모픽 무안경 입체영상시청을 위해 사용자 단말과 렌티큘러 렌즈 간 정렬 오차를 줄이기 위한 방법 및 이를 수행하는 사용자 단말
JP7003455B2 (ja) * 2017-06-15 2022-01-20 オムロン株式会社 テンプレート作成装置、物体認識処理装置、テンプレート作成方法及びプログラム
US10430695B2 (en) 2017-06-16 2019-10-01 Nauto, Inc. System and method for contextualized vehicle operation determination
US10453150B2 (en) 2017-06-16 2019-10-22 Nauto, Inc. System and method for adverse vehicle event determination
EP3420887A1 (fr) 2017-06-30 2019-01-02 Essilor International Procédé de détermination de la position du centre de rotation de l'oeil d'un sujet et dispositif associé
EP3430973A1 (fr) * 2017-07-19 2019-01-23 Sony Corporation Système et procédé mobile
JP2019017800A (ja) * 2017-07-19 2019-02-07 富士通株式会社 コンピュータプログラム、情報処理装置及び情報処理方法
KR101963392B1 (ko) * 2017-08-16 2019-03-28 한국과학기술연구원 무안경식 3차원 영상표시장치의 동적 최대 시역 형성 방법
WO2019039997A1 (fr) * 2017-08-25 2019-02-28 Maker Trading Pte Ltd Système général de vision artificielle monoculaire et procédé d'identification d'emplacements d'éléments cibles
US10460458B1 (en) * 2017-09-14 2019-10-29 United States Of America As Represented By The Secretary Of The Air Force Method for registration of partially-overlapped aerial imagery using a reduced search space methodology with hybrid similarity measures
CN107818305B (zh) * 2017-10-31 2020-09-22 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备和计算机可读存储介质
EP3486834A1 (fr) * 2017-11-16 2019-05-22 Smart Eye AB Détection d'une pose d'un il
CN108024056B (zh) * 2017-11-30 2019-10-29 Oppo广东移动通信有限公司 基于双摄像头的成像方法和装置
KR102444666B1 (ko) * 2017-12-20 2022-09-19 현대자동차주식회사 차량용 3차원 입체 영상의 제어 방법 및 장치
CN108334810B (zh) * 2017-12-25 2020-12-11 北京七鑫易维信息技术有限公司 视线追踪设备中确定参数的方法和装置
JP7109193B2 (ja) 2018-01-05 2022-07-29 ラピスセミコンダクタ株式会社 操作判定装置及び操作判定方法
CN108875526B (zh) * 2018-01-05 2020-12-25 北京旷视科技有限公司 视线检测的方法、装置、系统及计算机存储介质
US10853674B2 (en) 2018-01-23 2020-12-01 Toyota Research Institute, Inc. Vehicle systems and methods for determining a gaze target based on a virtual eye position
US10817068B2 (en) * 2018-01-23 2020-10-27 Toyota Research Institute, Inc. Vehicle systems and methods for determining target based on selecting a virtual eye position or a pointing direction
US10706300B2 (en) * 2018-01-23 2020-07-07 Toyota Research Institute, Inc. Vehicle systems and methods for determining a target based on a virtual eye position and a pointing direction
US11105928B2 (en) 2018-02-23 2021-08-31 Artilux, Inc. Light-sensing apparatus and light-sensing method thereof
TWI762768B (zh) 2018-02-23 2022-05-01 美商光程研創股份有限公司 光偵測裝置
EP3759700B1 (fr) 2018-02-27 2023-03-15 Nauto, Inc. Procédé de détermination de directives de conduite
WO2019185150A1 (fr) * 2018-03-29 2019-10-03 Tobii Ab Détermination d'une direction du regard à l'aide d'informations de profondeur
CN114335030A (zh) 2018-04-08 2022-04-12 奥特逻科公司 光探测装置
CN108667686B (zh) * 2018-04-11 2021-10-22 国电南瑞科技股份有限公司 一种网络报文时延测量的可信度评估方法
KR20190118965A (ko) * 2018-04-11 2019-10-21 주식회사 비주얼캠프 시선 추적 시스템 및 방법
WO2019199035A1 (fr) * 2018-04-11 2019-10-17 주식회사 비주얼캠프 Système et procédé de suivi du regard
US10854770B2 (en) 2018-05-07 2020-12-01 Artilux, Inc. Avalanche photo-transistor
US10969877B2 (en) 2018-05-08 2021-04-06 Artilux, Inc. Display apparatus
CN108876733B (zh) * 2018-05-30 2021-11-09 上海联影医疗科技股份有限公司 一种图像增强方法、装置、设备和存储介质
US10410372B1 (en) * 2018-06-14 2019-09-10 The University Of North Carolina At Chapel Hill Methods, systems, and computer-readable media for utilizing radial distortion to estimate a pose configuration
US10803618B2 (en) * 2018-06-28 2020-10-13 Intel Corporation Multiple subject attention tracking
CN109213031A (zh) * 2018-08-13 2019-01-15 祝爱莲 窗体加固控制平台
KR102521408B1 (ko) * 2018-08-27 2023-04-14 삼성전자주식회사 인포그래픽을 제공하기 위한 전자 장치 및 그에 관한 방법
CN113366542A (zh) * 2018-08-30 2021-09-07 斯波莱史莱特控股有限责任公司 用于实现基于扩充的规范化分类图像分析计算事件的技术
CN109376595B (zh) * 2018-09-14 2023-06-23 杭州宇泛智能科技有限公司 基于人眼注意力的单目rgb摄像头活体检测方法及系统
JP7099925B2 (ja) * 2018-09-27 2022-07-12 富士フイルム株式会社 画像処理装置、画像処理方法、プログラムおよび記録媒体
JP6934001B2 (ja) * 2018-09-27 2021-09-08 富士フイルム株式会社 画像処理装置、画像処理方法、プログラムおよび記録媒体
CN110966923B (zh) * 2018-09-29 2021-08-31 深圳市掌网科技股份有限公司 室内三维扫描与危险排除系统
US11144779B2 (en) 2018-10-16 2021-10-12 International Business Machines Corporation Real-time micro air-quality indexing
CN109492120B (zh) * 2018-10-31 2020-07-03 四川大学 模型训练方法、检索方法、装置、电子设备及存储介质
JP7001042B2 (ja) * 2018-11-08 2022-01-19 日本電信電話株式会社 眼情報推定装置、眼情報推定方法、プログラム
CN111479104A (zh) * 2018-12-21 2020-07-31 托比股份公司 用于计算视线会聚距离的方法
US11113842B2 (en) * 2018-12-24 2021-09-07 Samsung Electronics Co., Ltd. Method and apparatus with gaze estimation
CN109784226B (zh) * 2018-12-28 2020-12-15 深圳云天励飞技术有限公司 人脸抓拍方法及相关装置
US11049289B2 (en) * 2019-01-10 2021-06-29 General Electric Company Systems and methods to semi-automatically segment a 3D medical image using a real-time edge-aware brush
US10825137B2 (en) * 2019-01-15 2020-11-03 Datalogic IP Tech, S.r.l. Systems and methods for pre-localization of regions of interest for optical character recognition, and devices therefor
KR102653252B1 (ko) * 2019-02-21 2024-04-01 삼성전자 주식회사 외부 객체의 정보에 기반하여 시각화된 인공 지능 서비스를 제공하는 전자 장치 및 전자 장치의 동작 방법
US11068052B2 (en) * 2019-03-15 2021-07-20 Microsoft Technology Licensing, Llc Holographic image generated based on eye position
DE102019107853B4 (de) * 2019-03-27 2020-11-19 Schölly Fiberoptic GmbH Verfahren zur Inbetriebnahme einer Kamerasteuerungseinheit (CCU)
US11644897B2 (en) 2019-04-01 2023-05-09 Evolution Optiks Limited User tracking system using user feature location and method, and digital display device and digital image rendering system and method using same
WO2020201999A2 (fr) 2019-04-01 2020-10-08 Evolution Optiks Limited Système et procédé de suivi de pupille, et dispositif d'affichage numérique et système de rendu d'image numérique et procédé associé
US20210011550A1 (en) * 2019-06-14 2021-01-14 Tobii Ab Machine learning based gaze estimation with confidence
CN110718067A (zh) * 2019-09-23 2020-01-21 浙江大华技术股份有限公司 违规行为告警方法及相关装置
US11080892B2 (en) * 2019-10-07 2021-08-03 The Boeing Company Computer-implemented methods and system for localizing an object
US11688199B2 (en) * 2019-11-13 2023-06-27 Samsung Electronics Co., Ltd. Method and apparatus for face detection using adaptive threshold
CN113208591B (zh) * 2020-01-21 2023-01-06 魔门塔(苏州)科技有限公司 一种眼睛开闭距离的确定方法及装置
WO2021171586A1 (fr) * 2020-02-28 2021-09-02 日本電気株式会社 Dispositif d'acquisition d'images, procédé d'acquisition d'images, et programme de traitement d'images
CN113448428B (zh) * 2020-03-24 2023-04-25 中移(成都)信息通信科技有限公司 一种视线焦点的预测方法、装置、设备及计算机存储介质
US10949986B1 (en) 2020-05-12 2021-03-16 Proprio, Inc. Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
CN111768433B (zh) * 2020-06-30 2024-05-24 杭州海康威视数字技术股份有限公司 一种移动目标追踪的实现方法、装置及电子设备
US11676255B2 (en) * 2020-08-14 2023-06-13 Optos Plc Image correction for ophthalmic images
CN111985384B (zh) * 2020-08-14 2024-09-24 深圳地平线机器人科技有限公司 获取脸部关键点的3d坐标及3d脸部模型的方法和装置
EP4208772A1 (fr) * 2020-09-04 2023-07-12 Telefonaktiebolaget LM Ericsson (publ) Agencement de module logiciel informatique, agencement de circuits, agencement et procédé pour une interface utilisateur améliorée
US10909167B1 (en) * 2020-09-17 2021-02-02 Pure Memories Ltd Systems and methods for organizing an image gallery
CN112633313B (zh) * 2020-10-13 2021-12-03 北京匠数科技有限公司 一种网络终端的不良信息识别方法及局域网终端设备
CN112255882A (zh) * 2020-10-23 2021-01-22 泉芯集成电路制造(济南)有限公司 集成电路版图微缩方法
CN112650461B (zh) * 2020-12-15 2021-07-13 广州舒勇五金制品有限公司 一种基于相对位置的展示系统
US12095975B2 (en) 2020-12-23 2024-09-17 Meta Platforms Technologies, Llc Reverse pass-through glasses for augmented reality and virtual reality devices
US12131416B2 (en) * 2020-12-23 2024-10-29 Meta Platforms Technologies, Llc Pixel-aligned volumetric avatars
US11417024B2 (en) 2021-01-14 2022-08-16 Momentick Ltd. Systems and methods for hue based encoding of a digital image
KR20220115001A (ko) * 2021-02-09 2022-08-17 현대모비스 주식회사 스마트 디바이스 스위블을 이용한 차량 제어 장치 및 그 방법
US20220270116A1 (en) * 2021-02-24 2022-08-25 Neil Fleischer Methods to identify critical customer experience incidents using remotely captured eye-tracking recording combined with automatic facial emotion detection via mobile phone or webcams.
WO2022259499A1 (fr) * 2021-06-11 2022-12-15 三菱電機株式会社 Dispositif de suivi des yeux
JP2022189536A (ja) * 2021-06-11 2022-12-22 キヤノン株式会社 撮像装置および方法
US11914915B2 (en) * 2021-07-30 2024-02-27 Taiwan Semiconductor Manufacturing Company, Ltd. Near eye display apparatus
TWI782709B (zh) * 2021-09-16 2022-11-01 財團法人金屬工業研究發展中心 手術機械臂控制系統以及手術機械臂控制方法
CN114387442A (zh) * 2022-01-12 2022-04-22 南京农业大学 一种多维空间中的直线、平面和超平面的快速检测方法
US11887151B2 (en) * 2022-02-14 2024-01-30 Korea Advanced Institute Of Science And Technology Method and apparatus for providing advertisement disclosure for identifying advertisements in 3-dimensional space
US12106479B2 (en) * 2022-03-22 2024-10-01 T-Jet Meds Corporation Limited Ultrasound image recognition system and data output module
CN114794992B (zh) * 2022-06-07 2024-01-09 深圳甲壳虫智能有限公司 充电座、机器人的回充方法和扫地机器人
CN115936037B (zh) * 2023-02-22 2023-05-30 青岛创新奇智科技集团股份有限公司 二维码的解码方法及装置
CN116523831B (zh) * 2023-03-13 2023-09-19 深圳市柯达科电子科技有限公司 一种曲面背光源的组装成型工艺控制方法
CN116109643B (zh) * 2023-04-13 2023-08-04 深圳市明源云科技有限公司 市场布局数据采集方法、设备及计算机可读存储介质

Family Cites Families (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3069654A (en) 1960-03-25 1962-12-18 Paul V C Hough Method and means for recognizing complex patterns
JP3163215B2 (ja) * 1994-03-07 2001-05-08 日本電信電話株式会社 直線抽出ハフ変換画像処理装置
JP4675492B2 (ja) * 2001-03-22 2011-04-20 本田技研工業株式会社 顔画像を使用した個人認証装置
JP4128001B2 (ja) * 2001-11-19 2008-07-30 グローリー株式会社 歪み画像の対応付け方法、装置およびプログラム
JP4275345B2 (ja) 2002-01-30 2009-06-10 株式会社日立製作所 パターン検査方法及びパターン検査装置
CN2586213Y (zh) * 2002-12-24 2003-11-12 合肥工业大学 实时实现Hough变换的光学装置
US7164807B2 (en) 2003-04-24 2007-01-16 Eastman Kodak Company Method and system for automatically reducing aliasing artifacts
JP4324417B2 (ja) * 2003-07-18 2009-09-02 富士重工業株式会社 画像処理装置および画像処理方法
JP4604190B2 (ja) 2004-02-17 2010-12-22 国立大学法人静岡大学 距離イメージセンサを用いた視線検出装置
DE102004046617A1 (de) * 2004-09-22 2006-04-06 Eldith Gmbh Vorrichtung und Verfahren zur berührungslosen Bestimmung der Blickrichtung
US8995715B2 (en) * 2010-10-26 2015-03-31 Fotonation Limited Face or other object detection including template matching
JP4682372B2 (ja) * 2005-03-31 2011-05-11 株式会社国際電気通信基礎技術研究所 視線方向の検出装置、視線方向の検出方法およびコンピュータに当該視線方向の検出方法を実行させるためのプログラム
US7406212B2 (en) 2005-06-02 2008-07-29 Motorola, Inc. Method and system for parallel processing of Hough transform computations
WO2007102053A2 (fr) * 2005-09-16 2007-09-13 Imotions-Emotion Technology Aps Système et méthode de détermination de l'émotion humaine par analyse des propriétés de l'oeil
DE102005047160B4 (de) * 2005-09-30 2007-06-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Vorrichtung, Verfahren und Computerprogramm zum Ermitteln einer Information über eine Form und/oder eine Lage einer Ellipse in einem graphischen Bild
KR100820639B1 (ko) * 2006-07-25 2008-04-10 한국과학기술연구원 시선 기반 3차원 인터랙션 시스템 및 방법 그리고 3차원시선 추적 시스템 및 방법
US8180159B2 (en) * 2007-06-06 2012-05-15 Sharp Kabushiki Kaisha Image processing apparatus, image forming apparatus, image processing system, and image processing method
JP5558081B2 (ja) * 2009-11-24 2014-07-23 株式会社エヌテック 画像形成状態検査方法、画像形成状態検査装置及び画像形成状態検査用プログラム
US8670019B2 (en) * 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
JP2013024910A (ja) * 2011-07-15 2013-02-04 Canon Inc 観察用光学機器
US9323325B2 (en) * 2011-08-30 2016-04-26 Microsoft Technology Licensing, Llc Enhancing an object of interest in a see-through, mixed reality display device
US8798363B2 (en) 2011-09-30 2014-08-05 Ebay Inc. Extraction of image feature data from images
CN103297767B (zh) * 2012-02-28 2016-03-16 三星电子(中国)研发中心 一种适用于多核嵌入式平台的jpeg图像解码方法及解码器
US9308439B2 (en) * 2012-04-10 2016-04-12 Bally Gaming, Inc. Controlling three-dimensional presentation of wagering game content
CN102662476B (zh) * 2012-04-20 2015-01-21 天津大学 一种视线估计方法
US11093702B2 (en) * 2012-06-22 2021-08-17 Microsoft Technology Licensing, Llc Checking and/or completion for data grids
EP2709060B1 (fr) * 2012-09-17 2020-02-26 Apple Inc. Procédé et appareil permettant de déterminer un point de regard fixe sur un objet tridimensionnel
CN103019507B (zh) * 2012-11-16 2015-03-25 福州瑞芯微电子有限公司 一种基于人脸跟踪改变视点角度显示三维图形的方法
CN103136525B (zh) * 2013-02-28 2016-01-20 中国科学院光电技术研究所 一种利用广义Hough变换的异型扩展目标高精度定位方法
WO2014181770A1 (fr) 2013-05-08 2014-11-13 コニカミノルタ株式会社 Procédé de production d'élément électroluminescent organique présentant un motif émettant de la lumière
KR20150006993A (ko) * 2013-07-10 2015-01-20 삼성전자주식회사 디스플레이 장치 및 이의 디스플레이 방법
US9619884B2 (en) 2013-10-03 2017-04-11 Amlogic Co., Limited 2D to 3D image conversion device and method
WO2015117905A1 (fr) 2014-02-04 2015-08-13 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Analyseur d'image 3d pour déterminer une direction du regard
EP3119286B1 (fr) * 2014-03-19 2024-05-01 Intuitive Surgical Operations, Inc. Dispositifs médicaux et systèmes utilisant le suivi du regard
US9607428B2 (en) 2015-06-30 2017-03-28 Ariadne's Thread (Usa), Inc. Variable resolution virtual reality display system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHARLES DUBOUT: "Object Classification and Detection in High Dimensional Feature Space", PHD THESIS AT THE ÉCOLE PLYTECHNIQUE FÉDÉRALE DE LAUSANNE, SWITZERLAND, 17 December 2013 (2013-12-17), Switzerland, pages 1 - 128, XP055263147, Retrieved from the Internet <URL:http://publications.idiap.ch/downloads/papers/2013/Dubout_THESIS_2013.pdf> [retrieved on 20160406] *
FORREST IANDOLA ET AL: "DenseNet: Implementing Efficient ConvNet Descriptor Pyramids", 7 April 2014 (2014-04-07), XP055263218, Retrieved from the Internet <URL:http://arxiv.org/pdf/1404.1869.pdf> [retrieved on 20160406] *
PETER HUSAR ET AL: "Autonomes, kalibrationsfreies und echtzeitfähiges System zur Blickrichtungsverfolgung eines Fahrers" [Autonomous, calibration-free, real-time-capable system for tracking a driver's gaze direction], KONFERENZ: VDE-KONGRESS 2010 - E-MOBILITY: TECHNOLOGIEN - INFRASTRUKTUR - MÄRKTE, 8 November 2010 (2010-11-08), XP055713293, Retrieved from the Internet <URL:https://www.vde-verlag.de/proceedings-de/453304068.html> [retrieved on 20200709] *
See also references of WO2015117906A1 *

Also Published As

Publication number Publication date
EP3968288A2 (fr) 2022-03-16
WO2015117904A1 (fr) 2015-08-13
WO2015117907A2 (fr) 2015-08-13
JP2017514193A (ja) 2017-06-01
JP6483715B2 (ja) 2019-03-13
US20160342856A1 (en) 2016-11-24
JP2017508207A (ja) 2017-03-23
CN106133750A (zh) 2016-11-16
WO2015117907A3 (fr) 2015-10-01
EP3103059A1 (fr) 2016-12-14
JP6268303B2 (ja) 2018-01-24
JP2017509967A (ja) 2017-04-06
WO2015117906A1 (fr) 2015-08-13
US20170032214A1 (en) 2017-02-02
US10592768B2 (en) 2020-03-17
KR101858491B1 (ko) 2018-05-16
CN106133750B (zh) 2020-08-28
CN106258010A (zh) 2016-12-28
JP6248208B2 (ja) 2017-12-13
US20160335475A1 (en) 2016-11-17
KR20160119146A (ko) 2016-10-12
CN106104573A (zh) 2016-11-09
KR20160119176A (ko) 2016-10-12
CN106258010B (zh) 2019-11-22
KR101991496B1 (ko) 2019-06-20
WO2015117905A1 (fr) 2015-08-13
US10192135B2 (en) 2019-01-29
EP3103058A1 (fr) 2016-12-14
US10074031B2 (en) 2018-09-11

Similar Documents

Publication Publication Date Title
EP3103060A1 (fr) 2D image analyzer
EP3657236B1 (fr) Method, device and computer program for virtually fitting a spectacle frame
EP1228414B1 (fr) Computer-aided method for the contactless, video-based determination of the viewing direction of a user's eye for eye-guided human-computer interaction, and device for carrying out the method
DE112009000099T5 (de) Image signatures for use in motion-based three-dimensional reconstruction
DE102004049676A1 (de) Method for computer-aided motion estimation in a plurality of temporally successive digital images, arrangement for computer-aided motion estimation, computer program element and computer-readable storage medium
DE602004011933T2 (de) Automatic determination of the optimum viewing direction in cardiac imaging
EP3924710B1 (fr) Method and device for measuring the local refractive power and/or the refractive power distribution of a spectacle lens
DE102014100352A1 (de) Driver gaze detection system
EP3702832A1 (fr) Method, devices and computer program for determining a near visual point
EP3959497B1 (fr) Method and device for measuring the local refractive power or the refractive power distribution of a spectacle lens
DE10145608B4 (de) Model-based object classification and target recognition
DE102014108924B4 (de) A semi-supervised method for training an auxiliary model for the recognition and detection of multiple patterns
DE60216766T2 (de) Method for automatically tracking a moving body
DE102010054168B4 (de) Method, device and program for determining the torsional component of the eye position
DE102014107185A1 (de) Method for providing a three-dimensional topography model, method for viewing a topography representation of a topography model, visualization device and microscopy system
DE102018121317A1 (de) Method and device for estimating directional information conveyed by a free-space gesture in order to determine a user input at a human-machine interface
WO2017186225A1 (fr) Motion analysis system, and motion tracking system comprising it, for moved or moving objects that are thermally distinct from their surroundings
DE102021129171B3 (de) Method, system and computer program for virtually predicting the real fit of a real spectacle frame on the head of a person with individual head geometry
Crawford-Hines et al. Neural nets in boundary tracing tasks
EP2113865A1 (fr) Method for verifying portrait photographs
DE102021006248A1 (de) Object detection method
DE102013221545A1 (de) Method and device for medical image acquisition
Guo Image segmentation and shape recognition by data-dependent systems

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20160729

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20200720

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V.

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20220919