US20200219270A1 - Method and apparatus for tracking eyes of user and method of generating inverse-transform image - Google Patents
- Publication number
- US20200219270A1 (Application No. US16/822,204)
- Authority
- US
- United States
- Prior art keywords
- illuminance
- image
- eye tracker
- low
- eyes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G06K9/00604—
-
- G06K9/0061—
-
- G06K9/4642—
-
- G06K9/4661—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for recognising patterns
- G06K9/62—Methods or arrangements for pattern recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6232—Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for recognising patterns
- G06K9/62—Methods or arrangements for pattern recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06K9/6256—Obtaining sets of training patterns; Bootstrap methods, e.g. bagging, boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/60—Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
- H04N5/225—Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
- H04N5/232—Devices for controlling television cameras, e.g. remote control ; Control of cameras comprising an electronic image sensor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
- H04N5/225—Television cameras ; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
- H04N5/235—Circuitry or methods for compensating for variation in the brightness of the object, e.g. based on electric image signals provided by an electronic image sensor
- H04N5/2351—Circuitry for evaluating the brightness variations of the object
-
- G06K9/54—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10152—Varying illumination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
Abstract
There is provided a method and apparatus for tracking eyes of a user. The method and apparatus may acquire an image of the user, acquire an illuminance of a viewpoint from which the image is captured, and output coordinates of the eyes tracked from the image by operating at least one of a high illuminance eye tracker that operates at a high illuminance or a low illuminance eye tracker that operates at a low illuminance based on the acquired illuminance.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/810,730, filed on Nov. 13, 2017, which claims priority from Korean Patent Application No. 10-2016-0155600, filed on Nov. 22, 2016, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
- Methods and apparatuses consistent with exemplary embodiments in this disclosure relate to a method and an apparatus for tracking eyes of a user, and more particularly, to a method of generating an inverse-transform image and an apparatus for generating an inverse-transform image.
- Related art camera-based eye tracking technology may be utilized in many fields, such as an ultra multi-view glasses-free three-dimensional (3D) display based on viewpoint tracking. The ultra multi-view glasses-free 3D display may be used in a dimly lit area (dark location) as well as a well-lit area (bright location), for example, while a user watches a television (TV) or uses a mobile device. However, while the related art camera-based eye tracker may properly operate in a high illuminance environment (for example, 400 lux), it may not properly operate in a low illuminance environment due to the low quality of the camera image.
- Also, in a next generation 3D head-up display (HUD) technology for vehicles, night driving may need to be taken into consideration. Accordingly, there is a desire for an eye tracker that properly operates at a low illuminance.
- Exemplary embodiments may address at least the above problems and/or disadvantages and other disadvantages not described above. Also, the exemplary embodiments are not required to overcome the disadvantages described above, and an exemplary embodiment may not overcome any of the problems described above.
- According to an aspect of an exemplary embodiment, there is provided a method of tracking eyes of an object, the method may comprise: acquiring an image of the object, acquiring an illuminance of a viewpoint from which the image is captured and outputting coordinates of the eyes tracked from the image by operating at least one of a first illuminance eye tracker that operates at a first illuminance range or a second illuminance eye tracker that operates at a second illuminance range based on the acquired illuminance.
- The first illuminance eye tracker may be trained by machine learning based on a first illuminance database (DB) comprising first illuminance images captured at the first illuminance range, and the second illuminance eye tracker may be trained by machine learning based on a second illuminance DB comprising inverse-transform images obtained by inversely transforming the first illuminance images into second illuminance images.
- The inverse-transform images may comprise feature points trained by the machine learning based on the first illuminance DB.
- The second illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, an image histogram characteristic of a second illuminance reference image modeled in advance based on second illuminance images captured at a second illuminance range.
- The second illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, a noise component of the second illuminance reference image in addition to the image histogram characteristic.
- Each of the first illuminance eye tracker and the second illuminance eye tracker may comprise a plurality of classifiers configured to recognize feature points corresponding to eyes of the object from the image.
- The outputting of the coordinates may comprise comparing the acquired illuminance to a preset reference value, outputting the coordinates by operating the second illuminance eye tracker when the acquired illuminance is less than or equal to the reference value and outputting the coordinates by operating the first illuminance eye tracker when the acquired illuminance is greater than the reference value.
- The reference value may be set to 10 lux.
- The acquiring of the image may comprise capturing the image using a single image sensor.
- According to an aspect of another embodiment, there is provided a method of generating an inverse-transform image, the method may comprise: modeling a first illuminance reference image based on first illuminance images captured at a first illuminance, acquiring an image histogram characteristic of the first illuminance reference image, generating an intermediate image by adjusting a brightness level in an image histogram of a second illuminance image captured at a second illuminance and inversely transforming the intermediate image into a first illuminance image by applying the image histogram characteristic to the intermediate image.
- The method may further comprise acquiring a noise component of the first illuminance reference image.
- The inversely transforming of the intermediate image may comprise inversely transforming the intermediate image into the first illuminance image by applying the noise component and the image histogram characteristic to the intermediate image.
- The noise component may comprise at least one of Gaussian noise of the first illuminance reference image or Poisson noise of the first illuminance reference image.
- According to an aspect of another embodiment, there is provided a non-transitory computer-readable storage medium storing a program for causing a processor to perform the method of generating an inverse-transform image, the method may comprise: modeling a first illuminance reference image based on first illuminance images captured at a first illuminance, acquiring an image histogram characteristic of the first illuminance reference image, generating an intermediate image by adjusting a brightness level in an image histogram of a second illuminance image captured at a second illuminance and inversely transforming the intermediate image into a first illuminance image by applying the image histogram characteristic to the intermediate image.
- According to an aspect of another embodiment, there is provided an apparatus for tracking eyes of an object, the apparatus may comprise an image sensor configured to capture an image of the object, an illuminance sensor configured to measure an illuminance of a viewpoint from which the image is captured, a first illuminance eye tracker configured to operate at a first illuminance range, a second illuminance eye tracker configured to operate at a second illuminance range and a processor configured to control coordinates of the eyes tracked from the image to be output by operating at least one of the first illuminance eye tracker or the second illuminance eye tracker based on the measured illuminance.
- The first illuminance eye tracker may be trained by machine learning based on a first illuminance database (DB) comprising first illuminance images captured at the first illuminance range, and the second illuminance eye tracker may be trained by machine learning based on a second illuminance DB comprising inverse-transform images obtained by inversely transforming the first illuminance images into second illuminance images.
- The second illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, an image histogram characteristic of a second illuminance reference image modeled in advance based on second illuminance images captured at a second illuminance range.
- The second illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, a noise component of the second illuminance reference image in addition to the image histogram characteristic.
- According to an aspect of another embodiment, there is provided an illuminance eye tracker comprising: an illuminance database (DB) comprising inverse-transform images obtained by inversely transforming first illuminance images into second illuminance images; and an image processor configured to process a second illuminance image of an object by a parameter trained by machine learning based on the illuminance DB, and to output coordinates of eyes of the object.
- The illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, an image histogram characteristic of a second illuminance reference image modeled in advance based on second illuminance images captured at a second illuminance range.
- The illuminance DB may comprise the inverse-transform images generated by applying, to the first illuminance images, a noise component of the second illuminance reference image in addition to the image histogram characteristic.
- The above and other aspects of exemplary embodiments will become apparent and more readily appreciated from the following detailed description of certain example embodiments, with reference to the accompanying drawings of which:
- FIG. 1 is a flowchart illustrating a method of tracking eyes of a user according to an exemplary embodiment;
- FIG. 2 is a flowchart illustrating an example of an operation of outputting coordinates of eyes of a user in the method of FIG. 1;
- FIG. 3 is a diagram illustrating a concept of a method of generating an inverse-transform image according to an exemplary embodiment;
- FIG. 4 is a diagram illustrating a method of generating an inverse-transform image according to an exemplary embodiment;
- FIG. 5 is a flowchart illustrating a method of generating a low illuminance database (DB) including an inverse-transform image according to an exemplary embodiment;
- FIG. 6 is a block diagram illustrating an apparatus for tracking eyes of a user according to an exemplary embodiment; and
- FIG. 7 is a block diagram illustrating a low illuminance eye tracker according to an exemplary embodiment.
- Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. Exemplary embodiments are described below in order to explain the present disclosure by referring to the figures.
- The following structural or functional descriptions are provided merely to describe the exemplary embodiments, and the scope of the exemplary embodiments is not limited to the descriptions provided in the present specification. Various changes and modifications can be made thereto by those of ordinary skill in the art.
- Although terms of “first” or “second” are used to explain various components, the components are not limited to the terms. These terms should be used only to distinguish one component from another component. For example, a “first” component may be referred to as a “second” component, and similarly, the “second” component may be referred to as the “first” component, within the scope of the present disclosure.
- It will be understood that when a component is referred to as being “connected to” another component, the component can be directly connected or coupled to the other component or intervening components may be present.
- As used herein, the singular forms are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, components or a combination thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- Unless otherwise defined herein, all terms used herein including technical or scientific terms have the same meanings as those generally understood by one of ordinary skill in the art. Terms defined in generally used dictionaries should be construed to have meanings matching the contextual meanings in the related art, and are not to be construed as having ideal or excessively formal meanings unless otherwise defined herein.
- In the following description, exemplary embodiments may describe methods and apparatuses used to track eyes of a user and output coordinates of the eyes when a glasses-free three-dimensional (3D) monitor, a glasses-free 3D tablet and/or smartphone, or a 3D head-up display (HUD) for vehicles is used in a dark environment with a relatively low illuminance. Also, exemplary embodiments may be implemented as, for example, a software algorithm in a chip or a hardware processor in a monitor, an application in a tablet and/or smartphone, or a hardware eye tracking apparatus. For example, exemplary embodiments may be applicable to an autonomous vehicle, an intelligent vehicle, a smartphone or a mobile device. Hereinafter, exemplary embodiments will be described in detail below with reference to the accompanying drawings, and like reference numerals refer to the like elements throughout.
-
FIG. 1 is a flowchart illustrating a method of tracking eyes of a user according to an exemplary embodiment. Referring to FIG. 1, in operation 110, an apparatus for tracking eyes of a user (hereinafter, referred to as a “tracking apparatus”) according to an exemplary embodiment acquires an image of the user. The image may include, for example, a face image of the user or a body image including a face of the user.
- According to an exemplary embodiment, the tracking apparatus may acquire an image of a user that is directly captured by an image sensor or an imaging device included in the tracking apparatus. The tracking apparatus may capture an image of a user using a single imaging device or a single image sensor. Also, the tracking apparatus may receive an image of a user captured outside the tracking apparatus. The image sensor may include, for example, a complementary metal-oxide-semiconductor (CMOS) image sensor or a vision sensor.
- In operation 130, the tracking apparatus acquires an illuminance of a viewpoint from which the image is captured. For example, the tracking apparatus may acquire an illuminance sensed by an imaging device that captures the image and/or an illuminance sensed at a location of the user. Also, the tracking apparatus may sense an illuminance by an illuminance sensor included in the tracking apparatus, or may receive a sensing result from a separate illuminance sensor outside the tracking apparatus.
- In operation 150, the tracking apparatus outputs coordinates of the eyes tracked from the image by operating at least one of a high illuminance eye tracker or a low illuminance eye tracker based on the acquired illuminance. The high illuminance eye tracker may operate at a high illuminance, and the low illuminance eye tracker may operate at a low illuminance. Each of the high illuminance eye tracker and the low illuminance eye tracker may include a plurality of classifiers configured to recognize feature points corresponding to the eyes from the image.
- A classifier may recognize landmarks, that is, feature points corresponding to eyes and a nose of a user from a face area of an image of the user using, for example, an active shape model (ASM) scheme, an active appearance model (AAM) scheme or a supervised descent method (SDM).
- According to another exemplary embodiment, the tracking apparatus may include a plurality of classifiers configured to recognize other landmarks, that is, feature points other than the eyes and the nose, from the image for tracking.
- For example, the high illuminance eye tracker may track eyes of a user by classifiers pre-trained to recognize feature points corresponding to the eyes and a nose of the user based on the eyes from a face of the user, and may output coordinates of the eyes. According to an exemplary embodiment, the feature points acquired by the high illuminance eye tracker may be used and trained without a change for a low illuminance image obtained by inversely transforming a high illuminance image. In the present disclosure, a low illuminance image obtained by inversely transforming a high illuminance image may be referred to as an “inverse-transform image.” Inverse-transform images included in a low illuminance database (DB) may include the same feature points as those trained by machine learning based on a high illuminance DB.
- For example, the machine learning may be performed by an adaptive boosting (AdaBoost) scheme or a support vector machine (SVM) scheme; however, there is no limitation thereto. Accordingly, various learning methods may be applied.
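- As an illustration only, the following sketch shows how such a classifier could be trained by an SVM scheme; the descriptor dimensionality, the eye/non-eye labels, and the random stand-in data are assumptions for illustration, not details from this disclosure.

```python
import numpy as np
from sklearn.svm import SVC

# Hedged sketch of the SVM training step: X stands for patch descriptors
# sampled from DB images and y marks eye (1) versus non-eye (0) patches.
# The random arrays below are placeholders for a real training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # 200 hypothetical 64-dimensional descriptors
y = rng.integers(0, 2, size=200)  # hypothetical eye / non-eye labels
classifier = SVC(kernel="rbf").fit(X, y)
print(classifier.predict(X[:5]))  # classify a few training patches
```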
- The low illuminance DB may include inverse-transform images generated by applying, to high illuminance images, an image histogram characteristic of a low illuminance reference image modeled in advance based on low illuminance images captured at a low illuminance. Also, the low illuminance DB may include inverse-transform images generated by applying, to high illuminance images, a noise component of the low illuminance reference image in addition to the image histogram characteristic.
- Hereinafter, an example of operation 150 will be further described with reference to FIG. 2.
- FIG. 2 is a flowchart illustrating an example of operation 150 of FIG. 1. Referring to FIG. 2, in operation 210, the tracking apparatus compares the acquired illuminance to a preset reference value. The reference value may be set to, for example, 10 lux.
- When the acquired illuminance is less than or equal to the reference value, the tracking apparatus may output the coordinates by operating the low illuminance eye tracker in operation 220. For example, when the acquired illuminance is less than or equal to the reference value of 10 lux, the tracking apparatus may determine the acquired illuminance as a low illuminance, and may operate the low illuminance eye tracker. The low illuminance eye tracker may be trained by machine learning based on a low illuminance DB including inverse-transform images. The low illuminance eye tracker may process a low illuminance image of a user by a parameter trained by machine learning based on the low illuminance DB, and may output coordinates of eyes of the user.
- When the acquired illuminance is greater than the reference value, the tracking apparatus may output the coordinates by operating the high illuminance eye tracker in operation 230. For example, when the acquired illuminance is greater than the reference value of 10 lux, the tracking apparatus may determine the acquired illuminance as a high illuminance and may operate the high illuminance eye tracker. The high illuminance eye tracker may process a high illuminance image of a user by a parameter trained by machine learning based on the high illuminance DB, and may output coordinates of eyes of the user.
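- A minimal sketch of this selection logic follows; the 10 lux reference value mirrors the description above, while the function names and the assumption that each tracker is a callable mapping an image to eye coordinates are hypothetical.

```python
def track_eyes(image, illuminance, low_tracker, high_tracker, reference_lux=10):
    """Operate the low or high illuminance eye tracker based on illuminance.

    low_tracker and high_tracker are assumed to be callables that map an
    image to eye coordinates (operations 210 through 230 of FIG. 2).
    """
    if illuminance <= reference_lux:
        return low_tracker(image)   # operation 220: low illuminance tracker
    return high_tracker(image)      # operation 230: high illuminance tracker
```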
-
FIG. 3 is a flow chart illustrating a method of generating an inverse-transform image according to an exemplary embodiment. FIG. 3 illustrates an image enhancement process performed by operations 301, 303, 305 and 307, and a process of generating an inverse-transform image through operations 310 and 330.
- The image enhancement process may be an image processing scheme to enhance a dark, blurred, or noise-corrupted unclear input image into an image more suitable for a specific application purpose, to facilitate recognition. The image enhancement process may broadly include operations 301 through 307.
- For example, when a low illuminance image is input in operation 301, a histogram equalization to allow the low illuminance image to be clearly viewed may be performed in operation 303. In the histogram equalization, pixel values of an image may be uniformly distributed to fill an entire area of the image by a contrast adjustment so that the image may be clarified. For example, in the histogram equalization, a uniform function or a Gaussian function may be used.
- When the histogram equalization is performed, noise may be removed from the low illuminance image in operation 305. In operation 307, an enhanced image, that is, a high illuminance image may be output. The noise may be removed using various noise filters or various algorithms for removing noise, for example, a radiometric correction.
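- For illustration, a sketch of this enhancement path using OpenCV follows; the file names are hypothetical, and the non-local-means filter merely stands in for whichever noise filter an implementation might choose.

```python
import cv2

# Sketch of the image enhancement path (operations 301 through 307),
# assuming an 8-bit grayscale input; file names are hypothetical.
low = cv2.imread("low_illuminance.png", cv2.IMREAD_GRAYSCALE)   # operation 301
equalized = cv2.equalizeHist(low)                               # operation 303
enhanced = cv2.fastNlMeansDenoising(equalized)                  # operation 305
cv2.imwrite("enhanced.png", enhanced)                           # operation 307
```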
- An inverse transformation may be performed by inversely performing a process of generating a high illuminance image from the above-described low illuminance image, to generate a low illuminance image from a high illuminance image.
- For example, when the high illuminance image is input, a generation apparatus may add noise to the high illuminance image in operation 310. In this example, the noise may be noise of an average low illuminance image. The noise of the average low illuminance image may be, for example, Gaussian noise or Poisson noise of the average low illuminance image. The Gaussian noise may correspond to general noise with a normal distribution. The Poisson noise may correspond to Poisson distribution noise.
- In operation 330, the generation apparatus may perform a histogram specification of the high illuminance image with the noise and may inversely transform the high illuminance image into a low illuminance image. The histogram specification may refer to a transformation for matching a histogram of a corresponding image to a predetermined histogram. The generation apparatus may inversely transform the high illuminance image by applying an image histogram characteristic of an image captured at a low illuminance to the high illuminance image. For example, the generation apparatus may transform a histogram of the high illuminance image with the noise so that it matches a histogram of a low illuminance image, and may generate an inverse-transform image from the high illuminance image. An example of a method of configuring a low illuminance DB by generating an inverse-transform image in the generation apparatus will be further described with reference to FIGS. 4 and 5.
- A low illuminance image may be generated by inversely transforming a high illuminance image, and thus it is possible to configure a low illuminance DB for training of a low illuminance eye tracker based on high illuminance images stored in a high illuminance DB even though separate low illuminance images are absent.
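- The sketch below illustrates one way operations 310 and 330 could be realized on 8-bit grayscale arrays; the Gaussian-only noise model and the sigma value are simplifying assumptions, and scikit-image's match_histograms stands in for the histogram specification.

```python
import numpy as np
from skimage.exposure import match_histograms

def inverse_transform(high_img, low_reference, sigma=8.0):
    """Sketch of operations 310 and 330: add low illuminance noise, then
    match the histogram of the noisy image to the low illuminance reference."""
    noisy = high_img.astype(np.float64) + np.random.normal(0.0, sigma, high_img.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)   # operation 310: add noise
    return match_histograms(noisy, low_reference)     # operation 330: histogram specification
```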
-
FIG. 4 is a diagram illustrating a method of generating an inverse-transform image according to an exemplary embodiment. Referring to FIG. 4, a plurality of images, for example, a high illuminance image 410, a low illuminance reference image 420, a dark image 430 and an inverse-transform image 440, represent a face of a user. Also, graphs 415, 425, 435 and 445 show image histograms corresponding to the high illuminance image 410 through the inverse-transform image 440, respectively.
- The low illuminance reference image 420 may be an image modeled based on low illuminance images captured at a low illuminance, for example, 10 lux.
- The dark image 430 may be an image generated by adjusting a brightness level to a low illuminance, for example, 10 lux, in an image histogram of the high illuminance image 410.
- The inverse-transform image 440 may be a low illuminance image obtained by inversely transforming the dark image 430 by applying an image histogram characteristic of the low illuminance reference image 420 to the dark image 430.
- As described above, an inverse-transform image included in a low illuminance DB according to an exemplary embodiment may be generated by inversely transforming a high illuminance image into a low illuminance image. Hereinafter, a method of inversely transforming a high illuminance image into a low illuminance image will be further described.
- A generation apparatus may model the low illuminance reference image 420 based on a plurality of low illuminance images captured at a low illuminance, for example, 10 lux, and may acquire an image histogram characteristic of the low illuminance reference image 420 as shown in the graph 425. The graph 425 corresponds to the image histogram characteristic of the low illuminance reference image 420, and indicates that a relatively large number of pixels are concentrated at intensities less than or equal to “125.”
- The generation apparatus may generate the dark image 430 from the high illuminance image 410. The generation apparatus may generate the dark image 430 by adjusting the brightness level in the image histogram of the high illuminance image 410. For example, the generation apparatus may adjust the brightness to be an intensity less than or equal to “125” by shifting the histogram corresponding to the high illuminance image 410 leftwards in the graph 415, and may generate the dark image 430 corresponding to the graph 435.
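- A sketch of this brightness-level adjustment follows; the simple linear rescaling toward a maximum intensity of “125” is an assumption about how the leftward histogram shift could be realized, not a detail from this disclosure.

```python
import numpy as np

def darken(high_img, max_intensity=125):
    """Sketch of generating the dark image 430: rescale an 8-bit grayscale
    high illuminance image so its histogram shifts leftwards, concentrating
    pixel values at intensities less than or equal to max_intensity."""
    scale = max_intensity / 255.0
    return (high_img.astype(np.float64) * scale).astype(np.uint8)
```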
- The generation apparatus may generate the inverse-transform image 440 by applying the image histogram characteristic of the low illuminance reference image 420 shown in the graph 425 to the dark image 430. An image histogram of the inverse-transform image 440 may be obtained by combining an image histogram characteristic of the dark image 430 and the image histogram characteristic of the low illuminance reference image 420, as shown in the graph 445.
-
FIG. 5 is a flowchart illustrating a method of generating a low illuminance DB including an inverse-transform image according to an exemplary embodiment. Referring to FIG. 5, in operation 510, an apparatus for generating a low illuminance DB (hereinafter, referred to as a “generation apparatus”) according to an exemplary embodiment performs modeling of a low illuminance reference image based on low illuminance images captured at a low illuminance. For example, the generation apparatus may perform modeling of a low illuminance reference image based on a plurality of low illuminance images captured at 10 lux. The low illuminance reference image may be modeled for each object. For example, the low illuminance reference image may be modeled for different types or for different objects, for example, a human, a cat or a dog. The low illuminance reference image may be modeled as, for example, the low illuminance reference image 420 of FIG. 4.
FIG. 4 . - In operation 530, the generation apparatus generates a dark image by adjusting a brightness level in an image histogram of a high illuminance image captured at a high illuminance. For example, the generation apparatus may adjust the brightness to be a low illuminance by shifting the image histogram of the high illuminance image, and may generate the dark image.
- In operation 540, the generation apparatus inversely transforms the dark image into a low illuminance image by applying the image histogram characteristic of the low illuminance reference image to the dark image, to generate an inverse-transform image.
- For example, the generation apparatus may further acquire a noise component of the low illuminance reference image. In this example, the generation apparatus may apply the noise component and the image histogram characteristic of the low illuminance reference image to the dark image generated in operation 530, to inversely transform the dark image into a low illuminance image. The noise component may include, for example, Gaussian noise and Poisson noise of the low illuminance reference image.
- In operation 550, the generation apparatus stores the inverse-transform image in the low illuminance DB.
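- Putting the steps together, a sketch of the whole pipeline of FIG. 5 follows; it reuses the hypothetical darken and inverse_transform helpers sketched earlier, models the reference image as a simple pixel-wise mean over same-sized captures (one assumption among many possible modeling choices), and the directory names are placeholders.

```python
import glob
import os
import cv2
import numpy as np

# Operation 510: model the low illuminance reference image; a pixel-wise
# mean over same-sized 10 lux captures is assumed here.
low_imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in glob.glob("low_10lux/*.png")]
low_reference = np.mean(np.stack(low_imgs), axis=0).astype(np.uint8)

os.makedirs("low_db", exist_ok=True)
for i, path in enumerate(glob.glob("high_450lux/*.png")):
    high = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    dark = darken(high)                           # operation 530: dark image
    inv = inverse_transform(dark, low_reference)  # operations 520/540 via histogram matching
    cv2.imwrite(f"low_db/{i:05d}.png", np.clip(inv, 0, 255).astype(np.uint8))  # operation 550
```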
-
FIG. 6 is a block diagram illustrating a tracking apparatus 600 according to an exemplary embodiment. Referring to FIG. 6, the tracking apparatus 600 includes an image sensor 610, an illuminance sensor 620, a processor 630, a low illuminance eye tracker 640 and a high illuminance eye tracker 650. The tracking apparatus 600 may further include a memory 660.
- The illuminance sensor 620 may measure an illuminance of a viewpoint from which the image is captured.
- The processor 630 may control coordinates of eyes of the user tracked from the image to be output by operating at least one of the low illuminance eye tracker 640 or the high illuminance eye tracker 650 based on the measured illuminance.
- Also, the processor 630 may perform at least one of the methods described above with reference to FIGS. 1 through 5. According to an exemplary embodiment, the processor 630 may perform the functions of the generation apparatus illustrated with respect to FIGS. 3, 4 and 5. The processor 630 may execute a program and may control the tracking apparatus 600. A program code executed by the processor 630 may be stored in the memory 660.
- The high illuminance eye tracker 650 may operate at a high illuminance, to track positions of the eyes from the image and to output coordinates of the eyes. The high illuminance eye tracker 650 may be trained by machine learning based on a high illuminance DB including high illuminance images captured at the high illuminance.
- The memory 660 may store the image captured by the image sensor 610 and/or the coordinates of the eyes output by the low illuminance eye tracker 640 and the high illuminance eye tracker 650. The memory 660 may include the low illuminance DB and the high illuminance DB depending on example embodiments.
- Also, the memory 660 may store a variety of information generated in the above-described processing process of the processor 630. Furthermore, the memory 660 may store a variety of data and programs. The memory 660 may include, for example, a volatile memory or a nonvolatile memory. The memory 660 may include a mass storage medium, for example, a hard disk, to store a variety of data.
-
FIG. 7 is a block diagram illustrating a low illuminance eye tracker 700 according to an exemplary embodiment. Referring to FIG. 7, the low illuminance eye tracker 700 includes a low illuminance DB 710 and an image processor 730.
- The image processor 730 may process a low illuminance image of a user by a parameter trained in advance by machine learning based on the low illuminance DB 710, and may output coordinates of eyes of the user. For example, the coordinates output by the image processor 730 may be directly used as an input of a glasses-free 3D display.
- Also, the image processor 730 may perform at least one of the methods described above with reference to FIGS. 3, 4 and 5. The image processor 730 may execute a program and may control the low illuminance eye tracker 700. According to an exemplary embodiment, the image processor 730 may perform the functions of the generation apparatus illustrated with respect to FIGS. 3, 4 and 5. A program code executed by the low illuminance eye tracker 700 may be stored in a memory.
- The exemplary embodiments described herein may be implemented using hardware components, software components, or a combination thereof. A processing device may be implemented using one or more general-purpose or special purpose computers, such as, for example, a processor, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a field programmable array, a programmable logic unit, a microprocessor or any other device capable of responding to and executing instructions in a defined manner. The processing device may run an operating system (OS) and one or more software applications that run on the OS. The processing device also may access, store, manipulate, process, and create data in response to execution of the software. For purposes of simplicity, the description of a processing device is used as singular; however, one skilled in the art will appreciate that a processing device may include multiple processing elements and multiple types of processing elements. For example, a processing device may include multiple processors or a processor and a controller. In addition, different processing configurations are possible, such as parallel processors.
- The software may include a computer program, a piece of code, an instruction, or some combination thereof, to independently or collectively instruct or configure the processing device to operate as desired. Software and data may be embodied permanently or temporarily in any type of machine, component, physical or virtual equipment, computer storage medium or device, or in a propagated signal wave capable of providing instructions or data to or being interpreted by the processing device. The software also may be distributed over network coupled computer systems so that the software is stored and executed in a distributed fashion. The software and data may be stored by one or more non-transitory computer readable recording mediums.
- The method according to the above-described exemplary embodiments may be recorded in non-transitory computer-readable media including program instructions to implement various operations which may be performed by a computer. The media may also include, alone or in combination with the program instructions, data files, data structures, and the like. The program instructions recorded on the media may be those specially designed and constructed for the purposes of the example embodiments, or they may be of the well-known kind and available to those having skill in the computer software arts. Examples of non-transitory computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD ROM discs and DVDs; magneto-optical media such as optical discs; and hardware devices that are specially configured to store and perform program instructions, such as read-only memory (ROM), random access memory (RAM), flash memory, and the like. Examples of program instructions include both machine code, such as code produced by a compiler, and files containing higher level code that may be executed by the computer using an interpreter. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the above-described example embodiments, or vice versa.
- While this disclosure includes exemplary embodiments, it will be apparent to one of ordinary skill in the art that various changes in form and details may be made in these exemplary embodiments without departing from the spirit and scope of the claims and their equivalents. The exemplary embodiments described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each exemplary embodiment are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
Claims (20)
1. A method of tracking eyes of an object, the method comprising:
acquiring an image of the object;
determining an illuminance of an environment at which the image is captured; and
outputting coordinates of the eyes tracked from the image by operating at least one of a first illuminance eye tracker that operates at a first illuminance range or a second illuminance eye tracker that operates at a second illuminance range based on the illuminance.
2. The method of claim 1, wherein the first illuminance range has a higher illuminance value than the second illuminance range.
3. The method of claim 1 , wherein the first illuminance eye tracker is a high illuminance eye tracker and the second illuminance eye tracker is a low illuminance eye tracker, the high illuminance eye tracker is different from the low illuminance eye tracker and the high illuminance eye tracker operates at a higher illuminance value than the low illuminance eye tracker.
4. The method of claim 1 , wherein the coordinates of the eyes tracked from the image are output by operating the first illuminance eye tracker based on the illuminance being greater than a threshold value, and the coordinates of the eyes tracked from the image are output by operating the second illuminance eye tracker based on the illuminance being lower than the threshold value.
5. The method of claim 4 , wherein the threshold value is set to 10 lux.
6. The method of claim 1 , wherein
the first illuminance eye tracker is trained by machine learning based on a first illuminance database (DB) comprising first illuminance images of the first illuminance range.
7. The method of claim 1 , wherein
the second illuminance eye tracker is trained by machine learning based on a second illuminance DB comprising second illuminance images of the second illuminance range.
8. The method of claim 1 , wherein the acquiring of the image comprises capturing the image using a single image sensor.
9. A non-transitory computer-readable storage medium storing a program for causing a processor to perform the method of claim 1 .
10. A method of tracking eyes of an object, the method comprising:
capturing an image of the object using an image sensor;
measuring an illuminance of an environment using an illuminance sensor; and
outputting coordinates of the eyes tracked from the image by operating at least one of a first illuminance eye tracker that operates at a first illuminance range or a second illuminance eye tracker that operates at a second illuminance range based on the illuminance.
11. The method of claim 10, wherein the first illuminance range has a higher illuminance value than the second illuminance range.
12. The method of claim 10 , wherein the first illuminance eye tracker is a high illuminance eye tracker and the second illuminance eye tracker is a low illuminance eye tracker, the high illuminance eye tracker is different from the low illuminance eye tracker and the high illuminance eye tracker operates at a higher illuminance value than the low illuminance eye tracker.
13. The method of claim 10 , wherein the coordinates of the eyes tracked from the image are output by operating the first illuminance eye tracker based on the illuminance being greater than a threshold value, and the coordinates of the eyes tracked from the image are output by operating the second illuminance eye tracker based on the illuminance being lower than the threshold value.
14. The method of claim 13 , wherein the threshold value is set to 10 lux.
15. The method of claim 10 , wherein
the first illuminance eye tracker is trained by machine learning based on a first illuminance database (DB) comprising first illuminance images of the first illuminance range.
16. The method of claim 10 , wherein
the second illuminance eye tracker is trained by machine learning based on a second illuminance DB comprising second illuminance images of the second illuminance range.
17. The method of claim 10 , wherein the acquiring of the image comprises capturing the image using a single image sensor.
18. A non-transitory computer-readable storage medium storing a program for causing a processor to perform the method of claim 10 .
19. An apparatus for tracking eyes of an object, the apparatus comprising:
an image sensor configured to capture an image of the object; and
a processor configured to determine an illuminance of an environment at which the image is captured, and to control coordinates of the eyes tracked from the image to be output by operating at least one of a first illuminance eye tracker that operates at a first illuminance range or a second illuminance eye tracker that operates at a second illuminance range based on the illuminance.
20. An apparatus for tracking eyes of an object, the apparatus comprising:
an image sensor configured to capture an image of the object;
an illuminance sensor configured to measure an illuminance of an environment; and
a processor configured to control coordinates of the eyes tracked from the image to be output by operating at least one of a first illuminance eye tracker that operates at a first illuminance range or a second illuminance eye tracker that operates at a second illuminance range based on the illuminance.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2016-0155600 | 2016-11-22 | ||
KR1020160155600A KR102349543B1 (en) | 2016-11-22 | 2016-11-22 | Eye-tracking method and apparatus and generating method of inverse transformed low light image |
US15/810,730 US10628947B2 (en) | 2016-11-22 | 2017-11-13 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
US16/822,204 US20200219270A1 (en) | 2016-11-22 | 2020-03-18 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/822,204 US20200219270A1 (en) | 2016-11-22 | 2020-03-18 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date | |
---|---|---|---|---|
US15/810,730 Continuation US10628947B2 (en) | 2016-11-22 | 2017-11-13 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200219270A1 (en) | 2020-07-09 |
Family
ID=62144519
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/810,730 Active 2038-05-29 US10628947B2 (en) | 2016-11-22 | 2017-11-13 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
US16/822,204 Pending US20200219270A1 (en) | 2016-11-22 | 2020-03-18 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/810,730 Active 2038-05-29 US10628947B2 (en) | 2016-11-22 | 2017-11-13 | Method and apparatus for tracking eyes of user and method of generating inverse-transform image |
Country Status (2)
Country | Link |
---|---|
US (2) | US10628947B2 (en) |
KR (1) | KR102349543B1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107300968B (en) * | 2016-04-15 | 2021-05-04 | 中兴通讯股份有限公司 | Face recognition method and device and picture display method and device |
JP6953247B2 (en) * | 2017-09-08 | 2021-10-27 | ラピスセミコンダクタ株式会社 | Goggles type display device, line-of-sight detection method and line-of-sight detection system |
US10990168B2 (en) * | 2018-12-10 | 2021-04-27 | Samsung Electronics Co., Ltd. | Compensating for a movement of a sensor attached to a body of a user |
DE102020200221A1 (en) | 2020-01-09 | 2021-07-15 | Volkswagen Aktiengesellschaft | Method and device for estimating an eye position of a driver of a vehicle |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000090191A (en) | 1998-09-16 | 2000-03-31 | Ntt Data Corp | Device and method for face recognition |
KR100304663B1 (en) * | 1998-12-04 | 2001-09-29 | 윤종용 | Color illumination color temperature detecting device and method thereof |
EP1842152B1 (en) | 2005-01-26 | 2011-01-05 | Honeywell International Inc. | A distance iris recognition system |
JP2006259900A (en) * | 2005-03-15 | 2006-09-28 | Omron Corp | Image processing system, image processor and processing method, recording medium, and program |
JP5510951B2 (en) | 2006-02-01 | 2014-06-04 | トビー テクノロジー アーベー | Generating graphical feedback in computer systems |
KR101831247B1 (en) | 2012-01-31 | 2018-02-22 | 한국전자통신연구원 | Apparatus for focus measurement in eye tracking system using multi layer perception |
EP2748792B1 (en) * | 2012-08-08 | 2016-12-21 | Dolby Laboratories Licensing Corporation | Image processing for hdr images |
US9165180B2 (en) | 2012-10-12 | 2015-10-20 | Microsoft Technology Licensing, Llc | Illumination sensitive face recognition |
EP2790126B1 (en) | 2013-04-08 | 2016-06-01 | Cogisen SRL | Method for gaze tracking |
CN105324794A (en) | 2013-04-10 | 2016-02-10 | 奥克兰联合服务有限公司 | Head and eye tracking |
WO2014185423A1 (en) * | 2013-05-14 | 2014-11-20 | 株式会社Gfl | Questionnaire system, questionnaire response device, questionnaire response method, and questionnaire response program |
KR20150014290A (en) | 2013-07-29 | 2015-02-06 | 엘지전자 주식회사 | Image display device and operation method of the image display device |
JP6144429B2 (en) * | 2013-12-09 | 2017-06-07 | ゼンソモトリック インストゥルメンツ ゲゼルシャフト ヒューア イノベイティブ ゼンソリック エムベーハーSENSOMOTORIC INSTRUMENTS Gesellschaft fur innovative Sensorik mbH | Eye tracking device operating method and eye tracking device performing active illumination control to improve eye tracking robustness |
KR102257249B1 (en) * | 2014-07-08 | 2021-05-28 | 삼성전자주식회사 | Apparatus and method for displaying holographic 3-dimensional image |
KR102281149B1 (en) * | 2014-10-22 | 2021-07-23 | 삼성전자주식회사 | APPARATUS FOR TRACKING EYE POINT OPERABLE AT HIGH intensity of illumination AND LOW intensity of illumination AND METHOD THEREOF |
KR102362042B1 (en) | 2014-12-18 | 2022-02-11 | 삼성전자주식회사 | Method and apparatus for controling an electronic device |
US9704038B2 (en) | 2015-01-07 | 2017-07-11 | Microsoft Technology Licensing, Llc | Eye tracking |
- 2016-11-22: KR application KR1020160155600A filed; patent KR102349543B1 (active, IP Right Grant)
- 2017-11-13: US application US15/810,730 filed; patent US10628947B2 (active)
- 2020-03-18: US application US16/822,204 filed; publication US20200219270A1 (pending)
Also Published As
Publication number | Publication date |
---|---|
US10628947B2 (en) | 2020-04-21 |
KR102349543B1 (en) | 2022-01-11 |
US20180144483A1 (en) | 2018-05-24 |
KR20180057225A (en) | 2018-05-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10628947B2 (en) | Method and apparatus for tracking eyes of user and method of generating inverse-transform image | |
US10962772B2 (en) | Method of removing reflection area, and eye tracking method and apparatus | |
US10262397B2 (en) | Image de-noising using an equalized gradient space | |
US9898080B2 (en) | Method and apparatus for eye tracking | |
US9600744B2 (en) | Adaptive interest rate control for visual search | |
KR20180105876A (en) | Method for tracking image in real time considering both color and shape at the same time and apparatus therefor | |
US10970821B2 (en) | Image blurring methods and apparatuses, storage media, and electronic devices | |
US9225898B2 (en) | Image pickup apparatus, image processing system, image pickup system, image processing method, and non-transitory computer-readable storage medium | |
KR101727438B1 (en) | Deformable expression detector | |
US10991111B2 (en) | Method and apparatus for refining depth image | |
KR20190090288A (en) | Electronic apparatus and controlling method of thereof | |
TW202014934A (en) | Electronic system and non-transitory computer-readable recording medium | |
US11132771B2 (en) | Bright spot removal using a neural network | |
US10848746B2 (en) | Apparatus including multiple cameras and image processing method | |
US20210133431A1 (en) | Image processing method and apparatus | |
US20210097650A1 (en) | Image processing method, storage medium, image processing apparatus, learned model manufacturing method, and image processing system | |
EP3937481A1 (en) | Image display method and device | |
KR102410834B1 (en) | Method of removing reflection area, eye-tracking method and apparatus thereof | |
US9298319B2 (en) | Multi-touch recognition apparatus using filtering and a difference image and control method thereof | |
US11327562B2 (en) | Method and apparatus for tracking eye based on eye reconstruction | |
CN110519526B (en) | Exposure time control method and device, storage medium and electronic equipment | |
US10026181B2 (en) | Method and apparatus for detecting object | |
US11315274B2 (en) | Depth determination for images captured with a moving camera and representing moving features | |
US20210166352A1 (en) | Method and apparatus for restoring image | |
KR20220013290A (en) | Auto-focus compensation method and auto-focus compensation device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS