KR20170084900A - Apparatus for providing additional information of eye tracking way and method using the same - Google Patents

Apparatus for providing additional information of eye tracking way and method using the same

Info

Publication number
KR20170084900A
Authority
KR
South Korea
Prior art keywords
additional information
processing unit
image processing
interest
infrared
Prior art date
Application number
KR1020160004306A
Other languages
Korean (ko)
Inventor
황인욱
김현철
서정일
이인재
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority to KR1020160004306A
Publication of KR20170084900A


Classifications

    • G06K9/00604
    • G06K9/00617
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F9/00Details other than those peculiar to special kinds or types of apparatus
    • G07F9/02Devices for alarm or indication, e.g. when empty; Advertising arrangements in coin-freed apparatus

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Control Of Vending Devices And Auxiliary Devices For Vending Devices (AREA)

Abstract

A gaze-tracking-type additional information providing apparatus and method are disclosed. According to an aspect of the present invention, there is provided a gaze-tracking-type additional information providing apparatus including: an infrared illumination unit for irradiating an object with infrared light; an infrared camera for capturing an infrared image corresponding to the object irradiated with the infrared light; an image processing unit for generating additional information based on the image; and a display unit for outputting the additional information.

Description

APPARATUS FOR PROVIDING ADDITIONAL INFORMATION OF EYE TRACKING WAY AND METHOD USING THE SAME

The present invention relates to a technique for analyzing the region at which an object's line of sight is directed and automatically providing the object with related information.

Conventional vending machines offered only a simple sales function: a consumer selects a desired product, inserts the corresponding amount, and purchases the product. In recent years, however, technologies for discriminating the sex and age of a consumer from a face image for more effective product sales, and technologies for recommending a specific product in the vending machine to the consumer according to the inserted amount or the sales volume of the product, have been developed.

Eye-tracking technology captures the movement of the pupil to determine where a person is looking. It is divided into wearable gaze tracking, which uses equipment worn by the user, and non-wearable gaze tracking, which does not. Wearable gaze tracking uses infrared illumination and a camera attached close to the eye, for example to eyeglasses or a viewfinder, as in the Eye Controlled Focus system used in Canon's EOS 5 camera in 1993; it precisely locates the eye and the pupil center, and finds the position the person is looking at on the target plane from the difference between the point where the reference infrared light is reflected from the cornea and the pupil center. Non-wearable gaze tracking, in which the system is located tens of centimeters or more from the user, must find the user's eye region in the whole image captured with infrared illumination and a camera and estimate the positional relationship between the user and the system. It is currently used for games, input interfaces for the disabled, and analysis of the attention drawn by the position of a product or advertisement.

As related prior art, Korean Patent Laid-Open No. 10-2013-0049099, entitled "Method and Apparatus for Estimating Age or Gender Using a Face Image," and Korean Patent Laid-Open No. 10-2009-0129264, entitled "Display Device for a Vending Machine and Control Method Thereof," have been disclosed.

However, these technologies can only indirectly predict products of interest through a statistical approach to a specific consumer group, and it is very difficult for them to instantly grasp an individual consumer's present taste.

An object of the present invention is to promote the sale of products from a vending machine using gaze-tracking technology.

It is another object of the present invention to increase the desire to purchase a product and the advertising effect of the product by automatically outputting information that helps the purchase of the product, using gaze-tracking technology.

Another object of the present invention is to provide additional information efficiently even when there are a large number of users, using gaze-tracking technology.

According to an aspect of the present invention, there is provided a gaze-tracking-type additional information providing apparatus including: an infrared illumination unit for irradiating an object with infrared light; an infrared camera for capturing an infrared image corresponding to the object irradiated with the infrared light; an image processing unit for generating additional information based on the image; and a display unit for outputting the additional information.

At this time, the image processing unit can search the face region of the object based on the image.

At this time, when the infrared light is irradiated, the image processing unit may search for the pupil based on the infrared light reflected from the cornea in the face region and on the phenomenon that the entire pupil appears dark.

At this time, the image processing unit may track the point of interest of the object based on the difference between the center of the pupil, whose position changes according to the gaze direction of the object, and the position of the fixed reflected light.

At this time, the image processing unit may generate the additional information corresponding to the point when the object's line of sight stays at the point for a predetermined time.

In this case, when there are a plurality of objects, the image processing unit may calculate the relative distances to the apparatus based on the brightness of the face regions of the objects and designate a face region based on the calculation.

According to another aspect of the present invention, there is provided a gaze-tracking-type additional information providing method comprising the steps of: irradiating an object with infrared light; capturing an infrared image corresponding to the object irradiated with the infrared light; generating additional information based on the image; and outputting the additional information.

In this case, the generating step may include: searching for a face region of the object based on the image; searching for the pupil based on the infrared light reflected from the cornea in the face region and on the phenomenon that the entire pupil appears dark when the face region is irradiated with the infrared light; tracking a point of interest based on the pupil; and generating the additional information based on the point of interest.

At this time, the tracking step may track the point of interest based on the difference between the center of the pupil, whose position changes according to the gaze direction of the object, and the position of the fixed reflected light.

The step of generating the additional information based on the point of interest may generate the additional information corresponding to the point of interest when the line of sight stays at the point of interest for a certain period of time.

In this case, when there are a plurality of objects, the searching step may calculate the relative distances to the apparatus based on the brightness of the face regions of the objects and designate a face region based on the calculation.

The present invention can facilitate the sale of products from a vending machine using gaze-tracking technology.

In addition, the present invention can automatically output information that helps the purchase of a product by using gaze-tracking technology, thereby increasing the desire to purchase the product and the advertising effect of the product.

In addition, the present invention can provide additional information efficiently even when there are a large number of users, by using gaze-tracking technology.

FIG. 1 is a view showing an example of a vending machine including a gaze-tracking-type additional information providing apparatus according to an embodiment of the present invention.
FIG. 2 is a block diagram illustrating a gaze-tracking-type additional information providing apparatus according to an embodiment of the present invention.
FIG. 3 is a view illustrating an example of the face region and pupil search process of the image processing unit of FIG. 2 according to an embodiment of the present invention.
FIG. 4 is an operation flowchart illustrating a gaze-tracking-type additional information providing method according to an embodiment of the present invention.
FIG. 5 is an operation flowchart illustrating in detail the additional information generating step of FIG. 4 according to an embodiment of the present invention.

Hereinafter, the present invention will be described in detail with reference to the accompanying drawings. Repeated descriptions, and descriptions of known functions and configurations that may obscure the gist of the present invention, will be omitted. The embodiments of the present invention are provided to describe the present invention more completely to those skilled in the art. Accordingly, the shapes and sizes of the elements in the drawings may be exaggerated for clarity.

Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 1 is a view showing an example of a vending machine including a gaze-tracking-type additional information providing apparatus according to an embodiment of the present invention.

Referring to FIG. 1, the gaze tracking type additional information providing apparatus 110 according to an embodiment of the present invention may be included in the vending machine 100.

The eye-tracking-type additional information providing apparatus 110 may include an infrared light 111, an infrared camera 112, an image processing unit 113, and a display unit 130.

One or more infrared lights 111 may be disposed on the front surface of the vending machine 100.

At this time, the infrared ray illuminator 111 can irradiate the infrared rays to the eye level (about 1.0 to 1.8 m) of the object 200, which is about 0.5 to 1.5 m away from the vending machine 100.

The infrared camera 112 can track the gaze of the object in real time.

At this time, the infrared camera 112 can photograph the irradiation range of the infrared light 111.

In addition, the infrared camera 112 can capture infrared images at a regular rate of five or more frames per second.
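As an illustration of this capture step, the following Python sketch (using OpenCV, with an assumed camera device index; a real infrared camera may need vendor-specific configuration) grabs grayscale frames paced at roughly five per second:

```python
# Hedged sketch of the periodic IR capture: grab a frame, convert it to
# grayscale for the image processing unit, and pace the loop.
# The device index 0 is an assumption, not part of the disclosure.
import time
import cv2

CAPTURE_INTERVAL = 1.0 / 5.0  # five or more captures per second

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    ir_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # ... hand ir_gray to the image processing unit 113 ...
    time.sleep(CAPTURE_INTERVAL)
cap.release()
```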

The image processing unit 113 can search the face area 300 of the object 200 from the photographed infrared image.

At this time, the image processing unit 113 can search the face area 300 using an AdaBoost algorithm or other methods.

Here, if the position of the face area 300 is not detected, the image processing unit 113 may not use the eye tracking function.

The AdaBoost algorithm can combine a number of weak classifiers into a single classifier with excellent performance.

In addition, the image processing unit 113 can determine that there are a plurality of objects 200 when a plurality of face regions 300 are detected.

At this time, when a plurality of face regions are found, the image processing unit 113 can designate one of them.

In addition, the image processing unit 113 may designate one or more of the face regions based on the brightness of the plurality of face regions.
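As a concrete sketch of this face-search step (an illustration, not the disclosed implementation), the following Python code uses an AdaBoost-trained Haar cascade shipped with OpenCV to detect candidate face regions and, when several are found, designates the brightest one, treating brightness under the infrared illumination as a proxy for proximity to the apparatus; the cascade file and detector parameters are assumptions:

```python
import cv2
import numpy as np

# AdaBoost-trained cascade bundled with the opencv-python distribution
# (an assumed stand-in for the patent's unspecified detector).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def select_face(ir_gray: np.ndarray):
    """Return one (x, y, w, h) face region to track, or None."""
    faces = face_cascade.detectMultiScale(
        ir_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # face search failed: eye tracking is not used

    def mean_brightness(face):
        x, y, w, h = face
        return ir_gray[y:y + h, x:x + w].mean()

    # A brighter face reflects more IR light and is assumed closer,
    # so it is the one designated for tracking.
    return max(faces, key=mean_brightness)
```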

In addition, the image processing unit 113 can search for the pupil 301 based on the searched face area 300.

In addition, when a plurality of face regions 300 have been found, the image processing unit 113 can search for the pupils in each of them.

In this case, when there are a plurality of objects 200, the image processing unit 113 can designate only those objects for which the pupils of both eyes have been found.

In addition, the image processing unit 113 may not use the gaze-tracking function when gaze tracking is difficult due to irregular reflection of the infrared light, occlusion of the face, or rapid movement.

Here, based on the phenomenon that the infrared light is reflected as a small, fixed, high-brightness point (the fixed reflected light 302) at a certain position on the cornea of the eye while the entire pupil appears dark, the image processing unit 113 can search for the pupil 301 of the object 200 using a boundary detection technique such as adaptive thresholding together with ellipse fitting.
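A minimal Python sketch of this pupil search, assuming an eye region has already been cropped from the face region; the glint is taken as the brightest spot and the pupil as the largest dark blob recovered with adaptive thresholding and ellipse fitting (all parameter values are illustrative):

```python
import cv2
import numpy as np

def find_pupil_and_glint(eye_gray: np.ndarray):
    """Return ((pupil_cx, pupil_cy), (glint_x, glint_y)) or None."""
    # Glint: the small, fixed, high-brightness corneal reflection.
    _, _, _, glint = cv2.minMaxLoc(cv2.GaussianBlur(eye_gray, (5, 5), 0))
    # Pupil: uniformly dark under IR; adaptive thresholding copes with
    # uneven illumination across the eye region.
    binary = cv2.adaptiveThreshold(
        eye_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY_INV, 21, 10)
    contours, _ = cv2.findContours(
        binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    candidates = [c for c in contours if len(c) >= 5]  # fitEllipse needs 5+
    if not candidates:
        return None  # pupil search failed: eye tracking is not used
    # Fit an ellipse to the largest dark blob; its center is the pupil
    # center used for gaze estimation.
    (cx, cy), _, _ = cv2.fitEllipse(max(candidates, key=cv2.contourArea))
    return (cx, cy), glint
```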

Here, if the location search of the pupil 301 fails, the image processing unit 113 may not use the gaze tracking function.

The image processing unit 113 estimates the gaze direction of the object 200 using the difference between the center of the pupil 301, whose position changes according to the gaze direction of the object 200, and the position of the fixed reflected light 302.

When there are a plurality of objects 200, the image processing unit 113 can estimate the gaze directions of the plurality of objects using the differences between the centers of their pupils, whose positions change according to their gaze directions, and the positions of the reflected light fixed on their corneas.
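The patent does not spell out how the pupil-glint difference becomes a position in front of the machine; a common non-wearable approach, sketched below purely as an assumption, fits a regression from the 2D pupil-minus-glint vector to panel coordinates using a few calibration gazes at known positions:

```python
import numpy as np

def fit_gaze_mapping(offsets, panel_points):
    """offsets: Nx2 pupil-minus-glint vectors; panel_points: Nx2 known
    gaze targets on the vending machine front, from calibration."""
    o = np.asarray(offsets, dtype=float)
    design = np.hstack([o, np.ones((len(o), 1))])  # affine model
    A, *_ = np.linalg.lstsq(
        design, np.asarray(panel_points, dtype=float), rcond=None)
    return A  # 3x2 coefficient matrix

def gaze_point(A, pupil, glint):
    """Map one pupil/glint pair to an estimated point on the panel."""
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    return np.array([dx, dy, 1.0]) @ A
```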

Then, the image processing unit 113 can estimate the position where the gaze direction of the object 200 meets the front surface of the vending machine 100.

When there are a plurality of objects 200, the image processing unit 113 can estimate a plurality of positions where their gaze directions meet the front surface of the vending machine 100.

Also, the image processing unit 113 can track the point of interest by comparing the estimated position with the positions of the real objects, products, or images displayed on the vending machine 100.

When there are a plurality of objects 200, the image processing unit 113 can track a plurality of points of interest by comparing the plurality of estimated positions with the positions of the real objects, products, or images displayed on the vending machine 100.

Also, if the line of sight of the object 200 stays on a specific product corresponding to the point of interest for a predetermined time (about 0.5 seconds or more), the image processing unit 113 can determine that the product is a product of interest.

In addition, when there are a plurality of objects 200, if the eyes of the plurality of objects stay on the specific products corresponding to their points of interest for a predetermined time (about 0.5 seconds or more), the image processing unit 113 can determine that those products are products of interest.
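This dwell test can be illustrated with a small state machine (a sketch; the product lookup and exact threshold handling are assumptions beyond the approximate 0.5 s figure above):

```python
import time

DWELL_SECONDS = 0.5  # approximate threshold from the description

class DwellDetector:
    """Declare a product of interest once the gaze has stayed on the
    same product for at least DWELL_SECONDS."""

    def __init__(self):
        self.current = None  # product id currently looked at
        self.since = 0.0     # when the gaze entered that product

    def update(self, product_id):
        now = time.monotonic()
        if product_id != self.current:
            self.current, self.since = product_id, now
            return None
        if product_id is not None and now - self.since >= DWELL_SECONDS:
            return product_id  # product of interest found
        return None
```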

In addition, the image processing unit 113 generates additional information based on various information related to the product of interest.

In addition, if there are a plurality of items of interest, the image processing unit 113 may generate various additional information corresponding to all the items of interest.

The additional information may be various information for facilitating the purchase of the product of interest.

Examples of the additional information may include an event related to the product of interest, an advertisement image, an animation, a graphic effect, detailed information on the product of interest, information on similar products, information related to the object 200, and the like.

In addition, the image processing unit 113 may generate additional information that compares a product in which the object 200 has shown interest with a new product and recommends the new product, based on a list of products that the object 200 has already purchased or shown interest in.

When there are a plurality of objects 200, the image processing unit 113 may generate additional information that compares products of interest with new products and recommends them, based on lists of products that the plurality of objects have already purchased or shown interest in.

In addition, the image processing unit 113 can generate additional information that compares a product of interest with a new product and recommends it by introducing game elements such as statistical purchase results, a random slot machine, or a ladder game.

In addition, the image processing unit 113 may transmit the generated additional information to the display unit 130.

The display unit 130 may include a display device such as an LCD or LED panel.

The display unit 130 may receive the additional information from the image processing unit 113.

The display unit 130 may output the received additional information.

FIG. 2 is a block diagram illustrating a gaze-tracking-type additional information providing apparatus according to an embodiment of the present invention.

Referring to FIG. 2, the gaze-tracking-type additional information providing apparatus 110 according to an embodiment of the present invention may include an infrared light 111, an infrared camera 112, an image processing unit 113, and a display unit 130.

One or more infrared lights 111 may be disposed on the front surface of the vending machine 100.

At this time, the infrared ray illuminator 111 can irradiate the infrared rays to the eye level (about 1.0 to 1.8 m) of the object 200, which is about 0.5 to 1.5 m away from the vending machine 100.

The infrared camera 112 can track the gaze of the object in real time.

At this time, the infrared camera 112 can photograph the irradiation range of the infrared light 111.

In addition, the infrared camera 112 can capture infrared images at a regular rate of five or more frames per second.

The image processing unit 113 can search the face area 300 of the object 200 from the photographed infrared image.

At this time, the image processing unit 113 can search the face area 300 using an AdaBoost algorithm or other methods.

Here, if the position of the face area 300 is not detected, the image processing unit 113 may not use the eye tracking function.

The AdaBoost algorithm can combine a number of weak classifiers into a single classifier with excellent performance.

In addition, the image processing unit 113 can determine that there are a plurality of objects 200 when a plurality of face regions 300 are detected.

At this time, when a plurality of face regions are found, the image processing unit 113 can designate one of them.

In addition, the image processing unit 113 may designate one or more of the face regions based on the brightness of the plurality of face regions.

In addition, the image processing unit 113 can search for the pupil 301 from the searched face area 300.

In addition, when a plurality of face regions have been found, the image processing unit 113 can search for the pupils in each of them.

In this case, when a plurality of face regions have been found, the image processing unit 113 can designate only those objects for which the pupils of both eyes have been found.

In addition, the image processing unit 113 may not use the gaze-tracking function when gaze tracking is difficult due to irregular reflection of the infrared light, occlusion of the face, or rapid movement.

Here, based on the phenomenon that the infrared light is reflected as a small, fixed, high-brightness point (the fixed reflected light 302) at a certain position on the cornea of the eye while the entire pupil appears dark, the image processing unit 113 can search for the pupil 301 of the object 200 using a boundary detection technique such as adaptive thresholding together with ellipse fitting.

Here, if the location search of the pupil 301 fails, the image processing unit 113 may not use the gaze tracking function.

The image processing unit 113 estimates the gaze direction of the object 200 using the difference between the center of the pupil 301, whose position changes according to the gaze direction of the object 200, and the position of the fixed reflected light 302.

When there are a plurality of objects 200, the image processing unit 113 can estimate the gaze directions of the plurality of objects using the differences between the centers of their pupils, whose positions change according to their gaze directions, and the positions of the reflected light fixed on their corneas.

Then, the image processing unit 113 can estimate the position where the gaze direction of the object 200 meets the front surface of the vending machine 100.

When there are a plurality of objects 200, the image processing unit 113 can estimate a plurality of positions where their gaze directions meet the front surface of the vending machine 100.

Also, the image processing unit 113 can track the point of interest by comparing the estimated position with the positions of the real objects, products, or images displayed on the vending machine 100.

When there are a plurality of objects 200, the image processing unit 113 can track a plurality of points of interest by comparing the plurality of estimated positions with the positions of the real objects, products, or images displayed on the vending machine 100.

Also, if the line of sight of the object 200 stays on a specific product corresponding to the point of interest for a predetermined time (about 0.5 seconds or more), the image processing unit 113 can determine that the product is a product of interest.

In addition, when there are a plurality of objects 200, if the eyes of the plurality of objects stay on the specific products corresponding to their points of interest for a predetermined time (about 0.5 seconds or more), the image processing unit 113 can determine that those products are products of interest.

In addition, the image processing unit 113 generates additional information based on various information related to the product of interest.

In addition, if there are a plurality of items of interest, the image processing unit 113 may generate various additional information corresponding to all the items of interest.

The additional information may be various information for facilitating the purchase of the product of interest.

Examples of the additional information may include an event related to the product of interest, an advertisement image, an animation, a graphic effect, detailed information on the product of interest, information on similar products, information related to the object 200, and the like.

In addition, the image processing unit 113 may generate additional information that compares a product in which the object 200 has shown interest with a new product and recommends the new product, based on a list of products that the object 200 has already purchased or shown interest in.

When there are a plurality of objects 200, the image processing unit 113 may generate additional information that compares products of interest with new products and recommends them, based on lists of products that the plurality of objects have already purchased or shown interest in.

In addition, the image processing unit 113 can generate additional information that compares a product of interest with a new product and recommends it by introducing game elements such as statistical purchase results, a random slot machine, or a ladder game.

In addition, the image processing unit 113 may transmit the generated additional information to the display unit 130.

The display unit 130 may include a display device such as an LCD or LED panel.

The display unit 130 may receive the additional information from the image processing unit 113.

The display unit 130 may output the received additional information.

FIG. 3 is a view illustrating an example of a face region and a pupil search process of the image processing unit of FIG. 2 according to an embodiment of the present invention.

Referring to FIG. 3, the image processing unit 113 can search the face region 300 based on the image of the object 200 captured by the infrared camera 112.

At this time, the image processing unit 113 can search the face area 300 using an AdaBoost algorithm or other methods.

Here, if the position of the face area 300 is not detected, the image processing unit 113 may not use the eye tracking function.

The AdaBoost algorithm can combine a number of weak classifiers into a single classifier with excellent performance.

In addition, the image processing unit 113 can search the pupil 301 from the searched face area 300.

In addition, based on the phenomenon that the infrared light is reflected as a small, fixed, high-brightness point (the fixed reflected light 302) at a certain position on the cornea of the eye while the entire pupil appears dark, the image processing unit 113 can search for the pupil 301 of the object 200 using a boundary detection technique such as adaptive thresholding together with ellipse fitting.

The image processing unit 113 estimates the gaze direction of the object 200 using the difference between the center of the pupil 301, whose position changes according to the gaze direction of the object 200, and the position of the fixed reflected light 302.

Hereinafter, a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings.

FIG. 4 is an operation flowchart illustrating a gaze-tracking-type additional information providing method according to an embodiment of the present invention.

Referring to FIG. 4, in the gaze-tracking-type additional information providing method according to an embodiment of the present invention, infrared light may first be irradiated (S410).

That is, in step S410, the object 200 can be irradiated with infrared light using the infrared illumination 111.

In step S410, infrared light may be irradiated toward the eye level (about 1.0 to 1.8 m) of the object 200 located about 0.5 to 1.5 m away from the vending machine 100, using the infrared illumination 111.

In step S420, the gaze of the subject can be tracked in real time using the infrared camera 112.

At this time, in step S420, the irradiation range of the infrared illumination 111 can be photographed using the infrared camera 112.

Here, in step S420, the infrared camera 112 may be used to capture infrared images at a regular rate of five or more frames per second.

In step S430, the face region 300 may be searched for first (S431).

At this time, in step S431, the face region 300 of the object 200 can be searched for based on the infrared image captured in step S420.

In addition, in step S431, the face region 300 may be searched for using a method such as the AdaBoost algorithm.

The AdaBoost algorithm can combine a number of weak classifiers into a single classifier with excellent performance.

In addition, the step S431 may search the face area 300 using an algorithm other than AdaBoost.

Here, if step S431 fails to find the location of the face area 300, the gaze tracking type additional information providing method may be terminated.

In addition, in step S431, it may be determined that there are a plurality of objects 200 when a plurality of face regions 300 are detected.

At this time, in step S431, when a plurality of face regions are found, one of them can be designated.

In addition, in step S431, one or more of the face regions may be designated based on the brightness of the plurality of face regions 300.

In addition, in step S432, the pupil 301 can be searched for in the searched face region 300.

In addition, when there are a plurality of objects 200, in step S432, the pupils can be searched for in each of the plurality of searched face regions.

In this case, in step S432, when there are a plurality of objects 200, only those objects for which the pupils of both eyes have been found can be designated.

In addition, in step S432, when gaze tracking is difficult due to irregular reflection of the infrared light, occlusion of the face, rapid movement, or the like, the gaze-tracking-type additional information providing method may be terminated.

In step S432, based on the phenomenon that the infrared light is reflected as a small, fixed, high-brightness point (the fixed reflected light 302) at a certain position on the cornea of the eye while the entire pupil appears dark, the pupil 301 of the object 200 can be searched for using a boundary detection technique such as adaptive thresholding together with ellipse fitting.

In step S433, the gaze direction of the object 200 can be estimated using the difference between the center of the pupil 301, whose position changes according to the gaze direction of the object 200, and the position of the fixed reflected light 302.

In step S433, when there are a plurality of objects 200, their gaze directions can be estimated using the differences between the centers of their pupils, whose positions change according to their gaze directions, and the positions of the reflected light fixed on their corneas.

Next, in step S433, the position where the gaze direction of the object 200 meets the front surface of the vending machine 100 can be estimated.

In step S433, when there are a plurality of objects 200, a plurality of positions where their gaze directions meet the front surface of the vending machine 100 may be estimated.

Step S433 may also track the point of interest by comparing the estimated position with the positions of the real objects, products, or images displayed on the vending machine 100.

In step S433, when there are a plurality of objects 200, a plurality of points of interest can be tracked by comparing the plurality of estimated positions with the positions of the real objects, products, or images displayed on the vending machine 100.

In step S434, if the line of sight of the object 200 stays on a specific product corresponding to the point of interest for a predetermined time (about 0.5 seconds or more), it can be determined that the product is a product of interest.

In addition, in step S434, when there are a plurality of objects 200, if their eyes stay on the specific products corresponding to their points of interest for a predetermined time (about 0.5 seconds or more), it can be determined that those products are products of interest.

In addition, step S434 creates additional information based on various information related to the product of interest.

In addition, step S434 may generate various additional information corresponding to all the interested goods if there are plural items of interest.

The additional information may be various information for facilitating the purchase of the product of interest.

Examples of the additional information may include an event related to the product of interest, an advertisement image, an animation, a graphic effect, detailed information on the product of interest, information on similar products, information related to the object 200, and the like.

In addition, step S434 may generate additional information that compares a product in which the object 200 has shown interest with a new product and recommends the new product, based on a list of products that the object 200 has already purchased or shown interest in.

In addition, when there are a plurality of objects 200, step S434 may generate additional information that compares products of interest with new products and recommends them, based on lists of products that the plurality of objects have already purchased or shown interest in.

In addition, step S434 may introduce game elements such as statistical purchase results, a random slot machine, or a ladder game to generate additional information that compares a product of interest with a new product and recommends it.

In step S440, the additional information generated in step S434 may be output using the display unit 130.
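Tying steps S410 to S440 together, the following sketch reuses the helpers outlined earlier (select_face, find_pupil_and_glint, gaze_point, DwellDetector); all of them, as well as the panel_layout.product_at lookup, are hypothetical illustrations rather than the disclosed implementation:

```python
def process_frame(ir_gray, A, dwell, panel_layout):
    """One pass over a captured IR frame; returns a product of interest
    (to be handed to the display unit) or None."""
    face = select_face(ir_gray)                # S431: face search
    if face is None:
        return None
    x, y, w, h = face
    eye_gray = ir_gray[y:y + h // 2, x:x + w]  # rough upper-face eye band
    found = find_pupil_and_glint(eye_gray)     # S432: pupil search
    if found is None:
        return None
    pupil, glint = found
    px, py = gaze_point(A, pupil, glint)       # S433: gaze estimation
    product = panel_layout.product_at(px, py)  # hypothetical layout lookup
    return dwell.update(product)               # S434: dwell-time test
```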

FIG. 5 is an operation flowchart illustrating details of the additional information generating step of FIG. 4 according to an embodiment of the present invention.

Referring to FIG. 5, in step S430, the face region 300 may be searched for first (S431).

At this time, in step S431, the face region 300 of the object 200 can be searched for based on the infrared image captured in step S420.

In addition, in step S431, the face region 300 may be searched for using a method such as the AdaBoost algorithm.

The AdaBoost algorithm can combine a number of weak classifiers into a single classifier with excellent performance.

In addition, the step S431 may search the face area 300 using an algorithm other than AdaBoost.

Here, if step S431 fails to find the location of the face area 300, the gaze tracking type additional information providing method may be terminated.

In addition, in step S431, it may be determined that there are a plurality of objects 200 when a plurality of face regions 300 are detected.

At this time, in step S431, when a plurality of face regions are found, one of them can be designated.

In addition, in step S431, one or more of the face regions may be designated based on the brightness of the plurality of face regions 300.

In addition, in step S432, the pupil 301 can be searched for in the searched face region 300.

In addition, when there are a plurality of objects 200, in step S432, the pupils can be searched for in each of the plurality of searched face regions.

In this case, in step S432, when there are a plurality of objects 200, only those objects for which the pupils of both eyes have been found can be designated.

In addition, in step S432, when gaze tracking is difficult due to irregular reflection of the infrared light, occlusion of the face, rapid movement, or the like, the gaze-tracking-type additional information providing method may be terminated.

In step S432, based on the phenomenon that the infrared light is reflected as a small, fixed, high-brightness point (the fixed reflected light 302) at a certain position on the cornea of the eye while the entire pupil appears dark, the pupil 301 of the object 200 can be searched for using a boundary detection technique such as adaptive thresholding together with ellipse fitting.

In step S433, the gaze direction of the object 200 can be estimated using the difference between the center of the pupil 301, whose position changes according to the gaze direction of the object 200, and the position of the fixed reflected light 302.

In step S433, when there are a plurality of objects 200, their gaze directions can be estimated using the differences between the centers of their pupils, whose positions change according to their gaze directions, and the positions of the reflected light fixed on their corneas.

Next, in step S433, the position where the gaze direction of the object 200 meets the front surface of the vending machine 100 can be estimated.

In step S433, when there are a plurality of objects 200, a plurality of positions where their gaze directions meet the front surface of the vending machine 100 may be estimated.

Step S433 may also track the point of interest by comparing the estimated position with the positions of the real objects, products, or images displayed on the vending machine 100.

In step S433, when there are a plurality of objects 200, a plurality of points of interest can be tracked by comparing the plurality of estimated positions with the positions of the real objects, products, or images displayed on the vending machine 100.

In step S434, if the line of sight of the object 200 stays on a specific product corresponding to the point of interest for a predetermined time (about 0.5 seconds or more), it can be determined that the product is a product of interest.

In addition, in step S434, when there are a plurality of objects 200, if their eyes stay on the specific products corresponding to their points of interest for a predetermined time (about 0.5 seconds or more), it can be determined that those products are products of interest.

In addition, step S434 creates additional information based on various information related to the product of interest.

In addition, step S434 may generate various additional information corresponding to all the interested goods if there are plural items of interest.

The additional information may be various information for facilitating the purchase of the product of interest.

Examples of the additional information may include an event related to the product of interest, an advertisement image, an animation, a graphic effect, detailed information on the product of interest, information on similar products, information related to the object 200, and the like.

In addition, step S434 may generate additional information that compares a product in which the object 200 has shown interest with a new product and recommends the new product, based on a list of products that the object 200 has already purchased or shown interest in.

In addition, when there are a plurality of objects 200, step S434 may generate additional information that compares products of interest with new products and recommends them, based on lists of products that the plurality of objects have already purchased or shown interest in.

In addition, step S434 may introduce game elements such as statistical purchase results, a random slot machine, or a ladder game to generate additional information that compares a product of interest with a new product and recommends it.

100: Vending machine
110: Eye-tracking type additional information providing device
111: Infrared illumination
112: Infrared camera
113: Image processing unit
130: Display unit
200: Object
300: face area
301: Pupil
302: Fixed reflected light

Claims (1)

A gaze-tracking-type additional information providing apparatus comprising:
an infrared illumination unit for irradiating an object with infrared light;
an infrared camera for capturing an infrared image corresponding to the object irradiated with the infrared light;
an image processing unit for generating additional information based on the image; and
a display unit for outputting the additional information.
KR1020160004306A 2016-01-13 2016-01-13 Apparatus for providing additional information of eye tracking way and method using the same KR20170084900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160004306A KR20170084900A (en) 2016-01-13 2016-01-13 Apparatus for providing additional information of eye tracking way and method using the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160004306A KR20170084900A (en) 2016-01-13 2016-01-13 Apparatus for providing additional information of eye tracking way and method using the same

Publications (1)

Publication Number Publication Date
KR20170084900A true KR20170084900A (en) 2017-07-21

Family

ID=59462886

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160004306A KR20170084900A (en) 2016-01-13 2016-01-13 Apparatus for providing additional information of eye tracking way and method using the same

Country Status (1)

Country Link
KR (1) KR20170084900A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08140937A (en) * 1994-11-22 1996-06-04 Nikon Corp Pupil center position detector, and visual line detector
JP2009140234A (en) * 2007-12-06 2009-06-25 Yamaha Motor Co Ltd Automatic dispenser
KR20110038568A * 2009-10-08 2011-04-14 한국전자통신연구원 Apparatus and method for tracking eye
JP2016112087A (en) * 2014-12-12 2016-06-23 株式会社三共 Game machine

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376666A (en) * 2018-10-29 2019-02-22 百度在线网络技术(北京)有限公司 A kind of commercial articles vending method, apparatus, vending machine and storage medium
CN109376666B (en) * 2018-10-29 2022-01-25 百度在线网络技术(北京)有限公司 Commodity selling method and device, selling machine and storage medium
US11501299B2 (en) 2018-10-29 2022-11-15 Baidu Online Network Technology (Beijing) Co., Ltd. Method for selling commodity, vending machine and storage medium


Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E601 Decision to refuse application