KR20120058947A - Method and apparatus for displaying contents using eye tracking - Google Patents

Method and apparatus for displaying contents using eye tracking

Info

Publication number
KR20120058947A
Authority
KR
South Korea
Prior art keywords
user
area
content
unit
display unit
Prior art date
Application number
KR1020100120487A
Other languages
Korean (ko)
Inventor
박영진 (Park Young-jin)
Original Assignee
삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 삼성전자주식회사 (Samsung Electronics Co., Ltd.)
Priority to KR1020100120487A priority Critical patent/KR20120058947A/en
Publication of KR20120058947A publication Critical patent/KR20120058947A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G06F 3/013 Eye tracking input arrangements
    • G06F 3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 Detection arrangements using opto-electronic means
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

PURPOSE: A content display method using eye tracking and an apparatus therefor are provided to induce the user to concentrate on the content and to improve its readability by displaying the area the user is reading differently from other areas. CONSTITUTION: Content is displayed (205). Whether the eye tracking function is activated is checked (207). When the eye tracking function is activated, a camera unit is activated (209). The area the user is reading is extracted from the content display area by using a facial image of the user captured by the camera unit (211). The area the user is reading is visually differentiated from other areas and displayed (213). Character information included in the area is converted into voice information and output.

Description

Method and device for outputting content using eye tracking {METHOD AND APPARATUS FOR DISPLAYING CONTENTS USING EYE TRACKING}

The present invention relates to a method and apparatus for outputting content, and more particularly, to a method and apparatus for outputting content using eye tracking, which can provide various effects by using eye tracking technology.

Thanks to remarkable developments in information and communication technology and semiconductor technology, recent portable terminals provide not only general communication functions, such as voice calls and message transmission and reception, but also mobile broadcast viewing (e.g., digital multimedia broadcasting (DMB) or digital video broadcasting (DVB)), music playback (e.g., MP3 (MPEG Audio Layer-3)), photo capture, data communication, Internet access, location information services, and the like. To support this, the portable terminal may include various sensors and additional devices, for example, a camera, an illuminance sensor, a proximity sensor, an infrared sensor, a vibration motor, and a scent sensor.

Meanwhile, portable terminals with large screens, such as tablet PCs and e-book terminals, have become increasingly common in recent years. Accordingly, interest in e-books, through which books can be read on portable terminals, is also increasing. However, the conventional portable terminal provides only the simple function of outputting text and images to the display unit. This can be monotonous and boring for the user and can lead to reduced concentration. As such, the conventional portable terminal has the problem of not effectively utilizing its various sensors when outputting content.

The present invention was devised to solve the above-mentioned problems of the prior art. An object of the present invention is to provide a method and apparatus for outputting content using eye tracking that can distinguish the area the user is viewing from other areas by means of eye tracking technology, and display the two in a visually differentiated manner.

In addition, another object of the present invention is to provide a content output method and apparatus using eye tracking that can provide various auditory, tactile, and olfactory effects (sound effects, vibration, fragrance) corresponding to the information contained in the area that the user is viewing (reading).

In addition, another object of the present invention is to provide a method and apparatus for outputting content using eye tracking that varies the output magnification of the content according to the distance between the user and the terminal.

According to a preferred embodiment of the present invention, a method for outputting content using eye tracking includes: outputting content; checking whether the eye tracking function is activated; activating a camera unit when the eye tracking function is activated; extracting the area viewed by the user from a content display area through a face image of the user photographed by the camera unit; and visually distinguishing the area viewed by the user from other areas and outputting it.

According to an aspect of the present invention, there is provided a content output apparatus using eye tracking, including: a display unit configured to output content; a camera unit configured to photograph a face image of a user; and a control unit configured to check whether the eye tracking function is activated when the content is output, to activate the camera unit when the eye tracking function is activated, to extract the area viewed by the user from the content display area through the face image of the user captured by the camera unit, and to control the display unit to visually differentiate the area viewed by the user from other areas when outputting it.

As described above, the method and apparatus for outputting content using eye tracking according to an exemplary embodiment of the present invention can improve the readability of the content and induce the user's concentration by displaying the area viewed by the user differently from other areas. In addition, the present invention can present content realistically by providing tactile, auditory, and olfactory effects (vibration, sound effects, and scents) corresponding to the information included in the area viewed by the user. In addition, by reducing the brightness of the other areas that the user is not viewing (reading), and by turning off the display unit when the user is not viewing the content, unnecessary battery consumption can be prevented. In addition, since the present invention automatically controls the output magnification of the content according to the distance between the user and the terminal, the user can always view the content at the optimum magnification.

FIG. 1 is a block diagram schematically illustrating the configuration of a portable terminal according to an exemplary embodiment of the present invention.
FIG. 2 is a flowchart illustrating a content output method using eye tracking according to an exemplary embodiment of the present invention.
FIG. 3 is a view showing content output using eye tracking according to an embodiment of the present invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. Note that, in the drawings, the same components are denoted by the same reference numerals wherever possible. In addition, detailed descriptions of well-known functions and configurations that may obscure the gist of the present invention will be omitted.

It should be noted that the embodiments of the present invention disclosed in the present specification and drawings are presented only as examples to facilitate understanding of the present invention, and are not intended to limit its scope. It will be apparent to those skilled in the art that other modifications based on the technical idea of the present invention can be carried out in addition to the embodiments disclosed herein.

FIG. 1 is a block diagram schematically showing the configuration of a portable terminal 100 according to an embodiment of the present invention.

Referring to FIG. 1, the portable terminal 100 according to an exemplary embodiment of the present invention may include a control unit 110, a storage unit 120, a display unit 130, a camera unit 140, a sensor unit 150, and an audio processing unit 160. The control unit 110 may include an eye tracking unit 111.

The display unit 130 displays various menus of the portable terminal 100, information input by the user, or information to be provided to the user. For example, the display unit 130 may display various screens such as a standby screen, a message writing screen, and a call screen according to the use of the portable terminal 100. In particular, the display unit 130 according to an embodiment of the present invention may output content, for example, an e-book. The display unit 130 may, under the control of the controller 110, display the partial area that the user is viewing (reading) so as to be visually differentiated from other areas. For example, when content is output, the display unit 130 may increase the brightness of the partial area or decrease the brightness of the other areas under the control of the controller 110. Alternatively, the display unit 130 may display the partial area in a different color, or may enlarge and output it. The display unit 130 may be formed of a liquid crystal display, an organic light emitting diode (OLED) display, an active matrix organic light emitting diode (AMOLED) display, or the like. When the display unit 130 is provided in the form of a touch screen, it may also operate as an input unit (not shown).
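
As a concrete illustration of this kind of visual differentiation, the following sketch dims everything on a rendered page image except the region being read. It is a minimal example built on the Pillow imaging library, not the patent's implementation; the region box and dim factor are assumptions chosen for the example.

```python
from PIL import Image, ImageEnhance

def highlight_viewed_region(page: Image.Image, box: tuple, dim_factor: float = 0.4) -> Image.Image:
    """Dim the whole page, then restore original brightness inside `box`.

    `box` is (left, top, right, bottom) in pixels, e.g. the area the
    eye tracker reports the user is reading.
    """
    dimmed = ImageEnhance.Brightness(page).enhance(dim_factor)  # darken everything
    dimmed.paste(page.crop(box), (box[0], box[1]))              # keep the viewed area bright
    return dimmed

# Example: emphasize a horizontal band in the middle of an e-book page image.
# page = Image.open("page.png")
# out = highlight_viewed_region(page, (0, 400, page.width, 520))
```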

The camera unit 140 may capture an image (a still image or a moving image). In particular, the camera unit 140 according to an embodiment of the present invention is automatically activated when the eye tracking function is activated in the content output mode, photographs the user's face, and transmits the captured image to the control unit 110. To this end, the camera unit 140 is preferably mounted at a position from which the user's face can be photographed when the user looks directly at the display unit 130. Since a camera unit 140 for capturing still images and moving images is obvious to those skilled in the art, a detailed description thereof will be omitted.

Meanwhile, the camera unit 140 may include two or more cameras so that the distance between the display unit 130 and the user can be measured using stereo vision technology. Since distance measurement using stereo vision is a well-known technique, a detailed description thereof will be omitted.
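
For reference, the core of stereo-vision ranging is the relation Z = f * B / d: depth equals focal length (in pixels) times the camera baseline divided by the disparity between the two views. A minimal sketch, with every parameter value hypothetical:

```python
def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic stereo range equation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 6 cm baseline, 120 px disparity -> 0.4 m.
print(distance_from_disparity(800.0, 0.06, 120.0))
```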

The audio processor 160 may output an audio signal to a speaker SPK or transmit an audio signal input from a microphone MIC to the controller 110. That is, the audio processor 160 converts an analog voice signal input from the microphone MIC into a digital voice signal and transmits it to the controller 110, or converts a digital voice signal from the controller 110 into an analog voice signal and outputs it through the speaker SPK. For example, the audio processor 160 may output a key input sound pre-stored in the storage unit 120, other sound effects for executing functions, and music file (for example, MP3) playback sound. In particular, when the portable terminal 100 includes a text-to-speech (TTS) function, the audio processor 160 according to the present invention may output the text of the area that the user is viewing (reading) as voice information. In addition, the audio processor 160 may output a sound effect when sound information (e.g., an onomatopoeia) is included in the area that the user is viewing (reading). For example, under the control of the controller 110, the audio processor 160 may output a car crash sound through the speaker SPK when the user views (reads) a scene of a car accident. As such, the audio processor 160 may provide an auditory effect corresponding to the area viewed by the user when content is output.
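
A minimal sketch of the TTS behavior described above, using the pyttsx3 library as a stand-in (an assumption; the patent does not name a specific TTS engine):

```python
import pyttsx3

def speak_viewed_text(text: str) -> None:
    """Convert the text of the currently viewed area into speech."""
    engine = pyttsx3.init()   # selects the platform's default TTS driver
    engine.say(text)          # queue the viewed region's text
    engine.runAndWait()       # block until playback finishes

# speak_viewed_text("The two cars collided at the crossing.")
```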

The sensor unit 150 may provide tactile and olfactory effects corresponding to the information included in the area that the user is viewing (reading). To this end, the sensor unit 150 may include a vibration motor (not shown) for providing a vibration effect, a scent sensor (not shown) for providing a fragrance effect, and the like. For example, when the user is viewing (reading) information related to an earthquake, the sensor unit 150 may activate the vibration motor (not shown) under the control of the controller 110 to provide a vibration effect resembling an earthquake. Alternatively, when the user is viewing (reading) information related to a flower, the sensor unit 150 may activate the scent sensor (not shown) to generate the corresponding flower scent. Meanwhile, the sensor unit 150 may be omitted when the tactile and olfactory effects are not provided.
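
One simple way to realize this keyword-to-effect mapping is a lookup table from trigger words to output channels, as in the sketch below. The keywords and asset names are invented for illustration; a real system would draw them from the effect data described for the storage unit 120.

```python
# Hypothetical mapping from trigger words in the viewed text to effect channels.
EFFECT_TABLE = {
    "earthquake": ("vibration", "strong_rumble"),
    "collision":  ("vibration", "sharp_jolt"),
    "thunder":    ("sound",     "thunder.wav"),
    "flower":     ("scent",     "floral"),
}

def effects_for_region(region_text: str) -> list:
    """Return (channel, asset) pairs triggered by keywords in the viewed text."""
    lowered = region_text.lower()
    return [effect for keyword, effect in EFFECT_TABLE.items() if keyword in lowered]

# Example: a paragraph about an earthquake triggers the vibration channel.
print(effects_for_region("The earthquake shook the valley as thunder rolled."))
```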

The storage unit 120 may store user data as well as the programs necessary for operating functions according to an embodiment of the present invention. The storage unit 120 may largely include a program area and a data area. The program area may store a program for controlling the overall operation of the portable terminal 100, an operating system (OS) for booting the portable terminal 100, application programs required for playing multimedia content, and application programs required for other optional functions of the portable terminal, such as a camera function, a sound playback function, and an image or video playback function. In particular, the program area according to the present invention may include a program that analyzes the image photographed by the camera unit 140, extracts the area that the user is looking at, and outputs the extracted area so as to be visually differentiated. The program for visual differentiation may provide a visual effect to the user by changing the brightness, color, and the like of a predetermined area of the display unit 130, or by changing the size of the content displayed in the predetermined area. In addition, the program area may store a program that analyzes the information included in the area that the user is looking at and provides at least one of auditory, olfactory, and tactile effects in response to the analysis result. The program providing at least one of the auditory, olfactory, and tactile effects may convert text information of the area the user is looking at into voice information, or may output a sound effect corresponding to an onomatopoeia, a tactile (vibration) effect, or an olfactory (scent) effect.

The data area is an area in which data generated according to the use of the portable terminal 100 is stored, and may store phonebooks, audio data, information corresponding to particular content, user data, and the like. In particular, the data area according to the present invention may store information for providing the auditory, olfactory, and tactile effects corresponding to the information included in the area that the user is viewing (reading). For example, the data area may store auditory information such as a car crash sound, a thunder sound, and a rain sound, tactile information for providing vibration effects such as an earthquake or a collision, and olfactory information such as flower fragrances. The auditory, tactile, and olfactory information may be downloaded and stored by the user. Alternatively, it may be provided when the portable terminal 100 is manufactured, or provided together with the content.

The controller 110 may control the overall operation of the portable terminal 100 and the signal flow between the internal blocks of the portable terminal 100. In particular, the control unit 110 according to the present invention may provide various visual, auditory, tactile, and olfactory effects when outputting content by using the eye tracking function. To this end, the controller 110 may include an eye tracking unit 111.

In detail, the controller 110 checks whether the eye tracking function is activated (ON) when outputting content, activates the camera unit 140 when the eye tracking function is activated, and receives the face image of the user photographed through the camera unit 140. In this case, the eye tracking unit 111 may recognize the pupils from the face image of the user and, through pupil tracking, extract the area that the user is viewing (reading) from the content display area. The controller 110 may control the display unit 130 to output the area that the user is viewing (reading) so as to be visually distinguished from other areas. For example, the controller 110 may improve readability by increasing the brightness of the area viewed (read) by the user. Alternatively, the controller 110 may reduce the brightness of the other areas that the user is not viewing (reading), which improves readability and also reduces battery consumption. Alternatively, the controller 110 may enlarge and output the area that the user is viewing (reading); that is, the controller 110 may apply a magnifying-glass effect to that area. Alternatively, the controller 110 may output the area that the user is viewing (reading) in a color distinguished from other areas. In this case, it is preferable that the area viewed (read) by the user is output in a visually distinguishable color.
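
A crude sketch of the final mapping step, from a tracked gaze coordinate to a text region, is shown below. It assumes equal-height paragraph bands, which a real implementation would replace with the renderer's actual line boxes; the function and its parameters are illustrative, not the patent's algorithm.

```python
def region_for_gaze(gaze_y: float, page_height: int, num_regions: int) -> int:
    """Map a vertical gaze coordinate (pixels) to a paragraph band index."""
    band_height = page_height / num_regions
    index = int(gaze_y // band_height)
    return max(0, min(num_regions - 1, index))  # clamp to a valid band

# Example: gaze at y=430 on a 960 px page split into 8 bands -> band 3.
print(region_for_gaze(430.0, 960, 8))
```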

In addition, the controller 110 may analyze the information included in the area that the user is viewing (reading) and provide at least one of auditory, tactile, and olfactory effects in response to the analysis result. For example, when the portable terminal has a text-to-speech (TTS) function, the controller 110 may convert text information included in the area that the user is viewing (reading) into voice information and output it through the speaker SPK of the audio processor 160. Alternatively, when the analysis shows that the area viewed by the user includes auditory information such as a car crash, a thunder sound, or a rain sound, the controller 110 may output the corresponding sound effect through the speaker SPK of the audio processor 160. Alternatively, the controller 110 may output a vibration effect such as an earthquake or a collision through the vibration motor (not shown) of the sensor unit 150, or output fragrance information through the scent sensor (not shown) of the sensor unit 150.

The controller 110 may change the output magnification of the content according to the distance between the user and the display unit 130. For example, the controller 110 may reduce the output magnification of the content when the distance between the user and the display unit 130 is short, and increase the output magnification when the distance is long. That is, the controller 110 may perform a zoom-out/zoom-in function according to the distance between the user and the display unit 130. To this end, the portable terminal 100 according to the present invention may be provided with a separate distance measuring sensor (not shown), or, when it includes a plurality of cameras, may measure the distance using stereo vision technology. Since the distance measuring sensor and the stereo vision technology are apparent to those skilled in the art, a detailed description thereof will be omitted.
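
The distance-dependent magnification can be modeled as proportional scaling clamped to a comfortable range, as in the sketch below. The reference distance and clamp limits are assumptions for illustration only:

```python
def output_magnification(distance_m: float, reference_m: float = 0.35,
                         min_scale: float = 0.75, max_scale: float = 2.0) -> float:
    """Scale content with viewing distance: 1.0x at the reference distance,
    larger when the user moves away, smaller when the user moves closer."""
    scale = distance_m / reference_m
    return max(min_scale, min(max_scale, scale))

# Example: at 0.7 m the content is shown at 2.0x (the clamped maximum).
print(output_magnification(0.7))
```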

The controller 110 may turn off the display unit 130 when the face image of the user is not captured through the camera unit 140, that is, when the user is not viewing (reading) the content. Through this, the present invention can prevent unnecessary waste of the battery (not shown).

Meanwhile, although not shown in FIG. 1, the portable terminal 100 according to the present invention may further include components for providing additional functions, such as a wireless communication module for voice or video calls, a broadcast receiving module for broadcast reception, a digital audio playback module such as an MP3 module, a proximity sensor module for proximity sensing, and a short-range wireless communication module. Since such components vary widely with the digital convergence trend of digital devices, they cannot all be enumerated here; however, the portable terminal 100 according to the present invention may further include components equivalent to those mentioned above.

FIG. 2 is a flowchart illustrating a content output method using eye tracking according to an exemplary embodiment of the present invention, and FIG. 3 is a view illustrating content output using eye tracking according to an exemplary embodiment of the present invention.

Referring to FIGS. 1 to 3, the controller 110 may check whether the terminal is in the content output mode in step 201. If it is not in the content output mode, the controller 110 may perform a corresponding function in step 203. For example, the controller 110 may perform a music playback function, a broadcast receiving function, or a text message writing function in response to a user's request, or may remain in a standby state. In contrast, in the content output mode, the controller 110 may output the content selected by the user in step 205. The content may be an e-book. However, the present invention is not limited thereto; the content may be any content that can be output to the display unit 130 of the portable terminal 100, such as a web page or an electronic document.

The controller 110 may check whether the eye tracking function is activated (ON) in step 207. If the eye tracking function is not activated, the controller 110 may proceed to step 221, described later. On the other hand, when the eye tracking function is activated, the controller 110 may activate the camera unit 140 in step 209. When the camera unit 140 is activated, the controller 110 may extract the area that the user is viewing (reading) in step 211. To this end, the controller 110 may include an eye tracking unit 111. The eye tracking unit 111 may extract the area 10 that the user is viewing (reading) from the content display area through the face image of the user captured by the camera unit 140. Thereafter, the eye tracking unit 111 may track the user's gaze by checking the movement of the user's pupils and face in the images captured by the camera unit 140.
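
As a rough stand-in for the face-image analysis in steps 209 to 211, the sketch below uses OpenCV's stock Haar cascades to locate the user's face and eyes in a camera frame. This is a generic detection front end for illustration, not the patent's tracking algorithm:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(frame):
    """Return eye bounding boxes in frame coordinates; [] means no face seen
    (the case where the display could be turned off to save battery)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]                  # search for eyes inside the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            eyes.append((x + ex, y + ey, ew, eh))
    return eyes

# cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(detect_eyes(frame))
```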

When the extraction of the area 10 that the user is viewing (reading) is completed, the control unit 110 may, in step 213, display the extracted area 10 so as to be visually differentiated from the other area 20 of the content that the user is not viewing. For example, the controller 110 may improve readability by increasing the brightness of the area 10 that the user is viewing (reading). Alternatively, the controller 110 may reduce the brightness of the other area 20 that the user is not viewing (reading), thereby improving readability and reducing battery consumption. Alternatively, the controller 110 may enlarge and output the area 10 that the user is viewing (reading); that is, the controller 110 may apply a magnifying-glass effect to the area 10. Alternatively, the controller 110 may output the area 10 that the user is viewing (reading) in a color distinguished from other areas. In this case, it is preferable that the area 10 is output in a color that stands out visually from the other area 20.

The controller 110 may analyze the information included in the extracted area in step 215, and provide auditory, tactile, and olfactory effects corresponding to the analysis result in step 217. For example, when the portable terminal has the TTS function, the controller 110 may convert the text information included in the area 10 that the user is viewing (reading) into voice information and output it through the speaker SPK of the audio processor 160. When the analysis shows that the area 10 viewed by the user includes auditory information such as a car crash, a thunder sound, or a rain sound, the controller 110 may output the corresponding sound effect through the speaker SPK of the audio processor 160. Alternatively, the control unit 110 may output a vibration effect such as an earthquake or a collision through the vibration motor (not shown) of the sensor unit 150, or output an olfactory effect through the scent sensor (not shown) of the sensor unit 150.

The controller 110 may check whether the eye tracking function is terminated in step 219. When the eye tracking function is terminated, the controller 110 may output the content in the conventional manner in step 220. In this case, the controller 110 may deactivate the camera unit 140. Thereafter, the controller 110 checks whether a content output mode end signal is input in step 221, and if no end signal is input, returns to step 207 to repeat the above-described process. On the other hand, when the content output mode end signal is input in step 221, the controller 110 may end the content output mode and return to a previous state, for example, a standby screen or a content selection screen.

In contrast, when the eye tracking function is not terminated in step 219, the controller 110 checks whether a content output mode end signal is input in step 222, and returns to step 211 to repeat the above process when no end signal is input. On the other hand, when the content output mode end signal is input in step 222, the controller 110 may end the content output mode and return to a previous state, for example, a standby screen or a content selection screen.

Although not shown in FIG. 2, the controller 110 may change the output magnification of the content according to the distance between the user and the display unit 130. For example, the controller 110 may reduce the output magnification of the content when the distance between the user and the display unit 130 is short, and increase the output magnification when the distance is long. To this end, the portable terminal 100 according to the present invention may be provided with a separate distance measuring sensor (not shown), or, when it includes a plurality of cameras, may measure the distance using stereo vision technology. In addition, the controller 110 may turn off the display unit 130 when the face image of the user is not captured through the camera unit 140, that is, when the user is not viewing (reading) any area of the content. Through this, the present invention can prevent unnecessary waste of the battery (not shown).

The content output method using eye tracking according to the embodiments of the present invention described above may be implemented in the form of program instructions that can be executed by various computer means and recorded in a computer-readable recording medium. In this case, the computer-readable recording medium may include program commands, data files, data structures, and the like, alone or in combination. The program instructions recorded on the recording medium may be those specially designed and configured for the present invention, or those known and available to those skilled in the art of computer software.

The computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tapes, optical media such as CD-ROMs and DVDs, magneto-optical media such as floptical disks, and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. In addition, the program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention.

While preferred embodiments of the method and apparatus for outputting content using eye tracking according to the present invention have been described above through the specification and drawings, the specific terms used are employed only in a general sense to describe the technical content of the present invention easily and to aid understanding of the invention; the present invention is not limited to the above-described embodiments. That is, it is apparent to those skilled in the art that various embodiments based on the technical idea of the present invention are possible.

100: portable terminal 110: control unit
120: storage unit 130: display unit
140: camera unit 150: sensor unit
160: audio processor 111: eye tracking unit

Claims (17)

Outputting content;
Confirming whether the eye tracking function is activated;
Activating a camera unit when the eye tracking function is activated;
Extracting an area viewed by the user from a content display area through a face image of the user photographed through the camera unit; And
And visually distinguishing the area that the user is viewing from other areas and outputting them.
The method of claim 1,
The process of visually differentiating and outputting comprises at least one of:
Increasing the brightness of the area viewed by the user;
Enlarging and outputting the area viewed by the user;
Changing a color of the area viewed by the user; and
Reducing the brightness of the other areas.
The method of claim 1,
And outputting the text information included in the area that the user is viewing by converting the text information into voice information.
The method of claim 1,
Analyzing information of an area viewed by the user; And
And providing at least one of an auditory, tactile, and olfactory effect corresponding to the analysis result.
The method of claim 1,
And turning off the display unit when the face image of the user is not captured through the camera unit.
The method of claim 1,
Measuring a distance between the user and the display unit;
And adjusting the output magnification of the content according to the measured distance.
The method of claim 6,
The process of measuring the distance comprises at least one of:
Measuring through a distance measuring sensor; and
Measuring the distance using stereo vision technology with at least two cameras.
A display unit for outputting content;
A camera unit for photographing a face image of a user; And
A control unit configured to check whether the eye tracking function is activated when the content is output, to activate the camera unit when the eye tracking function is activated, to extract the area viewed by the user from the content display area through the face image of the user photographed through the camera unit, and to control the display unit to visually differentiate and output the area viewed by the user from other areas.
The method of claim 8,
The control unit
And the display unit is controlled to increase the brightness of the area that the user is viewing.
The method of claim 8,
The control unit
And the display unit is controlled to enlarge and output the area viewed by the user.
The method of claim 8,
The control unit
And the display unit is controlled to change and output a color of the area viewed by the user.
The method of claim 8,
The control unit
And the display unit is controlled to reduce the brightness of the other area.
The method of claim 8,
And an audio processing unit for converting text information included in an area viewed by the user into voice information and outputting the converted voice information.
The method of claim 8,
And a sensor unit configured to provide tactile and olfactory effects in response to the information included in the area viewed by the user.
The method of claim 8,
The control unit
And the output magnification of the content is adjusted according to the distance between the user's pupil and the terminal.
The method of claim 15,
The control unit
The content output device using eye tracking, characterized in that the distance is measured using stereo vision technology with at least two cameras.
The method of claim 8,
The control unit
And the display unit is turned off when the face image of the user is not captured through the camera unit.
KR1020100120487A 2010-11-30 2010-11-30 Method and apparatus for displaying contents using eye tracking KR20120058947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020100120487A KR20120058947A (en) 2010-11-30 2010-11-30 Method and apparatus for displaying contents using eye tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100120487A KR20120058947A (en) 2010-11-30 2010-11-30 Method and apparatus for displaying contents using eye tracking

Publications (1)

Publication Number Publication Date
KR20120058947A 2012-06-08

Family

ID=46610328

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100120487A KR20120058947A (en) 2010-11-30 2010-11-30 Method and apparatus for displaying contents using eye tracking

Country Status (1)

Country Link
KR (1) KR20120058947A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101529241B1 (en) * 2013-10-07 2015-06-17 황성재 System and method for controlling an electronic device based upon contents displayed on the electronic device
KR20160074315A (en) * 2014-12-18 2016-06-28 한국과학기술원 User terminal and method for providing haptic service of the same
CN112306223A (en) * 2019-08-30 2021-02-02 北京字节跳动网络技术有限公司 Information interaction method, device, equipment and medium
CN112306223B (en) * 2019-08-30 2024-03-26 北京字节跳动网络技术有限公司 Information interaction method, device, equipment and medium


Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E601 Decision to refuse application