KR101705988B1 - Virtual reality apparatus - Google Patents

Virtual reality apparatus Download PDF

Info

Publication number
KR101705988B1
Authority
KR
South Korea
Prior art keywords
user
image
area
line
sight
Prior art date
Application number
KR1020150122452A
Other languages
Korean (ko)
Inventor
윤승훈
홍지완
Original Assignee
윤승훈
홍지완
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 윤승훈, 홍지완
Priority to KR1020150122452A
Application granted
Publication of KR101705988B1

Links

Images

Classifications

    • H04N13/0484
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • H04N13/0468
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g 3D video

Abstract

A virtual reality apparatus according to the present embodiment includes: an image output unit that provides an image viewable three-dimensionally or through 360 degrees by displaying an image for each of the user's left and right eyes; a sound output unit that outputs sound data included in the content displayed through the image output unit; a line-of-sight tracking unit that tracks the movement of the user's line of sight; and a user-adaptive content analysis unit that detects, through the line-of-sight tracking unit, a line-of-sight position region indicating where the user is looking in the image, and analyzes line-of-sight data for the objects shown in the image based on the position and movement of that region. The line-of-sight data includes, for each object included in the displayed image, information on how long the line-of-sight position region rested on that object.

Description

[0001] The present invention relates to a virtual reality apparatus.

The present invention relates to a virtual reality apparatus, and more particularly, to a device capable of analyzing the user's interest in the objects of an image displayed in content through gaze tracking of the user, and of providing various services based on that analysis.

Interactive technology based on motion recognition for virtual reality and games has begun to be commercialized and is spreading to various product fields such as TV. With the popularization of 3D technology, it has also been applied to various experience-based technologies that interact with the home entertainment industry.

Behavior-based content that maximizes liveliness and realism by converging stereoscopic images, which let 3D content be viewed as if it were real, with interactive technology is also being developed. That is, experiential content in fields such as sports, games, entertainment, and education is being developed using a 3D or 360-degree viewable interface based on motion tracking, which allows a user to interact directly with virtual objects.

However, technology development has focused only on tracking the user's movement so that it can be reflected in the content; development aimed at providing a wider variety of services is lacking.

As prior art, Korean Patent Registration No. 10-0733964 discloses a game apparatus and method using motion recognition and speech recognition.

The present invention proposes a device capable of building a database of users' tastes or interests with respect to content that can be provided through a virtual reality apparatus.

By making it possible to analyze which content or images attract a high degree of user taste or interest and which do not, the invention proposes a device that can generate data usable by content creators in content production and in other technical fields.

The virtual reality apparatus of the present embodiment includes: an image output unit that provides a three-dimensional or 360-degree viewable image by displaying an image for each of the user's left and right eyes; a sound output unit that outputs sound data included in the displayed content; a gaze tracking unit that tracks the user's gaze movement; and a user-adaptive content analysis unit that detects, through the gaze tracking unit, a gaze position area indicating where the user is looking in the image and analyzes gaze data for the objects shown in the image based on the position and movement of that area. The gaze data includes, for each object included in the image displayed through the image output unit, information on the time during which the gaze position area was located on that object.

The user-adaptive content analysis unit divides the image displayed through the image output unit into a plurality of screen areas, extracts the screen area corresponding to the user's gaze position area detected by the gaze tracking unit, and distinguishes the background area from the image objects within that screen area.

When the gaze position area moves across the divided screen areas, the user-adaptive content analysis unit may analyze the gaze data with respect to an object positioned at the boundary between the screen areas in which the gaze position area has been located, or may analyze the gaze data with respect to all the objects included in those screen areas.

The user adaptive content analyzer may be configured in a device that can be detachably coupled to the virtual reality device, or configured in an external content platform connected to the network.

Through the proposed embodiments of the present invention, it is possible to build a database of users' tastes or interests and to analyze which content or images a user is interested in and which the user avoids. This makes it possible to generate data that content creators can use in content production, as well as data that can be utilized in various technical fields such as marketing and psychotherapy.

FIG. 1 is a diagram showing a system configuration to which the concept of the present invention can be applied.
FIG. 2 is a diagram illustrating the configuration of a virtual reality apparatus according to an embodiment of the present invention.
FIG. 3 is a diagram showing an example in which a camera is arranged around the user's pupil.
FIG. 4 is a flowchart illustrating a gaze tracking method according to the present embodiment.
FIG. 5 is a diagram for explaining a method of performing user calibration in order to track the user's gaze more accurately according to the present embodiment.
FIGS. 6 to 8 are diagrams illustrating the analysis of objects in content using gaze tracking according to the present embodiment.
FIG. 9 is a flowchart illustrating a method of analyzing user interest according to the present embodiment.
FIGS. 10 to 19 are diagrams illustrating various examples of analyzing user interest based on the gaze position area according to the present embodiment.
FIG. 20 is a flowchart for explaining a method of determining a user's degree of interest by using the displayed image content and the audio content together, according to an embodiment of the present invention.
FIG. 21 is a diagram for explaining a method of determining a user's degree of interest by using an avoidance gesture.
FIG. 22 is a diagram illustrating an example of an advertisement service using analysis of the user's gaze position according to the present embodiment.

Hereinafter, the present embodiment will be described in detail with reference to the accompanying drawings. It should be understood, however, that the scope of the inventive concept can be determined from the matters disclosed herein, and that the spirit of the present invention is not limited to the described embodiments but also extends to variations in which components are added, deleted, or modified.

Suffix "module" and " part "used in the description relating to the present invention are to be given or mixed in consideration only of ease of specification, and they do not have a meaning or role that distinguish themselves.

FIG. 1 is a diagram showing a system configuration to which the concept of the present invention can be applied.

Referring to FIG. 1, the system includes a virtual reality apparatus 100 capable of tracking the user's head movement (head tracking) and the user's gaze (gaze tracking), a content sharing platform 200 to which the virtual reality apparatus 100 can be connected, and a content creator 300 who can produce content using per-content user taste information or per-content user taste analysis.

The virtual reality apparatus 100 is an IT device that has become popular in recent years; it displays a different image frame to each eye, allowing the user to enjoy a three-dimensional stereoscopic image or a 360-degree viewable image.

In addition, the virtual reality apparatus 100 according to the present invention is capable not only of head tracking but also of gaze tracking, and by using the user's motion tracking information (head and gaze tracking) to analyze the displayed content, it can analyze the user's taste or degree of interest, serve advertisements based on the detected user gesture, or decide whether to keep displaying the current content.

The virtual reality apparatus 100 performs head tracking and gaze tracking of the user. The analysis of user taste or satisfaction based on the content image displayed in the virtual reality apparatus 100 may be performed by the virtual reality apparatus 100 itself or by the network-connected content sharing platform 200.

The virtual reality apparatus 100 may analyze the user's head tracking or gaze tracking information with respect to the content and transmit the analysis result to the content sharing platform 200. Alternatively, the virtual reality apparatus 100 may only record the user's head tracking or gaze tracking information as data and transmit it to the content sharing platform 200, which then extracts the user's taste or interest information based on the head tracking information, the gaze tracking information, and the image displayed at each point in time. It goes without saying that the present invention can be configured in various ways depending on modifications of the embodiment.

In the following description, an image displayed in the virtual reality apparatus may be referred to as content, and the user's head tracking and gaze tracking may together be referred to as a user gesture. The user interest information may be data obtained by analyzing the user gesture together with the content displayed at the time of the gesture.

Here, the user interest information is data that quantifies the user's preference for, or rejection of, the objects shown in the content displayed in the virtual reality apparatus. The user interest information may cover objects with a high level of user interest as well as objects with a relatively low level, and information about these objects is collected and made available.
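
As a concrete illustration of what such stored interest information could look like, the sketch below shows one possible per-object record in Python. The patent does not fix any schema, so the field names and types here are assumptions made only for illustration.

# Hypothetical record for the user interest information described above;
# the patent does not specify a schema, so these field names are assumptions.
from dataclasses import dataclass

@dataclass
class ObjectInterest:
    object_id: str          # e.g. "C" for the cylindrical stadium object in FIG. 6
    content_id: str         # identifier of the content in which the object appeared
    gaze_seconds: float     # total time the gaze position area rested on the object
    weight: float = 1.0     # reduced when an avoidance gesture is detected
    avoided: bool = False   # True if the user's head/gaze moved away from the object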

Hereinafter, the virtual reality apparatus of the present embodiment is described as itself performing the content analysis according to the user gesture and the per-object interest analysis. As noted above, the content analysis may of course instead be performed on a network-connected platform or the like.

FIG. 2 is a diagram illustrating a configuration of a virtual reality apparatus according to an embodiment of the present invention.

The virtual reality apparatus of the embodiment includes an image output unit 120 having a pair of display areas for showing an image to each of the user's eyes, an audio output unit 130 including a headset for outputting the sound data of the content, a head tracking unit 140 for reading the direction, angle, and speed of the user's head movement, and a gaze tracking unit 150 capable of tracking the line of sight of both eyes.

The apparatus further includes a user-adaptive content analysis unit 110 that generates the user interest information using the head tracking information and gaze tracking information obtained for the objects and audio content output through the image output unit 120 and the audio output unit 130.

The virtual reality apparatus 100 may further include a communication unit 160 for establishing a network or communication connection with an external device. Using the communication unit 160, the apparatus can check an externally connected device, determine whether to operate in the virtual reality mode, and transmit the user interest information analyzed by the user-adaptive content analysis unit 110 to an external platform.

The virtual reality apparatus of this embodiment may incorporate its own head-tracking sensors, but the apparatus may also be configured by mounting a smart device equipped with sensors capable of head tracking. For example, the virtual reality apparatus may contain the sensor and lens needed for gaze tracking, perform head tracking using the sensors of the mounted smart device, and display the virtual reality image (content) on the smart device's screen. In this case, in the virtual reality mode, two screens corresponding to the left and right eyes are displayed on the screen of the smart device.

In detail, the head tracking unit 140 includes a gyro sensor capable of detecting tilt, an acceleration sensor capable of detecting movement along three axes, and a gravity sensor capable of detecting the direction in which gravity acts, and may further include an altimeter that measures altitude from atmospheric pressure.

As described above, the gyro sensor, the acceleration sensor, and the gravity sensor may be built into the virtual reality device worn on the user's head, or into a smart device that can be coupled to the virtual reality device.

The gaze tracking unit 150 will be described with reference to FIG. 3 attached herewith.

FIG. 3 is a diagram showing an example in which a camera is arranged around the user's pupil.

The gaze camera 20 includes a lens 22 and a body 21 supporting the lens, and further includes an illumination unit for irradiating light onto the user's pupil. The lens 22 may be a zoom lens whose imaging region can be adjusted so that only the pupil portion needed for gaze tracking is photographed.

The illumination unit mounted on the gaze camera 20 may be an infrared LED lamp with a wavelength band of 700 nm to 900 nm so as to prevent glare. In particular, in order to obtain a clear pupil boundary and an image of constant brightness that is not influenced by external light, the gaze camera 20 may be an infrared camera equipped with an infrared transmission filter. That is, whereas a general camera has an infrared cut-off filter in front of the sensor, the gaze camera of this embodiment acquires the eye image from the reflected infrared light emitted by the illumination unit, and therefore an infrared transmission filter can be used instead.

As shown, the gaze camera 20 may be positioned on a vertical line passing through the pupil center 12 of the left/right eye 10; that is, the lens 22 of the gaze camera is positioned on a vertical line passing through the center of the pupil 12, which is surrounded by the iris 11. This allows the camera axis to coincide with the gaze direction produced by the user's head movement, and the gaze camera 20 can be located as close to the eye as possible without obstructing the user's view.

In the imaging path captured by the gaze camera 20, the pupil and iris 11 and 12 are the objects being photographed. A diaphragm may further be disposed around the image output unit 120 on which the content is displayed. A principal ray for one field point in the display path becomes a marginal ray for the point of interest in the pupil imaging path, and all rays passing through the same point on the pupil and iris 11 and 12 are imaged at the same point on the gaze camera 20. The diaphragm does not affect the display path; it may be disposed around the image output unit 120 and is designed to be sufficient to focus the light rays so that an eye image is formed in the gaze imaging path.

Various gaze tracking and head tracking methods other than those described here may also be used; a gaze tracking method applicable to the present embodiment will be described with reference to FIG. 4.

FIG. 4 is a flowchart illustrating a gaze tracking method according to the present embodiment.

First, when the user puts on a virtual reality device capable of gaze tracking, or a virtual reality device with a smart device mounted, user calibration is performed (S101).

Then, while the gaze image and the forward image are being acquired (S102), gaze tracking is performed (S103). Gaze tracking can be carried out with the gaze camera described above, and of course various known cameras may also be applied.

In addition, features of the content displayed to the user's left and right eyes, that is, features of the display image, are extracted (S104).

Then, the gaze data for each object in the display image is analyzed (S105). From the analyzed gaze data it is possible to obtain information about the objects in the display image, to compute statistics on the time the user's gaze spent on each of them, and further to identify the objects of high interest to the user as well as the objects the user is avoiding.

Through this analysis, per-object data can be generated for the content viewed by the user (S106), and this per-object data can serve as the user interest information for each object. With such user interest information, content creators can produce content customized to each user, and the information can also be used for psychotherapy, for various kinds of advertising, and so on.
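
The per-object analysis of steps S102 to S106 can be pictured as a small accumulation loop: each gaze sample is mapped to the object lying under it in the display image, and the elapsed time is added to that object's counter. The Python sketch below only illustrates this flow; the sample and object types, the bounding-box representation, and the function names are assumptions, not anything prescribed by the patent.

# Minimal sketch of steps S102-S106: map gaze samples to objects and
# accumulate per-object dwell time (all names here are illustrative).
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze position in normalized display coordinates (0..1)
    y: float

@dataclass
class DisplayObject:
    name: str                                   # e.g. "A (tree)", "C (stadium)"
    x0: float; y0: float; x1: float; y1: float  # bounding box in display coordinates

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def accumulate_gaze_data(samples: list[GazeSample],
                         objects: list[DisplayObject]) -> dict[str, float]:
    """S105/S106: total time the gaze position area spent on each object."""
    dwell = {o.name: 0.0 for o in objects}
    for prev, cur in zip(samples, samples[1:]):
        dt = cur.t - prev.t                     # time between consecutive samples
        for obj in objects:
            if obj.contains(prev.x, prev.y):    # object under the gaze at 'prev'
                dwell[obj.name] += dt
                break
    return dwell

Run over one viewing session, accumulate_gaze_data yields the kind of per-object dwell table shown in FIG. 8, which can then be stored as user interest information.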

FIG. 5 is a diagram for explaining a method of performing user calibration in order to track the user's gaze more accurately according to the present embodiment.

In order to establish the relationship between the display image area S and the gaze position area T, the gaze position area T analyzed by the gaze camera and the display image area S of the image shown through the image output unit 120 are divided into the same number of regions. For example, each is divided into four regions.

If the gaze area T detected by the gaze camera is the region bounded by a first gaze point (T11, T12), a second gaze point (T51, T52), a third gaze point (T61, T62), and a fourth gaze point (T81, T82), the corresponding display image area S is identified.

For example, the corresponding region of the display image is bounded by a first image point (S11, S12), a second image point (S51, S52), a third image point (S61, S62), and a fourth image point (S81, S82).

For the display image, the division may also be performed individually for the image displayed to the left eye and the image displayed to the right eye.

In the example where the gaze region is divided into four gaze position regions, it is also possible, when the user first puts on the virtual reality apparatus, to guide the user through the image output unit 120 to move the pupil up, down, left, and right. That is, before images are displayed and content is played back through the virtual reality apparatus, a process of first establishing the user's pupil movement range is performed so that the user's gaze movement can be determined accurately.

In this process, the outermost gaze points (T11, T12), (T21, T22), (T31, T32), and (T41, T42) are identified.
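
As a minimal sketch of what this calibration buys: once the outermost gaze points recorded while the user looks up, down, left, and right are known, raw pupil coordinates can be normalised into display-image coordinates. The function below assumes a simple axis-aligned linear mapping and made-up parameter values; the patent does not prescribe the form of the mapping.

# Illustrative calibration mapping: the extreme pupil coordinates recorded while
# the user looks up/down/left/right bound the movable range, and raw gaze
# coordinates are then normalised into the display image area S.
def make_gaze_to_display_mapper(x_min: float, x_max: float,
                                y_min: float, y_max: float):
    """x_min..y_max: extreme pupil coordinates observed during calibration."""
    def to_display(gx: float, gy: float) -> tuple[float, float]:
        # Linear interpolation from the calibrated pupil range to 0..1 display coords.
        u = (gx - x_min) / (x_max - x_min)
        v = (gy - y_min) / (y_max - y_min)
        # Clamp to the display image area S.
        return min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0)
    return to_display

# Example with made-up calibration numbers: the pupil moved between 210..410 px
# horizontally and 130..290 px vertically during calibration.
to_display = make_gaze_to_display_mapper(210, 410, 130, 290)
print(to_display(310, 210))   # -> (0.5, 0.5), the centre of the display image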

Hereinafter, various examples of judging the user's degree of interest while viewing content in the virtual reality mode, using the head tracking and gaze tracking described above, will be described with reference to the drawings.

FIGS. 6 to 8 are diagrams illustrating an analysis of objects in content using eye tracking according to the present embodiment.

Referring to FIG. 6, when the user wears the virtual reality apparatus 100, a content image is displayed to the left and right eyes, producing a three-dimensional image or a 360-degree viewable image.

The displayed content includes, for example, a first object A that is a tree image, a second object B that is a house image, a third object C that is a cylindrical stadium image, a fourth object D of another image, and so on.

If, as a result of checking the gaze position area followed by the gaze tracking unit 150 of the virtual reality apparatus, it is determined that the user is gazing intently at the third object C, the gaze data for that object is counted. That is, when the gaze position area followed by the gaze camera is judged to be on the third object C, gaze data for the third object C is counted and accumulated.

Meanwhile, as shown in FIG. 6, the user's gaze position area S can be displayed on the content image as a UI element shown to the user. When the user's gaze position area remains on an object for a predetermined time or longer, the size of the displayed gaze position area S can be varied. By displaying the gaze position area S on the screen so that the user can recognize his or her own gaze position, various services can be provided. For example, as shown in FIG. 22, an advertisement 500 inserted by a service provider (or content creator) is displayed on the content, and the advertising fee is paid only when the advertisement 500 has been viewed for a predetermined time or more; in that case the user may want to deliberately position the gaze position area S on the advertisement, and displaying the gaze position area S on the screen assists this.
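
The advertisement case of FIG. 22 reduces to a dwell timer over the advertisement region: the displayed marker S grows after a short dwell, and the view becomes billable only after a longer continuous dwell. The sketch below uses assumed threshold values and helper names; the patent only speaks of "a predetermined time".

# Hedged sketch of the FIG. 22 advertisement case. Thresholds are assumptions.
MARKER_GROW_SECONDS = 1.0   # assumed dwell before the displayed marker S is enlarged
AD_VIEW_SECONDS = 3.0       # assumed "predetermined time" for a billable ad view

def process_gaze_on_ad(samples, ad_region_contains):
    """samples: iterable of (t, x, y) gaze samples;
    ad_region_contains(x, y) -> True if the point lies on the advertisement 500."""
    dwell_start = None
    marker_enlarged = False
    billable = False
    for t, x, y in samples:
        if ad_region_contains(x, y):
            if dwell_start is None:
                dwell_start = t               # gaze entered the advertisement region
            dwell = t - dwell_start
            if dwell >= MARKER_GROW_SECONDS:
                marker_enlarged = True        # vary the size of the displayed marker S
            if dwell >= AD_VIEW_SECONDS:
                billable = True               # the advertisement fee becomes payable
        else:
            dwell_start = None                # continuous gaze broken: reset the timer
            marker_enlarged = False
    return billable, marker_enlarged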

Referring to FIG. 8, for each of the objects included in the displayed content image, per-object gaze data (the time during which the gaze position area was located on the object) can be calculated.

In another example, the user can directly control whether the gaze position area is displayed on the content screen through a button provided on the virtual reality device or another input device. For example, when the gaze position area S indicating the user's gaze position is displayed on the content image, content control can be performed so that the content object under the gaze position area S is enlarged or brought into focus.

Hereinafter, a method of controlling content display using head tracking information and eye tracking information, and examples thereof will be described in detail.

FIG. 9 is a flowchart illustrating a method of analyzing user interest according to the present embodiment.

The content image displayed through the image output unit 120 is divided into a predetermined number of regions (S201). For example, the image shown to the left eye and/or the right eye can be divided into four regions. As described above, in order to track the gaze more precisely, a user calibration step of confirming the user's movable gaze range and thereby bounding the pupil's movement range may be performed, and the gaze area may likewise be divided into the same predetermined number of regions.

After the displayed content image is divided, features are extracted from each divided image (S202). That is, for each divided image the background region and the objects are identified and classified. The distinction between the background area and the objects may be made on the virtual reality device or on a network-connected platform using a histogram, or it may be preset by the content creator.

Next, the user's gaze position area is detected with respect to the content image displayed through the video output unit (S203), and the movement path of the gaze position area is confirmed (S204).

Then, the dwell time in each object area confirmed by gaze tracking (i.e., the criterion for user interest) is tracked, and the gaze-focus time for each image feature is analyzed (S205).

Through this method, the user's level of interest in each object displayed in the content can be derived from the gaze data (S206).
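
Steps S201 to S206 amount to dividing the displayed frame into screen areas, following the gaze position area from area to area, and charging the elapsed time to the objects of the area the gaze was in. The sketch below assumes a 2x2 division and a pre-set list of objects per area (the histogram-based background/object separation is treated as already done); none of these choices are fixed by the patent.

# Illustrative sketch of S201-S206 with an assumed 2x2 division of the display image.
def screen_area_index(x: float, y: float, cols: int = 2, rows: int = 2) -> int:
    """S201/S203: map a gaze position (0..1 display coords) to a divided screen area."""
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return row * cols + col

def analyse_interest(samples, objects_per_area):
    """samples: list of (t, x, y); objects_per_area: {area index: [object names]}.
    S204: follow the movement path of the gaze position area.
    S205/S206: accumulate gaze-focus time for the objects of each visited area."""
    interest: dict[str, float] = {}
    path: list[int] = []
    for prev, cur in zip(samples, samples[1:]):
        area = screen_area_index(prev[1], prev[2])
        path.append(area)                        # movement path of the gaze area
        dt = cur[0] - prev[0]
        for obj in objects_per_area.get(area, []):
            interest[obj] = interest.get(obj, 0.0) + dt
    return interest, path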

Based on the method described above, a method of extracting gaze data, i.e., user interest information, for an object in the content, and various ways of identifying the object corresponding to the user's gaze position area, will now be described.

FIGS. 10 to 19 are diagrams illustrating various examples for analyzing user interest based on the gaze position area according to the present embodiment.

FIG. 10 shows one of the images presented to the left eye and the right eye, or the screen finally shown to the user by combining the left-eye and right-eye images. The case where the images classified as objects in the displayed content image are the first object A through the fourth object D is taken as an example.

Referring to FIG. 11, when it is determined from the user's gaze tracking that the current gaze, that is, the gaze position area S, is located in the first divided area containing the first object A, the time during which the gaze position area S remains on the first object A is calculated.

When the user's gaze position area S remains in a divided image area for longer than a predetermined time, the user-adaptive content analysis unit 110 counts that time as the level of interest in the object in that divided image area.

In addition, in order to determine the user's gaze movement more accurately, not only the user's pupil-movement range but also the displayed content image may be divided into areas, and the object of interest to the user can be judged from the position and motion of the gaze position area S.

For example, as shown in FIG. 12, consider the case where the user's gaze position area S1 is located in the first screen area 12A and then moves to the second screen area 12B, that is, where the first gaze position area S1 changes to the second gaze position area S2.

If the first gaze position area S1 changes to the second gaze position area S2 without remaining in the first screen area 12A for the predetermined time, the user's object of interest is judged to be the one located at the boundary between the first screen area 12A and the second screen area 12B.

In this case, the user-adaptive content analysis unit 110 counts the gaze as interest in the second object B located at the boundary between the first screen area 12A and the second screen area 12B, and an image emphasizing the second object B may also be displayed.

As described above, when the user's gaze position area S remains in a certain screen area for a predetermined time or longer, for example in the second screen area 12B as in FIG. 14, the object C in that screen area can be emphasized (see FIG. 15).

On the other hand, if the user's gaze position area S moves repeatedly between the divided screen areas, not only the object positioned at the boundary between those screen areas but also the objects in the screen areas where the gaze position area S has been located can be emphasized; this object emphasis in the image forms the basic data for extracting the user interest information.

Referring to FIGS. 16 and 17, when the user's gaze position area S3 is located in the second screen area 12B, then moves to the fourth screen area 12D, and then moves from the fourth screen area 12D back to the second screen area 12B, it is possible to determine the degree of interest both in the third object C positioned in the second screen area 12B and in the fourth object D positioned on the boundary between the second screen area 12B and the fourth screen area 12D.

As another embodiment, when the user's gaze position area moves from the second screen area 12B to the fourth screen area 12D as shown in FIG. 16, the image of the third object C located in the second screen area 12B can be emphasized and displayed.

When the user's gaze position area then moves back from the fourth screen area 12D to the second screen area 12B, the image of the fourth object D, the object positioned at the boundary between the second screen area 12B and the fourth screen area 12D, can be emphasized and displayed.

In such object emphasis, only the object to be emphasized may be rendered sharply in the content image shown to the user; however, this should be understood as one illustrative way in which the user-adaptive content analysis unit 110 collects the per-object interest data.
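
The attribution rule running through FIGS. 12 to 17 can be stated compactly: if the gaze position area stays in a screen area past the preset time, the interest is credited to the object(s) inside that area; if it leaves earlier, the interest is credited to the object on the boundary between the old and new areas. The sketch below encodes that rule with an assumed threshold and assumed helper callables, purely for illustration.

# Hedged sketch of the attribution rule illustrated in FIGS. 12-17.
DWELL_THRESHOLD = 2.0   # assumed "preset time" in seconds

def attribute_interest(prev_area: int, new_area: int, dwell_in_prev: float,
                       objects_in_area, objects_on_boundary) -> list:
    """Return the objects whose interest counters are increased when the gaze
    position area moves from prev_area to new_area."""
    if dwell_in_prev >= DWELL_THRESHOLD:
        # Gaze stayed long enough: e.g. object C inside the second screen area 12B.
        return objects_in_area(prev_area)
    # Gaze moved on early: e.g. object B on the boundary of areas 12A and 12B,
    # or object D on the boundary of areas 12B and 12D.
    return objects_on_boundary(prev_area, new_area)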

Meanwhile, in the present embodiment, a weight can be assigned to the degree of interest in an in-content object by using the audio data together with the user's gaze position area. A related embodiment will be described with reference to FIGS. 20 and 21.

FIG. 20 is a flowchart for explaining a method of determining a user's degree of interest by using the displayed image content and the audio content together, according to an embodiment of the present invention. FIG. 21 is a diagram for explaining a method of determining a user's degree of interest by using an avoidance gesture.

First, the user adaptive content analysis unit 110 of the embodiment checks the video content and the audio content output through the virtual reality apparatus (S301).

The user's gesture (head tracking and gaze tracking) is analyzed by the head tracking unit 140 and the gaze tracking unit 150 (S302), and information about the content and objects of interest to the user is collected.

Referring to FIG. 21, assume that an image the user may want to avoid (hereinafter, the avoided image F) is displayed in the image content, and that the audio content associated with the avoided image F is output through only one speaker or one side of the headset 132.

In this case, when head tracking or gaze tracking shows movement toward the screen area opposite to the screen area in which the avoided image F is displayed, the movement can be judged to be an avoidance gesture by the user (S303).

That is, when a new object F appears in the previously displayed image and the user's head tracking or gaze tracking moves in the direction opposite to where the object F is displayed, this is treated as an avoidance gesture and information about the object F is recorded (S304).

In particular, even when sound data related to the object F is being output only through the user's right speaker or the right side of the headset 132, head tracking or gaze tracking toward the left can be judged to be an avoidance gesture by the user.

Through this method, the avoided object can also be recorded as part of the user's object interest information.
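
The avoidance check of steps S303 and S304 boils down to comparing the side on which the new object F (and, where applicable, its audio channel) is presented with the side toward which the head or gaze then moves, and down-weighting that object's gaze data when the two are opposite. The function below is a simplified left/right illustration with an assumed penalty factor; the patent does not specify how the weight is computed.

# Simplified left/right sketch of the avoidance-gesture weighting (S303/S304).
# The penalty factor and the string encoding of directions are assumptions.
def update_weight_on_avoidance(object_side: str, movement_side: str,
                               current_weight: float,
                               penalty: float = 0.5) -> tuple[float, bool]:
    """object_side: side where object F (and its sound) is presented ('left'/'right').
    movement_side: side toward which head/gaze tracking moved after F appeared."""
    avoided = (movement_side in ("left", "right")
               and movement_side != object_side)
    if avoided:
        # Record the avoidance and weight the object's gaze data downwards.
        return current_weight * penalty, True
    return current_weight, False

# Example: object F and its sound are presented on the right, but the user's
# head/gaze turns to the left -> treated as an avoidance gesture.
print(update_weight_on_avoidance("right", "left", 1.0))   # (0.5, True)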

Through the embodiments of the present invention described above, it is possible to build a database of users' tastes or interests and to analyze which content or images a user is interested in and which the user avoids, making it possible to generate data that content creators can use in content production as well as data that can be utilized in various technical fields such as marketing and psychotherapy.

Claims (5)

A virtual reality apparatus comprising:
An image output unit for providing a three-dimensional image or a 360-degree viewable image by displaying an image to each of the left and right eyes of the user,
An audio output unit for outputting audio data included in the content displayed through the video output unit,
A line-of-sight tracking unit for confirming a line-of-sight motion of the user,
A user adaptive content analysis unit for detecting, through the line-of-sight tracking unit, a line-of-sight position area indicating the user's line-of-sight position in the image, and for analyzing line-of-sight data for an object shown in the image based on the position and movement of the line-of-sight position area,
Wherein the line-of-sight data includes, for each object included in the image displayed through the image output unit, information on the time during which the line-of-sight position area is located on that object,
Wherein the user adaptive content analysis unit divides the image displayed through the image output unit into a predetermined plurality of screen areas, extracts the screen area corresponding to the user's line-of-sight position area detected by the line-of-sight tracking unit, and, within that screen area, separates the background area from the image objects by using a histogram or by using objects set in advance,
Wherein, when the line-of-sight position area moves across the divided screen areas, the user adaptive content analysis unit analyzes the line-of-sight data with respect to an object positioned at the boundary between the screen areas in which the line-of-sight position area has been located, or analyzes the line-of-sight data with respect to all the objects included in those screen areas,
Wherein, when the line-of-sight position area moves in the direction opposite to the direction in which an object is located in the image, the user adaptive content analysis unit includes a weighting of the line-of-sight data in the information about that object,
Wherein the user adaptive content analysis unit calculates the dwell time as the degree of interest in an object in a divided image area when the line-of-sight position area remains in that image area, among the areas divided into the preset number, for longer than a predetermined time, and, when the line-of-sight position area changes to the second screen area without remaining for the preset time in the first screen area, which is one of the divided image areas, determines that the object positioned at the boundary between the first screen area and the second screen area is the object of interest.
(Claims 2 and 3 deleted)
The virtual reality apparatus according to claim 1,
Wherein the user adaptive content analyzer is configured in a device that can be detachably coupled to the virtual reality device or configured in an external content platform connected to the network.
The virtual reality apparatus according to claim 1,
Wherein the visual line tracking unit includes an illumination unit for irradiating light around the user's pupil and a lens for receiving light reflected from the pupil.
KR1020150122452A 2015-08-31 2015-08-31 Virtual reality apparatus KR101705988B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150122452A KR101705988B1 (en) 2015-08-31 2015-08-31 Virtual reality apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150122452A KR101705988B1 (en) 2015-08-31 2015-08-31 Virtual reality apparatus

Publications (1)

Publication Number Publication Date
KR101705988B1 true KR101705988B1 (en) 2017-02-23

Family

ID=58315360

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150122452A KR101705988B1 (en) 2015-08-31 2015-08-31 Virtual reality apparatus

Country Status (1)

Country Link
KR (1) KR101705988B1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005071285A (en) * 2003-08-28 2005-03-17 New Industry Research Organization Collision detection method that change detail degree according to interaction in space and virtual space formation device using its method
KR20140044663A (en) * 2012-10-05 2014-04-15 삼성전자주식회사 Information retrieval method by using broadcast receiving apparatus with display device and the apparatus thereof

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020171637A1 (en) * 2019-02-20 2020-08-27 Samsung Electronics Co., Ltd. Apparatus and method for displaying contents on an augmented reality device
KR20210090879A (en) * 2020-01-13 2021-07-21 주식회사 비주얼캠프 Method for gaze analysis and apparatus for executing the method
WO2021145515A1 (en) * 2020-01-13 2021-07-22 주식회사 비주얼캠프 Method for gaze analysis and apparatus for executing same
KR102287281B1 (en) * 2020-01-13 2021-08-06 주식회사 비주얼캠프 Method for gaze analysis and apparatus for executing the method


Legal Events

Date Code Title Description
AMND Amendment
AMND Amendment
X701 Decision to grant (after re-examination)
GRNT Written decision to grant