KR101372484B1 - User device which enhanced response speed and response speed enhancing method using data pre-load - Google Patents

User device which enhanced response speed and response speed enhancing method using data pre-load

Info

Publication number
KR101372484B1
Authority
KR
South Korea
Prior art keywords
gaze
preload
target object
area
unit
Prior art date
Application number
KR1020130050090A
Other languages
Korean (ko)
Inventor
안치득
이승엽
유주완
Original Assignee
연세대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 연세대학교 산학협력단 filed Critical 연세대학교 산학협력단
Priority to KR1020130050090A priority Critical patent/KR101372484B1/en
Priority to PCT/KR2013/009039 priority patent/WO2014058233A1/en
Priority to US14/434,970 priority patent/US9886177B2/en
Application granted granted Critical
Publication of KR101372484B1 publication Critical patent/KR101372484B1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0354 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Disclosed are a user device with an increased GUI response speed and a method of increasing a GUI response speed. In more detail, the user device and the method include: an eye recognition unit that calculates gaze coordinates indicating the position of the user's gaze; a preload performing unit that, when a pointer is positioned within a preload area for a selection target object for more than a predetermined time and the gaze position determined by the gaze coordinates and the preload area are included in the same gaze division area, preloads the target object data to be loaded when the selection target object is selected; and a data access unit that accesses the preloaded target object data when the selection target object is selected by the user's manipulation. [Reference numerals] (100) DISPLAY UNIT; (110) IMAGE SENSOR UNIT; (120) EYE RECOGNITION UNIT; (130) POINTER COORDINATE CALCULATION UNIT; (140) PRE-LOAD PERFORMING UNIT; (150) PRE-LOAD CACHE UNIT; (160) STORAGE UNIT; (170) DATA ACCESS UNIT; (180) NETWORK INTERFACE

Description

User device which enhanced response speed and response speed enhancing method using data pre-load

The present invention relates to a user device and a method for increasing GUI response speed. More specifically, the present invention relates to a user device and a method for pre-loading data so that an immediate response can be made when a user selects a particular GUI object.

Background of the Invention

Devices equipped with an operating system that uses a graphical user interface (GUI) as a human-computer interfacing means are widely used. In addition, various attempts have been made to increase the GUI response speed of mobile terminals.

Korean Patent Publication No. 2010-0045868 describes a technique in which a specific function is performed at the time of mouse-over. In addition, U.S. Patent No. 8,112,619 discloses a technique for preloading an application in order to increase the execution speed of the application. However, these documents do not disclose a user device and method for pre-loading data so that an immediate response can be made when the user selects a particular GUI object.

One technical problem to be solved by the present invention is to provide a user device with an increased response speed that, before the user makes a selection of a specific object provided on a graphical user interface (GUI), predicts the selection in advance based on the positions of the pointer and the user's gaze, and preloads the data to be loaded when the object is selected.

Another technical problem to be solved by the present invention is to provide a data preloading method that can increase the response speed of a user device by predicting the selection in advance based on the positions of the pointer and the user's gaze before the user makes a selection of a specific object provided on the GUI, and preloading the data to be loaded when the specific object is selected.

The objects of the present invention are not limited to the above-mentioned objects, and other objects, which are not mentioned above, will be clearly understood by those skilled in the art from the following description.

According to an aspect of the present invention, there is provided a user device including: a pupil recognition unit that calculates gaze coordinates indicating the pupil position of a user; a preload performing unit that, when a pointer is positioned within a preload area for a selection target object for more than a predetermined time and the gaze position determined by the gaze coordinates and the preload area are included in the same gaze division area, preloads the target object data to be loaded when the selection target object is selected; and a data access unit that accesses the preloaded target object data when the selection target object is selected by the user's manipulation.

According to an embodiment, when the gaze position and the preload area are included in different gaze division areas, the preload performing unit may perform the preload according to the moving direction and the moving distance of the gaze position with respect to the gaze division area that includes the preload area, and the moving direction may be a direction toward the gaze division area that includes the preload area.

According to an embodiment, the direction toward the gaze division area may be determined based on whether an angle determined by the current gaze position and the gaze position after the movement falls within a preset angle range. The preload performing unit may load the target object data and store it in a preload cache unit.

According to an embodiment, when no input regarding the selection of the selection target object is made during the predetermined time, the preload performing unit may delete the target object data stored in the preload cache unit. The user device may further include a gaze recording unit that records the movement history of the gaze coordinates, a gaze movement pattern determination unit that determines a gaze movement pattern using the data recorded in the gaze recording unit, and an area division personalization unit that arranges the gaze division areas according to the determined gaze movement pattern.

According to another aspect of the present invention, there is provided a data preloading method including: calculating gaze coordinates indicating the pupil position of a user; when a pointer is positioned within a preload area for a selection target object for more than a predetermined time and the gaze position determined by the gaze coordinates and the preload area are included in the same gaze division area, preloading the target object data to be loaded when the selection target object is selected; and accessing the preloaded target object data when the selection target object is selected by the user's manipulation.

According to an embodiment, the performing of the preload may include, when the gaze position and the preload area are included in different gaze division areas, performing the preload according to the moving direction and the moving distance of the gaze position with respect to the gaze division area that includes the preload area, and the moving direction may be a direction toward the gaze division area that includes the preload area.

According to an embodiment, the direction toward the gaze division area may be determined based on whether an angle determined by the current gaze position and the gaze position after the movement falls within a preset angle range.

According to the present invention, when a user selects a specific object on the GUI, the time required to load the content obtained by selecting that object can be shortened.

The effects according to the present invention are not limited to those exemplified above, and further effects are described within the specification.

FIG. 1 schematically illustrates a user device according to an embodiment of the present invention.
FIGS. 2 and 3 schematically illustrate an object, a preload area, a pointer position, and a gaze division area according to an embodiment of the present invention.
FIGS. 4 and 5 schematically illustrate a case where the pointer position and the gaze position exist in the same gaze division area according to an embodiment of the present invention.
FIG. 6 schematically illustrates a case where the pointer position and the gaze position exist in different gaze division areas according to an embodiment of the present invention.
FIGS. 7 to 9 schematically illustrate conditions for preloading when the pointer position and the gaze position exist in different gaze division areas according to an embodiment of the present invention.
FIG. 10 schematically illustrates an embodiment in which the gaze division areas are set dynamically according to an embodiment of the present invention.
FIG. 11 schematically illustrates a case in which preloading is performed in a user device according to another embodiment of the present invention.
FIG. 12 is a flowchart illustrating the steps of preloading when the pointer position and the gaze position exist in the same gaze division area according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating the steps of preloading when the pointer position and the gaze position exist in different gaze division areas according to an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. The advantages and features of the present invention and the manner of achieving them will become apparent from the embodiments described below in detail together with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art, and the present invention is defined only by the scope of the claims. Like reference numerals refer to like elements throughout the specification.

Unless defined otherwise, all terms (including technical and scientific terms) used herein have the meaning commonly understood by one of ordinary skill in the art to which this invention belongs. Commonly used terms defined in dictionaries are not to be interpreted ideally or excessively unless explicitly defined otherwise.

In addition, each block may represent a module, segment, or portion of code that includes one or more executable instructions for executing the specified logical function(s). It should also be noted that in some alternative implementations the functions noted in the blocks may occur out of the order shown. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functions involved.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used herein, singular forms include plural forms unless the context clearly indicates otherwise. The terms "comprises" and/or "comprising" specify the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, and/or components.

Although the terms first, second, etc. are used to describe various components, these components are of course not limited by such terms. These terms are used only to distinguish one component from another. Accordingly, a first component mentioned below may be a second component within the technical scope of the present invention.

The term 'unit' used in the embodiments refers to a software component or a hardware component, such as an FPGA or an ASIC, and a 'unit' performs certain roles. However, a 'unit' is not limited to software or hardware. A 'unit' may be configured to reside on an addressable storage medium and may be configured to run on one or more processors. Thus, by way of example, a 'unit' may include components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided in the components and 'units' may be combined into a smaller number of components and 'units' or further separated into additional components and 'units'.

Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. The present invention may be embodied in many different forms and is not limited to the embodiments described below.

FIG. 1 schematically illustrates a user device according to an embodiment of the present invention.

Referring to FIG. 1, the user device 10 according to an embodiment of the present invention may include a display unit 100, an image sensor unit 110, a pupil recognition unit 120, a pointer coordinate calculator 130, a preload performing unit 140, a preload cache unit 150, a storage unit 160, a data access unit 170, and a network interface 180.

The display unit 100 may be configured to display the respective objects 400, 410, 420, and 430. The display unit 100 may include a display element and display graphics, text, video, a graphical user interface (GUI), or a combination thereof. The display element may include a liquid crystal display (LCD), a light emitting polymer display (LPD), an organic light-emitting diode (OLED), or an active matrix organic light-emitting diode (AMOLED). The display unit 100 may be configured to receive a touch input from the user and may include a touch sensing element for this purpose. The touch sensing element may detect the user's touch input using capacitive technology, resistive technology, infrared technology, surface acoustic wave technology, and the like.

The objects 410, 420, and 430, together with the selection target object 400, may be various GUI elements; for example, they may be objects for enlarging images, objects for playing video, or GUI elements including a specific link object in a web page.

The image sensor unit 110 may be configured to acquire a pupil image of the user 660 so that the gaze position 60 of the user can be recognized by the user device 10 as gaze coordinates. The image sensor included in the image sensor unit 110 is preferably a CMOS sensor, but is not limited thereto. The image sensor unit 110 may be configured to transmit the acquired image of the user to the pupil recognition unit 120.

The pupil recognition unit 120 may be electrically connected to the image sensor unit 110 to receive the image of the user 660 obtained by the image sensor unit 110. By recognizing and tracking the pupils in the received image of the user 660, it can extract the area of the display unit 100 at which the user 660 gazes or calculate the corresponding coordinates. For this purpose, the pupil recognition unit 120 may include an image processing module (not shown). Data about the extracted area or the calculated coordinates (hereinafter referred to as "gaze coordinates" as necessary) may be transmitted to the preload performing unit 140, which is electrically connected to the pupil recognition unit 120. In the following description, the pupil recognition unit 120 calculates the gaze coordinates of the user's gaze and transmits data on the gaze coordinates to the preload performing unit 140, but the invention is not limited thereto; the pupil recognition unit 120 may instead be configured to transmit data identifying the specific gaze division area 20 that includes the user's gaze coordinates. A method of determining the gaze position 60 by recognizing the pupil, for example the method disclosed in Korean Patent Laid-Open Publication No. 2010-0038897, will be clearly understood by a person skilled in the art, so a detailed description thereof is omitted.
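Since the patent defers the actual pupil-to-gaze mapping to the referenced publication, the following is only a hypothetical sketch of one common approach: fitting an affine calibration from detected pupil centers to screen coordinates. The calibration scheme and all names below are assumptions, not the patented method.

```python
import numpy as np

def fit_gaze_calibration(pupil_points, screen_points):
    """Fit an affine map from pupil-center coordinates (camera frame) to
    screen coordinates, using samples gathered while the user looked at
    known calibration targets. Hypothetical stand-in for the referenced
    eye-recognition method."""
    P = np.hstack([np.asarray(pupil_points, float),
                   np.ones((len(pupil_points), 1))])
    S = np.asarray(screen_points, float)
    A, *_ = np.linalg.lstsq(P, S, rcond=None)  # least-squares 3x2 affine matrix
    return A

def gaze_coordinates(pupil_center, A):
    """Map one detected pupil center to screen gaze coordinates."""
    x, y = pupil_center
    return tuple(np.array([x, y, 1.0]) @ A)
```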

The pointer coordinate calculator 130 may calculate the coordinates of the current pointer position 50. The pointer according to an embodiment of the present invention may be a stylus pen using an electromagnetic induction method; in some embodiments, it may be a cursor. That is, the pointer in the present invention refers to any of various input/output (I/O) means, such as a mouse, a stylus pen, or a track ball, that can be used to select each object 400, 410, 420, 430. The calculated pointer coordinates may be transmitted to the preload performing unit 140, which is electrically connected to the pointer coordinate calculator 130.

The preload performing unit 140 may perform preloading based on the received gaze coordinates and pointer coordinates. In the preload according to an embodiment of the present invention, when the pointer position 50 stays within the preload area 30 defined for one of the objects 400, 410, 420, and 430 for more than a predetermined time, and the pointer position 50 and the user's gaze position 60 stay within the same gaze division area 20 for more than a predetermined time, the target object data to be loaded when the object at the pointer position 50 (hereinafter referred to as the "selection target object 400" as necessary) is selected may be inserted into the preload cache unit 150. That is, the preload according to an embodiment of the present invention is not performed merely because the pointer position 50 stays in the preload area 30 for more than the predetermined time (condition 1); in addition to condition 1, preloading is performed only when the pointer position 50 and the user's gaze position 60 also stay in the same gaze division area 20 for more than a predetermined time (condition 2). According to another embodiment of the present invention, when the pointer position 50 and the gaze position 60 are located in different gaze division areas 20, whether to perform the preload may be determined based on whether the moving directions (I, II, III) and the moving distances (A, B, C) of the gaze position 60 satisfy predetermined conditions. Preloading according to this other embodiment will be described in detail with reference to FIGS. 7 to 9.

The preload performing unit 140 may perform the preload by determining whether the gaze position 60 and the pointer position 50 exist in the same gaze division area 20. To do so, it may be configured to compare the setting data for the gaze division areas 20 with the coordinate data of the gaze position 60 and the coordinate data of the pointer position 50, as sketched below. The coordinate data for the gaze division areas 20 may be preset or may be configured to be set arbitrarily by the user 660. Likewise, the number of gaze division areas 20 may be preset or set arbitrarily by the user.
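A minimal sketch, in Python, of how conditions 1 and 2 could be evaluated together; the dwell time, the rectangle representation of the gaze division areas, and every name below are illustrative assumptions rather than details from the patent.

```python
import time

DWELL_SECONDS = 0.5  # assumed value for the "predetermined time"

def division_index(point, divisions):
    """Index of the gaze division area (left, top, width, height) that
    contains the given screen point, or None if it lies in no area."""
    for i, (left, top, width, height) in enumerate(divisions):
        if left <= point[0] <= left + width and top <= point[1] <= top + height:
            return i
    return None

def should_preload(pointer_entered_at, pointer_pos, gaze_pos, divisions):
    """Condition 1: the pointer has dwelt in the preload area long enough.
    Condition 2: pointer and gaze lie in the same gaze division area."""
    dwelt = (time.monotonic() - pointer_entered_at) >= DWELL_SECONDS
    p_idx = division_index(pointer_pos, divisions)
    return dwelt and p_idx is not None and p_idx == division_index(gaze_pos, divisions)
```

Here `pointer_entered_at` would be recorded when the pointer first enters the object's preload area; that bookkeeping is assumed to happen elsewhere.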

The preload performing unit 140 may preload the target object data when the gaze position 60 and the pointer position 50 remain in the same gaze division area 20 for the predetermined time. If the target object data for the selection target object 400 is stored in the storage unit 160, that is, if the target object data is local data, the preload performing unit 140 may be configured to load the target object data from the storage unit 160 and insert it into the preload cache unit 150. If the target object data is not local data, the preload performing unit 140 may connect to a network through the network interface 180, receive the target object data from a predetermined external device, and insert it into the preload cache unit 150.
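The two loading paths (local storage versus network) might be sketched as follows; the descriptor fields, the plain-HTTP fetch, and the cache interface are assumptions for illustration.

```python
import os
import urllib.request

def load_target_object_data(descriptor, cache):
    """Load target object data from the storage unit if it is local,
    otherwise fetch it through the network interface, then insert it
    into the preload cache. All field names are hypothetical."""
    if descriptor.get("local_path") and os.path.exists(descriptor["local_path"]):
        with open(descriptor["local_path"], "rb") as f:   # local data path
            data = f.read()
    else:
        with urllib.request.urlopen(descriptor["url"]) as resp:  # network path
            data = resp.read()
    cache.insert(descriptor["object_id"], data)
```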

The preload cache unit 150 may be configured to store the target object data preloaded by the preload performing unit 140. The preload cache unit 150 may be operated as, for example, a last-in-first-out (LIFO) stack, but is not limited thereto. In addition, the load target data inserted into the preload cache may be automatically deleted after a predetermined time so that the preload cache maintains sufficient available space. It is also preferable that the load target data inserted into the preload cache be removed from the preload cache after it has been accessed.
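The cache behavior described above (LIFO operation, automatic deletion after a predetermined time, removal after access) could be sketched like this; the TTL value and the class interface are assumptions.

```python
import time

class PreloadCache:
    """Minimal sketch of the preload cache unit 150: LIFO order, entries
    expire after `ttl` seconds, and an entry is removed once accessed."""

    def __init__(self, ttl=5.0):  # assumed "predetermined time"
        self.ttl = ttl
        self._stack = []  # (object_id, data, inserted_at), newest last

    def insert(self, object_id, data):
        self._stack.append((object_id, data, time.monotonic()))

    def access(self, object_id):
        self._evict_expired()
        for i in range(len(self._stack) - 1, -1, -1):  # LIFO: newest first
            if self._stack[i][0] == object_id:
                return self._stack.pop(i)[1]  # removed after the access
        return None  # cache miss: caller falls back to a normal load

    def _evict_expired(self):
        now = time.monotonic()
        self._stack = [e for e in self._stack if now - e[2] < self.ttl]
```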

The storage unit 160 may be a storage medium in which the target object data for the selection target object 400 is stored in advance. As described above, the target object data may be local data pre-stored in the storage unit 160, or data that must be transmitted from an external device through the network interface 180. The storage unit 160 may include a hard disk drive (HDD), a solid state drive (SSD), a tape drive, an optical drive, a RAID array, a random access memory (RAM), or a read-only memory (ROM), but is not limited thereto.

When the selection target object 400 displayed on the display unit 100 is selected, the data access unit 170 accesses the load target data stored in the preload cache unit 150 and may be configured to display the processing result of the load target data on the display unit 100.

FIGS. 2 and 3 schematically illustrate the objects 410, 420, and 430, the selection target object 400, the preload area 30, and the gaze division area 20 according to an embodiment of the present invention.

Referring to FIGS. 2 and 3, each of the objects 400, 410, 420, and 430, including the selection target object 400, may have a preload area 30 defined for it. The preload area 30 according to an embodiment of the present invention is based on the placement area in which each object 400, 410, 420, 430 is displayed, that is, the area within which a selection input such as a mouse click or a touch on the object is processed; the preload area 30 may be enlarged or reduced within a designated range relative to this placement area, as in the hit test sketched below.
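A hedged sketch of that hit test; the rectangle representation and the margin parameter (for enlarging or reducing the preload area relative to the placement area) are illustrative.

```python
def in_preload_area(pointer, placement_rect, margin=0):
    """True if the pointer lies inside the preload area: the object's
    placement rect grown (positive margin) or shrunk (negative margin)."""
    x, y = pointer
    left, top, width, height = placement_rect
    return (left - margin <= x <= left + width + margin and
            top - margin <= y <= top + height + margin)
```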

The gaze division area 20 according to an embodiment of the present invention may be a further criterion set for performing the preload. That is, in order to grasp more accurately the intention of the user 660 to select the selection target object 400, preloading may be set to take into account not only the pointer position 50 but also the user's gaze position 60. In FIGS. 2 and 3 the screen is divided into two or three gaze division areas 20, but the position and number of the gaze division areas 20 are not limited thereto: four or five gaze division areas 20 may equally be set, and they may be arranged in the horizontal or the vertical direction. The gaze division areas 20 may be set in advance, or the user may arbitrarily set their number or arrangement direction (see the sketch below). As shown in FIG. 3, a gaze division area 20 may or may not include the respective objects 400, 410, 420, and 430.
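A short sketch of how a preset or user-chosen number and arrangement direction of gaze division areas could be generated; the integer-division sizing and the parameter names are assumptions.

```python
def make_divisions(screen_w, screen_h, count=2, orientation="horizontal"):
    """Partition the screen into `count` gaze division areas, arranged
    side by side ("horizontal") or stacked top to bottom ("vertical")."""
    rects = []
    for i in range(count):
        if orientation == "horizontal":
            rects.append((i * screen_w // count, 0, screen_w // count, screen_h))
        else:
            rects.append((0, i * screen_h // count, screen_w, screen_h // count))
    return rects  # e.g. make_divisions(1920, 1080, count=3) -> three columns
```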

FIGS. 4 and 5 schematically illustrate a case where the pointer position 50 and the gaze position 60 exist in the same gaze division area 20 according to an embodiment of the present invention.

Referring to FIGS. 4 and 5, when the gaze position 60 of the user 660 is included in the same gaze division area 20 as the pointer position 50, the preload performing unit 140 may perform the preload for the selection target object 400. By considering not only the pointer position 50 but also the gaze position 60 of the user 660, the intention of the user 660 regarding the selection of the selection target object 400 is grasped accurately, unnecessary preloading is avoided, and battery or computing power consumption can therefore be reduced. FIG. 5 shows, as an embodiment of the present invention, a tablet PC whose pointer device is a stylus pen; the stylus pen may use an electromagnetic induction method. Various objects 400, 410, 420, and 430 may be displayed on the screen of the tablet PC. Since the user's gaze position 60 is included in the same gaze division area 20 as the pointer position 50 for the selection target object 400, the preload performing unit 140 can perform the preload for the selection target object 400.

FIG. 6 schematically illustrates a case where the pointer position 50 and the gaze position 60 exist in different gaze division areas 20 according to an embodiment of the present invention, and FIGS. 7 to 9 schematically illustrate the conditions for preloading when the pointer position 50 and the gaze position 60 exist in different gaze division areas 20 according to an embodiment of the present invention.

Referring to FIG. 6, of the two gaze division areas 20, the selection target object 400 is present in the upper gaze division area 20, and the gaze position 60 is located in the lower gaze division area 20. If this situation persists, in principle, the preload performing unit 140 does not perform a preload for the selection target object 400. However, the preload performing unit 140 may be configured to perform the preload even though the gaze position 60 and the pointer position 50 are not included in the same gaze division area 20, based on the moving directions (I, II, III) and the moving distances (A, B, C) of the gaze position 60 of the user 660. That is, as another embodiment of the present invention, the preload may be performed when the moving direction (I, II, III) of the gaze position 60 points toward the gaze division area 20 in which the pointer position 50 is located, and the moving distance (A, B, C) of the gaze position 60 exceeds a predetermined distance.

For example, as shown in FIG. 7, even when the gaze position 60 and the pointer position 50 are located in different gaze division areas 20, the preload performing unit 140 may be configured to perform the preload if the gaze moves more than a predetermined distance in the direction of the gaze division area 20 containing the pointer position 50. That is, the preload is performed only when the conditions on the moving directions (I, II, III) and the moving distances (A, B, C) are satisfied. These two conditions are described below.

FIG. 8 schematically illustrates the criterion for judging the moving directions (I, II, III) of the gaze position 60. Referring to FIG. 8, the gaze position 60 may move in various directions, but the moving directions (I, II, III) for which the preload performing unit 140 performs the preload are determined with respect to a predetermined angle range θ. For example, when an angle of 45° is set with respect to the first reference line 640, if the angle between the first reference line 640 and the line extending from the current gaze position 60 to the gaze position after the movement is within 45°, the movement is treated as being directed toward the gaze division area 20 in which the pointer position 50 is located. In FIG. 8, directions I and II fall within the predetermined angle range θ and are therefore treated as directions for performing the preload, whereas direction III exceeds the angle range θ and is not treated as a direction for performing the preload.

FIG. 9 schematically illustrates the criterion for judging the moving distances (A, B, C) of the gaze position 60. Referring to FIG. 9, the moving distance required for performing the preload may also be a preset value. For example, even if the moving direction I is included in the angle range θ for performing the preload, the distance A may fail to satisfy the moving distance condition, in which case the preload is not performed. That is, the preload performing unit 140 may perform the preload only when the gaze moves to a gaze position 610 that satisfies both the moving distance condition and the moving direction condition. However, this is only an exemplary embodiment of the present invention; the preload performing unit 140 may also be configured to perform the preload when only one of the moving distance condition and the moving direction condition is satisfied. A combined check might look like the following sketch.
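Putting FIGS. 8 and 9 together, the angle range below uses the 45° example from FIG. 8, while the distance threshold is a placeholder, since the patent leaves both as preset values.

```python
import math

ANGLE_RANGE_DEG = 45.0   # example preset angle range (theta) from FIG. 8
MIN_DISTANCE_PX = 120.0  # assumed preset moving-distance threshold

def direction_and_distance_ok(gaze_before, gaze_after, pointer_region_center):
    """True if the gaze moved far enough and toward the gaze division
    area containing the pointer (within the preset angle range)."""
    mvx = gaze_after[0] - gaze_before[0]
    mvy = gaze_after[1] - gaze_before[1]
    rvx = pointer_region_center[0] - gaze_before[0]
    rvy = pointer_region_center[1] - gaze_before[1]
    dist = math.hypot(mvx, mvy)
    if dist < MIN_DISTANCE_PX:
        return False  # moving-distance condition (FIG. 9) not met
    denom = dist * math.hypot(rvx, rvy)
    if denom == 0:
        return False
    cos_angle = max(-1.0, min(1.0, (mvx * rvx + mvy * rvy) / denom))
    return math.degrees(math.acos(cos_angle)) <= ANGLE_RANGE_DEG  # FIG. 8
```

Here the reference line is taken to run from the gaze position toward the center of the pointer's gaze division area, which is one plausible reading of the first reference line 640.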

FIG. 10 schematically illustrates an embodiment in which the gaze division areas 20 are set dynamically according to an embodiment of the present invention.

Referring to FIG. 10, when the user 660 gazes at only a specific portion of the display unit 100, the gaze division areas 20 may be determined dynamically so as to reflect the intention of the user 660 more actively. To this end, the user device 10 may further include a gaze recording unit (not shown) that records the movement history of the gaze position 60, a gaze movement pattern determination unit (not shown) that determines a gaze movement pattern using the data recorded in the gaze recording unit, and an area division personalization unit that arranges the gaze division areas 20 according to the determined gaze movement pattern. In the example of FIG. 10, the gaze position 60 is biased toward the left side. The gaze recording unit records the movement history of the gaze position 60, and using this data the gaze movement pattern determination unit may determine that arranging the gaze division areas 20 in the vertical direction is more suitable than the horizontal direction. According to the determination result, the area division personalization unit may change the gaze division areas 20 to the vertical direction in real time.
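The patent leaves the pattern analysis open, so the following is only one hypothetical heuristic: if the recorded gaze positions vary more vertically than horizontally (as when the gaze stays biased to one side, as in FIG. 10), a vertical arrangement discriminates better. It pairs with the make_divisions sketch shown earlier.

```python
from statistics import pvariance

def choose_orientation(gaze_history):
    """Pick an arrangement direction from the recorded gaze positions
    (a non-empty list of (x, y) points); the variance comparison is an
    assumed heuristic, not the patented pattern analysis."""
    xs = [p[0] for p in gaze_history]
    ys = [p[1] for p in gaze_history]
    return "vertical" if pvariance(ys) > pvariance(xs) else "horizontal"
```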

FIG. 11 schematically illustrates a case in which a preload is performed in the user device 10 according to another embodiment of the present invention.

Referring to FIG. 11, as another embodiment of the user device 10 of the present invention, the present invention may be applied to a personal computer. In this case, the pointer may be a mouse, and the pointer position 50 may be indicated by the cursor. The embodiments of the present invention described above may be applied equally to a personal computer as another embodiment of the user device 10.

FIG. 12 is a flowchart illustrating the steps in which preloading is performed when the pointer position 50 and the gaze position 60 exist in the same gaze division area 20 according to an embodiment of the present invention.

Referring to FIG. 12, the user device 10 may recognize the pupil of the user 660 and calculate the gaze coordinates (S100). The pointer position and the gaze coordinates may be compared (S110) to determine whether they are located within the same gaze division area 20 (S120). If they are located within the same gaze division area 20, preloading is performed (S130), and it is determined whether the user 660 makes a selection of the selection target object (S140). When the selection target object is selected by the user 660, the target object data is accessed (S150); if no selection of the selection target object 400 is made, the preloaded target object data may be deleted (S160).

FIG. 13 is a flowchart illustrating the steps in which preloading is performed when the pointer position 50 and the gaze position 60 exist in different gaze division areas 20 according to an embodiment of the present invention.

Referring to FIG. 13, the user device 10 recognizes the pupil of the user 660 and calculates the gaze coordinates (S100), and then compares the pointer coordinates with the gaze coordinates (S110); that is, it determines whether they are located within the same gaze division area 20 or within different gaze division areas (S120). If they are located within the same gaze division area for a predetermined time, preloading of the selection target object 400 may be performed (S130). However, if they are located in different gaze division areas 20, it may be determined whether the gaze position 60 moves within the predetermined angle (S210) and whether it moves more than the predetermined distance (S220). That is, when it is determined that the gaze position 60 of the user 660 is directed toward the gaze division area 20 in which the pointer position 50 is located, the preloading may be performed (S130). When the user 660 selects the selection target object 400, the preloaded target object data is accessed (S150); if no selection of the selection target object 400 is made, the method may include deleting the preloaded target object data (S160). The overall decision flow may be sketched as follows.
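Tying the two flowcharts together, one pass of the decision might look like the sketch below, reusing division_index() and direction_and_distance_ok() from the earlier sketches; region_center is a hypothetical callable mapping a division index to its center point, and all names remain illustrative.

```python
def preload_decision(pointer_pos, pointer_dwell_ok, gaze_before, gaze_after,
                     divisions, region_center):
    """One pass of the FIG. 12/13 flow (S-numbers refer to the steps)."""
    if not pointer_dwell_ok:  # pointer has not dwelt in the preload area (S110)
        return False
    p_idx = division_index(pointer_pos, divisions)
    if p_idx is None:
        return False
    if p_idx == division_index(gaze_after, divisions):
        return True  # same gaze division area (S120) -> preload (S130)
    # Different areas: direction (S210) and distance (S220) conditions
    return direction_and_distance_ok(gaze_before, gaze_after, region_center(p_idx))
```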

It will be understood by those skilled in the art that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. Therefore, the embodiments described above should be understood as illustrative in all respects and not restrictive. The scope of the present invention is defined by the following claims rather than by the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalents should be interpreted as being included in the scope of the present invention.

10: user device
20: gaze division area
30: preload area
50: pointer position
60: gaze position
400: selection target object
410, 420, 430: object

Claims (11)

1. A user device comprising:
a pupil recognition unit that calculates gaze coordinates indicating a pupil position of a user;
a preload performing unit that, when a pointer is positioned within a preload area for a selection target object for more than a predetermined time and a gaze position determined by the gaze coordinates and the preload area are included in the same gaze division area, preloads target object data to be loaded when the selection target object is selected; and
a data access unit configured to access the preloaded target object data when the selection target object is selected by the user's manipulation.
2. The user device of claim 1, wherein, when the gaze position and the preload area are included in different gaze division areas, the preload performing unit performs the preload according to a moving direction and a moving distance of the gaze position with respect to the gaze division area that includes the preload area.
3. The user device of claim 2, wherein the moving direction is a direction toward the gaze division area that includes the preload area.
4. The user device of claim 3, wherein the direction toward the gaze division area is determined based on whether an angle determined by the current gaze position and the gaze position after the movement is included in a preset angle range.
5. The user device of claim 1, wherein the preload performing unit loads the target object data and stores it in a preload cache unit.
6. The user device of claim 5, wherein, when no input regarding the selection of the selection target object is made during the predetermined time, the preload performing unit deletes the target object data stored in the preload cache unit.
7. The user device of claim 1, further comprising:
a gaze recording unit that records a movement history of the gaze coordinates;
a gaze movement pattern determination unit that determines a gaze movement pattern using data recorded in the gaze recording unit; and
an area division personalization unit that arranges the gaze division areas according to the determined gaze movement pattern.
8. A data preloading method of a user device, comprising:
calculating gaze coordinates indicating a pupil position of a user;
when a pointer is positioned within a preload area for a selection target object for more than a predetermined time and a gaze position determined by the gaze coordinates and the preload area are included in the same gaze division area, preloading target object data to be loaded when the selection target object is selected; and
accessing the preloaded target object data when the selection target object is selected by the user's manipulation.
9. The method of claim 8, wherein the performing of the preload comprises, when the gaze position and the preload area are included in different gaze division areas, performing the preload according to a moving direction and a moving distance of the gaze position with respect to the gaze division area that includes the preload area.
10. The method of claim 9, wherein the moving direction is a direction toward the gaze division area that includes the preload area.
11. The method of claim 10, wherein the direction toward the gaze division area is determined based on whether an angle determined by the current gaze position and the gaze position after the movement is included in a preset angle range.
KR1020130050090A 2012-10-11 2013-05-03 User device which enhanced response speed and response speed enhancing method using data pre-load KR101372484B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020130050090A KR101372484B1 (en) 2013-05-03 2013-05-03 User device which enhanced response speed and response speed enhancing method using data pre-load
PCT/KR2013/009039 WO2014058233A1 (en) 2012-10-11 2013-10-10 Method for increasing gui response speed of user device through data preloading, and said user device
US14/434,970 US9886177B2 (en) 2012-10-11 2013-10-10 Method for increasing GUI response speed of user device through data preloading, and said user device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130050090A KR101372484B1 (en) 2013-05-03 2013-05-03 User device which enhanced response speed and response speed enhancing method using data pre-load

Publications (1)

Publication Number Publication Date
KR101372484B1 true KR101372484B1 (en) 2014-03-11

Family

ID=50648189

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130050090A KR101372484B1 (en) 2012-10-11 2013-05-03 User device which enhanced response speed and response speed enhancing method using data pre-load

Country Status (1)

Country Link
KR (1) KR101372484B1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10293689A (en) * 1997-04-18 1998-11-04 Kobe Nippon Denki Software Kk Object program preload method for window system and recording medium recorded with program for the same
KR100574045B1 (en) * 2004-11-10 2006-04-26 주식회사 네오엠텔 Aparatus for playing multimedia contents and method thereof
KR20100081406A (en) * 2009-01-06 2010-07-15 엘지전자 주식회사 Mobile terminal and method for inputting instructions thereto
KR20120035771A (en) * 2010-10-06 2012-04-16 엘지전자 주식회사 Mobile terminal and control method thereof

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150108662A (en) * 2014-03-18 2015-09-30 연세대학교 산학협력단 Data pre-load management method and terminal device thereof
KR101655832B1 (en) * 2014-03-18 2016-09-22 연세대학교 산학협력단 Data pre-load management method and terminal device thereof
KR20150111158A (en) * 2014-03-25 2015-10-05 연세대학교 산학협력단 Pre-loaded data management method and terminal device thereof
KR101639993B1 (en) * 2014-03-25 2016-07-15 연세대학교 산학협력단 Pre-loaded data management method and terminal device thereof

Legal Events

Date Code Title Description
A201 Request for examination
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20170227

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20180309

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20190318

Year of fee payment: 6