KR101752223B1 - Method, device and computer-readable recording media for providing user interface that supports restoration of interaction - Google Patents


Info

Publication number
KR101752223B1
Authority
KR
South Korea
Prior art keywords
user
information
virtual
state
behavior
Prior art date
Application number
KR1020160001891A
Other languages
Korean (ko)
Inventor
박정민
이중재
장효종
Original Assignee
Korea Institute of Science and Technology (한국과학기술연구원)
Center of Human-centered Interaction for Coexistence (재단법인 실감교류인체감응솔루션연구단)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Korea Institute of Science and Technology (한국과학기술연구원) and Center of Human-centered Interaction for Coexistence (재단법인 실감교류인체감응솔루션연구단)
Priority to KR1020160001891A
Application granted
Publication of KR101752223B1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Abstract

The present invention relates to a method, an apparatus, and a computer-readable recording medium for providing a user interface that supports restoration of an interaction. According to one aspect of the present invention, there is provided a method comprising: (a) when an input requesting restoration of an interaction between a user and a virtual object is obtained from the user, obtaining, by an electronic device, restorable information from a database; (b) providing, by the electronic device, a user interface corresponding to the restorable information to the user; and (c) when an input selecting specific interaction type information through the user interface is obtained from the user, restoring and displaying, by the electronic device, at least a part of the behavior state of the user and the state of the virtual object with reference to the selected interaction type.

Description

BACKGROUND OF THE INVENTION

1. Field of the Invention

[0001] The present invention relates to a method, an apparatus, and a computer-readable recording medium for providing a user interface that supports restoration of an interaction.

The present invention relates to a method, an apparatus, and a computer-readable recording medium for providing a user interface that supports restoration of an interaction. More specifically, when an input requesting restoration of an interaction between a user and a virtual object is acquired from the user, restorable information indicating at least one type of restorable interaction is obtained, a user interface corresponding to the restorable information is provided to the user, and, if specific interaction type information is selected through the user interface, at least a part of the behavior state of the user and the state of the virtual object is restored and displayed with reference to the selected interaction type.

Recently, with the development of computer technology, equipment, and various products, users are experiencing interactions quite different from those of the past. Research on efficient and improved interaction between people and computers has been conducted steadily, and natural user interfaces (NUI) based on behaviors such as gestures are expected to continue to evolve.

In this field, interaction techniques by which a user manipulates the state of a virtual object in a three-dimensional environment are important. Conventionally, when an interaction with a virtual object is reverted at the user's request (that is, when an undo/redo request occurs), only the state of the virtual object is restored; the behavior state of the user is not.

For example, in a hand-based user interface environment, suppose a user grasps and moves a virtual object, a hand-recognition error occurs, and the object falls to the floor or is placed at an undesired arbitrary position. In the conventional approaches to restoring the interaction between the user and the virtual object, either hand recognition is performed again and the object placed at the arbitrary position is re-grasped to continue the interaction, or the system is initialized and the object is picked up again at the position where it was placed first.

The inventor of the present invention proposes a technique that enables meaningful and efficient undo/redo of an interaction by restoring the state of the user as well as the state of the virtual object when restoring an interaction between a user and a virtual object.

It is an object of the present invention to solve all the problems described above.

In addition, the present invention divides the behavior of a person appearing in the interaction between a user and a virtual object into meaningful units, so that the user can restore the interaction state to an arbitrary point during the manipulation of a virtual object in the three-dimensional environment and continue manipulating from that state.

In order to accomplish the above object, a representative structure of the present invention is as follows.

According to an aspect of the present invention, there is provided a method of providing a user interface that supports restoration of an interaction, the method comprising: (a) when an input requesting restoration of an interaction between a user and a virtual object is obtained from the user, obtaining, by an electronic device, restorable information from a database, wherein the restorable information includes time-series information and interaction type information corresponding to the time-series information; (b) providing, by the electronic device, a user interface corresponding to the restorable information to the user; and (c) when an input selecting specific interaction type information through the user interface is obtained from the user, restoring and displaying, by the electronic device, at least some of the behavior state of the user and the state of the virtual object with reference to the specific interaction type information.

According to another aspect of the present invention, there is provided an electronic device for providing a user interface that supports restoration of an interaction, the electronic device comprising: a database; a user interface providing unit which, when an input requesting restoration of an interaction between a user and a virtual object is obtained from the user, obtains restorable information from the database, the restorable information including time-series information and interaction type information corresponding to the time-series information, and provides a user interface corresponding to the restorable information to the user; and a processor which, when an input selecting specific interaction type information through the user interface is acquired from the user, restores and displays at least a part of the behavior state of the user and the state of the virtual object with reference to the specific interaction type information.

According to the present invention, the actions of the person appearing in the interaction between the user and the virtual object are divided and stored in meaningful units, so that the user can restore the interaction state to an arbitrary point during the manipulation of the virtual object in the three-dimensional environment and continue manipulating from that state.

FIG. 1 is a diagram illustrating a schematic configuration of an overall system in which a user manipulates a virtual object according to an embodiment of the present invention.
FIG. 2 is a diagram for explaining confidence values of interaction types according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating examples of interaction types used in applications according to an embodiment of the present invention.
FIG. 4 is a flowchart illustrating a method for restoring an interaction between a user and a virtual object according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating interaction types generated as a user manipulates a virtual object according to an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example of a state in which a user has entered the interaction restoration mode according to an embodiment of the present invention.
FIG. 7 is a diagram illustrating an example of an interaction pose provided to a user according to an embodiment of the present invention.

The following detailed description of the invention refers to the accompanying drawings, which illustrate, by way of example, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It should be understood that the various embodiments of the present invention are different from one another but need not be mutually exclusive. For example, certain features, structures, and characteristics described herein in connection with one embodiment may be implemented in other embodiments without departing from the spirit and scope of the invention. It is also to be understood that the position or arrangement of individual components within each disclosed embodiment may be varied without departing from the spirit and scope of the invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is limited only by the appended claims, appropriately interpreted, along with the full scope of equivalents to which such claims are entitled. In the drawings, like reference numerals refer to the same or similar functions throughout the several views.

Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings, so that those skilled in the art can easily carry out the present invention.

FIG. 1 is a diagram illustrating a schematic configuration of an overall system in which a user manipulates a virtual object according to an embodiment of the present invention.

An interaction according to an embodiment of the present invention may mean an action by which at least one virtual object and at least one person can affect each other. As shown in FIG. 1, a user can perform an interaction that manipulates a virtual object in a three-dimensional environment through a user interface by making a gesture using at least a part of the body. At this time, the at least part of the user's body manipulating the virtual object may be displayed as a graphic element in the three-dimensional image provided to the user.

Meanwhile, an interaction type according to an embodiment of the present invention may mean a high-level abstract description capable of describing an interaction.

In general, existing applications have approached interactions only at the level of recognition, whereas the interaction type according to an embodiment of the present invention is based not only on the level of recognition but also on higher levels related to interpretation. That is, in the present invention, an interaction type does not mean an interaction between a person and a virtual object generated by simply recognizing the person's gesture; rather, it defines a meaningful action state given to a virtual object by a person.

A description of such interaction types can be found in Korean Patent No. 10-1567995, which was filed on October 7, 2014 by the applicant of the present invention and has since been granted; all of the descriptions of Korean Patent No. 10-1567995 are to be understood as incorporated into the specification of the present invention.

In addition, the present invention can be considered an improvement based on the interaction types of Korean Patent No. 10-1567995, but the present invention is not limited thereto.

Such interaction types can be defined according to a classification scheme generated by analyzing behavioral signals between a person and a virtual object. For example, the classification categories of the interaction between a person and a virtual object may include an illustrator category and a manipulator category; specifically, the illustrator category may include Pointing, Ideograph, and EyeGaze, and the manipulator category may include Approach, Recede, Gaze, Grasp, Translation, and Rotation.
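For illustration, the classification scheme above can be expressed as a minimal Python sketch; only the category and type names are taken from the text, while the enum layout itself is an assumption.

```python
# Minimal sketch of the interaction-type classification scheme described
# above. Only the category and type names come from the text; the enum
# layout is an illustrative assumption.
from enum import Enum

class IllustratorType(Enum):
    POINTING = "Pointing"
    IDEOGRAPH = "Ideograph"
    EYE_GAZE = "EyeGaze"

class ManipulatorType(Enum):
    APPROACH = "Approach"
    RECEDE = "Recede"
    GAZE = "Gaze"
    GRASP = "Grasp"
    TRANSLATION = "Translation"
    ROTATION = "Rotation"
```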

Referring to FIG. 2, the interaction types in a chess game may include, for example, 'looking straight at an object (a chess piece)', 'selecting (grasping) it', 'moving it', and 'chasing it'.

An electronic device according to an embodiment of the present invention includes at least one sensor for recognizing a user's interaction, and may include a database, a user interface providing unit, and a processor, or may be associated with at least some of these components.

Specifically, a sensor according to an exemplary embodiment of the present invention may acquire user input such as a person's gesture or voice, and generate a plurality of recognition results using a plurality of recognition algorithms.

Meanwhile, the processor according to an embodiment of the present invention may generate a confidence value of an interaction type by combining a plurality of recognition results according to the definition of an interaction type.

Here, the confidence value of an interaction type means a value indicating the degree of confidence that a plurality of recognition results, such as video and voice, represent the interaction described by that interaction type, and may be a value proportional to that probability. For example, the confidence value of an interaction type may be generated by quantitatively combining a plurality of recognition results generated by a plurality of recognition algorithms, and may be a value of 0 or more and 1 or less; the higher the confidence value, the more likely the recognition results are to have content consistent with the meaning of the interaction type. However, the present invention is not limited to this.
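As a concrete illustration of such a quantitative combination, the following sketch averages per-recognizer scores, weighted, into a single value between 0 and 1; the combination rule, the weights, and the recognizer names are assumptions made for illustration rather than a formula given in this specification.

```python
# Minimal sketch: combining per-recognizer scores into an interaction-type
# confidence value in [0, 1]. The weighted-average rule, the weights, and
# the recognizer names are illustrative assumptions.

def confidence(recognition_results: dict[str, float],
               weights: dict[str, float]) -> float:
    """Quantitatively combine per-recognizer scores (each in [0, 1])."""
    total_weight = sum(weights[name] for name in recognition_results)
    if total_weight == 0.0:
        return 0.0
    combined = sum(weights[name] * score
                   for name, score in recognition_results.items())
    return max(0.0, min(1.0, combined / total_weight))

# Example: scores while the user grasps a chess piece (cf. FIG. 2).
scores = {"hand_tracking": 0.95, "object_tracking": 0.90}
weights = {"hand_tracking": 0.5, "object_tracking": 0.5}
print(confidence(scores, weights))  # 0.925
```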

FIG. 2 shows the names and confidence values of a plurality of interaction types for a three-dimensional image in which a user plays a chess game.

As shown in FIG. 2, recognition algorithms that can be used to generate confidence values of interaction types while a user plays a chess game include a hand-tracking recognition algorithm, a recognition algorithm that tracks a chess piece (virtual object 200), and the like. The confidence values of the approach, gaze, grasp, translation, and rotation types belonging to the manipulator category of interaction types are shown by way of example.

In the three-dimensional image of FIG. 2, the confidence values of the interaction types for approaching, gazing, grasping, and moving are 1.0, while the confidence value of the interaction type for rotation is 0.1; this means that the image is relatively likely to show approaching, gazing, grasping, and moving, and relatively unlikely to show rotation.

FIG. 3 is a diagram illustrating examples of interaction types used in applications according to an embodiment of the present invention.

At the top of FIG. 3, examples of applications such as (i) Put-that-there, (ii) Virtual shopping, and (iii) Virtual chess are shown, together with the interaction types used in each application. Illustratively, interaction types (IT) such as IT 1, IT 6, IT 7, and IT 8 may be used in the virtual shopping application.

The meaning of each interaction type and the interaction recognition units used to generate its confidence value are shown in the middle and at the bottom of FIG. 3. In FIG. 3, an interaction type is abbreviated as 'IT'; illustratively, 'IT 1' denotes the first interaction type. An interaction recognition unit is abbreviated as 'PU' (Perception Unit); illustratively, 'PU 1' denotes the first interaction recognition unit. The plurality of recognition units PU 1 to PU 7 may include a body tracking unit performing a recognition algorithm that tracks the body, a hand tracking unit performing a recognition algorithm that tracks a hand, a voice recognition unit performing a recognition algorithm that recognizes a voice, a head pose estimation unit performing a recognition algorithm that estimates a head pose, a gaze tracking unit performing a recognition algorithm that tracks the line of sight, an object tracking unit performing a recognition algorithm that tracks an object, and a face recognition unit performing a face recognition algorithm.

Specifically, as shown in FIG. 3, the meaning of IT 1 may be 'look at', and PU 7 (the face recognition unit), PU 4 (the head pose estimation unit), and PU 5 (the gaze tracking unit) may be used; that is, the recognition results of the recognition algorithms of PU 7, PU 4, and PU 5 can be used to generate the confidence value of IT 1.
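Continuing the illustration, the confidence value of a single interaction type could be derived only from the perception units wired to it, as in the following sketch; the IT 1 mapping follows the text of FIG. 3, while the plain averaging rule is an assumption.

```python
# Minimal sketch, assuming the IT/PU naming of FIG. 3: the confidence value
# of IT 1 ('look at') is derived only from the perception units wired to it.
# The plain averaging rule is an illustrative assumption.
from statistics import mean

IT_TO_PUS = {
    # 'look at': face recognition, head pose estimation, gaze tracking
    "IT 1": ("PU 7", "PU 4", "PU 5"),
}

def it_confidence(it_name: str, pu_scores: dict[str, float]) -> float:
    """Combine the scores of the perception units assigned to one IT."""
    return mean(pu_scores[pu] for pu in IT_TO_PUS[it_name])

print(it_confidence("IT 1", {"PU 7": 0.9, "PU 4": 0.8, "PU 5": 1.0}))  # 0.9
```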

According to an embodiment of the present invention, the continuous actions of a person manipulating a virtual object are classified into interaction types, and through an interaction type not only the state of the virtual object but also the state of the person can be restored.

FIG. 4 is a flowchart illustrating a method for restoring an interaction between a user and a virtual object according to an embodiment of the present invention, and FIG. 5 is a diagram illustrating interaction types generated as a user manipulates a virtual object according to an embodiment of the present invention.

An electronic device according to an exemplary embodiment of the present invention can provide a three-dimensional user interface to a user, acquire the user's gesture, generate a plurality of recognition results using a plurality of recognition algorithms, and, by combining the recognition results, generate the interaction type information determined to have a high confidence value and store it together with the corresponding time-series information in the database (indicated as the interaction history storage module in FIG. 4). However, the present invention is not limited to this, and at least some of these processes may be performed by the processor. Here, the interaction type information may include at least a part of the state information of the virtual object and the action state information of the user.

When an input requesting restoration of the interaction between the user and the virtual object is acquired from the user, the processor according to an embodiment of the present invention obtains the restorable information from the database and provides a user interface corresponding to the restorable information to the user. When a specific interaction type is selected through the user interface, the processor can restore at least a part of the behavior state of the user and the state of the virtual object with reference to the specific interaction type and support displaying the restored state.

Referring to FIGS. 4 and 5, the user can grasp the pool 200, a virtual object existing in the three-dimensional environment, by hand and move it to another position, as shown in FIG. 5. At this time, a graphic element (virtual hand 100) corresponding to the user's hand (the action state of the user) may be displayed together with the pool 200.

Here, the interaction recognition units may include a hand tracking unit performing a recognition algorithm that tracks a hand and a gaze tracking unit performing a recognition algorithm that tracks the line of sight; the interaction recognition units may acquire a plurality of recognition results through a plurality of recognition algorithms, and the interaction type derived by combining the plurality of recognition results can be stored in the database. For example, as shown in FIG. 5, the interaction types derived by the interaction recognition units may be 'Grasp', 'Trans/Rot', 'Lift Down', and 'Release', and the time-series information corresponding to each piece of interaction type information may be <t-3, t-2, t-1, t>.
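A minimal sketch of such a store of time-series information and interaction type information (the interaction history storage module of FIG. 4) might look as follows; the dataclass fields and method names are illustrative assumptions, not a schema given in this specification.

```python
# Minimal sketch of the interaction history storage module of FIG. 4,
# assuming each entry pairs time-series information with the derived
# interaction type and the states needed for restoration.
from dataclasses import dataclass, field

@dataclass
class HistoryEntry:
    t: int                 # time-series index, e.g. t-3 .. t mapped to 0 .. 3
    interaction_type: str  # e.g. 'Grasp', 'Trans/Rot', 'Lift Down', 'Release'
    user_state: dict       # action state of the user (hand pose, etc.)
    object_state: dict     # state of the virtual object (position, etc.)

@dataclass
class InteractionHistory:
    entries: list[HistoryEntry] = field(default_factory=list)

    def record(self, entry: HistoryEntry) -> None:
        self.entries.append(entry)

    def restorable_info(self) -> list[tuple[int, str]]:
        """Time-series info with the interaction type of each entry."""
        return [(e.t, e.interaction_type) for e in self.entries]

history = InteractionHistory()
for i, it in enumerate(["Grasp", "Trans/Rot", "Lift Down", "Release"]):
    history.record(HistoryEntry(t=i, interaction_type=it,
                                user_state={}, object_state={}))
print(history.restorable_info())
```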

If the pool 200 is placed at an undesired arbitrary position due to a hand-recognition error while the user is lifting the pool 200, the user may enter a mode for restoring the interaction between the user and the pool 200 (indicated as the Undo/Redo mode in FIG. 4) using a voice command, a gesture of a body part other than the hand, eye blinking, or the like. For reference, it is to be understood that any user input that can be used to enter the mode for restoring the interaction, in addition to a voice command or a gesture of a body part other than the hand, belongs to the scope of the present invention.

Referring again to FIG. 4, when an input requesting restoration of the interaction between the user and the pool 200 (an input for entering the Undo/Redo mode) is obtained from the user, the processor can obtain the restorable information from the database (the interaction history storage module) and provide the user with a user interface corresponding to the restorable information. If a selection of a specific interaction type is detected through the user interface, the processor can support restoring at least some of the user's action state and the state of the virtual object with reference to the specific interaction type. In this case, only the action state of the user may be restored, only the state of the virtual object may be restored, or the action state of the user and the specific state of the virtual object may be restored together. In addition, the processor may display the restored state as a three-dimensional image.
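Under the same assumptions, the Undo/Redo flow just described (obtain the restorable information, present it as a user interface, then restore the selected parts of the state) can be sketched as follows; the entry layout and the restore_user/restore_object flags are illustrative, though the option of restoring the user state, the object state, or both follows the text above.

```python
# Minimal sketch of the Undo/Redo flow of FIG. 4: fetch restorable info,
# present it as a user interface, then restore the selected parts of state.
def restorable_info(history: list[dict]) -> list[tuple[int, str]]:
    """What the Undo/Redo user interface presents to the user."""
    return [(e["t"], e["type"]) for e in history]

def restore(history: list[dict], t: int,
            restore_user: bool = True, restore_object: bool = True) -> dict:
    """Restore at least part of the user/object state at time index t."""
    entry = next(e for e in history if e["t"] == t)
    restored = {"type": entry["type"]}
    if restore_user:
        restored["user_state"] = entry["user_state"]
    if restore_object:
        restored["object_state"] = entry["object_state"]
    return restored  # to be displayed as a three-dimensional image

history = [
    {"t": 2, "type": "Lift Down", "user_state": {}, "object_state": {}},
    {"t": 3, "type": "Release", "user_state": {}, "object_state": {}},
]
print(restorable_info(history))  # [(2, 'Lift Down'), (3, 'Release')]
print(restore(history, t=2))     # restores both states at index 2
```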

FIG. 6 shows a state in which the user has entered the interaction restoration mode according to an embodiment of the present invention. In the upper right of FIG. 6, graphic elements corresponding to the restorable interaction types ('Approach', 'Grasp', 'Lift up', 'Lift Down') that the user manipulating the pool 200 can access are provided as a user interface. Here, the graphic element labeled 't' may be a graphic element for accessing the temporally last interaction type among the interaction types stored in the database, and the element labeled 't-1' may be a graphic element for accessing the interaction type immediately before it. The rest are similar, so their explanation is omitted.

By selecting any one of the graphic elements 't' to 't-3' provided as the user interface, the user can select which state of the interaction between the user and the pool 200 to restore from among the state of approaching the pool 200, the state of grasping the pool 200, the state of lifting the pool 200 up, and the state of putting the pool 200 down. In FIG. 6, the behavior state of the user and the specific state of the virtual object corresponding to the selection are reconstructed and displayed as a three-dimensional image.

Here, when the behavior graphic element of the user corresponding to the specific interaction type is referred to as first virtual behavior graphic information and the graphic element representing the user's current action is referred to as second virtual behavior graphic information, the processor may display both the first virtual behavior graphic information and the second virtual behavior graphic information. For reference, the first virtual behavior graphic information and the second virtual behavior graphic information may each be an action state graphic element of the user.

In addition, the processor may provide a guide between the first virtual behavior graphic information and the second virtual behavior graphic information to allow the user, who is currently taking the action corresponding to the second virtual behavior graphic information, to take the action corresponding to the first virtual behavior graphic information, but the present invention is not limited thereto.

The processor determines the degree of conformity between the first virtual behavior graphic information and the second virtual behavior graphic information; if the degree of conformity is determined to be equal to or greater than a preset reference level, the processor restores at least some of the user's behavior state and the state of the virtual object and provides the result to the user.

For example, when the user selects the graphic element corresponding to 'Lift up', the graphic element 100 of the current interaction pose recognized through the interaction recognition units and the graphic element 100a of the interaction pose to be restored are both provided, and the user can be supported in taking a pose (action) matching the interaction pose to be restored, for example by flashing the graphic element 100a of the interaction pose to be restored or by providing guide lines such as arrows.

When an interaction input is generated from the user (that is, when the user's pose is recognized as an interaction), the current interaction pose is compared with the pose to be restored to determine the degree of conformity, and the interaction can finally be restored by checking that the degree of conformity is maintained for a predetermined duration (indicated as Undo/Redo completion in FIG. 4).
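A minimal sketch of this conformity-and-duration check (the step ending in Undo/Redo completion in FIG. 4) follows; the joint-distance metric, the 0.9 threshold, and the hold and timeout durations are illustrative assumptions, since the specification speaks only of a preset reference and a predetermined time.

```python
# Minimal sketch of the conformity-and-duration check that completes the
# Undo/Redo (FIG. 4). Metric, threshold, and durations are assumptions.
import math
import time

Pose = list[tuple[float, float, float]]  # e.g. 3D joint positions of a hand

def conformity(current: Pose, target: Pose) -> float:
    """Map the mean joint distance to a score in [0, 1] (1 = identical)."""
    dist = sum(math.dist(c, r) for c, r in zip(current, target))
    return 1.0 / (1.0 + dist / max(len(current), 1))

def held_long_enough(get_current_pose, target: Pose,
                     threshold: float = 0.9, hold_s: float = 1.0,
                     timeout_s: float = 10.0) -> bool:
    """True once conformity stays >= threshold for hold_s seconds."""
    start = None
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        now = time.monotonic()
        if conformity(get_current_pose(), target) >= threshold:
            start = start if start is not None else now
            if now - start >= hold_s:
                return True  # Undo/Redo completion
        else:
            start = None
    return False
```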

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments; those skilled in the art will appreciate that various modifications, additions, and substitutions are possible without departing from the scope and spirit of the invention as set forth in the accompanying claims.

Therefore, the spirit of the present invention should not be construed as being limited to the above-described embodiments; the following claims, as well as everything equivalent or equal to those claims, fall within the scope of the spirit of the present invention.

100: User's Behavior Status Graphical Element
200: virtual object

Claims (16)

A method of providing a user interface for supporting restoration of an interaction,
(a) when an input requesting restoration of an interaction between a user and a virtual object is obtained from the user, retrieving, by an electronic device, restorable information from a database, the restorable information including time-series information and interaction type information corresponding to the time-series information;
(b) providing, by the electronic device, a user interface corresponding to the restorable information to the user; and
(c) when an input selecting specific interaction type information through the user interface is obtained from the user, restoring and displaying, by the electronic device, at least some of the behavior state of the user and the state of the virtual object with reference to the specific interaction type information.
The method according to claim 1,
wherein the interaction type information is generated by the electronic device obtaining a plurality of recognition results through a plurality of recognition algorithms and combining the plurality of recognition results.
The method according to claim 1,
wherein the restorable information includes virtual object state information indicating a specific state of at least one restorable virtual object, and
wherein the electronic device obtains the virtual object state information from the database and restores the behavior state of the user and the specific state of the virtual object.
The method according to claim 1,
wherein, in the step (c),
the electronic device generates an action state graphic element of the user corresponding to the specific interaction type information to be restored by referring to the restorable information, and displays the generated action state graphic element to the user.
The method according to claim 4,
wherein, in the step (c),
when the behavior graphic element of the user corresponding to the specific interaction type is referred to as first virtual behavior graphic information and the graphic element representing the current action of the user is referred to as second virtual behavior graphic information, the electronic device displays the first virtual behavior graphic information and the second virtual behavior graphic information.
The method according to claim 5,
wherein the electronic device provides a guide line between the first virtual behavior graphic information and the second virtual behavior graphic information to allow the user taking the action corresponding to the second virtual behavior graphic information to take the action corresponding to the first virtual behavior graphic information.
The method according to claim 5,
wherein the step (c) includes:
the electronic device determining a degree of conformity between the first virtual behavior graphic information and the second virtual behavior graphic information, and
restoring at least a part of the behavior state of the user and the state of the virtual object and providing the restored state to the user if the degree of conformity is determined to be equal to or greater than a preset reference.
The method according to claim 7,
wherein the step (c) further includes:
the electronic device determining whether the degree of conformity is maintained for a predetermined time in a state where it is equal to or greater than the preset reference, and
restoring at least a part of the behavior state of the user and the state of the virtual object and providing the restored state to the user if the degree of conformity is determined to have been maintained for the predetermined time.
An electronic device that provides a user interface that supports restoration of an interaction,
Database;
a user interface providing unit which, when an input requesting restoration of an interaction between a user and a virtual object is obtained from the user, obtains restorable information from the database, the restorable information including time-series information and interaction type information corresponding to the time-series information, and provides a user interface corresponding to the restorable information to the user; and
a processor which, when an input selecting specific interaction type information through the user interface is acquired from the user, supports restoring and displaying at least a part of the behavior state of the user and the state of the virtual object with reference to the specific interaction type information.
The electronic device according to claim 9,
wherein the interaction type information is generated by the processor obtaining a plurality of recognition results through a plurality of recognition algorithms and combining the plurality of recognition results.
The electronic device according to claim 9,
wherein the restorable information includes virtual object state information indicating a specific state of at least one restorable virtual object, and
wherein the processor obtains the virtual object state information from the database and restores the behavior state of the user and the specific state of the virtual object.
The electronic device according to claim 9,
wherein the processor generates an action state graphic element of the user corresponding to the specific interaction type information to be restored by referring to the restorable information, and supports displaying the generated element to the user.
The electronic device according to claim 12,
wherein, when the behavior graphic element of the user corresponding to the specific interaction type is referred to as first virtual behavior graphic information and the graphic element representing the current action of the user is referred to as second virtual behavior graphic information, the processor displays the first virtual behavior graphic information and the second virtual behavior graphic information.
The electronic device according to claim 13,
wherein the processor provides a guide line between the first virtual behavior graphic information and the second virtual behavior graphic information to cause the user taking the action corresponding to the second virtual behavior graphic information to take the action corresponding to the first virtual behavior graphic information.
The electronic device according to claim 13,
wherein the processor determines a degree of conformity between the first virtual behavior graphic information and the second virtual behavior graphic information, and, if the degree of conformity is determined to be equal to or greater than a preset reference, restores at least a part of the behavior state of the user and the state of the virtual object and provides the restored part to the user.
The electronic device according to claim 15,
wherein the processor determines whether the degree of conformity is maintained for a predetermined time in a state where it is equal to or greater than the preset reference, and, if the degree of conformity is determined to have been maintained for the predetermined time, restores at least a part of the behavior state of the user and the state of the virtual object and provides the restored part to the user.
KR1020160001891A 2016-01-07 2016-01-07 Method, device and computer-readable recording media for providing user interface that supports restoration of interaction KR101752223B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020160001891A KR101752223B1 (en) 2016-01-07 2016-01-07 Method, device and computer-readable recording media for providing user interface that supports restoration of interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020160001891A KR101752223B1 (en) 2016-01-07 2016-01-07 Method, device and computer-readable recording media for providing user interface that supports restoration of interaction

Publications (1)

Publication Number Publication Date
KR101752223B1 true KR101752223B1 (en) 2017-07-06

Family

ID=59354267

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020160001891A KR101752223B1 (en) 2016-01-07 2016-01-07 Method, device and computer-readable recording media for providing user interface that supports restoration of interaction

Country Status (1)

Country Link
KR (1) KR101752223B1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210017166A (en) * 2019-08-07 2021-02-17 한국과학기술연구원 Method for undoing virtual object-based interaction of continuous three-dimensional manipulation interaction and device supporting the same
KR20210021720A (en) * 2019-08-19 2021-03-02 한국과학기술연구원 Method for control interaction interface and device supporting the same

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101567995B1 (en) 2014-10-07 2015-11-10 한국과학기술연구원 Method, appratus and computer-readable recording medium for providing user interface which enalbes interaction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101567995B1 (en) 2014-10-07 2015-11-10 한국과학기술연구원 Method, appratus and computer-readable recording medium for providing user interface which enalbes interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
P. Horain et al., "Virtually enhancing the perception of user actions," in Proc. of ICAT 2005, pp. 245-246, 2005.

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210017166A (en) * 2019-08-07 2021-02-17 한국과학기술연구원 Method for undoing virtual object-based interaction of continuous three-dimensional manipulation interaction and device supporting the same
KR102302388B1 (en) * 2019-08-07 2021-09-16 한국과학기술연구원 Method for undoing virtual object-based interaction of continuous three-dimensional manipulation interaction and device supporting the same
KR20210021720A (en) * 2019-08-19 2021-03-02 한국과학기술연구원 Method for control interaction interface and device supporting the same
KR102306392B1 (en) * 2019-08-19 2021-09-30 한국과학기술연구원 Method for control interaction interface and device supporting the same
US11656687B2 (en) 2019-08-19 2023-05-23 Korea Institute Of Science And Technology Method for controlling interaction interface and device for supporting the same

Similar Documents

Publication Publication Date Title
US20210104178A1 (en) System and method for three-dimensional augmented reality guidance for use of medical equipment
KR102014385B1 (en) Method and apparatus for learning surgical image and recognizing surgical action based on learning
US11042729B2 (en) Classifying facial expressions using eye-tracking cameras
US20230107040A1 (en) Human-computer interface using high-speed and accurate tracking of user interactions
Bosch et al. Detecting student emotions in computer-enabled classrooms.
US9715622B2 (en) System and method for predicting neurological disorders
KR102223693B1 (en) Detecting natural user-input engagement
JP2019517072A5 (en)
US10380603B2 (en) Assessing personality and mood characteristics of a customer to enhance customer satisfaction and improve chances of a sale
JP2021503662A (en) Neural network model training
CN109074166A (en) Change application state using neural deta
CN105518575A (en) Two-hand interaction with natural user interface
KR102233099B1 (en) Apparatus and method for machine learning based prediction model and quantitative control of virtual reality contents’ cyber sickness
Chen et al. Towards improving social communication skills with multimodal sensory information
JP2020505686A (en) Augmented reality for predictive workflow in the operating room
US20200268349A1 (en) System and method for analysis of medical equipment system results generated by augmented reality guidance of medical equipment systems
Essig et al. ADAMAAS: towards smart glasses for mobile and personalized action assistance
Fuchs et al. Gaze-based intention estimation for shared autonomy in pick-and-place tasks
Koochaki et al. A data-driven framework for intention prediction via eye movement with applications to assistive systems
KR101752223B1 (en) Method, device and computer-readable recording media for providing user interface that supports restoration of interaction
US20090292928A1 (en) Acquisition and particular association of inference data indicative of an inferred mental state of an authoring user and source identity data
Samuel et al. Unsupervised anomaly detection for a smart autonomous robotic assistant surgeon (saras) using a deep residual autoencoder
KR101847446B1 (en) Apparatus and method for eye-tracking base on cognition data network
Opromolla et al. A usability study of a gesture recognition system applied during the surgical procedures
JP2021144359A (en) Learning apparatus, estimation apparatus, learning method, and program

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant