KR101428014B1 - Apparatus and method for simulating NUI and NUX in virtualized environment
- Publication number
- KR101428014B1 (application number KR1020130130792A)
- Authority
- KR
- South Korea
- Prior art keywords
- pattern
- virtual character
- nui
- command
- character
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/16—Sound input; Sound output
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
Abstract
A method and apparatus for simulating NUI and NUX in a virtualized environment. A simulator receives a command for a virtual character located in the virtualized environment, recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualized environment, and outputs an NUI command for the recognized pattern using mapping information between virtual character patterns and NUI commands. The motion detection object recognizes the gestures and voice of the virtual character based on physical correlations within the virtualized environment, thereby simulating the NUX settings for performing the output NUI command.
Description
The present invention relates to a method for simulating NUI and NUX in a virtualized environment, and more particularly, to a method for simulating NUI and NUX by sensing the motions of a virtual character in a virtualized environment.
The NUI (Natural User Interface) is a futuristic interface through which a person can interact with a system directly through the body, without additional devices such as a keyboard or mouse. The term NUI was originally proposed by Steve Mann in the 1970s; he used it to describe an interface that reflects the real world, as an alternative to the CLI (Command Line Interface) and the GUI (Graphical User Interface), and introduced the terms DUI (Direct User Interface) and MFC (Metaphor-Free Computing) with the same concept. Prototypes for NUI were developed early on: in the 'Put That There' project conducted at MIT in 1979, a prototype allowed a user to point at the screen with a finger, issue commands by voice, and thereby interact with the system. Since then, NUI attracted attention through the movie 'Minority Report', in which the protagonist manipulates various data on a 3D screen with hand movements. In 2007, Apple's iPhone succeeded greatly and multi-touch technology became popular. More recently, with the release of Kinect, the gesture interface previously seen only in movies was commercialized, and the term NUI began to spread. Gesture functions have now been introduced into smartphones, and NUI research is progressing rapidly.
As the word 'Natural' in its name suggests, the NUI operates on behaviors that are natural to a person, so its learning cost is very low. It is therefore far easier to use than the CLI, which in the past was used only by experts, or the GUI, which requires some practice to use skillfully.
In addition, the WIMP (Window-Icon-Menu-Pointer) method represented by the GUI requires widgets for manipulation. To display widgets, space must be allocated on a screen of limited size, and unnecessary effort is spent on repeated clicking. The NUI, by contrast, does not use an indirect input device such as a mouse or keyboard; instead, the body itself serves as the input device, with sensors or cameras used to manipulate content directly. A user may thus feel that he or she directly commands and manipulates the technology, and the UX (User eXperience) can be improved.
Meanwhile, the prior art cited below suggests a user experience-based motion recognition technology for controlling multimedia content. However, when the NUI presented in that prior art document is implemented in a real environment, time and space limitations arise and a large cost is incurred to build the NUI; in the actual environment, additional costs are incurred to adjust the sensors, which is a fatal disadvantage.
From this point of view, technical measures are needed to implement various types of NUI and NUX through a simulator in a virtual environment, without installing high-cost motion sensors in a real environment.
Therefore, a first problem to be solved by the present invention is to provide a method for recognizing the motion of a virtual character using a motion detection object through a simulator in a virtualized environment, performing NUI commands, and simulating the NUX settings for executing those NUI commands.
A second object of the present invention is to provide an apparatus capable of recognizing the motion of a virtual character using a motion detection object in a virtualized environment, performing an NUI command, and simulating the NUX setting for performing the NUI command.
It is another object of the present invention to provide a computer-readable recording medium storing a program for causing a computer to execute the above-described method.
In order to achieve the first object of the present invention, there is provided a method for implementing an NUI (Natural User Interface) through which a user can directly interact with a system in a virtualized environment, the method comprising: receiving a command for a virtual character located in the virtualized environment; recognizing a pattern of the virtual character according to the command using a motion detection object located in the virtualized environment; and outputting the NUI command for the recognized pattern using mapping information between virtual character patterns and NUI commands, wherein the motion detection object recognizes the gesture and voice of the virtual character based on physical correlations within the virtualized environment, thereby simulating the NUX (Natural User eXperience) setting for performing the output NUI command.
According to an embodiment of the present invention, the motion detection object is configured as a 3D sensor, a 2D camera, and a microphone, and the step of recognizing the pattern of the virtual character may include: acquiring depth data, color data, and sound data, each of which is object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object; and recognizing a pattern of the virtual character from the obtained combination of object data using element technology modules capable of distinguishing the detailed motions of the virtual character, wherein the element technology modules recognize the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognize the hand motion pattern of the virtual character through the depth data, and recognize the voice of the virtual character through the sound data.
In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element technology module, and when an error occurs in the pattern recognition of any element technology module, an error message is generated in the recognition result window of that module so that the detailed settings of the motion detection object can be readjusted.
According to another embodiment of the present invention, the step of receiving a command for the virtual character may include inputting body movements, facial expressions, and voice through a character adjustment module for the virtual character positioned in the virtualized environment.
According to another embodiment of the present invention, the virtual character and the motion detection object may be arranged to be position-adjustable within the virtualized environment.
According to another embodiment of the present invention, the mapping information may be mapped such that one NUI command corresponds to one pattern of the virtual character, or one NUI command corresponds to a combination of a plurality of patterns of the virtual character.
In order to achieve the second object of the present invention, there is provided an apparatus for implementing an NUI through which a user can directly interact with a system in a virtualized environment, the apparatus comprising: an input unit for receiving a command for a virtual character located in the virtualized environment; a processing unit for recognizing the pattern of the virtual character according to the command using a motion detection object located in the virtualized environment and searching for the recognized pattern using mapping information between virtual character patterns and NUI commands; and an output unit for outputting the NUI command for the retrieved pattern, wherein the motion detection object recognizes the gesture and voice of the virtual character based on physical correlations within the virtualized environment, thereby simulating the NUX setting for performing the output NUI command.
According to an embodiment of the present invention, the processing unit may acquire depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and may recognize a pattern of the virtual character from the obtained combination of object data using element technology modules capable of distinguishing the detailed motions of the virtual character, wherein the element technology modules recognize the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognize the hand motion pattern of the virtual character through the depth data, and recognize the voice of the virtual character through the sound data.
In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element technology module, and when an error occurs in the pattern recognition of any element technology module, an error message is generated in the recognition result window of that module so that the detailed settings of the motion detection object can be readjusted.
According to another embodiment of the present invention, the input unit may receive body movements, facial expressions, and voice through a character adjustment module for the virtual character positioned in the virtualized environment.
According to another embodiment of the present invention, the mapping information may be mapped such that one NUI command corresponds to one pattern of the virtual character, or one NUI command corresponds to a combination of a plurality of patterns of the virtual character.
According to the present invention, it is possible to simulate the NUX setting for executing an output NUI command by adjusting a virtual character in a virtualized environment, recognizing its pattern using a placed motion detection object, and outputting the NUI command.
According to another aspect of the present invention, object data can be acquired through the motion detection object, and a pattern of the virtual character can be recognized from a combination of the object data.
Furthermore, the pattern recognition result for the virtual character is output to the recognition result display window, and an error message is generated when a recognition error occurs, so that the detailed settings of the motion detection object can be readjusted.
FIG. 1 is a flowchart illustrating a method for implementing an NUI in a virtualized environment according to an embodiment of the present invention.
FIG. 2 is a table showing motion detection objects capable of detecting motion according to the actions of a virtual character according to another embodiment of the present invention.
FIG. 3 is a table showing the object data that can be used by each element technology module according to another embodiment of the present invention.
FIG. 4 is a diagram illustrating a recognition result display window according to another embodiment of the present invention.
FIG. 5 is a flowchart illustrating a method of outputting an error message when a pattern recognition error occurs according to another embodiment of the present invention.
FIG. 6 is a diagram illustrating a table in which motion recognition patterns and NUI commands are mapped according to another embodiment of the present invention.
FIG. 7 is a block diagram illustrating an apparatus for implementing an NUI in a virtualized environment according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating a process of implementing an NUI in a virtualized environment and providing an NUX capable of executing NUI commands according to another embodiment of the present invention.
FIG. 9 is a diagram showing a screen configuration of a simulator according to another embodiment of the present invention and the application services that can be developed through the simulator.
FIG. 10 is a diagram illustrating a result display window of an NUI command according to another embodiment of the present invention.
Before describing the embodiments of the present invention, the technical means adopted by the embodiments to solve the problems of existing NUI implementation methods will be outlined.
While a conventional user interface (UI) is a concept in which a user transmits a signal to a computer using a separate input device such as a keyboard or mouse, the NUI (Natural User Interface) is a concept in which a user placed in a specific space can interact with a computer, without such devices, through recognition of the user's body movements, hand gestures, face, voice, and so on. Because the NUI operates on behaviors natural to a person, its learning cost is very low; it is therefore far easier to use than the CLI, which was used only by experts, or the GUI, which requires some practice. However, building an NUI requires expensive motion detection sensors, and unnecessary time is consumed in finding the optimal position and orientation for each sensor, which is a fatal drawback.
Therefore, the embodiments of the present invention propose a technical means for implementing various types of NUI and NUX in a virtualized environment through a simulator, so that human motion detection and the optimal positions and orientations of the sensors can be simulated in advance without installing high-cost motion detection sensors in a real environment.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description and the accompanying drawings, detailed descriptions of well-known functions or constructions that may obscure the subject matter of the present invention are omitted. It should be noted that the same constituent elements are denoted by the same reference numerals throughout the drawings wherever possible.
FIG. 1 is a flowchart illustrating a method for implementing an NUI in a virtualized environment according to an embodiment of the present invention. In the method, a simulator receives a command for a virtual character located in the virtualized environment, recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualized environment, and outputs the NUI command for the recognized pattern using mapping information between virtual character patterns and NUI commands. The motion detection object recognizes the gesture and voice of the virtual character based on physical correlations within the virtualized environment, thereby simulating the NUX (Natural User eXperience) settings for performing the output NUI command.
More specifically, in step S110, the simulator receives a command for a virtual character located in the virtualized environment. In other words, body movements, facial expressions, and voice can be input for the virtual character positioned in the virtualized environment. Here, body movements and facial expressions can be selected from those stored in advance in the simulator, or adjusted directly in real time through the character adjustment module. Likewise, voice can be supplied from recordings stored in advance in the simulator, or recorded and input through a microphone in real time.
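For illustration only, the following minimal Python sketch models this input step; the Command and CharacterAdjustmentModule names, fields, and method signatures are assumptions introduced here, not structures defined in the patent.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the S110 input step: a command for the virtual
# character combines body movement, facial expression, and voice, each of
# which may come from a pre-stored library or from real-time adjustment.
@dataclass
class Command:
    body_movement: str               # e.g. "raise_left_arm"
    facial_expression: str           # e.g. "smile"
    voice: Optional[bytes] = None    # raw audio, pre-recorded or live

class CharacterAdjustmentModule:
    """Feeds commands to a virtual character in the virtualized environment."""

    def __init__(self, stored_motions: dict, stored_voices: dict):
        self.stored_motions = stored_motions
        self.stored_voices = stored_voices

    def command_from_library(self, motion_key: str, face_key: str,
                             voice_key: str) -> Command:
        # Select pre-stored body movement, expression, and voice clips.
        return Command(self.stored_motions[motion_key], face_key,
                       self.stored_voices[voice_key])

    def command_from_live_input(self, motion: str, face: str,
                                mic_audio: bytes) -> Command:
        # Real-time adjustment: motion and expression set directly,
        # voice captured through a microphone.
        return Command(motion, face, mic_audio)
```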
In step S120, the simulator recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualization environment.
More specifically, the motion detection object is configured as a 3D sensor, a 2D camera, and a microphone. Recognizing the pattern of the virtual character involves recognizing the body motion and voice of the virtual character according to the command using the motion detection object, acquiring depth data, color data, and sound data, which are the object data, and then recognizing the pattern of the virtual character from the obtained combination of object data using element technology modules that distinguish the detailed motions of the virtual character. The element technology modules recognize the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognize the hand motion pattern of the virtual character through the depth data, and recognize the voice of the virtual character through the sound data. Hereinafter, the motion detection object will be described in detail with reference to FIG. 2.
FIG. 2 is a table showing motion detection objects capable of detecting motion according to operations of a virtual character according to another embodiment of the present invention.
More specifically, the motion detection object 22 is placed in the virtualized environment and, as shown in FIG. 2, senses the motions of the virtual character through its 3D sensor, 2D camera, and microphone.
Now, the pattern of the virtual character can be recognized using the element technology modules, which distinguish the detailed motions of the virtual character from the obtained combination of object data. Hereinafter, the element technology modules will be described in detail with reference to FIG. 3.
FIG. 3 is a table showing the object data that can be used by each element technology module according to another embodiment of the present invention.
More specifically, the depth data and the color data, which are part of the object data 32 obtained in FIG. 2, are used to recognize the body movement pattern and the face pattern of the virtual character through the body movement recognition module and the face recognition module among the element technology modules 31; the depth data is used to recognize the hand motion pattern through the hand gesture recognition module; and the sound data is used to recognize the voice of the virtual character through the voice recognition module.
The body movement recognition module and the hand gesture recognition module may be modules for recognizing a person's gestures. To recognize body motion and gestures, color data (color images of the human body captured by the 2D camera) or depth data (depth images captured by the 3D camera) are first obtained. A human shape may be extracted from the depth data and color data through image processing and then represented by a volumetric model or a skeletal model. The modeled data can then be compared against an existing gesture database through pattern matching to determine which gesture the person has performed.
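As a minimal sketch of this pattern-matching step, assuming a skeletal model given as per-frame 3D joint positions, the following compares an observed pose sequence against a small gesture database by mean joint distance; the gesture names, array shapes, and threshold are illustrative assumptions, and a production recognizer would use a more robust matcher (e.g. dynamic time warping or an HMM).

```python
import numpy as np

# Hypothetical sketch: match a skeletal pose sequence against a gesture
# database by average per-joint Euclidean distance.
def match_gesture(sequence: np.ndarray, database: dict, threshold: float = 0.15):
    """sequence: (frames, joints, 3) array of 3D joint positions extracted
    from depth/color data; database maps gesture names to same-shape arrays."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        n = min(len(sequence), len(template))   # naive length alignment
        dist = np.mean(np.linalg.norm(sequence[:n] - template[:n], axis=-1))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None  # None = no match

# Example: a one-frame, two-joint "raise_left_arm" template.
db = {"raise_left_arm": np.array([[[0.0, 1.5, 0.0], [0.3, 1.8, 0.0]]])}
observed = np.array([[[0.01, 1.49, 0.0], [0.31, 1.79, 0.02]]])
print(match_gesture(observed, db))  # -> "raise_left_arm"
```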
In addition, the face recognition module may be a module for extracting features from a person's face and identifying the person. To recognize a face, data describing the relative position, size, and shape of the eyes, nose, cheekbones, and jaw can be obtained from an image of the face captured by the 2D camera, and identity can be recognized through pattern matching against an existing face-information database. Alternatively, identity can be recognized by analyzing contour information from the eye sockets, nose, and chin through the 3D camera, which is a depth-sensing camera.
In addition, the voice recognition module may be a module for extracting words from a human voice signal. To perform voice recognition, the analog voice signal input through the microphone is first converted into a digital signal by sampling and quantization, and the converted signal is compared against stored phonemes by template matching to determine which phoneme was pronounced. The recognized phonemes can then be combined to identify a word.
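A minimal sketch of this pipeline, assuming a toy per-frame energy feature in place of real acoustic features, might quantize the signal and template-match phonemes as follows; every function name and parameter here is an illustrative assumption.

```python
import numpy as np

# Hypothetical sketch of phoneme template matching: quantize an analog-style
# signal in [-1, 1], compute a crude frame energy feature, and pick the
# nearest phoneme template. Real systems use MFCCs and probabilistic models.
def quantize(signal: np.ndarray, bits: int = 8) -> np.ndarray:
    levels = 2 ** bits
    return np.round((signal + 1.0) / 2.0 * (levels - 1)).astype(int)

def frame_features(samples: np.ndarray, frame: int = 160) -> np.ndarray:
    # Assumes the signal is at least one frame long.
    n = len(samples) // frame * frame
    frames = samples[:n].reshape(-1, frame).astype(float)
    return frames.std(axis=1)  # one energy-like feature per frame

def match_phoneme(samples: np.ndarray, templates: dict) -> str:
    """templates maps phoneme names to feature arrays built the same way."""
    feats = frame_features(quantize(samples))
    def dist(t):
        n = min(len(feats), len(t))
        return np.mean(np.abs(feats[:n] - t[:n]))
    return min(templates, key=lambda name: dist(templates[name]))
```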
Thereafter, the recognition results of the respective element technology modules can be output to the recognition result display window.
FIG. 4 is a diagram illustrating a recognition result display window according to another embodiment of the present invention; the recognition results of the respective element technology modules may be output to this window.
More specifically, each recognition result display window may output the result of recognizing the body movement, hand gesture, face, or voice pattern of the virtual character in the corresponding element technology module. Whenever an element technology module is additionally selected, a recognition result display window mapped to the added module may be added.
FIG. 4 shows windows for displaying the recognition results of the element technology modules, such as body movement, hand gesture, face, and voice recognition; the body motion recognition result window, for example, displays the recognized body movement pattern of the virtual character.
Meanwhile, if an error occurs in the pattern recognition of the virtual character using the motion detection object, an error message may be output to the recognition result display window. The process of outputting the error message will be described with reference to FIG. 5.
FIG. 5 is a flowchart illustrating a method of outputting an error message when a pattern recognition error occurs according to another embodiment of the present invention.
More specifically, when NUI command recognition is not performed normally because the motion detection object is not appropriately placed in the virtual environment, informational messages or warnings prompting the user to change the position of the motion detection object or the motion of the virtual character can be implemented through the simulator to solve the problem.
In step S510, it is determined whether there is any virtual character whose pattern recognition is not performed normally through the motion detection objects disposed in the virtual environment; if there is a virtual character for which pattern recognition fails, the process proceeds to step S520.
In step S520, the virtual character that failed pattern recognition in step S510 may be emphasized with a highlight graphic effect to distinguish it from virtual characters for which pattern recognition is performed normally.
In step S530, an error message may be generated in the recognition result window of the element technology module in which the error causing the pattern recognition failure occurred. For example, if body motion recognition for the virtual character fails, an error message may be output to the body motion recognition result display window, which is the result window of the body movement recognition module. Thereafter, the virtual character and the motion detection object may be adjusted in the virtualized environment, guiding the user to readjust the detailed settings of the motion detection object disposed in the virtualized environment.
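A rough sketch of this S510-S530 flow, assuming hypothetical character, module, and result-window interfaces not defined in the patent, could look like the following.

```python
# Hypothetical sketch of the S510-S530 error flow: for each virtual character,
# check every element technology module's recognition result, highlight the
# failing character, and post an error message to the matching result window.
def report_recognition_errors(characters, modules, result_windows):
    for character in characters:
        # S510: collect modules whose recognition failed for this character.
        failed = [m for m in modules if m.recognize(character) is None]
        if not failed:
            continue
        character.highlight = True            # S520: visual emphasis
        for module in failed:                 # S530: per-module error message
            result_windows[module.name].show_error(
                f"{module.name}: pattern recognition failed for "
                f"{character.name}; readjust the motion detection object.")
```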
The description now returns to FIG. 1 and the steps following step S120.
In step S130, the simulator outputs the NUI command for the recognized pattern using the mapping information between virtual character patterns and NUI commands.
Step S130 will be described in detail with reference to FIG. 6 below.
FIG. 6 is a diagram illustrating a table in which motion recognition patterns and NUI commands are mapped according to another embodiment of the present invention. Referring to FIG. 6, mapping may be performed such that one pattern of the virtual character corresponds to one NUI command, or such that a combination of a plurality of patterns corresponds to one NUI command.
More specifically, for example, the pattern of 'lifting the left or right arm' is detected through the motion detection object, and the NUI command mapped to that pattern in the mapping table is output.
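A minimal sketch of such a mapping table, with hypothetical pattern and command names, shows how one NUI command can be keyed either by a single pattern or by a combination of patterns, as described above.

```python
# Hypothetical sketch of the FIG. 6 mapping: NUI commands keyed by a single
# recognized pattern or by a combination of patterns (frozenset keys let one
# command require several simultaneous patterns).
NUI_COMMANDS = {
    frozenset({"lift_left_arm"}): "MOVE_POINTER_LEFT",
    frozenset({"lift_right_arm"}): "MOVE_POINTER_RIGHT",
    # One command mapped to a combination of patterns:
    frozenset({"open_palm", "say_play"}): "PLAY_CONTENT",
}

def lookup_nui_command(recognized_patterns: set):
    """Return the NUI command whose required patterns are all recognized."""
    for required, command in NUI_COMMANDS.items():
        if required <= recognized_patterns:   # subset test
            return command
    return None

print(lookup_nui_command({"open_palm", "say_play", "smile"}))  # PLAY_CONTENT
```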
Thereafter, the NUX (Natural User eXperience) setting for executing the output NUI command can be simulated. In other words, the NUX device object can simulate various NUX devices capable of providing an NUX to the virtual character located within the virtualized environment, such as a beam projector or directional speakers. The NUX device object simulates the visual and audible NUX output from the service content, and the simulation user can place or orient the NUX device object at a desired location within the virtualized environment through the simulator.
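For illustration, a hypothetical NUX device object with a position and orientation that the simulator user can adjust might be sketched as follows; the class name, fields, and methods are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of an NUX device object: a simulated output device
# (beam projector, directional speaker, ...) that the simulator user can
# place and orient anywhere in the virtualized environment.
@dataclass
class NUXDeviceObject:
    kind: str           # e.g. "beam_projector", "directional_speaker"
    position: tuple     # (x, y, z) in the virtual space
    direction: tuple    # unit vector the device points along

    def place(self, position: tuple, direction: tuple) -> None:
        self.position, self.direction = position, direction

    def render_output(self, nui_command: str) -> str:
        # Simulate the visual/audible NUX output triggered by an NUI command.
        return f"{self.kind} at {self.position} plays output for {nui_command}"

projector = NUXDeviceObject("beam_projector", (0, 2.5, 3), (0, -0.5, -1))
projector.place((1, 2.5, 3), (0, -0.4, -1))   # user repositions the device
print(projector.render_output("PLAY_CONTENT"))
```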
FIG. 7 is a block diagram illustrating an apparatus for implementing an NUI in a virtualized environment according to an embodiment of the present invention. The NUI and NUX simulation apparatus 75 includes an input unit 76, a processing unit 77, and an output unit 78.
In an apparatus for implementing an NUI through which a user can directly interact with a system in a virtualized environment, the input unit 76 receives a command for a virtual character located in the virtualization environment.
The processing unit 77 recognizes the pattern of the virtual character according to the command using the motion detection object located in the virtualization environment, and searches for the recognized pattern using the mapping information between virtual character patterns and NUI commands.
The output unit 78 outputs the NUI command for the retrieved pattern.
In addition, the motion detection object simulates the NUX setting for performing the output NUI command by recognizing the gesture and voice of the virtual character based on physical correlations in the virtualization environment.
In addition, the processing unit 77 may acquire depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and may recognize the pattern of the virtual character from the combination of the obtained object data using the element technology modules described above.
In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element technology module, and when an error occurs in the pattern recognition of any element technology module, an error message is generated in the recognition result window of that module so that the detailed settings of the motion detection object can be readjusted.
Also, the input unit 76 may receive body movements, facial expressions, and voice through the character adjustment module for the virtual character positioned in the virtualization environment.
The mapping information may be mapped such that one pattern of the virtual character corresponds to one NUI command, or such that a combination of a plurality of patterns of the virtual character corresponds to one NUI command.
FIG. 8 is a flowchart illustrating a process of implementing an NUI in a virtualized environment and providing an NUX capable of executing NUI commands according to another embodiment of the present invention. Each step shown in FIG. 8 corresponds to one of the processes of FIGS. 1 to 6; therefore, to avoid duplicate explanations, the process is outlined here only briefly.
The process involves the simulator 51, the simulator user 52, the virtual character adjustment module 53, the NUI command table 56, the NUX device object 57, the service content developer 58, and the service content 59; their interactions follow the steps already described with reference to FIGS. 1 to 6.
FIG. 9 is a diagram showing a screen configuration of a simulator according to another embodiment of the present invention and the application services that can be developed through the simulator. Each screen element in FIG. 9 corresponds to one of the processes of FIGS. 1 to 6; therefore, to avoid duplicate explanations, each element is outlined here only briefly.
The simulator screen includes, among other elements, the spatial configuration of the virtualized environment, the NUI and NUX device objects, the virtual character, the element technology modules, the recognition result display windows, the 2D camera and 3D camera reference views, the placed motion detection objects, the virtual character body movement controls, the service content, and the NUI command analysis results.
FIG. 10 is a diagram showing a result display window of an NUI command according to another embodiment of the present invention. Since the motion patterns recognized by the respective element technology modules can be combined to correspond to one NUI command, the body movement, hand gesture, face, and voice patterns can together constitute a single command; to execute a specific NUI command of the service content, the virtual character must satisfy all four specific patterns, namely body movement, hand gesture, face, and voice.
For example, in FIG. 10, when the NUI command interpretation result shows that all four patterns are recognized, the corresponding NUI command of the service content can be executed.
An application service developed through the simulator can be saved as a single file, and the saved file can later be loaded by the simulator to restore the application service to its state at the time of saving.
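A minimal sketch of such save/restore behavior, assuming a JSON file layout that the patent does not specify, could look like the following.

```python
import json

# Hypothetical sketch of saving/restoring an application service as a single
# file. The field names are illustrative; the patent only states that the
# saved file can later be loaded to restore the service.
def save_application_service(path: str, service: dict) -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(service, f, indent=2)

def load_application_service(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

service = {
    "characters": [{"name": "avatar1", "position": [0, 0, 0]}],
    "motion_detection_objects": [{"kind": "3d_sensor", "position": [0, 2, 3]}],
    "nui_command_table": {"lift_left_arm": "MOVE_POINTER_LEFT"},
    "nux_devices": [{"kind": "beam_projector", "position": [1, 2.5, 3]}],
}
save_application_service("service.nui.json", service)
assert load_application_service("service.nui.json") == service  # round-trip
```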
Meanwhile, the embodiments of the present invention can be embodied as computer-readable code on a computer-readable recording medium. A computer-readable recording medium includes all kinds of recording devices in which data readable by a computer system is stored.
Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, as well as carrier waves (for example, transmission via the Internet). The computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present invention can be easily deduced by programmers skilled in the art to which the present invention belongs.
The present invention has been described above with reference to various embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.
22: motion detection object
31: element technology module
51: simulator
52: simulator user
53: virtual character adjustment module
56: NUI command table
57: NUX device object
58: service content developer
59: service content
75: NUI and NUX simulation apparatus
76: input unit
77: processing unit
78: output unit
Claims (12)
Receiving a command for a virtualization character located in the virtualization environment;
Recognizing a pattern of at least one of a body movement, a face, a hand gesture, and a voice of the virtual character according to the command using a motion detection object located in the virtualization environment; And
Outputting, by the simulator, the NUI command corresponding to the recognized pattern of the virtual character using mapping information in which one pattern, or a combination of a plurality of patterns, of the virtual character's behavior corresponds to one NUI command representing the meaning of the pattern; And
Wherein the motion detection object recognizes the body movements and voice of the virtual character based on physical correlations in the virtualization environment, thereby simulating an NUX setting for placing or reorienting a Natural User eXperience (NUX) device object in the virtualization environment to perform the output NUI command.
Wherein the motion detection object comprises a 3D sensor, a 2D camera, and a microphone,
Wherein the step of recognizing the pattern of the behavior of the virtual character includes:
Acquiring depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object; And
Recognizing a pattern of the virtual character using an element technology module capable of distinguishing the detailed motions of the virtual character from one or a combination of two or more of the obtained object data,
Wherein the element technology module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.
Wherein the recognized pattern is output to the recognition result display window as the recognition result of each element technology module, and when an error occurs in the pattern recognition of any element technology module, an error message is generated in the recognition result window of that module so that the detailed settings of the motion detection object can be readjusted.
Wherein the step of receiving the command for the virtualization character comprises:
Inputting body movements, facial expressions, and voice through a character adjustment module for the virtual character positioned within the virtualization environment.
Wherein the virtualization character and the motion detection object are arranged to be position-adjustable within the virtualization environment.
An input unit for receiving a command for a virtualized character located in the virtualization environment;
A processing unit for recognizing a pattern of at least one of a body movement, a face, a hand gesture, and a voice of the virtual character according to the command using a motion detection object located in the virtualization environment, and for searching for the recognized pattern of the virtual character using mapping information in which one pattern, or a combination of a plurality of patterns, of the virtual character's behavior corresponds to one NUI command representing the meaning of the pattern; And
An output unit for outputting the NUI command corresponding to the retrieved pattern,
Wherein the motion detection object recognizes the body movements and voice of the virtual character based on physical correlations within the virtualization environment, thereby simulating an NUX setting for placing or orienting the NUX device object in the virtualization environment to perform the output NUI command.
Wherein the processing unit acquires depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and recognizes a pattern of the virtual character from one or a combination of two or more of the obtained object data using an element technology module capable of distinguishing the detailed motions of the virtual character,
Wherein the element technology module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.
Wherein the recognized pattern is output to the recognition result display window as the recognition result of each element technology module, and when an error occurs in the pattern recognition of any element technology module, an error message is generated in the recognition result window of that module so that the detailed settings of the motion detection object are reset.
Wherein the input unit receives body movements, facial expressions, and voice through a character adjustment module for the virtual character positioned within the virtualization environment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130130792A KR101428014B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus and method for simulating NUI and NUX in virtualized environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020130130792A KR101428014B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus and method for simulating NUI and NUX in virtualized environment |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101428014B1 true KR101428014B1 (en) | 2014-08-07 |
Family
ID=51749920
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020130130792A KR101428014B1 (en) | 2013-10-31 | 2013-10-31 | Apparatus and method for simulating NUI and NUX in virtualized environment |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101428014B1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008015942A (en) | 2006-07-07 | 2008-01-24 | Sony Computer Entertainment Inc | User interface program, device and method, and information processing system |
JP2013037454A (en) | 2011-08-05 | 2013-02-21 | Ikutoku Gakuen | Posture determination method, program, device, and system |
KR20130057390A (en) * | 2011-11-23 | 2013-05-31 | 삼성전자주식회사 | Method and apparatus for processing virtual world |
KR20130111234A (en) * | 2010-06-21 | 2013-10-10 | 마이크로소프트 코포레이션 | Natural user input for driving interactive stories |
Legal Events
Date | Code | Title | Description
---|---|---|---
| E701 | Decision to grant or registration of patent right | |
| GRNT | Written decision to grant | |
| FPAY | Annual fee payment | Payment date: 20170802; Year of fee payment: 4 |
| FPAY | Annual fee payment | Payment date: 20180801; Year of fee payment: 5 |
| FPAY | Annual fee payment | Payment date: 20190801; Year of fee payment: 6 |