KR101428014B1 - Apparatus and method for simulating NUI and NUX in virtualized environment - Google Patents

Apparatus and method for simulating NUI and NUX in virtualized environment Download PDF

Info

Publication number
KR101428014B1
KR101428014B1 (application KR1020130130792A)
Authority
KR
South Korea
Prior art keywords
pattern
virtual character
nui
command
character
Prior art date
Application number
KR1020130130792A
Other languages
Korean (ko)
Inventor
조경은
엄기현
조성재
Original Assignee
동국대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 동국대학교 산학협력단 filed Critical 동국대학교 산학협력단
Priority to KR1020130130792A priority Critical patent/KR101428014B1/en
Application granted granted Critical
Publication of KR101428014B1 publication Critical patent/KR101428014B1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines

Abstract

A method and apparatus for simulating NUI and NUX in a virtualized environment are disclosed. A simulator receives a command for a virtual character located in a virtualization environment, recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualization environment, and outputs the NUI command for the recognized pattern based on mapping information between virtual character patterns and NUI commands. The motion sensing object recognizes the gesture and voice of the virtual character based on a physical correlation within the virtualization environment, thereby simulating a NUX setting for performing the output NUI command.

Description

[0001] Apparatus and method for simulating NUI and NUX in a virtualized environment

The present invention relates to a method for simulating NUI and NUX in a virtualized environment, and more particularly, to a method for simulating NUI and NUX by sensing the operation of a virtualized character in a virtualized environment.

The NUI (Natural User Interface), a futuristic interface, allows humans to interact with a system directly through the body without additional devices such as a keyboard or mouse. The term NUI was originally proposed by Steve Mann in the 1970s; he used it to describe an interface that reflects the real world as an alternative to the CLI (Command Line Interface) and the GUI (Graphical User Interface), and introduced the terms DUI (Direct User Interface) and MFC (Metaphor-Free Computing) with the same concept. Prototypes for NUI had already been developed at that time. In the 'Put That There' project conducted at the MIT Lab in 1979, a prototype was developed in which a user pointed at the screen with a finger, gave commands by voice, and interacted with the system. Since then, NUI attracted attention through the movie 'Minority Report', in which the protagonist manipulates various data on a 3D screen with hand movements. In 2007, Apple's iPhone succeeded greatly and multi-touch technology became popular. More recently, as the Kinect was released, the gesture interface that had been seen only in movies was commercialized, and the term NUI began to spread. Now that gesture functions have been introduced to smartphones, research on NUI is proceeding rapidly.

As such, the NUI has a very low learning cost because it operates on natural behavior inherent in people, as the word 'Natural' in its name suggests. It is therefore far easier to use than the CLI, which was used only by experts in the past, and the GUI, which requires some practice to use skillfully.

In addition, the WIMP (Window-Icon-Menu-Pointer) method represented by the GUI requires widgets for manipulation. To display widgets, space must be allocated on a screen of limited size, and users must make unnecessary efforts such as repeated clicking. The NUI, on the other hand, does not use an indirect input device such as a mouse or keyboard; it uses the body itself as an input device through a sensor or camera to manipulate content directly. A user can thus feel that he or she directly dominates and manipulates the technology, and the UX (User eXperience) can be improved.

On the other hand, the prior art cited below proposes a user experience-based motion recognition technology for multimedia content control. However, when the NUI presented in the prior art document is implemented in a real environment, time and space limitations arise and a large cost is incurred to build the NUI. There is also the serious disadvantage that additional costs are incurred to adjust the motion sensors in the actual environment.

From this point of view, it can be seen that technical measures are needed to implement various types of NUI and NUX through a simulator in a virtual environment without installing a high-cost motion sensor in a real environment.

(Non-Patent Document 1) User experience-based motion recognition technology for multimedia content control, Korea Multimedia Society.

Therefore, a first object of the present invention is to provide a method and apparatus for recognizing an operation of a virtual character using a motion sensing object through a simulator in a virtualized environment, performing NUI commands, and simulating NUX settings for executing the NUI commands.

A second object of the present invention is to provide an apparatus capable of recognizing an operation of a virtual character using a motion sensing object in a virtualized environment, performing an NUI command, and simulating a NUX setting for performing an NUI command.

It is another object of the present invention to provide a computer-readable recording medium storing a program for causing a computer to execute the above-described method.

In order to achieve the first object of the present invention, there is provided a method for implementing a NUI (Natural User Interface) in which a user can directly interact with a system in a virtualized environment, the method comprising: receiving, by a simulator, a command for a virtual character located in the virtualization environment; recognizing a pattern of the virtual character according to the command using a motion detection object located in the virtualization environment; and outputting the NUI command for the recognized pattern using mapping information between virtual character patterns and NUI commands, wherein the motion sensing object recognizes the gesture and voice of the virtual character based on a physical correlation in the virtual environment, thereby simulating a NUX setting for performing the output NUI command.

According to an embodiment of the present invention, the motion sensing object is configured as a 3D sensor, a 2D camera, and a microphone, and the step of recognizing the pattern of the virtual character may include: acquiring depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion sensing object; and recognizing a pattern of the virtual character using an element description module capable of distinguishing detailed operations of the virtual character from a combination of the obtained object data, wherein the element description module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.

In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element description module, and when an error occurs in the pattern recognition of any element description module, an error message may be generated in the corresponding recognition result window, thereby guiding re-adjustment of the detailed setting of the motion detection object.

According to another embodiment of the present invention, the step of receiving a command for the virtual character may include inputting a gesture, a facial expression, and a voice through a character adjustment module to a virtual character positioned in the virtualization environment.

According to another embodiment of the present invention, the virtual character and the motion sensing object may be arranged to be position-adjustable within the virtualization environment.

According to another embodiment of the present invention, the mapping information may be such that one NUI command is mapped to one pattern of the virtual character, or one NUI command is mapped to a combination of a plurality of patterns of the virtual character.

In order to achieve the second object of the present invention, there is provided an apparatus for implementing a NUI in which a user can directly interact with a system in a virtualized environment, the apparatus comprising: an input unit for receiving a command for a virtual character located in the virtualization environment; a processing unit for recognizing the pattern of the virtual character according to the command using a motion sensing object located in the virtualization environment and searching for the recognized pattern using mapping information between virtual character patterns and NUI commands; and an output unit for outputting the NUI command for the retrieved pattern, wherein the motion sensing object recognizes the gesture and voice of the virtual character based on a physical correlation in the virtualization environment, thereby simulating a NUX setting for performing the output NUI command.

According to an embodiment of the present invention, the processing unit may acquire depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and may recognize a pattern of the virtual character using an element description module capable of distinguishing detailed operations of the virtual character from a combination of the obtained object data, wherein the element description module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.

In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element description module, and when an error occurs in the pattern recognition of any element description module, an error message may be generated in the corresponding recognition result window so that the detailed setting of the motion detection object can be re-adjusted.

According to another embodiment of the present invention, the input unit may be a device for inputting body movements, facial expressions, and voices through a character adjustment module for a virtual character positioned in the virtual environment.

According to another embodiment of the present invention, the mapping information may be such that one NUI command is mapped to one pattern of the virtual character, or one NUI command is mapped to a combination of a plurality of patterns of the virtual character.

According to the present invention, it is possible to simulate the NUX setting for executing the output NUI command by adjusting the virtual character in the virtualization environment, recognizing its pattern using the arranged motion detection object, and outputting the NUI command.

According to another aspect of the present invention, object data can be acquired through the motion sensing object, and a pattern of the virtual character can be recognized from a combination of the object data.

Furthermore, the pattern recognition result for the virtual character is output to the recognition result display window, and an error message is generated when a recognition error occurs, so that the detailed setting of the motion detection object can be re-adjusted.

FIG. 1 is a flowchart illustrating a method for implementing an NUI in a virtualization environment according to an exemplary embodiment of the present invention.
FIG. 2 is a table showing motion detection objects capable of detecting motion according to operations of a virtual character according to another embodiment of the present invention.
FIG. 3 is a table showing object data that can be used according to an element technology module according to another embodiment of the present invention.
FIG. 4 is a diagram illustrating a recognition result display window according to another embodiment of the present invention.
FIG. 5 is a flowchart illustrating a method of outputting an error message when a pattern recognition error occurs according to another embodiment of the present invention.
FIG. 6 is a diagram illustrating a table in which a motion recognition pattern and an NUI command are mapped according to another embodiment of the present invention.
FIG. 7 is a block diagram illustrating an apparatus for implementing an NUI in a virtualization environment according to an embodiment of the present invention.
FIG. 8 is a diagram illustrating a process of implementing a NUI in a virtualization environment according to another embodiment of the present invention and providing a NUX capable of executing an NUI command.
FIG. 9 is a diagram showing a screen configuration of a simulator according to another embodiment of the present invention and application services that can be developed through the simulator.
FIG. 10 is a diagram illustrating a result display window of an NUI command according to another embodiment of the present invention.

Before explaining the embodiments of the present invention, the technical means adopted by the embodiments to solve the problems of the existing NUI implementation method will be outlined.

While the conventional general user interface (UI) is a concept in which a user transmits a signal to a computer using a separate input device such as a keyboard or a mouse, the NUI (Natural User Interface) is a concept in which a user placed in a specific space can interact with a computer, without such devices, through recognition of the user's gestures, hand movements, face, voice, and so on. Because the NUI operates on natural human behavior, its learning cost is very low; it is therefore far easier to use than the CLI, which was used only by experts in the past, and the GUI, which requires some practice to use skillfully. However, the NUI requires expensive motion sensors capable of detecting human motion, and there is the serious drawback that unnecessary time is consumed in setting the optimal position and direction of the motion detection sensors.

Therefore, the present invention proposes a technical means for implementing various types of NUI and NUX in a virtualized environment through a simulator, so that human motion can be detected and the optimal position and direction of the motion detection sensors can be simulated in advance, without installing high-cost motion detection sensors in a real environment.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the following description and the accompanying drawings, detailed description of well-known functions or constructions that may obscure the subject matter of the present invention will be omitted. It should be noted that the same constituent elements are denoted by the same reference numerals as possible throughout the drawings.

FIG. 1 is a flowchart illustrating a method for implementing an NUI in a virtualization environment according to an embodiment of the present invention. In the method for implementing a NUI (Natural User Interface) in which a user can directly interact with a system in a virtual environment, a simulator receives a command for a virtual character located in the virtualization environment, recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualization environment, and outputs the NUI command for the recognized pattern using mapping information between virtual character patterns and NUI commands. The motion sensing object recognizes the gesture and voice of the virtual character based on a physical correlation in the virtualization environment, thereby simulating the NUX (Natural User eXperience) setting for performing the output NUI command.

More specifically, in step S110, the simulator receives a command for a virtual character located in the virtualization environment. In other words, body movement, facial expression, and voice can be input for the virtual character positioned in the virtualization environment. Here, the body movement and the facial expression can be input by selecting body movements and facial expressions stored in advance in the simulator, or can be adjusted directly in real time through the character adjustment module. The voice can be input by selecting a voice stored in advance in the simulator, or can be recorded and input through a microphone in real time.
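As an illustration only, the following Python sketch shows one way such a command could be represented and handed to a character adjustment module; the class `CharacterCommand` and the methods `play_body_motion`, `set_expression`, and `speak` are hypothetical names introduced here, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CharacterCommand:
    """A command issued to a virtual character in the simulator (hypothetical shape)."""
    body_motion: Optional[str] = None        # e.g. a pre-stored animation name such as "raise_right_arm"
    facial_expression: Optional[str] = None   # e.g. a pre-stored expression name such as "smile"
    voice_clip: Optional[bytes] = None        # raw audio, pre-recorded or captured from a microphone

def apply_command(character, command: CharacterCommand) -> None:
    """Forward each non-empty part of the command to the character adjustment module."""
    if command.body_motion is not None:
        character.play_body_motion(command.body_motion)
    if command.facial_expression is not None:
        character.set_expression(command.facial_expression)
    if command.voice_clip is not None:
        character.speak(command.voice_clip)
```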

In step S120, the simulator recognizes a pattern of the virtual character according to the command using a motion detection object located in the virtualization environment.

More specifically, the motion sensing object is configured as a 3D sensor, a 2D camera, and a microphone. Recognizing the pattern of the virtual character includes recognizing the body motion and voice of the virtual character according to the command using the motion sensing object to obtain depth data, color data, and sound data, which are object data, and then recognizing the pattern of the virtual character using an element technology module that distinguishes the detailed operations of the virtual character from the combination of the obtained object data. The element technology module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data. Hereinafter, the motion sensing object will be described in detail with reference to FIG. 2.

FIG. 2 is a table showing motion detection objects capable of detecting motion according to operations of a virtual character according to another embodiment of the present invention.

More specifically, the motion sensing objects 22 are 3D sensor, 2D camera, and remote microphone objects for sensing the operation and voice of the virtual character in the simulator, and can detect the operations of the virtual character. Among the virtual character operations 21, body motion is detected using the 3D sensor and the 2D camera among the motion detection objects 22; hand motion is detected using the 3D sensor; the face is detected using the 3D sensor and the 2D camera; and the voice is detected using the remote microphone. Here, depth data is input through the 3D sensor, color data is input through the 2D camera, and sound data is input through the remote microphone.

Now, the pattern of the virtual character can be recognized using the element technology module, which can distinguish the detailed operations of the virtual character from the combination of the obtained object data. Hereinafter, the element technology module will be described in detail with reference to FIG. 3.

3 is a table showing object data that can be used according to an element technology module according to another embodiment of the present invention.

More specifically, the depth data and the color data, which are the object data 32 input in FIG. 2, are used by the body movement recognition module and the face recognition module among the element technology modules 31 to recognize the body movement pattern and the face pattern of the virtual character; the depth data are used by the hand motion recognition module to recognize the hand movement pattern of the virtual character; and the sound data are used by the voice recognition module to recognize the voice pattern of the virtual character.
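A minimal sketch of this routing, assuming the data-to-module pairing of FIG. 3; the dictionary `MODULE_INPUTS` and the function `route_object_data` are illustrative names introduced here rather than elements of the patent.

```python
# Which object data each element technology module consumes (per FIG. 3).
MODULE_INPUTS = {
    "body_motion_recognition": {"depth", "color"},
    "face_recognition":        {"depth", "color"},
    "hand_motion_recognition": {"depth"},
    "voice_recognition":       {"sound"},
}

def route_object_data(object_data: dict) -> dict:
    """Hand each module only the object data types it uses, e.g. {'depth': ..., 'color': ..., 'sound': ...}."""
    available = set(object_data)
    routed = {}
    for module, needed in MODULE_INPUTS.items():
        if needed <= available:                 # only run a module when all of its inputs are present
            routed[module] = {kind: object_data[kind] for kind in needed}
    return routed
```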

The body motion recognition module and the hand motion recognition module may be modules for recognizing a person's gestures. Here, in order to recognize body motion and hand motion, color data, which are color images obtained by photographing the human body with the 2D camera, or depth images obtained with the 3D sensor, can first be acquired. A human shape may be extracted from the depth data and the color data through image processing and then represented by a volumetric model or a skeletal model. The modeled data can then be used to determine which gesture the person has performed, through pattern matching against an existing gesture database.
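The following is a deliberately simplified sketch of such template matching on a skeletal representation, assuming a pose has already been reduced to a short vector of joint angles; the database contents, angle encoding, and threshold are invented for illustration only.

```python
import math

# Hypothetical gesture database: gesture name -> reference joint-angle vector (degrees).
GESTURE_DB = {
    "raise_both_arms": [170.0, 170.0, 10.0, 10.0],
    "raise_right_arm": [170.0, 20.0, 10.0, 10.0],
}

def match_gesture(joint_angles, threshold=25.0):
    """Return the closest gesture in the database, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in GESTURE_DB.items():
        dist = math.dist(joint_angles, template)   # Euclidean distance in joint-angle space
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Example: a pose close to the "raise_right_arm" template is matched to that gesture.
print(match_gesture([165.0, 25.0, 12.0, 8.0]))
```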

In addition, the face recognition module may be a module for extracting features from a person's face and identifying the person. Here, in order to recognize the face, data obtained by analyzing the relative position, size, and shape of the eyes, nose, cheekbones, and chin can be obtained from the image of the face captured by the 2D camera, and identity can be recognized through pattern matching against an existing face information database. Alternatively, identity can be recognized by analyzing contour information from the surfaces of the eye sockets, nose, and chin through the 3D sensor, which is a depth recognition camera.

In addition, the speech recognition module may be a module for extracting words from a human voice signal. In order to perform voice recognition, the analog voice signal input through the microphone is first converted into a digital signal by sampling and quantization, and the converted digital signal is compared against phoneme templates by a template matching method to determine which phoneme was pronounced. The recognized phonemes can then be combined to identify a word.
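A toy sketch of the pipeline just described, assuming the digitized signal has already been reduced to short feature frames; the phoneme templates and the lexicon are fabricated placeholders, not real acoustic data.

```python
import numpy as np

# Hypothetical phoneme templates: phoneme -> short reference feature vector.
PHONEME_TEMPLATES = {
    "k": np.array([0.9, 0.1, 0.2]),
    "o": np.array([0.2, 0.8, 0.3]),
    "f": np.array([0.7, 0.2, 0.6]),
    "i": np.array([0.1, 0.9, 0.5]),
}
WORD_LEXICON = {("k", "o", "f", "i"): "coffee"}   # phoneme sequence -> word

def recognize_word(frames):
    """Map each feature frame to its nearest phoneme template, then look the sequence up."""
    phonemes = []
    for frame in frames:
        nearest = min(PHONEME_TEMPLATES,
                      key=lambda p: np.linalg.norm(frame - PHONEME_TEMPLATES[p]))
        if not phonemes or phonemes[-1] != nearest:   # collapse repeated frames of the same phoneme
            phonemes.append(nearest)
    return WORD_LEXICON.get(tuple(phonemes))          # None if the sequence is not a known word
```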

Thereafter, the results can be output to the recognition result display window as the recognition results of the respective element technology modules 31. Hereinafter, the recognition result display window will be described in detail with reference to FIG. 4.

FIG. 4 is a diagram illustrating a recognition result display window according to another embodiment of the present invention; the recognition results of the respective element technology modules may be output to the recognition result display window.

More specifically, each recognition result display window may output the result of recognizing the body movement, hand movement, face, and voice patterns of the virtual character in the corresponding element technology module. A recognition result display window may be added, mapped to the newly added element technology module, whenever an element technology module is additionally selected.

FIG. 4 shows windows for displaying recognition results according to the element technology modules for body motion, hand motion, face, and voice recognition. Here, the body motion recognition result display window 41 displays the motion of each joint of the virtual character through a 3D skeleton model and outputs, as text, the body motion currently performed by the virtual character. The hand motion recognition result display window 42 outputs the movement of the hand joints of the virtual character through a 3D skeleton model and outputs, as text, the hand motion currently performed by the virtual character. The face recognition result display window 43 outputs the face portion of the virtual character as a 2D image and outputs the age, sex, and facial shape of the virtual character as text. The speech recognition result display window 44 may output the sound produced by the virtual character as a sound wave graph and output the recognized speech as text.

Meanwhile, if an error occurs in the pattern recognition of the virtual character using the motion detection object, an error message may be output to the recognition result display window. The process of outputting the error message will be described with reference to FIG. 5.

5 is a flowchart illustrating a method of outputting an error message when a pattern recognition error occurs according to another embodiment of the present invention.

More specifically, when NUI command recognition is not performed normally because the motion sensing object is not appropriately placed in the virtual environment, informational or warning messages prompting that the position of the motion sensing object or the motion of the user be changed to solve the problem can be implemented through the simulator.

In step S510, it is determined whether there is any virtual character whose pattern recognition is not performed normally through the motion detection objects disposed in the virtual environment. If there is a virtual character that fails pattern recognition, the process proceeds to step S520.

In step S520, the virtual character that failed pattern recognition in step S510 may be emphasized with a highlight graphic effect to distinguish it from the virtual characters whose pattern recognition is performed normally.

In step S530, an error message may be generated in the recognition result window of the element technology module in which the error causing the pattern recognition failure occurred. For example, if body motion recognition for the virtual character fails, an error message may be output to the body motion recognition result display window, which is the result window of the body motion recognition module. Thereafter, the virtual character and the motion sensing object may be adjusted in the virtualization environment, guiding the detailed setting of the motion sensing objects disposed in the virtualization environment.
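One possible shape for this error-reporting step, under the assumption that each element technology module can report whether it covers a given character; `can_recognize`, `highlight`, and `show_error` are hypothetical interfaces, not names from the patent.

```python
def report_recognition_errors(characters, modules, result_windows):
    """Flag characters that the current sensor placement cannot cover, and log an error per failing module."""
    for character in characters:
        failed = [m for m in modules if not m.can_recognize(character)]
        if failed:
            character.highlight(True)         # highlight graphic effect in the 3D space window (step S520)
            for module in failed:
                result_windows[module.name].show_error(
                    f"{module.name}: cannot recognize '{character.name}'; "
                    "re-adjust the position or orientation of the motion detection objects."
                )
        else:
            character.highlight(False)        # recognition is normal, remove any previous highlight
```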

Returning to FIG. 1, the steps following step S120 will now be described.

In step S130, the simulator outputs the NUI command for the recognized pattern using the virtual character pattern and the mapping information of the NUI command.

Step S130 will be described in detail with reference to FIG. 6 below.

FIG. 6 is a diagram illustrating a table in which motion recognition patterns and NUI commands are mapped according to another embodiment of the present invention. Referring to FIG. 6, mapping is performed so that one pattern of the virtual character corresponds to one NUI command, or so that a combination of a plurality of patterns of the virtual character corresponds to one NUI command.

More specifically, for example, when the pattern 'lifting the left or right arm' is detected through the body motion recognition module, the pattern 'opening two fingers of the right hand' is detected through the hand motion recognition module, and the voice pattern 'coffee' is detected through the voice recognition module, the three recognition patterns 55 are mapped to 'Order 2 cups of coffee' of the NUI command 60 through the table of FIG. 6, so the NUI command 60 'Order 2 cups of coffee' can be output. Likewise, since the recognition pattern 55 'raising both arms up and down', detected through the body motion recognition module, is mapped to 'Game character flight' of the NUI command 60, the NUI command 60 'Game character flight' can be output.
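A minimal sketch of such a mapping table and its lookup, using the two example commands above; the pattern labels are illustrative strings and the table layout is an assumption, not the patent's actual data structure.

```python
# Hypothetical NUI command table: a frozen set of recognized patterns -> one NUI command.
NUI_COMMAND_TABLE = {
    frozenset({"lift_left_or_right_arm", "open_two_fingers_right_hand", "voice:coffee"}):
        "Order 2 cups of coffee",
    frozenset({"raise_both_arms_up_and_down"}):
        "Game character flight",
}

def lookup_nui_command(recognized_patterns):
    """Return the NUI command whose mapped pattern combination is fully satisfied, if any."""
    observed = set(recognized_patterns)
    for combination, command in NUI_COMMAND_TABLE.items():
        if combination <= observed:            # every pattern of the combination was recognized
            return command
    return None

# Example: the three patterns of the first row yield the "Order 2 cups of coffee" command.
print(lookup_nui_command(["lift_left_or_right_arm", "open_two_fingers_right_hand", "voice:coffee"]))
```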

Thereafter, the NUX (Natural User eXperience) setting for executing the output NUI command can be simulated. In other words, the NUX device object can simulate various NUX devices capable of providing NUX to a virtual character located within the virtualization environment, such as a beam projector and directional speakers. The NUX device object simulates the visual and audible NUX output from the service content, and the simulation user can place or orient the NUX device object at a desired location within the virtualization environment through the simulator.
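As a rough sketch of what a NUX device object might carry, assuming only a position and a pointing direction matter for placement; the class name, fields, and `render_output` method are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class NUXDeviceObject:
    """A simulated NUX output device (e.g. beam projector or directional speaker)."""
    kind: str          # "beam_projector" or "directional_speaker"
    position: tuple    # (x, y, z) placement in the virtual space
    direction: tuple   # unit vector the device points along

    def render_output(self, content):
        # In the simulator this would draw the projected image or play the directed audio
        # toward whatever lies along `direction` from `position`; here we only log it.
        print(f"{self.kind} at {self.position} -> {content}")

# Example: a projector mounted above the virtual character, pointing downward.
projector = NUXDeviceObject("beam_projector", (0.0, 2.5, 1.0), (0.0, -1.0, 0.0))
projector.render_output("menu screen for 'Order 2 cups of coffee'")
```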

FIG. 7 is a block diagram illustrating an apparatus for implementing an NUI in a virtualization environment according to an embodiment of the present invention. The NUI and NUX simulation apparatus 75 includes components corresponding to each of the processes of FIGS. 1 to 6. Therefore, to avoid duplication of explanation, its functions are outlined with emphasis on the hardware components.

In the apparatus for implementing a NUI in which a user can directly interact with a system in a virtualized environment, the input unit 76 receives a command for a virtual character located in the virtualized environment.

The processing unit 77 recognizes the pattern of the virtual character according to the command using the motion detection object located in the virtualization environment, and searches for the recognized pattern using the mapping information between virtual character patterns and NUI commands.

The output unit 78 outputs the NUI command for the retrieved pattern.

In addition, the motion sensing object simulates a NUX setting for performing the output NUI command by recognizing the gesture and voice of the virtual character based on a physical correlation in the virtualization environment.

In addition, the processing unit 77 acquires depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and recognizes a pattern of the virtual character using an element description module that can distinguish the detailed operations of the virtual character from a combination of the object data. The element description module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.

In addition, the recognized pattern may be output to the recognition result display window as the recognition result of each element description module, and when an error occurs in the pattern recognition of any element description module, an error message is generated in the corresponding recognition result window so that the detailed setting of the motion detection object can be re-adjusted.

Also, the input unit 76 can input body movement, facial expression, and voice through the character adjustment module to the virtual character positioned in the virtual environment.

The mapping information may be mapped so that one pattern of the virtual character corresponds to one NUI command, or so that a combination of a plurality of patterns of the virtual character corresponds to one NUI command.

FIG. 8 is a diagram illustrating a process of implementing a NUI in a virtualization environment according to another embodiment of the present invention and providing a NUX capable of executing NUI commands. Each element in FIG. 8 corresponds to the processes of FIGS. 1 to 6. Therefore, to avoid duplication of explanation, each component is outlined with a brief description to help understand the overall process.

The simulator user 52 can construct a recalled reality space, which is the virtualization environment, by placing the virtual character 54, the motion detection objects 22, and the NUX device objects 57 in the 3D virtualization environment through the simulator 51, together with the element technology modules 31.

The service content developer 58 can develop NUI- and NUX-based service contents by constructing the NUI command table 56 corresponding to specific patterns of the virtual character 54 output from the simulator 51.

The motion sensing objects 22 may simulate various NUI devices capable of sensing the motion of the virtual character 54, such as a 3D sensor, a 2D camera, and a remote microphone. The simulator user 52 may place or orient the motion sensing objects 22 at desired locations within the virtualization environment through the simulator 51. In addition, the operation of the virtual character 54 can be detected by the various kinds of motion detection objects 22.

The element technology modules 31 can recognize the motions of the virtual character 54, such as gestures, by combining the data detected by the various motion detection objects 22 arranged in the virtual environment.

The NUX device object 57 may simulate various NUX devices capable of providing NUX to the virtualization character 54, such as a beam projector, directional speaker, and the like. The NUX device object 57 can simulate the visual and audible NUX output from the service content 59. Here, the simulation user 52 may arrange or orient the NUX device object 57 at a desired location within the virtualization environment through the simulator 51.

The service content 59 may be content for providing a virtual-environment-based service to a person in a real environment. The service content 59 is a concept including various services that receive NUI commands through patterns recognized from human actions and provide the corresponding NUX to the person. Accordingly, the service content developer 58 can develop the service content 59 in an external application and then link it with the simulator 51.

Also, the simulator 51 can simulate the function of the service content 59 on the virtual character 54 instead of an actual person. To enable the simulation, the service content interface transmits the recognition patterns 55 of the virtual character 54 recognized by the simulator 51 to the service content 59, and transmits the visual and auditory output contents back to the simulator 51. Here, the visual output contents are output through an object 57 that can display an image to the virtual character 54, such as a beam projector object, and the auditory output contents are output through an object 57 that can deliver sound to the virtual character 54, such as a directional speaker object.
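A sketch of one round trip over this service content interface, reusing the hypothetical `NUXDeviceObject` from the earlier sketch; `collect_recognized_patterns` and `handle_patterns` are assumed method names, and the split into "visual" and "audible" outputs follows the description above.

```python
def service_content_cycle(simulator, service_content, nux_devices):
    """One round trip over the service content interface, as described above."""
    # 1. The simulator forwards the recognized patterns of the virtual character.
    patterns = simulator.collect_recognized_patterns()
    # 2. The service content resolves them against its NUI command table and reacts.
    outputs = service_content.handle_patterns(patterns)   # e.g. {"visual": ..., "audible": ...}
    # 3. Visual output goes to projector-like objects, audible output to speaker-like objects.
    for device in nux_devices:
        if device.kind == "beam_projector" and outputs.get("visual"):
            device.render_output(outputs["visual"])
        if device.kind == "directional_speaker" and outputs.get("audible"):
            device.render_output(outputs["audible"])
```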

The service content developer 58 can also construct the NUI command table 56 by defining the NUI command 60 corresponding to each pattern to be used among the various recognition patterns 55 transmitted by the simulator 51 when the service content 59 is developed. At this time, by combining two or more recognition patterns 55 into one NUI command 60 in the NUI command table 56, various NUI commands 60 can be utilized. When the NUI command 60 is performed by the service content 59, the resulting visual and audible NUX results are delivered to the virtual character 54 and the simulator user 52 through the NUX device objects 57, such as a beam projector and a directional speaker.

In addition, the NUX device object 57 can output the contents of the different service contents 59. Accordingly, a variety of application services such as smart interior, smart cafe, and smart home can be developed by providing various service contents 59 in a virtual reality space, which is a virtual environment.

FIG. 9 is a diagram showing the screen configuration of a simulator according to another embodiment of the present invention and the application services that can be developed through the simulator. Each screen element in FIG. 9 corresponds to the processes of FIGS. 1 to 6. Therefore, to avoid duplication of explanation, each screen element is outlined with a brief description.

The 3D space window 83 allows the simulator user to visualize, in 3D, the space in which a virtual recalled reality can be constructed. The user can arrange or adjust spatial objects, virtual characters, and NUI and NUX device objects through the 3D space window 83. In addition, the user can confirm the visual output contents of the service content through the NUX device objects disposed in the 3D space window 83.

The spatial object list window 86 may display a list of objects, including walls and various furniture, for constructing the virtual physical recalled reality space. The user can select a spatial object to be arranged in the spatial object list window 86 and place it at a desired position through the 3D space window 83.

The NUI and NUX device object list window 84 may display a list of NUI and NUX device objects, such as a 3D sensor, a 2D camera, a remote microphone, a beam projector, and a directional speaker, for recognizing the body movement, hand movement, face, and voice of the virtual character and for providing NUX. The user can select the NUI and NUX device objects to be placed in the NUI and NUX device object list window 84 and place them at desired positions through the 3D space window 83.

The virtual character object list window 87 can display a list of virtual characters for simulating the behavior of a real person in the virtual recalled reality space, which is the virtualization environment. The user can select a virtual character object to be arranged in the virtual character object list window 87 and place it at a desired position through the 3D space window 83.

The element technology module list window 85 can display a list of modules for recognizing the body movement, hand movement, face, and voice of the virtual character. The user can select the element technology module to be set in the element technology module list window 85 and adjust the settings of each element technology module in the object property window 81.

The recognition result display window 88 for each element technology can visualize the result of recognizing the body movement, hand movement, face, and voice patterns of the virtual character in each element technology module. A result window may be added to the per-element-technology recognition result display window 88 whenever an element technology module is additionally selected, and a corresponding result screen is displayed according to the element technology module, such as body motion recognition, hand motion recognition, face recognition, and voice recognition. Here, the recognition result display window 88 for each element technology can display a warning message to the simulator user when the arranged NUI device objects cannot properly detect a virtual character existing in the 3D virtual space. In the case of the body motion recognition result display window, a warning message can be displayed on the corresponding window if there is a virtual character whose entire body cannot be detected by any of the 3D sensor objects and 2D camera objects disposed in the virtual space; in the case of the hand motion recognition result display window, if there is a hand of a virtual character that cannot be properly detected by any of the 3D sensor objects disposed in the virtual space; in the case of the face recognition result display window, if there is a face of a virtual character that cannot be correctly detected by any of the 3D sensor objects and 2D camera objects; and in the case of the speech recognition result display window, if there is a voice of a virtual character that cannot be properly detected by any of the remote microphones placed in the virtual space. In addition, the virtual character subject to the warning is emphasized through a highlight graphic effect in the 3D space window 83, allowing the simulator user to check which virtual character is not being detected properly. Thereafter, the user can make all virtual characters existing in the 3D virtual space correctly detectable by adjusting the position and direction of the previously disposed NUI device objects or by arranging additional device objects.

The 2D camera and 3D camera reference viewpoint display window 90 can visualize, in real time, the viewpoint from which each 2D camera or 3D sensor among the NUI and NUX device objects is capturing. The capturing viewpoint of the corresponding device may be added to the 2D camera and 3D camera reference viewpoint display window 90 whenever a 2D camera object or a 3D sensor object is additionally disposed. Through the 2D camera and 3D camera reference viewpoint display window 90, the simulator user can intuitively confirm whether a 2D camera object or a 3D sensor object is positioned where the virtual character can be correctly recognized.

The placed object list window 82 may display a list of all the spatial objects, NUI device objects, NUX device objects, and virtual character objects disposed in the space. The name of each object may be added to the placed object list window 82 whenever a spatial object, NUI device object, NUX device object, or virtual character object is added to the 3D space window 83. Here, if an object is clicked in the placed object list window 82, its properties can be adjusted in the object property window 81; if a property of the object is adjusted in the object property window 81, the change is reflected in real time and can be confirmed in the 3D space window 83.

The object property window 81 can adjust the properties of a specific object or element technology module. When an object to be adjusted is selected in the placed object list window 82, the list of adjustable properties of the object appears in the object property window 81; the user selects an attribute to be adjusted from the list and can change its value.

The virtual character body movement list window 94, the virtual character hand movement list window 93, and the virtual character voice list window 92 display lists of body movements, hand movements, and voices that the virtual characters can perform. Here, the face of a virtual character can be determined according to the virtual character selected in the virtual character object list window 87. Since it is the human body itself that operates the NUI, a function of simulating the virtual character like a real person is provided. In addition, the user can simulate a previously prepared behavior or voice of a virtual character by clicking, in the placed object list window 82, the virtual character that is to perform the action or voice and then clicking the desired item in the virtual character action or voice list.

The service content list window 91 may display a list of contents developed by external applications so that they can be output from output devices such as the virtual beam projector and the directional speaker. Here, the user can freely assign service contents to each NUX device object according to the characteristics of the application service to be developed: in the placed object list window 82, one of the beam projector and directional speaker objects is selected, and the service content to be output from the corresponding NUX device object can be specified. Also, whenever a service content is additionally designated, an NUI command analysis result window for that service content can be added to the per-service-content NUI command analysis result display window 89.

The per-service-content NUI command analysis result display window 89 can visualize the result of analyzing the operation patterns recognized by each element technology module into the NUI commands existing in the NUI command table of each service content. The NUI command analysis result display window for each service content will be described in detail with reference to FIG. 10.

FIG. 10 is a diagram showing a result display window of an NUI command according to another embodiment of the present invention. Since the operation patterns recognized by the respective element technology modules can be combined to correspond to one NUI command, an NUI command may be defined in terms of body movement, hand movement, face, and voice. In order to execute a specific NUI command of the service content, the virtual character must satisfy all four specific patterns of motion, namely body movement, hand movement, face, and voice.

For example, in FIG. 10, in the NUI command analysis result display window 71, in which the service content name is an augmented-reality-based menu, 'lifting the right arm' is recognized as the body motion recognition pattern, and 'eyes: long and small', 'nose: large and thin', 'mouth: large and thin', and 'jaw line: long and rounded' are recognized as the face recognition pattern, so it can be seen that 'Order #03, 2 cups of coffee' is output as the corresponding NUI command. In the NUI command analysis result display window 72, in which the service content name is augmented-reality-based web surfing, and in the NUI command analysis result display window 73, in which the service content name is an augmented-reality-based live performance, the recognized patterns do not correspond to any NUI command of the corresponding service content, so no analysis result is shown.

An application service developed through the simulator can be saved as a single file, and the saved file can later be loaded by the simulator to restore the application service to its state at the time of saving.

Meanwhile, the embodiments of the present invention can be embodied as computer readable codes on a computer readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored.

Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and also include media embodied in the form of a carrier wave (for example, transmission via the Internet). In addition, the computer-readable recording medium may be distributed over network-connected computer systems so that computer-readable code can be stored and executed in a distributed manner. Furthermore, functional programs, codes, and code segments for implementing the present invention can be easily deduced by programmers skilled in the art to which the present invention belongs.

The present invention has been described above with reference to various embodiments. It will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the disclosed embodiments should be considered in an illustrative rather than a restrictive sense. The scope of the present invention is defined by the appended claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present invention.

22: motion detection object 58: service content developer
31: element technology module 59: service content
51: Simulator 75: NUI and NUX simulation device
52: simulator user 76: input unit
53: virtual character adjustment module 77: processing unit
56: NUI command table 78: output unit
57: NUX device object

Claims (12)

A method for implementing a NUI (Natural User Interface) in which a user can directly interact with a system in a virtualized environment,
Receiving a command for a virtualization character located in the virtualization environment;
Recognizing a pattern of at least one of a body movement, a hand movement, a face, and a voice of the virtual character according to the command using the motion sensing object located in the virtualization environment; And
Outputting, by the simulator, the NUI command corresponding to the recognized pattern of the virtual character using mapping information in which one pattern or a combination of a plurality of patterns of the behavior of the virtual character is mapped to one NUI command representing the meaning of the pattern; And
Wherein the motion sensing object recognizes a gesture and voice of the virtual character based on a physical correlation in the virtualization environment, thereby simulating a NUX (Natural User eXperience) setting for placing or re-orienting a NUX device object in the virtualization environment to perform the output NUI command.
The method according to claim 1,
Wherein the motion sensing object is a 3D sensor, a 2D camera, and a microphone,
Wherein the step of recognizing the pattern of the behavior of the virtual character includes:
Acquiring depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object; And
Recognizing a pattern of the virtual character using an element description module capable of distinguishing a detailed operation of the virtual character from one or two or more combinations of the obtained object data,
Wherein the element description module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.
3. The method of claim 2,
Wherein the recognized pattern is output to the recognition result display window as the recognition result of each element description module, and when an error occurs in the pattern recognition of any element description module, an error message is generated in the recognition result window of that module so that the detailed setting of the motion detection object is re-adjusted.
The method according to claim 1,
Wherein the step of receiving a command for the virtual character comprises:
And inputting gestures, facial expressions, and voices through a character adjustment module for a virtual character positioned within the virtualization environment.
The method according to claim 1,
Wherein the virtualization character and the motion detection object are arranged to be position-adjustable within the virtualization environment.
delete
A computer-readable recording medium storing a program for causing a computer to execute the method according to any one of claims 1 to 5.
An apparatus for implementing a NUI in which a user can directly interact with a system in a virtualized environment, the apparatus comprising:
An input unit for receiving a command for a virtualized character located in the virtualization environment;
A processing unit for recognizing a pattern of at least one of a body movement, a hand movement, a face, and a voice of the virtual character according to the command using a motion sensing object located in the virtualization environment, and for searching for the recognized pattern of the virtual character using mapping information in which one pattern or a combination of a plurality of patterns of the behavior of the virtual character is mapped to one NUI command representing the meaning of the pattern; And
And an output unit outputting an NUI instruction corresponding to the searched pattern,
Wherein the motion sensing object recognizes gestures and voices of the virtual character based on a physical correlation within the virtualization environment, thereby simulating a NUX (Natural User eXperience) setting for placing or orienting the NUX device object in the virtualization environment to perform the output NUI command.
9. The apparatus of claim 8,
Wherein the processing unit acquires depth data, color data, and sound data, which are object data, by recognizing the gesture and voice of the virtual character according to the command using the motion detection object, and recognizes a pattern of the virtual character using an element description module that can distinguish the detailed operation of the virtual character from one or a combination of two or more of the obtained object data,
Wherein the element description module recognizes the body movement pattern and the face pattern of the virtual character through the depth data and the color data, recognizes the hand motion pattern of the virtual character through the depth data, and recognizes the voice of the virtual character through the sound data.
10. The apparatus of claim 9,
Wherein the recognized pattern is output to the recognition result display window as the recognition result of each element description module, and when an error occurs in the pattern recognition of any element description module, an error message is generated in the recognition result window of that module, thereby re-adjusting the detailed setting of the motion detection object.
11. The apparatus of claim 8,
Wherein the input unit inputs a gesture, a facial expression, and a voice through a character adjustment module for the virtual character positioned within the virtualization environment.
delete
KR1020130130792A 2013-10-31 2013-10-31 Apparatus and method for simulating NUI and NUX in virtualized environment KR101428014B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130130792A KR101428014B1 (en) 2013-10-31 2013-10-31 Apparatus and method for simulating NUI and NUX in virtualized environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130130792A KR101428014B1 (en) 2013-10-31 2013-10-31 Apparatus and method for simulating NUI and NUX in virtualized environment

Publications (1)

Publication Number Publication Date
KR101428014B1 true KR101428014B1 (en) 2014-08-07

Family

ID=51749920

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130130792A KR101428014B1 (en) 2013-10-31 2013-10-31 Apparatus and method for simulating NUI and NUX in virtualized environment

Country Status (1)

Country Link
KR (1) KR101428014B1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008015942A (en) 2006-07-07 2008-01-24 Sony Computer Entertainment Inc User interface program, device and method, and information processing system
JP2013037454A (en) 2011-08-05 2013-02-21 Ikutoku Gakuen Posture determination method, program, device, and system
KR20130057390A (en) * 2011-11-23 2013-05-31 삼성전자주식회사 Method and apparatus for processing virtual world
KR20130111234A (en) * 2010-06-21 2013-10-10 마이크로소프트 코포레이션 Natural user input for driving interactive stories

Similar Documents

Publication Publication Date Title
JP7411133B2 (en) Keyboards for virtual reality display systems, augmented reality display systems, and mixed reality display systems
US10664060B2 (en) Multimodal input-based interaction method and device
KR102413561B1 (en) Virtual user input controls in a mixed reality environment
CN107430437B (en) System and method for creating a real grabbing experience in a virtual reality/augmented reality environment
CN105518575B (en) With the two handed input of natural user interface
CN110716645A (en) Augmented reality data presentation method and device, electronic equipment and storage medium
WO2018098861A1 (en) Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
CN111897431B (en) Display method and device, display equipment and computer readable storage medium
US11373650B2 (en) Information processing device and information processing method
WO2018000519A1 (en) Projection-based interaction control method and system for user interaction icon
US10488918B2 (en) Analysis of user interface interactions within a virtual reality environment
CN110568929B (en) Virtual scene interaction method and device based on virtual keyboard and electronic equipment
KR102021851B1 (en) Method for processing interaction between object and user of virtual reality environment
US20190138117A1 (en) Information processing device, information processing method, and program
CN111433735A (en) Method, apparatus and computer readable medium for implementing a generic hardware-software interface
KR101428014B1 (en) Apparatus and method for simulating NUI and NUX in virtualized environment
CN112424736A (en) Machine interaction
KR20180044613A (en) Natural user interface control method and system base on motion regocnition using position information of user body
CN112612358A (en) Human and large screen multi-mode natural interaction method based on visual recognition and voice recognition
US20190339864A1 (en) Information processing system, information processing method, and program
US11676355B2 (en) Method and system for merging distant spaces
KR102612430B1 (en) System for deep learning-based user hand gesture recognition using transfer learning and providing virtual reality contents
Fuglseth Object Detection with HoloLens 2 using Mixed Reality and Unity a proof-of-concept
US20240096043A1 (en) Display method, apparatus, electronic device and storage medium for a virtual input device
US20230342026A1 (en) Gesture-based keyboard text entry

Legal Events

Date Code Title Description
E701 Decision to grant or registration of patent right
GRNT Written decision to grant
FPAY Annual fee payment

Payment date: 20170802

Year of fee payment: 4

FPAY Annual fee payment

Payment date: 20180801

Year of fee payment: 5

FPAY Annual fee payment

Payment date: 20190801

Year of fee payment: 6