KR101525011B1 - tangible virtual reality display control device based on NUI, and method thereof - Google Patents
- Publication number
- KR101525011B1 KR1020140134925A KR20140134925A
- Authority
- KR
- South Korea
- Prior art keywords
- gesture
- screen
- user
- hands
- coordinates
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to an NUI-based tangible virtual space display control device and a control method for controlling a display device through user gestures. More particularly, a chain code is generated from the coordinates of the user's hand movements projected onto a two-dimensional screen divided into a grid, and a different gesture model is applied to each screen to recognize the user's gesture, so that the screen is switched, or the settings of the virtual objects constituting the screen are changed and the screen is displayed again.
Description
BACKGROUND OF THE INVENTION
As computing capabilities have evolved, the ubiquitous era has arrived, and research on ubiquitous computing is under way. The key to ubiquitous computing is that users can use computers and networks easily and conveniently. In this context, various studies relate to the NUI (Natural User Interface) and NUX (Natural User Experience), which allow users to interact with computers through gestures, the natural movements of human beings, instead of conventional input devices such as a mouse or keyboard.
Apparatuses that display a virtual space containing realistic contents provide a lifelike screen to the user and utilize user gestures as the means of interaction. A user gesture is a person-centered interface that is natural and intuitive, which means that a gesture is directly related to a function provided by the display device. Therefore, there is a need to recognize gestures and to control the display device accordingly.
Further, a gesture recognition technique must be processed at high speed in order to provide quick feedback to the user, and it must accurately identify which function a recognized gesture is intended to control. That is, information on the user's movement must be processed in real time, fast enough that the user perceives the response as immediate.
SUMMARY OF THE INVENTION
The present invention has been made in view of the above problems, and it is an object of the present invention to provide a device for controlling a display device through natural and intuitive gesture operations, thereby overcoming the disadvantage of device-dependent interfaces that require memorization and learning through training. It is also an object of the present invention to provide means for recognizing gestures directly associated with the screens displayed by the virtual space display device and with the virtual objects constituting those screens. Further, the present invention provides means for displaying a result screen in response to a user's gesture fast enough to be perceived as real time.
In order to solve the above-mentioned problems, a virtual space display control apparatus according to an embodiment of the present invention includes: an input unit for extracting motion information on the user's face and body joints from a captured image of the user; a sensing unit for generating an event signal indicating screen switching according to the movement of the user when the rotation angle of the face or a change in one or more joints related to the user's step exceeds a preset threshold; a gesture recognition unit that determines the grip state of the user's hands, generates a chain code composed of unit direction vectors from the two-dimensional coordinates obtained by projecting the trajectory of the movement of both hands onto a two-dimensional screen perpendicular to the user's gaze direction according to which hand is gripped, and recognizes the gesture corresponding to the movement of both hands by applying a gesture model, which represents a gesture learning model, to the grip state, the two-dimensional coordinates, and the chain code; a control unit for changing the setting of any one of a screen, a screen menu, and a virtual object constituting the screen according to the event signal and the recognized gesture; and an output unit for outputting the setting value changed by the control unit.
In the virtual space display control apparatus according to an embodiment, the sensing unit may determine whether the user is walking according to relative change values of the joints, wherein the joints related to the user's walking are the joints of the shoulders, the waist, and both hands.
In the sensing unit of the virtual space display control apparatus according to an embodiment, the event signal includes final coordinate values of the face or the joint and screen information currently being displayed.
In the virtual space display control apparatus according to an embodiment, the gesture recognition unit may recognize the movement made until both hands or one hand is released from the grip state as one gesture.
In the gesture recognition unit of the virtual space display control apparatus according to an embodiment, a gesture model is provided for each screen, including separate gesture models for the left and right hands on the same screen.
In the above-described embodiment, a gesture model corresponding to a currently displayed screen is applied among the gesture models, and a gesture model corresponding to each hand can be applied when both hands are in a grip state.
In the virtual space display control apparatus according to an embodiment, the gesture recognition unit divides the two-dimensional screen into a grid space of a predetermined size and sequentially stores the grid-space coordinates corresponding to the three-dimensional coordinates constituting the trajectory; for each stored coordinate, a relative vector between the coordinate and the previous coordinate may be calculated and classified into one of the unit direction vectors.
In the above-described embodiment, when the previous coordinate and the next coordinate of a given coordinate are in a diagonal relationship and the most recently stored direction vector is not a diagonal vector, the gesture recognition unit may replace the stored vectors with the corresponding diagonal vector.
In the virtual space display control apparatus according to an exemplary embodiment, the gesture recognition unit can recognize the two chain codes having the same combination of direction vectors as different gestures when the start coordinates and the end coordinates of the movement of the hands are different.
In the virtual space display control apparatus according to an embodiment, the controller changes a screen setting by selecting a next screen in accordance with the recognized gesture or the final two-dimensional coordinate among screens linked to a currently displayed screen.
In the virtual space display control apparatus according to an exemplary embodiment, the controller selects a virtual object corresponding to a start coordinate of the two-dimensional coordinates according to the recognized gesture, and changes the state, position, or size of the selected virtual object.
According to another aspect of the present invention, there is provided a method for controlling a virtual space display device, the method comprising: receiving, by an input unit, a captured image obtained by tracking the user and extracting motion information on the user's face and body joints; generating an event signal indicating a screen change according to the movement of the user when, in the extracted motion information, a change in the rotation angle of the face or in one or more joints related to the user's step exceeds a preset threshold; changing the screen according to the event signal when the event signal is present; if there is no event signal, analyzing, by the gesture recognition unit, the extracted motion information to determine the grip state of the user's hands, generating a chain code composed of unit direction vectors from the two-dimensional coordinates obtained by projecting the trajectory of the movement of both hands onto a two-dimensional screen perpendicular to the gaze direction according to which hand is gripped, and recognizing the gesture corresponding to the movement of both hands by applying a gesture model, which represents a gesture learning model, to the grip state, the two-dimensional coordinates, and the chain code; changing, by the controller, the setting of any one of a screen, a screen menu, or a virtual object constituting the screen according to the recognized gesture on the basis of the screen currently being displayed; and outputting, by an output unit, the setting value changed by the controller.
In the control method of the virtual space display apparatus according to another embodiment, in the step of generating the event signal, the joints related to the user's walking are the joints of the shoulders, the waist, and both hands; whether the user is walking is determined according to the relative change values of these joints, and the event signal includes the final coordinate values of the face or joints and information on the screen currently being displayed.
The step of recognizing the gesture in the control method of the virtual space display apparatus according to another embodiment recognizes the movement made until both hands or one hand is released from the grip state as a single gesture.
In the method of controlling a virtual space display apparatus according to another embodiment of the present invention, in the step of recognizing the gesture, a gesture model is provided for each screen, with separate models for the left and right hands on the same screen; the gesture model corresponding to the currently displayed screen is applied, and a gesture model corresponding to each hand may be applied when both hands are in the grip state.
The step of recognizing the gesture in the control method according to another embodiment may include dividing the two-dimensional screen into a grid space of a predetermined size, sequentially storing the grid-space coordinates corresponding to the three-dimensional coordinates constituting the trajectory, and, for all stored coordinates, calculating relative vectors between each coordinate and the previous coordinate and classifying them into the unit direction vectors.
In the above-described embodiment, the recognition of the gesture may replace the stored vectors with a diagonal vector when the previous and next coordinates of a given coordinate are in a diagonal relationship and the most recently stored direction vector is not a diagonal vector.
The step of recognizing the gesture in the control method according to another embodiment recognizes two chain codes having the same combination of direction vectors as different gestures when the start and end coordinates of the movement of both hands are different.
The embodiments according to the present invention can control a virtual space display device through user gestures by changing the displayed screen, a screen menu, or a virtual object constituting the screen and outputting the resulting screen. Further, by providing a gesture model for each of the plurality of screens that the virtual space display device can display, gestures directly related to each screen and to the virtual objects constituting it are recognized, and by analyzing the user's motion using the two-dimensional grid space, the accuracy of gesture recognition is improved and feedback on the gesture can be given in real time.
FIG. 1 is a block diagram illustrating the components of a virtual space display control apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing the schematic functions of the model house simulation system.
FIG. 3A is a table showing gestures made with the left hand gripped for setting a system menu, FIG. 3B is a table showing gestures made with the right hand gripped for setting an interior menu, and FIG. 3C is a table showing gestures made with both hands gripped for changing the settings of a virtual object constituting the screen.
FIG. 4 is a diagram illustrating the mapping of a user gesture onto a grid-space screen by a virtual space display control apparatus according to an embodiment of the present invention.
FIG. 5 is a diagram showing the 8-direction unit vectors.
FIG. 6A shows a process of storing two-dimensional coordinates for a user gesture by a virtual space display control apparatus according to an embodiment of the present invention, and FIG. 6B shows a process of generating a chain code using the stored two-dimensional coordinates.
FIG. 7 is a flowchart illustrating a method of controlling a virtual space display apparatus according to another embodiment of the present invention.
FIG. 8 is a flowchart showing the detailed sub-steps of step S740 of FIG. 7.
Prior to describing the specific details for carrying out the present invention, an outline of the solution to the problem and of the core technical idea is given first for convenience of understanding.
NUI technology allows a device to recognize a user's gestures accurately and reproduce them as they are, or to recognize and react quickly when the user makes a predefined gesture. When the user wishes to control the device, the device matches the user's movement to one of the predefined gestures and performs the operation assigned to it. Just as computer commands are defined, gestures are predefined, and the user controls the device through them. Unlike keyboards and mice, NUI has the advantage of shifting from a machine-centered interface to a human-centered one, because it is natural and intuitive to use. However, the same gesture, such as drawing a circle with the right hand, is performed slightly differently by each person, so it is difficult for a device to recognize these variations as the same gesture.
NUI technology is mainly used in virtual reality games and virtual reality simulation devices. Conventional games and simulation devices require learning how to operate input devices such as a keyboard, mouse, or remote control, which degrades the sense of reality, especially in games that provide virtual reality. Therefore, user gestures are used as the interface so that the user perceives the provided virtual reality as real. In addition, to maintain this sense of reality, the device must react to gestures immediately.
Therefore, the present invention provides a virtual space display control apparatus and a control method thereof that control a display device through intuitive, natural user gestures, apply a different learning model to each screen in order to accurately recognize the gesture taken by the user among the gestures directly related to the provided screen and the virtual objects constituting it, and guarantee real-time processing by keeping the processing of the input data simple.
FIG. 1 is a block diagram illustrating the components of a virtual space display control apparatus according to an exemplary embodiment of the present invention. The virtual space display control apparatus 1 includes an input unit 10, a sensing unit 20, a gesture recognition unit 30, a control unit 40, and an output unit 50.
Hereinafter, the operation of each component will be described in detail.
The input unit 10 receives an image of the user captured by the RGB-D camera 2 and extracts motion information on the user's face and body joints from the image.
The sensing unit 20 generates an event signal indicating screen switching according to the movement of the user when the rotation angle of the face or a change in one or more joints related to the user's step exceeds a preset threshold.
Screen switching according to the movement of the user may occur when the user makes a walking gesture. Therefore, when the change in the joints related to stepping exceeds the predetermined threshold, an event signal indicating a screen change is generated. Among the joints of the body, those related to stepping may be the joints of the shoulders, the waist, and both hands: whether the user is walking is judged from the relative difference between the shoulder and waist joints moving back and forth and from the swing of both hands while walking. As with the change in the line of sight, the threshold for stepping may be determined according to the screen currently being displayed.
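As an illustration of this event-detection step, the following is a minimal sketch assuming per-frame joint displacements compared against per-screen thresholds; the joint names, threshold values, and screen identifiers are hypothetical, since the patent specifies only that the threshold depends on the currently displayed screen:

```python
import numpy as np

# Hypothetical per-screen thresholds (meters); the text states only that
# the threshold may be determined by the screen currently being displayed.
STEP_THRESHOLDS = {"exterior_view": 0.12, "interior_view": 0.20}

def detect_walk_event(prev_joints, cur_joints, screen_id):
    """Emit a screen-switch event when a joint related to walking
    (shoulders, waist, both hands) moves more than the screen's threshold.

    prev_joints / cur_joints: dict mapping joint name -> (x, y, z).
    Returns an event dict, or None when no event is triggered.
    """
    walk_joints = ["shoulder_left", "shoulder_right", "waist",
                   "hand_left", "hand_right"]
    deltas = [np.linalg.norm(np.asarray(cur_joints[j]) -
                             np.asarray(prev_joints[j]))
              for j in walk_joints]
    if max(deltas) > STEP_THRESHOLDS[screen_id]:
        # Per the text, the event carries the final joint coordinates
        # and the screen currently being displayed.
        return {"type": "screen_switch",
                "final_coords": {j: cur_joints[j] for j in walk_joints},
                "screen": screen_id}
    return None
```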
When the event signal is generated, the control unit 40 selects one of the plurality of screens according to the final coordinate values and the screen information contained in the signal, and switches the screen.
The gesture recognition unit 30 recognizes the gesture corresponding to the movement of the user's hands from the extracted motion information when no event signal is generated.
Before describing the order in which the gesture recognition unit 30 recognizes a gesture, two preliminary questions are addressed: how to delimit a single gesture, and how to classify gesture types.
In reality, a person distinguishes gestures instantly, recognizing which gesture it is at the moment it is made. A device, however, must delimit the input into units before it can distinguish gestures: only after analyzing and processing the input can it identify what was entered, so the question is what information should be fed to the input unit. In other words, the problem is how to determine the start and end of a gesture from the movement information.
In the embodiments of the present invention, the grip of the user's hand is used as the criterion for delimiting gestures within the user's movement. That is, the movement of the hands from the moment one or both hands are gripped until the grip is released can be recognized as one gesture. Since a person gestures with the hands, the grip state is closely related to the gesture action: it is in fact more natural to close the hand before drawing a circle or pushing to the right than to do so with open hands. In particular, when interacting with a device displaying a virtual reality, a gesture represents a movement for controlling the virtual objects displayed on the screen. Therefore, using the grip state of the hand readily solves the problem of determining a single instruction unit.
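A minimal sketch of this grip-based segmentation, assuming a per-frame stream of grip flags and joint data; the frame format and field names are hypothetical:

```python
def segment_gestures(frames):
    """Group frames into gesture units: a unit starts when a hand closes
    (grip) and ends when the grip is released.

    frames: iterable of dicts like
        {"left_grip": bool, "right_grip": bool, "joints": {...}}
    Yields (grip_type, frames_of_one_gesture) tuples. The grip type is
    fixed at the start of the unit, a simplification of the text.
    """
    current, grip_type = [], None
    for f in frames:
        gripped = f["left_grip"] or f["right_grip"]
        if gripped and grip_type is None:
            # Grip closed: a new gesture unit begins.
            grip_type = ("both" if f["left_grip"] and f["right_grip"]
                         else "left" if f["left_grip"] else "right")
        if grip_type is not None:
            current.append(f)
        if not gripped and grip_type is not None:
            # Grip released: the accumulated movement is one gesture.
            yield grip_type, current
            current, grip_type = [], None
```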
Next, gestures need to be defined differently for each displayed screen, and the gesture types need to be classified according to the kinds of functions provided by the virtual space display control apparatus 1.
For example, in the interior simulation system of FIG. 2, gesture types can be defined by dividing them into categories such as system menu settings, interior menu settings, and the objects constituting the screen. FIG. 3A is a table showing gestures made with the left hand gripped for the system menu, FIG. 3B is a table showing gestures made with the right hand gripped for the interior menu, and FIG. 3C is a table showing gestures made with both hands gripped for changing the settings of a virtual object constituting the screen. In FIGS. 3A and 3B, the swipe-left gestures have exactly the same shape; however, because one is made in the left-hand grip state and the other in the right-hand grip state, a different function is assigned to each. This classification allows a greater variety of device control with simple gestures.
When a gesture is made with both hands gripped as shown in FIG. 3C, the movements of the two hands together are identified as one gesture. In the interior simulation system, a gesture can be defined as if the user were actually scaling an object up or down in order to change a virtual object on the screen. Such natural and intuitive operation lets the user control the device with the same feel as in the real world.
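One plausible way to map such a two-handed gesture to a scale change is the ratio of the distance between the hands at the end of the gesture to the distance at its start; the patent does not specify this mapping, so the sketch below is an assumption:

```python
import numpy as np

def scale_factor(lh_start, rh_start, lh_end, rh_end):
    """Ratio of the inter-hand distance at the end of a two-handed
    gesture to the distance at its start; values > 1 scale the
    selected virtual object up, values < 1 scale it down."""
    d_start = np.linalg.norm(np.asarray(rh_start) - np.asarray(lh_start))
    d_end = np.linalg.norm(np.asarray(rh_end) - np.asarray(lh_end))
    return d_end / d_start if d_start else 1.0
```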
Once the gesture types are classified as described above, the virtual space display control apparatus 1 provides a separate gesture model for each screen and, on the same screen, for each hand.
Now, the operation of the gesture recognition unit 30 will be described.
First, when no event signal has been generated, the gesture recognition unit 30 analyzes the extracted motion information to determine the grip state of the user's hands.
After determining the grip state, the gesture recognition unit 30 projects the trajectory of the movement of both hands onto a two-dimensional screen perpendicular to the user's gaze direction, according to which hand is gripped.
FIG. 4 illustrates the sequential recording of grid-space information along the user's motion on a two-dimensional screen divided into a grid space. In the case of FIG. 4, the screen is divided into 10 cells horizontally and 6 cells vertically. The grid space can be addressed by two-dimensional coordinates; in the drawing, each grid cell is represented by a sequence number.
The coordinates of the grid cells traversed by the hand are stored in order, and for every stored coordinate a relative vector between it and the previous coordinate is calculated and classified into a unit direction vector. The unit direction vectors are basically the 8-direction vectors shown in FIG. 5. Direction vectors subdivided more finely than eight directions may also be used, depending on the size of the displayed screen and the complexity of the gestures to be used.
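A sketch of the quantization and classification described above. The direction numbering follows FIG. 5 as far as the text describes it (③ right, ④ down-right, ⑤ down) and is assumed to continue clockwise from ① = up for the remaining directions:

```python
def to_grid(point2d, screen_w, screen_h, cols=10, rows=6):
    """Quantize a projected 2-D point into grid-cell coordinates
    (FIG. 4 divides the screen into 10 x 6 cells)."""
    x, y = point2d
    col = min(max(int(x / screen_w * cols), 0), cols - 1)
    row = min(max(int(y / screen_h * rows), 0), rows - 1)
    return col, row

def classify_direction(prev_cell, cell):
    """Map the relative vector between two grid cells onto one of the
    8-direction unit vectors of FIG. 5. With y growing downward:
    1=up, 2=up-right, 3=right, 4=down-right, 5=down, 6=down-left,
    7=left, 8=up-left. Odd codes are axis-aligned, even codes diagonal."""
    dx = (cell[0] > prev_cell[0]) - (cell[0] < prev_cell[0])  # sign: -1/0/+1
    dy = (cell[1] > prev_cell[1]) - (cell[1] < prev_cell[1])
    table = {(0, -1): 1, (1, -1): 2, (1, 0): 3, (1, 1): 4,
             (0, 1): 5, (-1, 1): 6, (-1, 0): 7, (-1, -1): 8}
    return table.get((dx, dy))  # None when the cell did not change
```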
Since user gestures include both straight and curved movements, they are not easy to recognize in one pass. In particular, because the embodiments of the present invention use a grid space, the diagonal unit vectors in directions ②, ④, ⑥ and ⑧ tend to be captured as combinations of the axis-aligned unit vectors in directions ①, ③, ⑤ and ⑦. To compensate, two successive axis-aligned vectors are merged into the corresponding diagonal vector, as described below.
FIG. 6A shows an example of generating a chain code for a hand motion on a two-dimensional screen divided into nine grid cells. To the human eye, the movement corresponds to the diagonal unit vector of direction ④, descending from the center toward the lower right. Human movement, however, follows a slightly winding line rather than a perfectly straight one. As in each step of FIG. 6A, the coordinates along the hand movement are stored and duplicate coordinates are removed, so that the final result is the coordinate sequence 5-8-9.
FIG. 6B shows the stored coordinates classified into 8-direction unit vectors. For the coordinate sequence 5-8-9 stored in FIG. 6A, a relative vector is calculated between each coordinate and its predecessor: 5-8 corresponds to the downward unit vector ⑤, and 8-9 corresponds to the rightward unit vector ③. Here the earlier coordinate 5 and the later coordinate 9 occupy diagonally adjacent grid cells, and since the most recently stored vector is not a diagonal vector, the pair ⑤, ③ is replaced with the single diagonal unit vector ④.
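A sketch of chain-code generation with the diagonal-merge rule, reusing classify_direction from the previous sketch; the bookkeeping is one possible implementation of the rule, not the patent's own:

```python
def build_chain_code(cells):
    """Turn a stored sequence of grid cells into a chain code, replacing
    two successive axis-aligned codes with one diagonal code when the
    cell two steps back and the current cell are diagonal neighbours and
    the most recently stored code is not already diagonal (cf. FIG. 6)."""
    # Remove consecutive duplicates (the hand lingering in one cell).
    path = [c for i, c in enumerate(cells) if i == 0 or c != cells[i - 1]]
    codes, anchors = [], path[:1]
    for cur in path[1:]:
        d = classify_direction(anchors[-1], cur)
        if (codes and codes[-1] % 2 == 1 and len(anchors) >= 2
                and abs(cur[0] - anchors[-2][0]) == 1
                and abs(cur[1] - anchors[-2][1]) == 1):
            # e.g. down (5) then right (3): merge into down-right (4).
            codes[-1] = classify_direction(anchors[-2], cur)
            anchors[-1] = cur
        else:
            codes.append(d)
            anchors.append(cur)
    return codes

# The FIG. 6 example: cells 5 -> 8 -> 9 of a 3x3 grid are (col, row)
# (1, 1) -> (1, 2) -> (2, 2); codes [5, 3] collapse to the diagonal [4].
print(build_chain_code([(1, 1), (1, 2), (2, 2)]))  # [4]
```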
The unit direction vectors classified in this way are concatenated in order to form the chain code representing the movement of the hand.
As described above, the gesture recognition unit 30 generates a chain code of unit direction vectors from the grip state and the projected two-dimensional coordinates, and recognizes the gesture by applying the gesture model 35 to them.
When applying the gesture model 35, a hidden Markov model (HMM) learned for each gesture is used.
The number of states of the HMM is set according to the pattern of the gesture, and learning data is stored for each gesture. In the embodiment of the present invention, 2 to 8 states are set, and 40 data samples are used per gesture.
The likelihood of each gesture model given the chain code generated by the user's hand movement is calculated as shown in Equation (1), and the gesture model with the maximum probability is selected as the recognized gesture:

$$\hat{g} = \arg\max_{g} P(C \mid \lambda_g)\, P(g) \qquad (1)$$

where $C$ is the vector chain code for the hand action, $\lambda_g$ is the learned gesture model, and $P(g)$ is the prior probability of the gesture. When a gesture is made with both hands gripped, as in FIG. 3C, the gesture model corresponding to each hand is applied.
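To illustrate Equation (1), the following sketch scores a chain code against per-gesture discrete HMMs with the forward algorithm and picks the maximum a posteriori gesture. The model parameters and priors are assumptions; the patent states only that HMMs with 2 to 8 states are trained from 40 samples per gesture:

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Forward algorithm in log space for a discrete HMM.
    obs: chain-code symbols (0..7); log_pi: (S,) initial state probs;
    log_A: (S, S) transitions; log_B: (S, 8) emissions.
    Returns log P(obs | lambda_g)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # alpha_t(j) = logsumexp_i(alpha_{t-1}(i) + log A[i, j]) + log B[j, o]
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def recognize(chain_code, models, log_prior):
    """Equation (1): argmax over gestures g of P(C | lambda_g) * P(g).
    models: dict gesture name -> (log_pi, log_A, log_B)."""
    obs = [c - 1 for c in chain_code]  # direction codes 1..8 -> symbols 0..7
    scores = {g: log_forward(obs, *m) + log_prior[g]
              for g, m in models.items()}
    return max(scores, key=scores.get)
```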
The control unit 40 changes the setting of any one of the screen, a screen menu, or a virtual object constituting the screen, according to the event signal and the recognized gesture.
A plurality of screens that can be displayed by the virtual space display control apparatus 1 are linked to one another, and the correspondence between gestures and the functions of each screen is determined in advance; the next screen is selected from among the screens linked to the one currently displayed.
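A minimal sketch of such a predefined screen-transition table; all screen and gesture names here are hypothetical:

```python
# Hypothetical screen graph for the interior-simulation example; the
# text requires only that gesture-to-screen transitions be predefined
# for each screen.
SCREEN_TRANSITIONS = {
    ("main_menu", "swipe_left_L"): "system_menu",
    ("main_menu", "swipe_left_R"): "interior_menu",
    ("interior_menu", "push_R"):   "room_view",
}

def next_screen(current, gesture):
    """Return the screen linked to this gesture, or stay on the
    current screen when no transition is defined for the pair."""
    return SCREEN_TRANSITIONS.get((current, gesture), current)
```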
The output unit 50 outputs the setting value changed by the control unit 40 so that the display device 3 can display the changed screen to the user.
FIG. 7 is a flowchart illustrating a method of controlling a virtual space display apparatus according to another embodiment of the present invention. Each step of FIG. 7 corresponds to a component of the virtual space display control apparatus 1 described above.
In step S710, the input unit receives an image of the user captured by the RGB-D camera and extracts motion information on the user's face and body joints. This corresponds to the input unit 10 described above.
In step S720, when the rotation angle of the extracted face or a change in one or more joints related to the user's step exceeds a predetermined threshold, the sensing unit generates an event signal indicating a screen change according to the user's movement. The joints related to the user's step are those of the shoulders, the waist, and both hands, and whether the user is walking is judged from their relative change values. The event signal may include the final coordinate values of the face or joints and information on the screen currently being displayed. This corresponds to the sensing unit 20 described above.
It is determined in step S730 whether an event signal has been generated. If an event signal has been generated, the process proceeds to step S750. Otherwise, the process proceeds to step S740.
Step S740 recognizes the gesture by analyzing the information on the body joints, in particular the joints of the hands. FIG. 8 is a flowchart showing the detailed sub-steps of this step.
Specifically, in this step, the gesture recognition unit analyzes the read motion information to determine the grip state of the user's hands (S742), projects the trajectory of the movement of both hands, according to which hand is gripped, onto the two-dimensional screen perpendicular to the gaze direction, and generates a chain code from the projected two-dimensional coordinates (S744). A gesture model representing a gesture learning model is then applied to the grip state, the two-dimensional coordinates, and the chain code, and the gesture corresponding to the movement of both hands is recognized. At this time, the movement made until both hands or one hand is released from the grip state is recognized as one gesture. The gesture model is provided for each screen, with separate models for the left and right hands on the same screen; the model corresponding to the currently displayed screen is applied, and when both hands are in the grip state, the model corresponding to each hand is applied, so that the gestures of both hands are recognized.
The chain code is generated by dividing the two-dimensional screen into a grid space of a predetermined size, storing the grid-space coordinates in order so as to correspond to the three-dimensional coordinates constituting the trajectory, and calculating, for each stored coordinate, a relative vector with respect to the previous coordinate, which is classified into a unit direction vector. In this case, when the previous and next coordinates of a given coordinate are in a diagonal relationship and the most recently stored direction vector is not a diagonal vector, a diagonal vector is stored instead; and two chain codes with the same combination of direction vectors are recognized as different gestures when their start and end coordinates differ. This corresponds to the gesture recognition unit 30 described above.
In step S750, when the event signal has been received, the control unit selects one of the plurality of screens and changes the screen. Otherwise, based on the currently displayed screen and the gesture recognized in step S740, it changes the screen, selects a menu provided on the screen, or changes the setting of a virtual object constituting the screen. The screen to be selected is determined in advance by the correspondence between gestures and the functions provided by the virtual display device: specifically, the next screen is selected from among the screens linked to the currently displayed one according to the recognized gesture or the final two-dimensional coordinate, and a virtual object corresponding to the start coordinate of the gesture can be selected and its state, position, or size changed. This corresponds to the control unit 40 described above.
In step S760, the output unit outputs the setting value changed by the control unit; the display device receives the output information and displays it to the user. This corresponds to the output unit 50 described above.
Meanwhile, the embodiments of the present invention can be embodied as computer readable codes on a computer readable recording medium. A computer-readable recording medium includes all kinds of recording apparatuses in which data that can be read by a computer system is stored.
Examples of the computer-readable recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also carrier waves (for example, transmission via the Internet). The computer-readable recording medium may also be distributed over network-connected computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, code, and code segments for implementing the present invention can readily be deduced by programmers skilled in the art to which the present invention belongs.
While the present invention has been described above with reference to particular embodiments, including specific elements, exemplary embodiments, and drawings, it should be understood that the present invention is not limited to these embodiments, and various modifications and changes may be made by those skilled in the art to which the present invention pertains.
Accordingly, the spirit of the present invention should not be construed as being limited to the embodiments described; the scope of the present invention encompasses the following claims and all equivalents thereof.
1: Virtual space display control device
10: Input unit
20: Sensing unit
30: Gesture recognition unit 35: Gesture models
40: Control unit 45: DB
50: Output unit
2: RGB-D camera
3: Display device
Claims (18)
A sensing unit for generating an event signal indicating a screen switching according to a movement of a user when a change in one or more joints related to a rotation angle of a face or a user's step exceeds a preset threshold value;
A gesture recognition unit that determines the grip state of the user's hands, generates a chain code composed of unit direction vectors from the two-dimensional coordinates obtained by projecting the trajectory of the movement of both hands onto a two-dimensional screen perpendicular to the user's gaze direction according to which hand is gripped, and recognizes the gesture corresponding to the movement of both hands of the user by applying a gesture model representing a gesture learning model to the grip state of the hands, the two-dimensional coordinates, and the chain code;
A control unit for changing a setting of any one of a screen, a screen menu, and a virtual object configuring the screen according to the event signal and the recognized gesture; And
And an output unit configured to output a setting value changed by the control unit.
Wherein the joints related to the user's walking in the sensing unit are the joints of the shoulders, the waist, and both hands, and whether the user is walking is determined according to the relative change values of the joints.
Wherein, in the sensing unit, the event signal includes the final coordinate values of the face or the joints and information on the screen currently being displayed.
Wherein the gesture recognition unit recognizes the movement made until both hands or one hand is released from the grip state as one gesture.
Wherein the gesture recognition unit applies a gesture model corresponding to the currently displayed screen from among a plurality of different gesture models, and a gesture model is provided for each of the left and right hands on the same screen.
A gesture model corresponding to a currently displayed screen is applied from among the gesture models, and a gesture model corresponding to each hand is applied when both hands are in a grip state.
Wherein the gesture recognition unit divides the two-dimensional screen into a grid space of a predetermined size, sequentially stores the grid-space coordinates so as to correspond to the three-dimensional coordinates constituting the trajectory, calculates for each stored coordinate a relative vector with respect to the previous coordinate, and classifies it into the unit direction vector.
Wherein the gesture recognition unit stores a diagonal vector in place of the stored vectors when the previous coordinate and the next coordinate of a given coordinate are in a diagonal relationship and the most recently stored direction vector is not a diagonal vector.
Wherein the gesture recognition unit recognizes the two chain codes having the same combination of direction vectors as different gestures when the start coordinates and the end coordinates of the movement of the hands are different.
Wherein the control unit changes the setting of the screen by selecting a next screen according to the recognized gesture or the final two-dimensional coordinate among the screens associated with the currently displayed screen.
Wherein the control unit selects a virtual object corresponding to a start coordinate of the two-dimensional coordinates according to the recognized gesture, and changes the state, position, or size of the selected virtual object.
Generating an event signal indicating a screen change according to the movement of the user when, in the extracted motion information, a change in the rotation angle of the face or in one or more joints related to the user's step exceeds a preset threshold value;
Changing a screen according to the event signal when the event signal is present;
If there is no event signal, analyzing, by the gesture recognition unit, the extracted motion information to determine the grip state of the user's hands, generating a chain code composed of unit direction vectors from the two-dimensional coordinates obtained by projecting the trajectory of the movement of both hands onto a two-dimensional screen perpendicular to the gaze direction according to which hand is gripped, and recognizing the gesture corresponding to the movement of both hands of the user by applying a gesture model representing a gesture learning model to the grip state, the two-dimensional coordinates, and the chain code;
Changing a setting of any one of a screen, a menu of a screen or a virtual object constituting a screen according to the recognized gesture on the basis of a screen currently being displayed by the controller; And
And outputting, by the output unit, the setting value changed by the control unit.
Wherein the joints related to the user's walking in the step of generating the event signal are the joints of the shoulders, the waist, and both hands; whether the user is walking is determined according to the relative change values of the joints, and the event signal includes the final coordinate values of the face or the joints and information on the screen currently being displayed.
Wherein the step of recognizing the gesture recognizes the movement made until the hands are released from the grip state as one gesture.
Wherein, in the step of recognizing the gesture, a gesture model corresponding to the currently displayed screen is applied from among a plurality of different gesture models, with a separate gesture model for each of the left and right hands on the same screen,
And a gesture model corresponding to each hand is applied when both hands are in a grip state.
Wherein the step of recognizing the gesture includes dividing the two-dimensional screen into a grid space of a predetermined size, sequentially storing the grid-space coordinates so as to correspond to the three-dimensional coordinates constituting the trajectory, calculating for each stored coordinate a relative vector with respect to the previous coordinate, and classifying it into the unit direction vector.
Wherein the step of recognizing the gesture stores a diagonal vector when the previous coordinate and the next coordinate of a given coordinate are in a diagonal relationship and the most recently stored direction vector is not a diagonal vector.
Wherein the step of recognizing the gesture recognizes two chain codes having the same combination of direction vectors as different gestures when the start coordinates and the end coordinates of the movement of both hands are different.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140134925A KR101525011B1 (en) | 2014-10-07 | 2014-10-07 | tangible virtual reality display control device based on NUI, and method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020140134925A KR101525011B1 (en) | 2014-10-07 | 2014-10-07 | tangible virtual reality display control device based on NUI, and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR101525011B1 true KR101525011B1 (en) | 2015-06-09 |
Family
ID=53503903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020140134925A KR101525011B1 (en) | 2014-10-07 | 2014-10-07 | tangible virtual reality display control device based on NUI, and method thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101525011B1 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130111248A (en) * | 2010-06-29 | 2013-10-10 | 마이크로소프트 코포레이션 | Skeletal joint recognition and tracking system |
KR20140028064A (en) * | 2011-06-06 | 2014-03-07 | 마이크로소프트 코포레이션 | System for recognizing an open or closed hand |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108243293A (en) * | 2016-12-23 | 2018-07-03 | 炬芯(珠海)科技有限公司 | A kind of method for displaying image and system based on virtual reality device |
KR20180131507A (en) * | 2017-05-31 | 2018-12-10 | 충남대학교산학협력단 | Motion direction search apparatus based on binary search |
KR102095235B1 (en) * | 2017-05-31 | 2020-04-01 | 충남대학교산학협력단 | Motion direction search apparatus based on binary search |
CN116501234A (en) * | 2023-06-26 | 2023-07-28 | 北京百特迈科技有限公司 | User coupling intention rapid acquisition method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10664060B2 (en) | Multimodal input-based interaction method and device | |
US10761612B2 (en) | Gesture recognition techniques | |
US10394334B2 (en) | Gesture-based control system | |
Rautaray | Real time hand gesture recognition system for dynamic applications | |
US8457353B2 (en) | Gestures and gesture modifiers for manipulating a user-interface | |
KR101956325B1 (en) | System for finger recognition and tracking | |
US20110289455A1 (en) | Gestures And Gesture Recognition For Manipulating A User-Interface | |
US20130335318A1 (en) | Method and apparatus for doing hand and face gesture recognition using 3d sensors and hardware non-linear classifiers | |
US10416834B1 (en) | Interaction strength using virtual objects for machine control | |
AU2012268589A1 (en) | System for finger recognition and tracking | |
WO2018000519A1 (en) | Projection-based interaction control method and system for user interaction icon | |
CN109145802B (en) | Kinect-based multi-person gesture man-machine interaction method and device | |
KR101525011B1 (en) | tangible virtual reality display control device based on NUI, and method thereof | |
Rehman et al. | Two hand gesture based 3d navigation in virtual environments | |
KR20160141023A (en) | The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents | |
KR20130066812A (en) | Method, apparatus, and computer readable recording medium for recognizing gestures | |
Abdallah et al. | An overview of gesture recognition | |
KR101447958B1 (en) | Method and apparatus for recognizing body point | |
CN113807280A (en) | Kinect-based virtual ship cabin system and method | |
Leite et al. | A system to interact with CAVE applications using hand gesture recognition from depth data | |
Piumsomboon | Natural hand interaction for augmented reality. | |
Bernardes et al. | Comprehensive model and image-based recognition of hand gestures for interaction in 3D environments | |
Zhang et al. | Free-hand gesture control with" touchable" virtual interface for human-3DTV interaction | |
Kolhekar et al. | A reliable hand gesture recognition system using multiple schemes | |
Prema et al. | Gaming using different hand gestures using artificial neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
E701 | Decision to grant or registration of patent right | |
GRNT | Written decision to grant | |
FPAY | Annual fee payment | Payment date: 20180430; Year of fee payment: 4 |
FPAY | Annual fee payment | Payment date: 20190430; Year of fee payment: 5 |