CN105446481A - Gesture based virtual reality human-machine interaction method and system - Google Patents
- Publication number
- CN105446481A CN105446481A CN201510763565.XA CN201510763565A CN105446481A CN 105446481 A CN105446481 A CN 105446481A CN 201510763565 A CN201510763565 A CN 201510763565A CN 105446481 A CN105446481 A CN 105446481A
- Authority
- CN
- China
- Prior art keywords
- hand
- dimensional model
- virtual reality
- sensing region
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention discloses a gesture-based virtual reality human-machine interaction method and system. The method comprises: displaying a three-dimensional model of an object in a virtual reality space; receiving hand depth information acquired by a depth sensor; determining, based on the hand depth information, whether the hand gesture is a waving action; and, if so, controlling the three-dimensional model to move or rotate in the virtual reality space. The gesture of the user is analyzed from the hand depth information acquired by the depth sensor, and when the gesture is a waving action, the movement or rotation of the object's three-dimensional model in the virtual reality space is controlled. The user can therefore control the model by gesture alone, without a mouse, keyboard or other input device, even though the user cannot see any physical interaction device in the real world after putting on a head-mounted display. Operation is convenient and the user experience is improved.
Description
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a gesture-based virtual reality human-computer interaction method and system.
Background technology
Human-computer interaction technology (Human-Computer Interaction Techniques) refers to technology that enables efficient dialogue between a person and a computer through the computer's input and output devices. Besides traditional modes such as mouse and keyboard, new interaction modes such as voice control, touch control and gesture control have emerged in recent years. In the non-contact interaction mode typified by gesture control, a depth sensor detects the motion of the human body, and the computer converts the detected data into interactive commands, making human-computer interaction more direct and natural.
A new generation of head-mounted displays, such as virtual reality helmets, provides an immersive virtual reality interaction mode. A computer generates a three-dimensional virtual world inside the head-mounted display and simulates the user's senses of vision, hearing and touch, so that the user feels personally present and immersed in the virtual environment, able to observe objects in three-dimensional space freely and in real time. This is an interaction mode that fits the user experience more closely.
In traditional virtual reality interaction, the user controls the display in the virtual world through a keyboard, mouse or similar device, which is inconvenient. In particular, after putting on a virtual reality helmet, the user cannot see physical interaction devices such as the keyboard and mouse at all, which degrades the user experience.
Summary of the invention
The object of the present invention is to provide a gesture-based virtual reality human-computer interaction method and system that offer the user a gesture-based interaction mode and improve the user experience.
The object of the invention is achieved through the following technical solutions:
A gesture-based virtual reality human-computer interaction method is proposed, comprising the following steps: displaying a three-dimensional model of an object in a virtual reality space; receiving hand depth information acquired by a depth sensor; determining, based on the hand depth information, whether the hand gesture is a waving action; and, if so, controlling the three-dimensional model to move or rotate in the virtual reality space.
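The four claimed steps can be sketched as a per-frame loop. This is a minimal illustration only: the frame data shape and the caller-supplied wave classifier are assumptions, and none of these names appear in the disclosure.

```python
def frames_with_command(frames, is_wave):
    """Apply steps S12-S14 per sensor frame: return the indices of the
    frames on which the model would be commanded to move or rotate.

    `frames` is a sequence of per-frame hand depth readings (None when no
    hand is detected); `is_wave` is a gesture classifier (step S13).
    """
    commanded = []
    for i, hand in enumerate(frames):
        if hand is not None and is_wave(hand):  # S13: classify the gesture
            commanded.append(i)                 # S14: issue move/rotate
    return commanded
```

In a real system the loop body would drive the rendered model directly instead of collecting indices.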
Further, when the gesture is a waving action and the three-dimensional model is controlled to move or rotate in the virtual reality space, the method also comprises: determining the waving amplitude based on the hand depth information; and making the displacement or rotation radian of the three-dimensional model in the virtual reality space proportional to the waving amplitude.
Further, when the gesture is a waving action and the three-dimensional model is controlled to move or rotate in the virtual reality space, the method also comprises: setting a fixed displacement and a fixed rotation radian; and, for each wave, moving the three-dimensional model by the fixed displacement or rotating it by the fixed rotation radian in the virtual reality space.
Further, before determining whether the hand gesture is a waving action, the method also comprises: dividing the perception space of the depth sensor into a middle sensing region and side sensing regions. Controlling the movement or rotation of the three-dimensional model in the virtual reality space then specifically comprises: determining, based on the hand depth information, whether the hand is located in the middle sensing region or a side sensing region; when the hand is in the middle sensing region, controlling the three-dimensional model to move in the virtual reality space; and when the hand is in a side sensing region, controlling the three-dimensional model to rotate in the virtual reality space.
Further, when the hand is in the middle sensing region, controlling the three-dimensional model to move in the virtual reality space specifically comprises: determining the palm orientation based on the hand depth information; if the palm faces the user, controlling the three-dimensional model to move toward the user; and if the palm faces away from the user, controlling it to move away from the user. The side sensing regions comprise a left sensing region and a right sensing region, and when the hand is in a side sensing region, controlling the three-dimensional model to rotate in the virtual reality space specifically comprises: determining whether the hand is in the left or the right sensing region; determining the palm orientation based on the hand depth information; if the hand is in the left sensing region and the palm faces the user, rotating the three-dimensional model counterclockwise; if the hand is in the left sensing region and the palm faces away from the user, rotating it clockwise; if the hand is in the right sensing region and the palm faces the user, rotating it clockwise; and if the hand is in the right sensing region and the palm faces away from the user, rotating it counterclockwise.
A gesture-based virtual reality human-computer interaction system is also proposed, comprising a depth sensor, a head-mounted display, a gesture judging unit and a control unit. The depth sensor is mounted on the head-mounted display and acquires hand depth information within its perception space. The head-mounted display builds the virtual reality space and displays the three-dimensional model of the object. The gesture judging unit receives the hand depth information acquired by the depth sensor and determines, based on it, whether the hand gesture is a waving action; if so, the control unit controls the three-dimensional model to move or rotate in the virtual reality space.
Further, the system also comprises a gesture amplitude judging unit, which determines the waving amplitude based on the hand depth information; the control unit then makes the displacement or rotation radian of the three-dimensional model in the virtual reality space proportional to the waving amplitude.
Further, the system also comprises a setting unit for setting a fixed displacement and a fixed rotation radian; for each wave, the control unit then moves the three-dimensional model by the fixed displacement or rotates it by the fixed rotation radian in the virtual reality space.
Further, the system also comprises a region division unit which, before the judging unit determines whether the hand gesture is a waving action, divides the perception space of the depth sensor into a middle sensing region and side sensing regions. The judging unit then also determines, based on the hand depth information, whether the hand is in the middle sensing region or a side sensing region; when the hand is in the middle sensing region, the control unit controls the three-dimensional model to move in the virtual reality space, and when the hand is in a side sensing region, the control unit controls it to rotate.
Further, the region division unit also divides the side sensing regions into a left sensing region and a right sensing region, and the system also comprises a palm orientation judging unit for determining the palm orientation based on the hand depth information. When the hand is in the middle sensing region, if the palm faces the user, the control unit moves the three-dimensional model toward the user; if the palm faces away from the user, it moves the model away from the user. If the hand is in the left sensing region and the palm faces the user, the control unit rotates the model counterclockwise; if the palm faces away, clockwise. If the hand is in the right sensing region and the palm faces the user, the control unit rotates the model clockwise; if the palm faces away, counterclockwise.
The beneficial effects of the technical solution provided by the invention are as follows. In the gesture-based virtual reality human-computer interaction method and system proposed by the embodiments of the present application, the user's hand motion is analyzed from the hand depth information acquired by the depth sensor; if the motion is a waving action, the three-dimensional model of the object is controlled to move or rotate in the virtual reality space. When the hand is in the middle sensing region of the depth sensor's perception space, the displacement of the displayed three-dimensional model is controlled; when the hand is in a side sensing region, its rotation is controlled. The palm orientation determines the direction of movement or rotation, and the waving amplitude or the number of waves determines the distance moved or the radian rotated. Even though the user cannot see any physical interaction device in the real world after putting on the head-mounted display, there is no need for a mouse, keyboard or similar equipment; gestures alone control the movement and rotation of the three-dimensional model in the virtual reality space. Operation is convenient and the user experience is improved.
Brief description of the drawings
Fig. 1 is a flowchart of the gesture-based virtual reality human-computer interaction method proposed in an embodiment of the present application;
Fig. 2 is a schematic diagram of the gesture-based virtual reality human-computer interaction proposed in an embodiment of the present application;
Fig. 3 is a flowchart of the gesture-based virtual reality human-computer interaction method proposed in another embodiment of the present application;
Fig. 4 is a block diagram of the gesture-based virtual reality human-computer interaction system proposed in an embodiment of the present application.
Detailed description of the embodiments
The technical solution of the gesture-based virtual reality human-computer interaction method and system provided by the embodiments of the present invention is described in detail below with reference to the accompanying drawings.
The gesture-based virtual reality human-computer interaction method provided by an embodiment of the present application, as shown in Fig. 1, comprises the following steps:
Step S11: displaying a three-dimensional model of an object in a virtual reality space.
After the three-dimensional model of the object is built, it is imported by the human-machine interaction system and displayed in the virtual reality space; the virtual reality space is a three-dimensional virtual world generated by a computer.
As shown in Fig. 2, a depth sensor is mounted on a head-mounted display, such as a virtual reality helmet, forming a perception space that coincides with the helmet's field of view. The motion of the user's hand within the perception space is captured as data by the depth sensor.
The three-dimensional model of the object, such as the car model in Fig. 2, is built for whatever object actually needs to be controlled. It can be built in real time, or built in advance, stored in a three-dimensional model library, and loaded in response to the depth sensor's perception signal.
Step S12: receiving the hand depth information acquired by the depth sensor.
The hand depth information acquired by the depth sensor comprises skeleton feature information and motion information of the hand. The skeleton feature information includes, but is not limited to, the length and width of the fingers, the palm and each joint bone, and the width of the wrist bone; the motion information includes the position and velocity of each bone.
After the hand depth information is obtained, a three-dimensional hand model with skeleton features can be generated from the skeleton feature information. The depth information is then read from the depth sensor at a certain frequency, the motion information of each hand bone is extracted from it, and the motion and display of the three-dimensional hand model are driven accordingly, so that the features and motion of the real hand in the perception space correspond one-to-one with those of the hand model in the virtual reality space. Combined with the motion gesture of the hand, control commands can be generated to control the motion and display of the hand model and of the three-dimensional models of other objects in the virtual reality space.
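The one-to-one mapping between the sensed hand and the virtual hand model can be sketched as a per-frame update of bone positions. The class and field names below are illustrative only and are not defined in the disclosure.

```python
class VirtualHand:
    """Virtual hand model driven one-to-one by the sensed hand skeleton."""

    def __init__(self, skeleton_features):
        # Built once from the skeleton feature information
        # (e.g. bone lengths and widths read from the depth sensor).
        self.skeleton = dict(skeleton_features)
        self.bone_positions = {}

    def update(self, motion_info):
        """Apply one frame of per-bone position data from the sensor,
        keeping the virtual hand in step with the real hand."""
        self.bone_positions.update(motion_info)
```

At each sensor frame, `update` is called with the latest per-bone positions, so the virtual hand mirrors the real hand's motion.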
The three-dimensional hand model and its motion may be displayed in the virtual reality space, or not displayed.
Step S13: determining, based on the hand depth information, whether the hand gesture is a waving action.
From the hand depth information, the position, velocity and bone information of the palm and of each finger can be analyzed, and the hand gesture can be recognized from these data.
The waving gesture is defined according to the user's everyday waving habit or a commonly accepted waving action: for example, the palm is held still while the fingers move, or the hand pivots about the wrist.
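One way to realize the "palm still, fingers moving" definition, assuming tracked palm and fingertip positions per frame; the thresholds are illustrative assumptions, not values from the disclosure:

```python
def is_wave(palm_track, finger_track, palm_eps=0.01, finger_min=0.05):
    """Classify a wave per the embodiment: the palm stays roughly still
    while the fingers sweep. Tracks are lists of (x, y, z) positions
    over a short window; distances are in metres (illustrative units)."""
    def span(track, axis):
        # Total excursion of the tracked point along one axis.
        values = [p[axis] for p in track]
        return max(values) - min(values)

    palm_still = all(span(palm_track, a) < palm_eps for a in range(3))
    fingers_move = any(span(finger_track, a) > finger_min for a in range(3))
    return palm_still and fingers_move
```

The wrist-pivot variant could be handled analogously by thresholding the angular excursion of the hand about the wrist.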
If the gesture is a waving action, step S14 is performed.
Step S14: controlling the three-dimensional model to move or rotate in the virtual reality space.
Movement here means moving the three-dimensional model along the user's line of sight, toward or away from the user. Rotation here means rotating the model clockwise or counterclockwise about its own center, where clockwise and counterclockwise are defined as seen from above the model.
Changing the movement or rotation of the three-dimensional model by waving is simple. Even when the user cannot see any physical interaction device in the real world, the movement and rotation of the model in the virtual reality space can be controlled by gesture, making operation more convenient and improving the user experience.
Moving toward or away from the user is achieved by moving the object's three-dimensional model closer to or farther from the user. In fact, moving the user's position so as to decrease or increase the distance between the user and the model achieves the same result, and such variants also fall within the scope of protection of the embodiments of the present application.
The embodiment of Fig. 2 uses the user's right hand as an example; the present application does not restrict which hand is used, and left-handed operation is equally possible. The distance moved or the radian rotated can be adjusted according to the number of waves or the waving amplitude.
Specifically, as shown in Fig. 1, step S15 is performed: determining the waving amplitude based on the hand depth information; and step S16: making the displacement or rotation radian of the three-dimensional model in the virtual reality space proportional to the waving amplitude.
That is, if the user waves with a large amplitude, the three-dimensional model is moved a larger distance, or rotated by a larger radian, in the virtual reality space; if the amplitude is small, the model is moved a small distance or rotated by a smaller radian. What counts as large or small is defined by settings made in advance.
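The proportional mapping of steps S15 and S16 can be sketched as follows; the gain constants are illustrative assumptions, since the disclosure specifies only proportionality, not particular values:

```python
def scaled_command(amplitude, mode, k_move=0.5, k_rotate=2.0):
    """Map the waving amplitude to a command magnitude proportional to it:
    a displacement for movement, a radian for rotation (steps S15-S16).
    The gains k_move and k_rotate are illustrative presets."""
    if mode == "move":
        return k_move * amplitude      # displacement of the model
    if mode == "rotate":
        return k_rotate * amplitude    # rotation radian of the model
    raise ValueError("mode must be 'move' or 'rotate'")
```

Doubling the amplitude doubles the resulting displacement or radian, which is the proportional relation the embodiment requires.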
Alternatively, step S17: setting a fixed displacement and a fixed rotation radian; and step S18: for each wave, moving the three-dimensional model by the fixed displacement or rotating it by the fixed rotation radian in the virtual reality space.
That is, a fixed movement distance and a fixed rotation radian are preset, and the model is controlled by counting waves: each wave moves or rotates the model by the set distance or radian. Few waves produce little movement or rotation; many waves produce more, which matches the user's operating habits.
In an embodiment of the present application, a single wave is defined as follows: the number of times the user's fingers move repeatedly toward the palm exceeds a limit, or the number of times the user's hand pivots about the wrist exceeds a limit.
The limit is a preset value, for example three: with the palm held still, three finger motions constitute one wave, or three pivots of the hand about the wrist constitute one wave.
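Counting the repeated motions that make up one wave can be sketched as counting direction reversals of the tracked motion; the input representation and the threshold semantics are assumptions for illustration.

```python
def is_single_wave(deltas, limit=3):
    """Decide whether a motion window constitutes one wave. `deltas` is a
    sequence of signed per-frame motions (e.g. of a fingertip toward or
    away from the palm centre); each change of motion direction counts as
    one stroke, and the wave requires the preset limit (here three)."""
    strokes = 0
    prev_sign = 0
    for d in deltas:
        sign = (d > 0) - (d < 0)
        if sign != 0 and sign != prev_sign:
            strokes += 1
            prev_sign = sign
    return strokes >= limit
```

With the default limit of three, a back-and-forth-and-back motion registers as one wave, while a single sweep does not.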
Whether the control applied to the three-dimensional model is movement or rotation is distinguished by the position of the hand when the gesture is made and by the palm orientation. Before determining whether the hand gesture is a waving action, as shown in Fig. 3, step S31 is performed: dividing the perception space of the depth sensor into a middle sensing region and side sensing regions. Region I in Fig. 2 corresponds to the middle sensing region and region II to the side sensing regions. The division is tied to the user's field of view: the middle sensing region corresponds to the central viewing area of the user's line of sight, and the side sensing regions to its side areas. When the user's hand moves in the middle sensing region, this is reflected in the virtual reality space by the hand model appearing in the central viewing area; when the hand moves in a side region, the hand model appears in a side area of the user's view.
Further, the side sensing regions are divided into a left sensing region and a right sensing region, corresponding to the left and right of the user's line of sight and to the user's left and right hands: the left sensing region corresponds to the left area of the user's view and is the region for left-handed operation, and the right sensing region corresponds to the right area and is the region for right-handed operation.
After the perception space is divided, step S32 is performed: determining, based on the hand depth information, whether the hand is in the middle sensing region or a side sensing region; different regions trigger different operations on the object's three-dimensional model.
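The region test of steps S31 and S32 can be sketched as a threshold on the hand's horizontal position in the sensor frame; the boundary values are illustrative assumptions.

```python
def classify_region(x, left_bound=-0.15, right_bound=0.15):
    """Assign a hand position to the middle, left-side or right-side
    sensing region along the sensor's horizontal axis. Bounds are in
    metres relative to the sensor centre (illustrative values)."""
    if x < left_bound:
        return "left"
    if x > right_bound:
        return "right"
    return "middle"
```

Since the sensor is mounted on the head-mounted display, these bounds follow the user's field of view, as the embodiment describes.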
Step S33: determining the palm orientation based on the hand depth information.
After the region in which the hand is located has been determined, the palm orientation distinguishes the direction of movement or rotation of the three-dimensional model. With the object's model as reference, the judgment is whether the palm faces toward or away from the model; with the user's line of sight as reference, whether the palm faces toward or away from the user. The palm orientation is obtained from the palm information in the hand depth information.
When the hand is in the middle sensing region: if the palm faces the user, step S34 is performed, controlling the three-dimensional model to move toward the user; if the palm faces away from the user, step S35 is performed, controlling the model to move away from the user.
When the hand is in a side sensing region, step S36 is performed: determining whether the hand is in the left or the right sensing region. If the hand is in the left sensing region and the palm faces the user, step S37 rotates the model counterclockwise; if the hand is in the left sensing region and the palm faces away from the user, step S38 rotates it clockwise; if the hand is in the right sensing region and the palm faces the user, step S38 rotates it clockwise; if the hand is in the right sensing region and the palm faces away from the user, step S37 rotates it counterclockwise.
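The full region-and-orientation mapping of steps S34 through S38 reduces to a small decision table. The string command names below are illustrative assumptions, not identifiers from the disclosure:

```python
def command_for(region, palm_faces_user):
    """Map the sensing region and palm orientation to the control action
    of steps S34-S38 of the embodiment."""
    if region == "middle":
        # S34 / S35: movement along the user's line of sight.
        return "move_toward_user" if palm_faces_user else "move_away_from_user"
    if region == "left":
        # S37 / S38 from the left sensing region.
        return "rotate_ccw" if palm_faces_user else "rotate_cw"
    if region == "right":
        # S38 / S37 from the right sensing region.
        return "rotate_cw" if palm_faces_user else "rotate_ccw"
    raise ValueError("unknown region: %s" % region)
```

Note the left/right symmetry: the same palm orientation produces opposite rotation directions on the two sides, matching everyday expectations of pushing or pulling the near face of the object.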
In the gesture-based virtual reality human-computer interaction method proposed above, the user's hand gesture is analyzed from the hand depth information acquired by the depth sensor, and when the gesture is a wave, the movement or rotation of the three-dimensional model is controlled. The waving amplitude or the number of waves controls the extent of the movement or rotation. When the hand is in the middle sensing region of the depth sensor's perception space, the displacement of the displayed model and its direction are controlled in combination with the palm orientation; when the hand is in a side sensing region, the rotation of the model and its direction are controlled in combination with the palm orientation. Even though the user cannot see any physical interaction device in the real world after putting on the head-mounted display, no mouse, keyboard or similar equipment is needed; the waving gesture alone controls the movement and rotation of the three-dimensional model in the virtual reality space. Operation is convenient and the user experience is improved.
Based on the above gesture-based virtual reality human-computer interaction method, an embodiment of the present application also proposes a gesture-based virtual reality human-computer interaction system. As shown in Fig. 4, the system comprises a depth sensor 51, a head-mounted display 52, a gesture judging unit 53 and a control unit 54.
The head-mounted display 52 builds the virtual reality space and displays the three-dimensional model of the object. The depth sensor 51 is mounted on the head-mounted display 52 and acquires the hand depth information within its perception space; the perception space thus coincides with the field of view of the head-mounted display.
The gesture judging unit 53 receives the hand depth information acquired by the depth sensor 51 and determines, based on it, whether the hand gesture is a waving action. When the gesture is a waving action, the control unit 54 controls the three-dimensional model to move or rotate in the virtual reality space.
A user of this system, even though unable to see any physical interaction device in the real world after putting on the head-mounted display, needs no mouse, keyboard or similar equipment; gestures alone control the display of the three-dimensional model in the virtual reality space. Operation is easy and the user experience is improved.
The system also comprises a gesture amplitude judging unit 56, which determines the waving amplitude based on the hand depth information; the control unit 54 then makes the displacement or rotation radian of the three-dimensional model in the virtual reality space proportional to the waving amplitude. Specifically, a movement control unit 541 controls the distance the model moves, and a rotation control unit 542 controls the radian through which it rotates.
The system also comprises a setting unit 57 for setting a fixed displacement and a fixed rotation radian; for each wave, the control unit 54 then moves the three-dimensional model by the fixed displacement or rotates it by the fixed rotation radian in the virtual reality space. Specifically, for each wave, the movement control unit 541 moves the model by one fixed displacement, and the rotation control unit 542 rotates it by one fixed rotation radian.
The system also comprises a region division unit 55, and the control unit is divided into the movement control unit 541 and the rotation control unit 542. Before the judging unit 53 determines the position of the hand, the region division unit 55 divides the perception space of the depth sensor 51 into a middle sensing region and side sensing regions. The judging unit 53 then determines, based on the hand depth information, whether the hand is in the middle sensing region or a side sensing region. When the hand is in the middle sensing region, the movement control unit 541 controls the model to move in the virtual reality space; when the hand is in a side sensing region, the rotation control unit 542 controls it to rotate.
The region division unit 55 also divides the side sensing regions into a left sensing region and a right sensing region, and the system also comprises a palm orientation judging unit 58 for determining the palm orientation based on the hand depth information. When the hand is in the middle sensing region, if the palm faces the user, the control unit 54 moves the model toward the user; if the palm faces away from the user, it moves the model away. If the hand is in the left sensing region and the palm faces the user, the control unit 54 rotates the model counterclockwise; if the palm faces away, clockwise. If the hand is in the right sensing region and the palm faces the user, the control unit 54 rotates the model clockwise; if the palm faces away, counterclockwise.
The working method and technical effects of this gesture-based virtual reality human-computer interaction system have been described in detail in the method above and are not repeated here.
Although preferred embodiments of the present invention have been described, those skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. The appended claims are therefore intended to be interpreted as covering the preferred embodiments and all changes and modifications falling within the scope of the invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If such changes and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.
Claims (10)
1. A gesture-based virtual reality human-computer interaction method, characterized in that the method comprises the following steps:
displaying a three-dimensional model of an object in a virtual reality space;
receiving hand depth information acquired by a depth sensor;
determining, based on the hand depth information, whether the hand gesture is a waving action;
if so, controlling the three-dimensional model to move or rotate in the virtual reality space.
2. The gesture-based virtual reality human-machine interaction method according to claim 1, characterized in that, when the gesture is a waving action and the three-dimensional model is controlled to move or rotate in the virtual reality space, the method further comprises:
judging the amplitude of the waving based on the hand depth information;
controlling the displacement or the rotation angle of the three-dimensional model in the virtual reality space to be proportional to the amplitude of the waving.
3. The gesture-based virtual reality human-machine interaction method according to claim 1, characterized in that, when the gesture is a waving action and the three-dimensional model is controlled to move or rotate in the virtual reality space, the method further comprises:
setting a fixed displacement and a fixed rotation angle;
for each waving action, controlling the three-dimensional model to move by the fixed displacement or rotate by the fixed rotation angle in the virtual reality space.
4. The gesture-based virtual reality human-machine interaction method according to claim 1, characterized in that, before judging whether the gesture of the hand is a waving action, the method further comprises:
dividing the perception space of the depth sensor into a middle sensing region and a side sensing region;
and controlling the three-dimensional model to move or rotate in the virtual reality space specifically comprises:
judging, based on the hand depth information, whether the hand is located in the middle sensing region or the side sensing region;
when the hand is located in the middle sensing region, controlling the three-dimensional model to move in the virtual reality space;
when the hand is located in the side sensing region, controlling the three-dimensional model to rotate in the virtual reality space.
5. The gesture-based virtual reality human-machine interaction method according to claim 4, characterized in that controlling the three-dimensional model to move in the virtual reality space when the hand is located in the middle sensing region specifically comprises:
judging the orientation of the palm based on the hand depth information;
if the palm faces the user, controlling the three-dimensional model to move toward the user; if the palm faces away from the user, controlling the three-dimensional model to move away from the user;
wherein the side sensing region comprises a left sensing region and a right sensing region, and controlling the three-dimensional model to rotate in the virtual reality space when the hand is located in the side sensing region specifically comprises:
judging whether the hand is located in the left sensing region or the right sensing region;
judging the orientation of the palm based on the hand depth information;
if the hand is located in the left sensing region and the palm faces the user, controlling the three-dimensional model to rotate counterclockwise; if the hand is located in the left sensing region and the palm faces away from the user, controlling the three-dimensional model to rotate clockwise;
if the hand is located in the right sensing region and the palm faces the user, controlling the three-dimensional model to rotate clockwise; if the hand is located in the right sensing region and the palm faces away from the user, controlling the three-dimensional model to rotate counterclockwise.
6. A gesture-based virtual reality human-machine interaction system, characterized in that the system comprises a depth sensor, a head-mounted display, a gesture judging unit, and a control unit; the depth sensor is mounted on the head-mounted display;
the depth sensor is configured to acquire hand depth information within its perception space;
the head-mounted display is configured to construct a virtual reality space and display a three-dimensional model of an object;
the gesture judging unit is configured to receive the hand depth information acquired by the depth sensor and judge, based on the hand depth information, whether the gesture of the hand is a waving action;
if so, the control unit controls the three-dimensional model to move or rotate in the virtual reality space.
7. The gesture-based virtual reality human-machine interaction system according to claim 6, characterized in that the system further comprises a gesture amplitude judging unit;
the gesture amplitude judging unit is configured to judge the amplitude of the waving based on the hand depth information;
and the control unit is configured to control the displacement or the rotation angle of the three-dimensional model in the virtual reality space to be proportional to the amplitude of the waving.
8. The gesture-based virtual reality human-machine interaction system according to claim 6, characterized in that the system further comprises a setting unit;
the setting unit is configured to set a fixed displacement and a fixed rotation angle;
and for each waving action, the control unit controls the three-dimensional model to move by the fixed displacement or rotate by the fixed rotation angle in the virtual reality space.
9. The gesture-based virtual reality human-machine interaction system according to claim 6, characterized in that the system further comprises an area division unit;
the area division unit is configured to divide the perception space of the depth sensor into a middle sensing region and a side sensing region before the gesture judging unit judges whether the gesture of the hand is a waving action;
the gesture judging unit is further configured to judge, based on the hand depth information, whether the hand is located in the middle sensing region or the side sensing region; when the hand is located in the middle sensing region, the control unit controls the three-dimensional model to move in the virtual reality space; when the hand is located in the side sensing region, the control unit controls the three-dimensional model to rotate in the virtual reality space.
10. The gesture-based virtual reality human-machine interaction system according to claim 9, characterized in that the area division unit is further configured to divide the side sensing region into a left sensing region and a right sensing region;
the system further comprises a palm orientation judging unit, configured to judge the orientation of the palm based on the hand depth information;
when the hand is located in the middle sensing region, if the palm faces the user, the control unit controls the three-dimensional model to move toward the user; if the palm faces away from the user, the control unit controls the three-dimensional model to move away from the user;
if the hand is located in the left sensing region and the palm faces the user, the control unit controls the three-dimensional model to rotate counterclockwise; if the hand is located in the left sensing region and the palm faces away from the user, the control unit controls the three-dimensional model to rotate clockwise;
if the hand is located in the right sensing region and the palm faces the user, the control unit controls the three-dimensional model to rotate clockwise; if the hand is located in the right sensing region and the palm faces away from the user, the control unit controls the three-dimensional model to rotate counterclockwise.
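Claims 2 and 3 describe two alternative couplings between a detected wave and the model update: displacement (or rotation angle) proportional to the wave amplitude, or a fixed step per wave. A minimal Python sketch of these two variants follows; the constants and function names are assumptions for illustration, since the claims prescribe the relationship but no particular values or algorithm:

```python
# Illustrative sketch of claims 2 and 3: two ways to derive the model's
# displacement from a waving gesture. Constants are assumed values.

K_DISPLACEMENT = 0.01   # assumed proportionality constant (claim 2 variant)
FIXED_STEP = 0.05       # assumed fixed displacement per wave (claim 3 variant)

def displacement_proportional(wave_amplitude: float) -> float:
    """Claim 2: displacement proportional to the amplitude of the wave."""
    return K_DISPLACEMENT * wave_amplitude

def displacement_fixed(num_waves: int) -> float:
    """Claim 3: each wave moves the model by a fixed displacement."""
    return FIXED_STEP * num_waves
```

The same two schemes apply to rotation, with the fixed rotation angle taking the place of the fixed displacement.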
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510763565.XA CN105446481A (en) | 2015-11-11 | 2015-11-11 | Gesture based virtual reality human-machine interaction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105446481A true CN105446481A (en) | 2016-03-30 |
Family
ID=55556774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510763565.XA Pending CN105446481A (en) | 2015-11-11 | 2015-11-11 | Gesture based virtual reality human-machine interaction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105446481A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102929391A (en) * | 2012-10-23 | 2013-02-13 | 中国石油化工股份有限公司 | Reality augmented distributed control system human-computer interactive equipment and method |
CN103345064A (en) * | 2013-07-16 | 2013-10-09 | 卫荣杰 | Cap integrated with 3D identifying and 3D identifying method of cap |
CN103442244A (en) * | 2013-08-30 | 2013-12-11 | 北京京东方光电科技有限公司 | 3D glasses, 3D display system and 3D display method |
CN104808795A (en) * | 2015-04-29 | 2015-07-29 | 王子川 | Gesture recognition method for reality-augmented eyeglasses and reality-augmented eyeglasses system |
Cited By (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017079910A1 (en) * | 2015-11-11 | 2017-05-18 | 周谆 | Gesture-based virtual reality human-machine interaction method and system |
CN105975158A (en) * | 2016-05-11 | 2016-09-28 | 乐视控股(北京)有限公司 | Virtual reality interaction method and device |
CN106054627A (en) * | 2016-06-12 | 2016-10-26 | 珠海格力电器股份有限公司 | Control method and device based on gesture recognition and air conditioner |
CN106125938A (en) * | 2016-07-01 | 2016-11-16 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN106125938B (en) * | 2016-07-01 | 2021-10-22 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN106249882A (en) * | 2016-07-26 | 2016-12-21 | 华为技术有限公司 | A kind of gesture control method being applied to VR equipment and device |
US11507190B2 (en) | 2016-07-26 | 2022-11-22 | Huawei Technologies Co., Ltd. | Gesture control method applied to VR device, and apparatus |
CN106249882B (en) * | 2016-07-26 | 2022-07-12 | 华为技术有限公司 | Gesture control method and device applied to VR equipment |
CN106155326A (en) * | 2016-07-26 | 2016-11-23 | 北京小米移动软件有限公司 | Object identifying method in virtual reality communication and device, virtual reality device |
US10642569B2 (en) | 2016-07-26 | 2020-05-05 | Beijing Xiaomi Mobile Software Co., Ltd. | Methods and devices for identifying object in virtual reality communication, and virtual reality equipment |
CN107885317A (en) * | 2016-09-29 | 2018-04-06 | 阿里巴巴集团控股有限公司 | A kind of exchange method and device based on gesture |
CN106774821A (en) * | 2016-11-08 | 2017-05-31 | 广州视源电子科技股份有限公司 | display method and system based on virtual reality technology |
CN106774821B (en) * | 2016-11-08 | 2020-05-19 | 广州视源电子科技股份有限公司 | Display method and system based on virtual reality technology |
CN110073316A (en) * | 2016-12-19 | 2019-07-30 | 微软技术许可有限责任公司 | Interaction virtual objects in mixed reality environment |
CN106774941A (en) * | 2017-01-16 | 2017-05-31 | 福建农林大学 | The solution that touch screen terminal 3D virtual roles conflict with scene camera motion |
CN106774941B (en) * | 2017-01-16 | 2019-11-19 | 福建农林大学 | Touch screen terminal 3D virtual role moves the solution to conflict with scene camera |
CN106886285A (en) * | 2017-01-20 | 2017-06-23 | 西安电子科技大学 | A kind of historical relic interactive system and operating method based on virtual reality |
WO2018149318A1 (en) * | 2017-02-17 | 2018-08-23 | 阿里巴巴集团控股有限公司 | Input method, device, apparatus, system, and computer storage medium |
CN106951069A (en) * | 2017-02-23 | 2017-07-14 | 深圳市金立通信设备有限公司 | The control method and virtual reality device of a kind of virtual reality interface |
CN107281750A (en) * | 2017-05-03 | 2017-10-24 | 深圳市恒科电子科技有限公司 | VR aobvious action identification methods and VR show |
CN107463257A (en) * | 2017-08-03 | 2017-12-12 | 微景天下(北京)科技有限公司 | A kind of man-machine interaction method and device of Virtual Reality system |
CN107463257B (en) * | 2017-08-03 | 2020-08-21 | 微景天下(北京)科技有限公司 | Human-computer interaction method and device of virtual reality VR system |
CN107992189A (en) * | 2017-09-22 | 2018-05-04 | 深圳市魔眼科技有限公司 | A kind of virtual reality six degree of freedom exchange method, device, terminal and storage medium |
CN107767344A (en) * | 2017-11-07 | 2018-03-06 | 上海漂视网络科技有限公司 | A kind of cloud methods of exhibiting of 3D models |
CN108227927A (en) * | 2018-01-09 | 2018-06-29 | 北京小米移动软件有限公司 | Product introduction method, apparatus and electronic equipment based on VR |
CN108227927B (en) * | 2018-01-09 | 2021-07-23 | 北京小米移动软件有限公司 | VR-based product display method and device and electronic equipment |
CN108319363A (en) * | 2018-01-09 | 2018-07-24 | 北京小米移动软件有限公司 | Product introduction method, apparatus based on VR and electronic equipment |
CN109410691A (en) * | 2018-12-17 | 2019-03-01 | 深圳市中智仿真科技有限公司 | A kind of automobile of gesture control function drives training analog machine |
CN109732606A (en) * | 2019-02-13 | 2019-05-10 | 深圳大学 | Long-range control method, device, system and the storage medium of mechanical arm |
CN110850977A (en) * | 2019-11-06 | 2020-02-28 | 成都威爱新经济技术研究院有限公司 | Stereoscopic image interaction method based on 6DOF head-mounted display |
CN110850977B (en) * | 2019-11-06 | 2023-10-31 | 成都威爱新经济技术研究院有限公司 | Stereoscopic image interaction method based on 6DOF head-mounted display |
CN111640183A (en) * | 2020-06-04 | 2020-09-08 | 上海商汤智能科技有限公司 | AR data display control method and device |
CN112799507A (en) * | 2021-01-15 | 2021-05-14 | 北京航空航天大学 | Human body virtual model display method and device, electronic equipment and storage medium |
CN112799507B (en) * | 2021-01-15 | 2022-01-04 | 北京航空航天大学 | Human body virtual model display method and device, electronic equipment and storage medium |
CN113835527A (en) * | 2021-09-30 | 2021-12-24 | 北京市商汤科技开发有限公司 | Device control method, device, electronic device and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105446481A (en) | Gesture based virtual reality human-machine interaction method and system | |
CN110603509B (en) | Joint of direct and indirect interactions in a computer-mediated reality environment | |
US10817128B2 (en) | Input device for VR/AR applications | |
Martínez et al. | Identifying virtual 3D geometric shapes with a vibrotactile glove | |
CN102779000B (en) | User interaction system and method | |
US11644907B2 (en) | Systems, devices, and methods for physical surface tracking with a stylus device in an AR/VR environment | |
US9483119B2 (en) | Stereo interactive method, display device, operating stick and system | |
US20130307829A1 (en) | Haptic-acoustic pen | |
JP2016186696A (en) | Haptic stylus | |
CN103793060A (en) | User interaction system and method | |
Xia | New advances for haptic rendering: state of the art | |
CN108563341B (en) | Three-dimensional touch electronic pen with vibration tactile feedback and method | |
CN103064514A (en) | Method for achieving space menu in immersive virtual reality system | |
US11397478B1 (en) | Systems, devices, and methods for physical surface tracking with a stylus device in an AR/VR environment | |
KR20130099570A (en) | System and method for implemeting 3-dimensional user interface | |
JP2018113025A (en) | Systems and methods for compliance illusions with haptics | |
CN104516649A (en) | Intelligent cell phone operating technology based on motion-sensing technology | |
CN114529691A (en) | Window control method, electronic device and computer readable storage medium | |
Cui et al. | Mid-air interaction with optical tracking for 3D modeling | |
CN109102571B (en) | Virtual image control method, device, equipment and storage medium thereof | |
Zhang et al. | A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality | |
Bai et al. | Asymmetric Bimanual Interaction for Mobile Virtual Reality. | |
Breslauer et al. | Leap motion sensor for natural user interface | |
Machuca et al. | Interaction Devices and Techniques for 3D Sketching | |
CN111240483B (en) | Operation control method, head-mounted device, and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20160330 |