CN204463032U - System for inputting gestures in a 3D scene, and virtual reality helmet - Google Patents

System for inputting gestures in a 3D scene, and virtual reality helmet

Info

Publication number
CN204463032U
Authority
CN
China
Prior art keywords
gesture
shape
information
real
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201420860350.0U
Other languages
Chinese (zh)
Inventor
姜茂山
徐国庆
周宏伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Optical Technology Co Ltd
Original Assignee
Qingdao Goertek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Goertek Co Ltd
Priority to CN201420860350.0U
Application granted
Publication of CN204463032U
Legal status: Active

Abstract

The utility model discloses a system for inputting gestures in a 3D scene, and a virtual reality helmet. The system comprises: a gesture collection unit for collecting, in real time, at least two channels of video stream data of a user's gesture; a gesture recognition unit for recognizing the changing gesture shape from the at least two channels of video stream data; a gesture parsing unit for parsing the changing gesture shape to obtain the corresponding gesture motion; and a gesture display unit for processing the gesture motion into a 3D image and displaying it in the 3D scene in real time. The technical solution of the utility model can display the user's real gesture in the 3D scene in real time, which enhances the realism of the system and improves the user experience.

Description

System for inputting gestures in a 3D scene, and virtual reality helmet
Technical field
The utility model relates to the technical field of virtual reality, and in particular to a system for inputting gestures in a 3D scene and a virtual reality helmet.
Background technology
Virtual reality technology is expected to develop into a new breakthrough that changes the way people live. At present, how users interact with targets in the virtual world is a huge challenge facing virtual reality technology, so it still has a long way to go before it truly enters the consumer market.
Existing virtual reality equipment still blocks the exchange between the user and the virtual world: it cannot track the body parts of interest in the 3D scene. For example, the hand motions of a user cannot yet be faithfully simulated.
Utility model content
The utility model provides a system for inputting gestures in a 3D scene, and a virtual reality helmet, to solve the problem that the prior art cannot faithfully simulate the user's hand motions in a 3D scene.
To achieve the above object, the technical solution of the utility model is realized as follows:
In one aspect, the utility model provides a system for inputting gestures in a 3D scene, the system comprising: a gesture collection unit, a gesture recognition unit, a gesture parsing unit and a gesture display unit;
the gesture collection unit is configured to collect, in real time, at least two channels of video stream data of the user's gesture;
the gesture recognition unit is configured to recognize the changing gesture shape from the at least two channels of video stream data;
the gesture parsing unit is configured to parse the changing gesture shape to obtain the corresponding gesture motion;
the gesture display unit is configured to process the gesture motion into a 3D image and display it in the 3D scene in real time.
Preferably, the system further comprises a gesture operation unit;
the gesture operation unit is configured to obtain, from a preset semantic database, the gesture semantics of the gesture motion and the operation instruction corresponding to the gesture semantics, and to send the operation instruction corresponding to the gesture semantics to the 3D scene, so that the 3D scene carries out the operation of the gesture semantics.
Preferably, the gesture recognition unit comprises:
a sampling module, configured to sample each of the at least two channels of video stream data, obtaining a video image for each sample;
a gesture contour extraction module, configured to judge whether the video image contains hand information and, if so, binarize the video image and extract the hand contour information;
a gesture shape recognition module, configured to recognize, in a preset gesture model database, the gesture shape corresponding to the hand contour information;
a gesture shape synthesis module, configured to synthesize the gesture shapes recognized from each sample of each channel of video stream data, obtaining the changing gesture shape.
Preferably, the gesture parsing unit comprises:
a position information acquisition module, configured to obtain the relative spatial position information of the changing gesture shape;
a contact information acquisition module, configured to determine the contacts in the changing gesture shape and obtain the change information of those contacts, where a contact is a characteristic key point marking the hand;
a gesture motion acquisition module, configured to obtain, from a preset action database, the gesture motion corresponding to the relative spatial position information and the change information of the contacts.
Further preferably, the position information acquisition module is specifically configured to:
obtain the angle information of the gesture shape change from the video image information of the at least two channels of video stream data;
obtain the distance information of the user's gesture according to the angle information of the gesture shape change;
obtain the relative spatial position information of the user's gesture according to the angle information of the gesture shape change and the distance information of the user's gesture.
Preferably, the position information acquisition module is specifically configured to:
obtain the angle information of the gesture shape change from the video image information of the at least two channels of video stream data;
sense the distance information of the user's gesture in real time through a distance sensor;
obtain the relative spatial position information of the user's gesture according to the angle information of the gesture shape change and the distance information of the user's gesture.
Preferably, the gesture collection unit comprises two cameras;
the two cameras are configured to collect, in real time, two channels of video stream data of the user's gesture.
In another aspect, the utility model provides a virtual reality helmet comprising the system for inputting gestures in a 3D scene provided by the above technical solution.
Preferably, the gesture collection unit of the system for inputting gestures in the 3D scene is a front camera and a bottom camera arranged on the virtual reality helmet.
The beneficial effects of the embodiments of the utility model are as follows. The embodiments disclose a system for inputting gestures in a 3D scene, and a virtual reality helmet. The gesture collection unit of the system collects, in real time, at least two channels of video stream data of the user's gesture; the gesture recognition unit recognizes from them a gesture shape carrying complete hand information; the gesture parsing unit parses that gesture shape to obtain the corresponding gesture motion; and the gesture display unit processes the gesture motion into a 3D image and displays it in the 3D scene in real time, thereby achieving the object of displaying the user's real gesture in the 3D scene.
Further, in the preferred technical solution of the utility model, the gesture motion is also processed by the gesture operation unit to generate the corresponding gesture semantics, and the 3D scene carries out the corresponding operation according to those semantics, thereby achieving the object of controlling the 3D scene by the input gesture. Compared with the prior art, this technical solution can interact with the virtual device without keyboard or mouse, and the interaction imposes no extra constraints on the user or the usage environment, i.e. no markers or sensors need to be worn on the user's body; the user controls and interacts with the scene in real time through real gestures, which improves the user experience.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the architecture of a system for inputting gestures in a 3D scene, provided by an embodiment of the utility model;
Fig. 2 is a schematic flow chart of a technique for operating a virtual reality helmet through gesture motions, provided by an embodiment of the utility model.
Embodiment
To make the purpose, technical solution and advantages of the utility model clearer, the embodiments of the utility model are described in further detail below with reference to the accompanying drawings.
The overall idea of the utility model is: use at least two cameras to collect the user's gesture in real time from different angles; recognize the user's gesture shape from the video stream data collected by each camera; parse the recognized gesture shape to obtain the corresponding gesture motion; process the gesture motion into a 3D image displayed in the 3D scene in real time; and make the 3D scene carry out the operation of the gesture motion, thereby completing human-computer interaction through the user's real gestures.
One aspect of the utility model provides a system for inputting gestures in a 3D scene. Fig. 1 is a schematic diagram of the system architecture provided by an embodiment of the utility model; as shown in Fig. 1, the system comprises: a gesture collection unit 11, a gesture recognition unit 12, a gesture parsing unit 13 and a gesture display unit 14.
The gesture collection unit 11 is configured to collect, in real time, at least two channels of video stream data of the user's gesture.
The gesture collection unit 11 can collect the user's gesture in real time from different angles through multiple cameras, thereby obtaining multiple channels of video stream data. In practical applications, the number of cameras, and hence the number of channels of video stream data, can be chosen according to the data processing performance and the accuracy requirements of the system. It should be noted that the cameras in the gesture collection unit 11 can be ordinary white-light cameras or infrared cameras; this embodiment places no particular limitation on the gesture collection unit. Preferably, the gesture collection unit 11 comprises two cameras configured to collect two channels of video stream data of the user's gesture in real time.
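The patent does not fix an implementation for the collection unit; the following Python sketch, a minimal illustration assuming OpenCV and hypothetical device indices 0 and 1 for the two cameras, grabs one frame per channel so that each sample can be handed to the gesture recognition unit:

```python
import cv2

# Hypothetical device indices for the two cameras (e.g. front and bottom).
cams = [cv2.VideoCapture(0), cv2.VideoCapture(1)]

def grab_gesture_frames():
    """Collect one video image per camera, i.e. one sample of each channel."""
    frames = []
    for cam in cams:
        ok, frame = cam.read()
        if not ok:
            return None  # a channel dropped out; skip this sample
        frames.append(frame)
    return frames  # two images of the same gesture taken from different angles
```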
The gesture recognition unit 12 is configured to recognize the changing gesture shape from the at least two channels of video stream data.
The gesture parsing unit 13 is configured to parse the changing gesture shape to obtain the corresponding gesture motion.
The gesture display unit 14 is configured to process the gesture motion into a 3D image and display it in the 3D scene in real time.
In the utility model, the gesture display unit 14 can process the gesture motion into a 3D image that is superimposed and projected into the 3D scene, realizing the real-time display of the gesture motion in the 3D scene. Preferably, split-screen technology can be adopted to project the 3D image into the 3D scene: a main display screen shows the 3D scene while another display screen shows the gesture motion processed into a 3D image, and by the relevant principles of optics what is presented to the human eye is a single 3D scene containing the gesture motion.
Preferably, the system further comprises a gesture operation unit, configured to obtain, from a preset semantic database, the gesture semantics corresponding to the above gesture motion and the operation instruction corresponding to those semantics, and to send that operation instruction to the 3D scene, so that the 3D scene carries out the operation of the gesture semantics.
The semantic database can be a data relation table in which each gesture motion corresponds to one gesture semantics and to the operation instruction for those semantics; for example, a translation gesture motion can be defined as sliding the screen to switch the displayed content.
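As a minimal sketch of such a relation table, assuming hypothetical motion names, semantics and instruction strings (the patent does not enumerate the database contents or the scene interface):

```python
# Hypothetical semantic database: gesture motion -> (gesture semantics, operation instruction).
SEMANTIC_DB = {
    "swipe_left":  ("slide screen", "SWITCH_CONTENT_NEXT"),
    "swipe_right": ("slide screen", "SWITCH_CONTENT_PREV"),
    "fist":        ("grab object",  "GRAB_SELECTED"),
}

def gesture_operation_unit(gesture_motion, scene):
    """Look up the semantics of a motion and forward the operation instruction to the 3D scene."""
    entry = SEMANTIC_DB.get(gesture_motion)
    if entry is None:
        return  # unmapped motions are only displayed, not executed
    semantics, instruction = entry
    scene.execute(instruction)  # assumed scene interface, not specified by the patent
```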
The gesture collection unit of this embodiment collects, in real time, at least two channels of video stream data of the user's gesture; the gesture recognition unit recognizes from them a gesture shape carrying complete hand information; the gesture parsing unit parses that gesture shape to obtain the corresponding gesture motion; and the gesture display unit processes the gesture motion into a 3D image and displays it in the 3D scene in real time, thereby achieving the object of displaying the user's real gesture in the 3D scene.
Further, in the preferred embodiment, the gesture motion is also processed by the gesture operation unit to generate the corresponding gesture semantics, and the 3D scene carries out the operation of those semantics, thereby achieving the object of controlling the 3D scene by the input gesture. Compared with the prior art, this technical solution can interact with the virtual reality device without keyboard or mouse, and the interaction imposes no extra constraints on the user or the usage environment, i.e. no markers or sensors need to be worn on the user's body.
Preferably, the gesture recognition unit 12 in the embodiment shown in Fig. 1 comprises: a sampling module, a gesture contour extraction module, a gesture shape recognition module and a gesture shape synthesis module.
The sampling module is configured to sample each of the at least two channels of video stream data, obtaining a video image for each sample.
The gesture contour extraction module is configured to judge whether the video image contains hand information and, if so, binarize the video image and extract the hand contour information.
It should be noted that the gesture contour extraction module in this embodiment can judge whether the video image contains hand information using prior-art techniques; for example, by analyzing whether the video image contains shapes such as the strip-like shapes of the five fingers and the shape of a palm, it can judge whether the image contains hand information.
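The binarization and contour extraction are left to the prior art; one common realization is skin-color thresholding followed by contour detection, sketched below with OpenCV (the HSV threshold values are assumptions and would need calibration in practice):

```python
import cv2
import numpy as np

def extract_hand_contour(frame):
    """Binarize a video image and return the largest contour as the hand outline."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed skin-color range; a deployed system would calibrate per user and lighting.
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no hand information in this image
    return max(contours, key=cv2.contourArea)
```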
The gesture shape recognition module is configured to recognize, in a preset gesture model database, the gesture shape corresponding to the hand contour information.
For example, when the user uses the system for the first time, the gesture contour extraction module can save the user's various gestures (such as the five fingers spread, a fist, etc.) into the gesture model database; the gesture shape recognition module can then recognize the gesture shape corresponding to the hand contour information from this database of the user's real gestures. Alternatively, the gesture model database can be pre-loaded with hand-shape features (such as the different state features of the five fingers), and the corresponding gesture shape is recognized by detecting the state features of each finger in the hand contour information.
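One hedged way to match an extracted contour against such a database is Hu-moment shape comparison; in the sketch below the database entries and the similarity threshold are hypothetical:

```python
import cv2

# Hypothetical gesture model database: gesture name -> reference contour saved at first use.
GESTURE_MODEL_DB = {}  # e.g. {"five_fingers_spread": contour, "fist": contour}

def recognize_gesture_shape(contour, max_distance=0.3):
    """Return the database gesture whose contour is most similar, if similar enough."""
    best_name, best_dist = None, max_distance
    for name, reference in GESTURE_MODEL_DB.items():
        # cv2.matchShapes compares Hu-moment invariants; smaller means more similar.
        dist = cv2.matchShapes(contour, reference, cv2.CONTOURS_MATCH_I1, 0.0)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```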
The gesture shape synthesis module is configured to synthesize the gesture shapes recognized after each sample of each channel of video stream data, obtaining the changing gesture shape.
In practical applications, each channel of video stream data captures only part of the user's hand, so the complete hand cannot be obtained from a single channel at any one moment. This embodiment therefore adopts the gesture shape synthesis module, which synthesizes the gesture shapes recognized after each sample of each channel, to obtain a gesture shape carrying more information.
To sum up, the gesture recognition unit recognizes the corresponding gesture shape from the gesture contour information in each channel of video stream data and synthesizes the gestures recognized across the multiple channels, obtaining a gesture shape that contains the complete information of the user's hand; this enhances the realism of the gesture displayed in the 3D scene and improves the user experience.
Preferably, the gesture parsing unit in the preferred embodiment shown in Fig. 1 comprises: a position information acquisition module, a contact information acquisition module and a gesture motion acquisition module.
The position information acquisition module is configured to obtain the relative spatial position information of the changing gesture shape.
When multiple cameras photograph the user's gesture at the same moment, the ray between each camera and the user's hand forms a certain angle. As the user's gesture moves or changes, the angles formed between the cameras' rays and the gesture may change, and these angle changes appear in the video stream image data as changes of spatial position. The technical solution obtains the relative spatial position information of the changing gesture shape based on this objective fact.
Specifically, the utility model schematically illustrates two ways of obtaining the relative spatial position information of the changing gesture shape. The first way is:
the position information acquisition module obtains the angle information of the gesture shape change from the video image information of the at least two channels of video stream data of the gesture collection unit; obtains the distance information of the user's gesture according to that angle information; and combines the angle information of the gesture shape change with the distance information of the user's gesture to obtain the relative spatial position information of the user's gesture.
The second way is:
the position information acquisition module obtains the angle information of the gesture shape change from the video image information of the at least two channels of video stream data of the gesture collection unit; senses the distance information of the user's gesture in real time through a distance sensor; and combines the angle information of the gesture shape change with the distance information of the user's gesture to obtain the relative spatial position information of the user's gesture.
Both schemes improve the accuracy of the obtained relative spatial position information by combining the angle information of the gesture change with real-time distance information of the gesture. The first scheme needs no extra sensor, obtaining the relative spatial position information from the information carried by the video stream data itself, but it must be realized by a sophisticated algorithm, which increases the computational complexity of the system. The second scheme senses the distance change of the gesture in real time through a distance sensor, so higher-precision relative spatial position information can be obtained with a simple algorithm. In actual use, a suitable scheme can be selected according to the specific design requirements.
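As an illustration of the first scheme, the following sketch triangulates the hand's depth from the angles at which two cameras see the same hand point, under idealized geometry; the baseline length and the conversion from pixels to angles are assumptions, not taken from the patent:

```python
import math

def gesture_depth(angle_left, angle_right, baseline_m=0.1):
    """Depth of the hand in front of a two-camera baseline.

    angle_left / angle_right: angles (radians) between the baseline and each
    camera's ray to the hand point; baseline_m: camera separation in metres."""
    gamma = math.pi - angle_left - angle_right  # angle at the hand point
    if gamma <= 0:
        return None  # degenerate geometry: the rays do not converge
    # Law of sines: distance from the left camera to the hand point.
    dist_left = baseline_m * math.sin(angle_right) / math.sin(gamma)
    # Perpendicular depth relative to the baseline.
    return dist_left * math.sin(angle_left)
```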
The contact information acquisition module is configured to determine the contacts in the changing gesture shape and obtain the change information of those contacts, where a contact is a characteristic key point marking the hand.
It should be noted that the contacts in this module are characteristic key points marking the hand, preferably the joint points of the hand, which allow the changing gesture shape to be determined more accurately. The technical solution places no particular limits on the number of contacts in the gesture shape or on how they are set; in the design process, the requirements on system accuracy and the data processing capability of the system can be weighed comprehensively.
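A minimal sketch of the change information for such contacts, with the number and layout of key points treated as the open design choices the patent describes:

```python
import numpy as np

def contact_change(prev_contacts, curr_contacts):
    """Per-contact displacement between two samples.

    Both arguments are (N, 3) arrays of key-point coordinates, e.g. hand joint
    positions; the count N is a design choice the patent deliberately leaves open."""
    return np.asarray(curr_contacts) - np.asarray(prev_contacts)
```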
The gesture motion acquisition module is configured to obtain, from a preset action database, the gesture motion corresponding to the relative spatial position information and the change information of the contacts.
Another aspect of the utility model provides a virtual reality helmet comprising the system for inputting gestures in a 3D scene of the above technical solution.
Preferably, the gesture collection unit of the system for inputting gestures in the 3D scene is a front camera and a bottom camera arranged on the virtual reality helmet.
To explain the beneficial effects of this technical solution in more detail, a virtual reality helmet is described as an example.
The virtual reality helmet comprises: a 3D display screen for displaying the 3D virtual reality scene, and the system for inputting gestures in a 3D scene of the above technical solution, where the gesture collection unit of that system is a front camera and a bottom camera arranged on the virtual reality helmet.
The working principle of the virtual reality helmet is as follows: the front camera and the bottom camera collect the user's gesture in real time, obtaining two channels of video stream data; the gesture shape is recognized from the two channels of video stream data; the corresponding gesture motion is obtained by parsing the gesture shape; the gesture motion is processed into a 3D image and displayed in the 3D virtual reality scene in real time; meanwhile, the gesture semantics corresponding to the gesture motion are sent to the main processor of the virtual reality helmet, which controls the helmet to carry out the operation of those semantics.
The technical flow of obtaining the user's gesture motion from the video stream data and driving the virtual reality helmet to carry out the corresponding operation is shown in Fig. 2:
S200: Obtain the video stream data collected by the front camera and the bottom camera.
S201: Sample each of the two channels of video stream data at the current moment, obtaining the corresponding video images.
S202: Judge whether the video images contain the user's gesture; if so, go to step S203; if not, obtain the video stream data of the next moment.
S203: Binarize the video image and extract the hand contour information.
S204: Recognize the current gesture shape from the hand contour information according to the preset static gesture model.
S205: Synthesize the gesture shapes recognized after sampling the two channels of video stream data, obtaining a gesture shape containing more hand information.
S206: Obtain the spatial position change information of the gesture.
S207: According to the change information of the gesture contacts and the spatial position change information of the gesture, use an HMM (Hidden Markov Model) dynamic gesture recognition method to obtain the gesture motion corresponding to the changing gesture shape (see the sketch after this list).
S208: Obtain the gesture semantics corresponding to the gesture motion from the preset semantic database.
S209: Control the virtual reality helmet to carry out the operation of the above gesture semantics.
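Step S207 names HMM-based dynamic gesture recognition without fixing an implementation. A hedged sketch using the third-party hmmlearn library (an assumed choice, not named by the patent): one Gaussian HMM is trained per gesture motion on sequences of contact-change and position-change features, and recognition picks the model with the highest likelihood. The feature layout and model sizes are assumptions:

```python
import numpy as np
from hmmlearn import hmm  # third-party library; an assumed choice

def train_gesture_models(training_data, n_states=4):
    """training_data: {motion_name: list of (T, D) feature sequences}, where each
    frame's D features stack contact changes and spatial position changes."""
    models = {}
    for name, sequences in training_data.items():
        X = np.vstack(sequences)                   # concatenate all sequences
        lengths = [len(seq) for seq in sequences]  # per-sequence lengths
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        model.fit(X, lengths)
        models[name] = model
    return models

def recognize_motion(models, feature_sequence):
    """Return the motion whose HMM assigns the observed sequence the highest likelihood."""
    scores = {name: m.score(feature_sequence) for name, m in models.items()}
    return max(scores, key=scores.get)
```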
In this embodiment, the system for inputting gestures in a 3D scene is applied to a virtual reality helmet, taking the motion of the user's own hand as the input of the helmet, so that the user completes the relevant operations in the virtual reality scene with his or her own hand; this improves the user experience and optimizes human-computer interaction.
In summary, the embodiments of the utility model disclose a system for inputting gestures in a 3D scene, and a virtual reality helmet. The gesture collection unit of the system collects, in real time, at least two channels of video stream data of the user's gesture; the gesture recognition unit recognizes from them a gesture shape carrying complete hand information; the gesture parsing unit parses that gesture shape to obtain the corresponding gesture motion; and the gesture display unit processes the gesture motion into a 3D image and displays it in the 3D scene in real time, thereby achieving the object of displaying the user's real gesture in the 3D scene. Further, in the preferred technical solution, the gesture motion is also processed by the gesture operation unit to generate the corresponding gesture semantics, and the 3D scene carries out the corresponding operation according to those semantics, thereby achieving the object of controlling the 3D scene by the input gesture. Compared with the prior art, this technical solution can interact with the virtual device without keyboard or mouse, and the interaction imposes no extra constraints on the user or the usage environment, i.e. no markers or sensors need to be worn on the user's body; the user controls and interacts with the scene through real gestures, which improves the user experience.
The above are only preferred embodiments of the utility model and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the utility model shall be included in the scope of protection of the utility model.

Claims (9)

1. A system for inputting gestures in a 3D scene, characterized in that it comprises: a gesture collection unit, a gesture recognition unit, a gesture parsing unit and a gesture display unit;
the gesture collection unit is configured to collect, in real time, at least two channels of video stream data of the user's gesture;
the gesture recognition unit is configured to recognize the changing gesture shape from the at least two channels of video stream data;
the gesture parsing unit is configured to parse the changing gesture shape to obtain the corresponding gesture motion;
the gesture display unit is configured to process the gesture motion into a 3D image and display it in the 3D scene in real time.
2. The system according to claim 1, characterized in that the system further comprises a gesture operation unit;
the gesture operation unit is configured to obtain, from a preset semantic database, the gesture semantics of the gesture motion and the operation instruction corresponding to the gesture semantics, and to send the operation instruction corresponding to the gesture semantics to the 3D scene, so that the 3D scene carries out the operation of the gesture semantics.
3. The system according to claim 1, characterized in that the gesture recognition unit comprises:
a sampling module, configured to sample each of the at least two channels of video stream data, obtaining a video image for each sample;
a gesture contour extraction module, configured to judge whether the video image contains hand information and, if so, binarize the video image and extract the hand contour information;
a gesture shape recognition module, configured to recognize, in a preset gesture model database, the gesture shape corresponding to the hand contour information;
a gesture shape synthesis module, configured to synthesize the gesture shapes recognized from each sample of each channel of video stream data, obtaining the changing gesture shape.
4. The system according to claim 1, characterized in that the gesture parsing unit comprises:
a position information acquisition module, configured to obtain the relative spatial position information of the changing gesture shape;
a contact information acquisition module, configured to determine the contacts in the changing gesture shape and obtain the change information of those contacts, where a contact is a characteristic key point marking the hand;
a gesture motion acquisition module, configured to obtain, from a preset action database, the gesture motion corresponding to the relative spatial position information and the change information of the contacts.
5. The system according to claim 4, characterized in that the position information acquisition module is specifically configured to:
obtain the angle information of the gesture shape change from the video image information of the at least two channels of video stream data;
obtain the distance information of the user's gesture according to the angle information of the gesture shape change, or sense the distance information of the user's gesture in real time through a distance sensor;
obtain the relative spatial position information of the user's gesture according to the angle information of the gesture shape change and the distance information of the user's gesture.
6. The system according to claim 4, characterized in that the position information acquisition module is specifically configured to:
obtain the angle information of the gesture shape change from the video image information of the at least two channels of video stream data;
sense the distance information of the user's gesture in real time through a distance sensor;
obtain the relative spatial position information of the user's gesture according to the angle information of the gesture shape change and the distance information of the user's gesture.
7. The system according to any one of claims 1-6, characterized in that the gesture collection unit comprises two cameras;
the two cameras are configured to collect, in real time, two channels of video stream data of the user's gesture.
8. A virtual reality helmet, characterized in that it comprises the system for inputting gestures in a 3D scene according to any one of claims 1-7.
9. The virtual reality helmet according to claim 8, characterized in that the gesture collection unit of the system for inputting gestures in the 3D scene is a front camera and a bottom camera arranged on the virtual reality helmet.
CN201420860350.0U 2014-12-30 2014-12-30 System for inputting gestures in a 3D scene, and virtual reality helmet Active CN204463032U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201420860350.0U CN204463032U (en) 2014-12-30 2014-12-30 System for inputting gestures in a 3D scene, and virtual reality helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201420860350.0U CN204463032U (en) 2014-12-30 2014-12-30 System for inputting gestures in a 3D scene, and virtual reality helmet

Publications (1)

Publication Number Publication Date
CN204463032U (en) 2015-07-08

Family

ID=53669892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201420860350.0U Active CN204463032U (en) 2014-12-30 2014-12-30 System for inputting gestures in a 3D scene, and virtual reality helmet

Country Status (1)

Country Link
CN (1) CN204463032U (en)


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571510A (en) * 2014-12-30 2015-04-29 青岛歌尔声学科技有限公司 Gesture input system and method in 3D scene
WO2016107231A1 (en) * 2014-12-30 2016-07-07 青岛歌尔声学科技有限公司 System and method for inputting gestures in 3d scene
US10482670B2 (en) 2014-12-30 2019-11-19 Qingdao Goertek Technology Co., Ltd. Method for reproducing object in 3D scene and virtual reality head-mounted device
US10466798B2 (en) 2014-12-30 2019-11-05 Qingdao Goertek Technology Co., Ltd. System and method for inputting gestures in 3D scene
CN105107200A (en) * 2015-08-14 2015-12-02 济南中景电子科技有限公司 Face change system and method based on real-time deep somatosensory interaction and augmented reality technology
CN105107200B (en) * 2015-08-14 2018-09-25 济南中景电子科技有限公司 Face Changing system and method based on real-time deep body feeling interaction and augmented reality
CN105487673A (en) * 2016-01-04 2016-04-13 京东方科技集团股份有限公司 Man-machine interactive system, method and device
US10585488B2 (en) 2016-01-04 2020-03-10 Boe Technology Group Co., Ltd. System, method, and apparatus for man-machine interaction
CN106527677A (en) * 2016-01-27 2017-03-22 深圳市原点创新设计有限公司 Method and device for interaction between VR/AR system and user
CN106125903A (en) * 2016-04-24 2016-11-16 林云帆 Many people interactive system and method
CN106095068A (en) * 2016-04-26 2016-11-09 乐视控股(北京)有限公司 The control method of virtual image and device
CN105955469A (en) * 2016-04-26 2016-09-21 乐视控股(北京)有限公司 Control method and device of virtual image
CN106980362A (en) * 2016-10-09 2017-07-25 阿里巴巴集团控股有限公司 Input method and device based on virtual reality scenario
US11054912B2 (en) 2016-10-09 2021-07-06 Advanced New Technologies Co., Ltd. Three-dimensional graphical user interface for informational input in virtual reality environment
US10474242B2 (en) 2016-10-09 2019-11-12 Alibaba Group Holding Limited Three-dimensional graphical user interface for informational input in virtual reality environment
CN106790996A (en) * 2016-11-25 2017-05-31 杭州当虹科技有限公司 Mobile phone virtual reality interactive system and method
WO2018098862A1 (en) * 2016-11-29 2018-06-07 歌尔科技有限公司 Gesture recognition method and device for virtual reality apparatus, and virtual reality apparatus
CN106598235B (en) * 2016-11-29 2019-10-22 歌尔科技有限公司 Gesture identification method, device and virtual reality device for virtual reality device
CN106598235A (en) * 2016-11-29 2017-04-26 歌尔科技有限公司 Gesture recognition method and apparatus for virtual reality device, and virtual reality device
WO2018121779A1 (en) * 2016-12-30 2018-07-05 中兴通讯股份有限公司 Augmented reality implementation method, augmented reality implementation device, and augmented reality implementation system
CN106873778A (en) * 2017-01-23 2017-06-20 深圳超多维科技有限公司 A kind of progress control method of application, device and virtual reality device
CN106873778B (en) * 2017-01-23 2020-04-28 深圳超多维科技有限公司 Application operation control method and device and virtual reality equipment
CN109872519A (en) * 2019-01-13 2019-06-11 上海萃钛智能科技有限公司 A kind of wear-type remote control installation and its remote control method

Similar Documents

Publication Publication Date Title
CN204463032U (en) System for inputting gestures in a 3D scene, and virtual reality helmet
CN104571510A (en) Gesture input system and method in 3D scene
CN106598227B (en) Gesture identification method based on Leap Motion and Kinect
CN104571511B (en) System and method for reproducing an object in a 3D scene
CN105739702B (en) Multi-pose finger tip tracking for natural human-computer interaction
CN107728792B (en) Gesture recognition-based augmented reality three-dimensional drawing system and drawing method
CN1304931C (en) Head-mounted stereo-vision gesture recognition device
Dinh et al. Hand gesture recognition and interface via a depth imaging sensor for smart home appliances
CN103399637A (en) Man-computer interaction method for intelligent human skeleton tracking control robot on basis of kinect
CN102096471B (en) Human-computer interaction method based on machine vision
CN104317391A (en) Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN107450714A (en) Man-machine interaction support test system based on augmented reality and image recognition
CN102831380A (en) Body action identification method and system based on depth image induction
CN109460150A (en) A kind of virtual reality human-computer interaction system and method
Chen et al. Research and implementation of sign language recognition method based on Kinect
CN103105924A (en) Man-machine interaction method and device
CN104460967A (en) Recognition method of upper limb bone gestures of human body
WO2012163124A1 (en) Spatial motion-based input method and terminal
She et al. A real-time hand gesture recognition approach based on motion features of feature points
CN204463031U (en) System for reproducing an object in a 3D scene, and virtual reality helmet
CN109189219A (en) The implementation method of contactless virtual mouse based on gesture identification
CN103426000B (en) Static gesture fingertip detection method
Wu et al. Unfamiliar dynamic hand gestures recognition based on zero-shot learning
KR101525011B1 (en) tangible virtual reality display control device based on NUI, and method thereof
Abdallah et al. An overview of gesture recognition

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201019

Address after: 261031, north of Yuqing Street, east of Dongming Road, High-tech Zone, Weifang City, Shandong Province (Room 502, Goertek Electronics office building)

Patentee after: GoerTek Optical Technology Co.,Ltd.

Address before: Floor 5, Building 18, Wealth International Exhibition Center, No. 3 Qinling Road, Laoshan District, Qingdao City, Shandong Province, 266061

Patentee before: Qingdao GoerTek Technology Co.,Ltd.

TR01 Transfer of patent right