CN104656877A - Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method - Google Patents


Info

Publication number
CN104656877A
Authority
CN
China
Prior art keywords
machine interaction
hand
interaction method
location
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310576314.1A
Other languages
Chinese (zh)
Inventor
李君�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201310576314.1A priority Critical patent/CN104656877A/en
Publication of CN104656877A publication Critical patent/CN104656877A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/26: Speech to text systems

Abstract

The invention relates to a human-machine interaction method based on gesture and speech recognition control, as well as an apparatus and an application of the method. The method comprises the steps of (1) performing fine spatial three-dimensional location by tracking the position of a hand, and (2) after locating, triggering a corresponding event with a recognized voice command as the trigger command, so that triggering the event does not affect the stability of the location. The apparatus comprises a camera device, a locating device, a sound pickup device, and a trigger device; the trigger device contains a voice-command trigger device and/or a gesture-command trigger device. The application mainly realizes the mouse-movement function through the location of the hand and the left and right mouse-button functions through voice commands and/or gesture commands. Because the hand is used for locating, the location is accurate; combined with voice trigger commands, the adverse influence of trigger commands on location is markedly reduced, so extremely high location accuracy is obtained. Applied to electronic game software, the method makes interaction more real, natural, and immersive.

Description

Human-machine interaction method based on gesture and speech recognition control, and apparatus and application thereof
Technical field
The present invention relates to a human-machine interaction method based on gesture control and speech recognition control, to an apparatus implementing the method, and to applications of the method. It is mainly intended for situations in which triggering an event may adversely affect the pointing location, and it reduces or completely eliminates that adverse effect.
Background art
The development of technologies such as gesture control, speech recognition control, face recognition, and eye tracking is making human-machine interaction more natural. However, each natural interaction modality has its limitations: gesture control is good for fine pointing but poor for confirmation and text entry; speech recognition control is good for mode switching, selection operations, and text entry, but cannot perform fine pointing; face recognition and eye tracking are comfortable and effortless and can also be used for pointing, but are likewise inefficient for confirmation and text entry.
At present, some methods already combine different interaction modalities, such as the invention patent application with publication number CN200810030194.4, which uses one finger for pointing and the motion of another finger for confirm and cancel operations. Such a scheme still has a defect: in practice, the additional motion of the second finger (e.g., at the moment an event is triggered) inevitably causes the pointing finger to shake, so the pointing location drifts.
Summary of the invention
In view of the above defects of the prior art, the technical problem to be solved by the present invention is that, under the prior art, not only does each natural interaction modality have its own shortcomings, but, more significantly, some of the operations an operator performs with existing interaction modes adversely affect one another, making it difficult for the operator to achieve the desired operating effect.
To solve the above technical problem, the main technical scheme adopted by the present invention is as follows:
A human-machine interaction method based on gesture control and speech recognition control, mainly comprising the steps of:
(1) performing fine spatial three-dimensional location by tracking the position of a hand;
(2) after locating, triggering a corresponding event with a recognized voice command as the trigger command, so that triggering the event does not affect the stability of the location (a minimal sketch of this loop is given below).
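For illustration only, the following minimal Python sketch shows how such a loop decouples pointing from triggering; the hand_tracker and speech_recognizer objects are hypothetical stand-ins for a body-motion sensor SDK and a speech-recognition engine, and are not part of the patent.

    # Step (1): the hand alone drives the 3-D pointer.
    # Step (2): a recognized voice command alone fires the event, so the
    # act of triggering never perturbs the pointing hand.
    # hand_tracker / speech_recognizer are assumed interfaces, not a real SDK.

    def interaction_loop(hand_tracker, speech_recognizer, on_event):
        while True:
            pointer = hand_tracker.current_position()   # continuous fine 3-D location
            command = speech_recognizer.poll_command()  # non-blocking voice recognition
            if command is not None:
                # The trigger is purely acoustic: the coordinates captured
                # here are unaffected by the act of triggering.
                on_event(command, pointer)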
An apparatus implementing the above human-machine interaction method comprises, each connected to a central processing unit:
a camera device, which tracks the position of the hand to obtain hand position information and/or tracks the continuous motion of the hand;
a locating device, which performs fine spatial three-dimensional location from the hand position information obtained by the camera device;
a sound pickup device, which recognizes and obtains voice commands;
a trigger device, comprising a voice-command trigger device and/or a gesture-command trigger device, which triggers corresponding events according to the voice command obtained by the sound pickup device and/or the gesture command obtained from the continuous hand motion tracked by the camera device.
Application of the above human-machine interaction method in popular software, wherein the function of mouse movement is realized by the location of the hand, and the functions of the left and right mouse buttons are realized by voice commands and/or gesture commands.
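A sketch of this mouse-emulation application, under the same assumptions as above, might look as follows; the hand tracker and speech recognizer remain hypothetical, while pyautogui (a real Python library) supplies the OS-level mouse calls.

    import pyautogui  # real library; provides size(), moveTo(), click()

    SCREEN_W, SCREEN_H = pyautogui.size()

    def run_virtual_mouse(hand_tracker, speech_recognizer):
        while True:
            p = hand_tracker.current_position()  # assumed normalized (0..1) hand coords
            pyautogui.moveTo(int(p.x * SCREEN_W), int(p.y * SCREEN_H))  # hand = mouse movement
            command = speech_recognizer.poll_command()
            if command == "left":
                pyautogui.click(button="left")   # voice command = left button
            elif command == "right":
                pyautogui.click(button="right")  # voice command = right button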
The beneficial effects of the invention are as follows: in the method of the present invention, pointing is performed by gesture control, which yields accurate location; combining this with a voice trigger command, which cannot adversely affect gesture-based location, the method markedly reduces the adverse effect of the trigger command on location and therefore achieves very high pointing accuracy. Moreover, the method is applicable to a wide range of software; more importantly, when applied to game software it can simulate a more real and natural interaction, giving players an excellent, more immersive game experience. By combining these natural interaction modalities so that they compensate for one another's weaknesses, an unprecedented human-machine interaction experience is created, which has considerable market value when applied in game software.
Brief description of the drawings
Fig. 1 is a schematic overall flow diagram of an embodiment of the human-machine interaction method of the present invention.
Fig. 2 is a schematic block diagram of an embodiment of the apparatus of the present invention.
Detailed description of the embodiments
For a better statement and understanding of the present invention, the invention is further described below through embodiments with reference to the accompanying drawings.
Referring to Fig. 1, the human-machine interaction method of the present invention based on gesture control and speech recognition control comprises the steps of:
(1) performing fine spatial three-dimensional location by tracking the position of a hand;
(2) after locating, triggering a corresponding event with a recognized voice command as the trigger command, so that triggering the event does not affect the stability of the location.
In step (2), the method may further comprise tracking the continuous motion of the hand to recognize a gesture command for triggering a corresponding event.
In a more preferred embodiment of the present invention, step (2) further comprises triggering a corresponding event jointly with the recognized voice command and gesture command as the trigger command.
In step (2), the gesture command may comprise a combined gesture command.
In further preferred embodiments of any of the above embodiments of the present invention, the method may further comprise the step of: (3) tracking the line of sight of the head or the eyes to change the scene rendering angle;
or
(3') tracking the line of sight of the head or the eyes to switch the position and angle of the virtual camera in the scene. A minimal sketch of both variants follows.
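The following Python sketch illustrates steps (3) and (3'); face_tracker and camera are hypothetical stand-ins for a face-recognition tracker and a scene's virtual camera, not an API defined by the patent.

    def update_view(face_tracker, camera, mode="rotate", sensitivity=1.0):
        yaw, pitch = face_tracker.head_orientation()  # degrees, from face/gaze tracking
        if mode == "rotate":
            # Step (3): change the scene rendering angle in place.
            camera.set_rotation(yaw * sensitivity, pitch * sensitivity)
        else:
            # Step (3'): switch the virtual camera's position/angle around its target.
            camera.orbit_target(yaw * sensitivity, pitch * sensitivity)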
Referring to Fig. 2, the apparatus implementing the human-machine interaction method of the present invention comprises, each connected to a central processing unit 1:
a camera device 5, which tracks the position of the hand to obtain hand position information and/or tracks the continuous motion of the hand;
a locating device 2, which performs fine spatial three-dimensional location from the hand position information obtained by the camera device;
a sound pickup device 3, which recognizes and obtains voice commands;
a trigger device 4, provided with a voice-command trigger device 41 and/or a gesture-command trigger device 42, which triggers corresponding events according to the voice command obtained by the sound pickup device and/or the gesture command obtained from the continuous hand motion tracked by the camera device. A structural sketch of this arrangement is given below.
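The numbered hardware blocks of Fig. 2 can be pictured as components wired to the central processing unit 1, as in the following sketch; all classes are hypothetical stand-ins, not an implementation from the patent.

    class CentralProcessingUnit:
        """Device 1: coordinates camera 5, locator 2, sound pickup 3, trigger 4."""

        def __init__(self, camera, locator, sound_pickup, trigger, on_event):
            self.camera = camera              # device 5: hand position / motion
            self.locator = locator            # device 2: fine 3-D location
            self.sound_pickup = sound_pickup  # device 3: voice command recognition
            self.trigger = trigger            # device 4: voice (41) / gesture (42) triggers
            self.on_event = on_event          # application callback

        def step(self):
            position = self.locator.locate(self.camera.hand_position())
            event = self.trigger.check(
                voice=self.sound_pickup.poll_command(),  # trigger device 41
                gesture=self.camera.hand_motion(),       # trigger device 42
            )
            if event is not None:
                self.on_event(event, position)           # fire at the located point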
In one embodiment of the invention, a face recognition camera 6 may also be provided, connected to the central processing unit 1 and tracking the operator's head, so that the central processing unit 1 can change the scene rendering angle or switch the position and angle of the virtual camera in the scene accordingly.
In one embodiment of the invention, one or more eye-tracking cameras 7 may also be provided, connected to the central processing unit 1 and tracking the line of sight of the operator's eyes, so that the central processing unit 1 can change the scene rendering angle or switch the position and angle of the virtual camera in the scene accordingly.
In one embodiment of the invention, the eye-tracking camera is head-mounted.
In one embodiment of the invention, the eye-tracking camera is of the remote (non-contact) type.
In one embodiment of the invention, the camera device may be a body-motion sensor or a camera.
In one embodiment of the invention, the sound pickup device may be a speech sensor or a microphone.
Typically, on an electronic device, using a body-motion sensor or an ordinary camera combined with software, together with a speech sensor or an ordinary microphone combined with software, the following user inputs can be obtained: the position of the user's head or the facing direction, or the point the eyes are looking at; the position of a finger or hand, or the direction a finger points relative to the screen and the projected position of that pointing direction on the screen, or specific gestures completed by the continuous motion of the hand; and voice commands, or data correlated with the user's speech volume. Based on these user inputs, a computer program such as an application or a game can combine two or three of them to trigger corresponding events, as the following sketch illustrates.
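As an illustration of combining two or three of these inputs into one event decision, consider the sketch below; every command word, pose label, and event name is an assumption for illustration, since the patent only requires that the inputs be combined.

    def fuse_inputs(pointer, voice_command, gaze_point):
        """Return an (event, payload) pair, or None if nothing should fire."""
        if voice_command == "confirm" and pointer is not None:
            return ("activate", pointer)  # voice confirms what the hand points at
        if voice_command == "look" and gaze_point is not None:
            return ("focus", gaze_point)  # voice routes the event to the gaze target
        if voice_command == "cancel":
            return ("cancel", None)       # voice alone can cancel
        return None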
The present invention also provides an application of the human-machine interaction method in popular software, wherein the function of mouse movement is realized by the location of the hand, and the functions of the left and right mouse buttons are realized by voice commands and/or gesture commands.
The popular software includes game software.
For example, consider a shooting game with a 2-D or 3-D scene and shooting targets.
The user places a hand in front of a body-motion sensor, or of an ordinary camera combined with software, which senses the position of the finger or hand in the air to control the virtual sight in the game.
Following the game's prompts or tutorial, the user utters specific sounds, such as onomatopoeia like "pa" or "bang". After the speech sensor, or the ordinary microphone combined with software, captures the user's voice, different game events are triggered according to different presets; for example, saying "pa" fires a shot and saying "bang" throws a bomb.
When the user moves or rotates the head, or changes where the eyes are looking, the body-motion sensor, or the ordinary camera combined with software using face recognition technology, senses the position of the head and the angle the face is oriented, or the angle or point of the eyes' gaze, and changes the display angle of the game scene. This simulates how, in the real world, moving and turning one's head brings different views into sight. In the game, the user can thereby see objects that were hidden behind foreground objects, or see more of the scene that was originally off screen. Another effect is dodging an enemy's attack according to its incoming direction. A sketch of the voice-trigger wiring of this example follows.
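The wiring of this example might look like the following sketch; the command words and event names are illustrative, and game stands in for a hypothetical game-engine API.

    # Onomatopoeic voice commands mapped to preset game events, while the
    # hand keeps the virtual sight steady. All names are illustrative.
    VOICE_EVENTS = {
        "pa": "fire_shot",     # saying "pa" shoots
        "bang": "throw_bomb",  # saying "bang" throws a bomb
    }

    def game_tick(hand_tracker, speech_recognizer, game):
        game.move_sight(hand_tracker.current_position())  # hand aims the sight
        word = speech_recognizer.poll_command()
        if word in VOICE_EVENTS:
            game.trigger(VOICE_EVENTS[word])              # voice fires; aim is undisturbed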
As another example, consider a martial-arts fighting game with a 2-D or 3-D scene and enemy targets.
The user places a hand in front of the body-motion sensor, or of the ordinary camera combined with software, which senses the position of the finger or hand in the air to control the position or direction of strikes in the game. The sensor or camera also recognizes specific preset gestures from the continuous motion of the finger or hand; when the user's gesture matches a gesture preset in the software, the corresponding martial-arts move is started. For example, when the user first points the index finger upward and then points it forward, the move "Ming Remx" is triggered, as sketched below.
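Matching such a preset gesture sequence could be sketched as follows; the per-frame pose labels and the move name are assumptions, standing in for a hand-pose classifier fed by the camera device.

    from collections import deque

    COMBO = ("index_up", "point_forward")  # preset two-pose combo, illustrative

    def watch_for_combo(pose_stream, on_move):
        recent = deque(maxlen=len(COMBO))
        for pose in pose_stream:           # one pose label per camera frame
            if not recent or pose != recent[-1]:
                recent.append(pose)        # record pose transitions only
            if tuple(recent) == COMBO:
                on_move("special_move")    # combo matched: start the preset move
                recent.clear()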
Following the game's prompts or tutorial, the user utters specific sounds, such as onomatopoeia like "pa" or "bang", or the names of martial-arts moves such as "Ming Remx" or "one finger like a pen". After the speech sensor, or the ordinary microphone combined with software, captures the user's voice, different game events are triggered according to different presets: saying "pa" delivers a strike charged with internal force, while saying "Ming Remx" or "one finger like a pen" assigns a different power or effect to the finger gesture that follows ("Ming Remx" immobilizes the enemy; "one finger like a pen" damages the enemy).
As in the previous example, when the user moves or rotates the head, or changes where the eyes are looking, the body-motion sensor, or the software-assisted ordinary camera using face recognition technology, senses the position of the head, the facing angle, or the angle or point of gaze, and changes the display angle of the game scene, simulating how in the real world moving and turning the head reveals different views. In the game, the user can thus see objects originally hidden behind foreground objects, or see more of the scene outside the screen, and can dodge an enemy's attack according to its incoming direction.
In summary, the mixing of interaction modalities proposed in the present invention achieves the set goal with one main advantage: pointing is done with the hand while confirmation commands are issued by voice, so speaking a command will not disturb the stability of the pointing.
In particular, the method of "hand-shape recognition for pointing + voice for confirm or cancel + face recognition for changing the scene rendering angle" is well suited in games to the tasks that proper game manipulation requires: the decomposed tasks are each handled in the most suitable way, instead of attempting to accomplish every task with a single interaction modality.
Therefore, the greatest benefit of the present invention is the stability of hand-based pointing (as mentioned above); secondly, changing the rendering angle by face recognition brings the game experience closer to a real one. With hand-only pointing it is difficult to do the two things "aiming" and "switching the viewing angle" at the same time; in the present invention these correspond to "hand location as aiming" and "head rotation to switch the game viewing angle".
In practical operation, the efficiency and accuracy of the method of the present invention increase significantly compared with any single modality, achieving a marked effect. By contrast, because CN200810030194.4 attempts to accomplish both pointing and confirmation with gestures at the same time, it sacrifices the stability and accuracy of pointing (the moving finger used for confirmation inevitably makes the pointing finger move or shake).

Claims (10)

1. A human-machine interaction method based on gesture control and speech recognition control, characterized by comprising the steps of:
(1) performing fine spatial three-dimensional location by tracking the position of a hand;
(2) after locating, triggering a corresponding event with a recognized voice command as the trigger command, so that triggering the event does not affect the stability of the location.
2. The human-machine interaction method of claim 1, characterized in that step (2) further comprises tracking the continuous motion of the hand to recognize a gesture command for triggering a corresponding event.
3. The human-machine interaction method of claim 2, characterized in that step (2) further comprises triggering a corresponding event jointly with the recognized voice command and gesture command as the trigger command.
4. The human-machine interaction method of claim 2, characterized in that in step (2) the gesture command comprises a combined gesture command.
5. The human-machine interaction method of claim 1, characterized by further comprising the step of:
(3) tracking the line of sight of the head or the eyes to change the scene rendering angle;
or
(3') tracking the line of sight of the head or the eyes to switch the position and angle of the virtual camera in the scene.
6. An apparatus implementing the human-machine interaction method of any one of claims 1-5, characterized by comprising, each connected to a central processing unit:
a camera device, which tracks the position of the hand to obtain hand position information and/or tracks the continuous motion of the hand;
a locating device, which performs fine spatial three-dimensional location from the hand position information obtained by the camera device;
a sound pickup device, which recognizes and obtains voice commands;
a trigger device, provided with a voice-command trigger device and/or a gesture-command trigger device, which triggers corresponding events according to the voice command obtained by the sound pickup device and/or the gesture command obtained from the continuous hand motion tracked by the camera device.
7. The apparatus of claim 6, characterized in that the camera device is a body-motion sensor or a camera.
8. The apparatus of claim 6, characterized in that the sound pickup device is a speech sensor or a microphone.
9. Application of the human-machine interaction method of any one of claims 1-5 in popular software, characterized in that the function of mouse movement is realized by the location of the hand, and the functions of the left and right mouse buttons are realized by voice commands and/or gesture commands.
10. The application of claim 9, characterized in that the popular software comprises game software.
CN201310576314.1A 2013-11-18 2013-11-18 Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method Pending CN104656877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310576314.1A CN104656877A (en) 2013-11-18 2013-11-18 Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310576314.1A CN104656877A (en) 2013-11-18 2013-11-18 Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method

Publications (1)

Publication Number Publication Date
CN104656877A 2015-05-27

Family

ID=53248118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310576314.1A Pending CN104656877A (en) 2013-11-18 2013-11-18 Human-machine interaction method based on gesture and speech recognition control as well as apparatus and application of human-machine interaction method

Country Status (1)

Country Link
CN (1) CN104656877A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101937267A (en) * 2009-07-03 2011-01-05 北京宏景捷讯网络技术股份有限公司 Method for simulating mouse input and device thereof
CN102298443A (en) * 2011-06-24 2011-12-28 华南理工大学 Smart home voice control system combined with video channel and control method thereof
CN102693022A (en) * 2011-12-12 2012-09-26 苏州科雷芯电子科技有限公司 Vision tracking and voice identification mouse system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105653164A (en) * 2015-07-31 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Method and terminal for voice inputting user event
CN105653164B (en) * 2015-07-31 2019-02-01 宇龙计算机通信科技(深圳)有限公司 A kind of method and terminal of voice input customer incident
CN105765491A (en) * 2015-09-23 2016-07-13 深圳还是威健康科技有限公司 A method of recognizing hand movements, an intelligent wristband and a terminal
WO2017049499A1 (en) * 2015-09-23 2017-03-30 深圳还是威健康科技有限公司 Method for recognizing hand movement, and smart wristband and terminal
CN107103905A (en) * 2015-12-09 2017-08-29 联想(新加坡)私人有限公司 Method for voice recognition and product and message processing device
CN105681859A (en) * 2016-01-12 2016-06-15 东华大学 Man-machine interaction method for controlling smart TV based on human skeletal tracking
CN106200679B (en) * 2016-09-21 2019-01-29 中国人民解放军国防科学技术大学 Single operation person's multiple no-manned plane mixing Active Control Method based on multi-modal natural interaction
CN106200679A (en) * 2016-09-21 2016-12-07 中国人民解放军国防科学技术大学 Single operation person's multiple no-manned plane mixing Active Control Method based on multi-modal natural interaction
CN106681683A (en) * 2016-12-26 2017-05-17 汎达科技(深圳)有限公司 Device and method for voice-based game operation control
CN108986801A (en) * 2017-06-02 2018-12-11 腾讯科技(深圳)有限公司 A kind of man-machine interaction method, device and human-computer interaction terminal
WO2018219198A1 (en) * 2017-06-02 2018-12-06 腾讯科技(深圳)有限公司 Man-machine interaction method and apparatus, and man-machine interaction terminal
CN108986801B (en) * 2017-06-02 2020-06-05 腾讯科技(深圳)有限公司 Man-machine interaction method and device and man-machine interaction terminal
US10448762B2 (en) 2017-09-15 2019-10-22 Kohler Co. Mirror
US10663938B2 (en) 2017-09-15 2020-05-26 Kohler Co. Power operation of intelligent devices
US10887125B2 (en) 2017-09-15 2021-01-05 Kohler Co. Bathroom speaker
US11099540B2 (en) 2017-09-15 2021-08-24 Kohler Co. User identity in household appliances
US11314215B2 (en) 2017-09-15 2022-04-26 Kohler Co. Apparatus controlling bathroom appliance lighting based on user identity
US11892811B2 (en) 2017-09-15 2024-02-06 Kohler Co. Geographic analysis of water conditions
US11921794B2 (en) 2017-09-15 2024-03-05 Kohler Co. Feedback for water consuming appliance
US11949533B2 (en) 2017-09-15 2024-04-02 Kohler Co. Sink device
CN111870918A (en) * 2020-07-07 2020-11-03 哈尔滨金翅鸟科技有限公司 Dummy for simulating fighting training, entertainment and security properties


Legal Events

Code: Title
C06: Publication
PB01: Publication
C10: Entry into substantive examination
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 2015-05-27)