CN110531859A - Human-computer interaction method and device based on VR headset recognition of user operation actions - Google Patents

Human-computer interaction method and device based on VR headset recognition of user operation actions

Info

Publication number
CN110531859A
CN110531859A (application CN201910823525.8A)
Authority
CN
China
Prior art keywords
eye
headset
interaction events
user
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910823525.8A
Other languages
Chinese (zh)
Inventor
盘善荣 (Pan Shanrong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Priority to CN201910823525.8A
Publication of CN110531859A


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements
    • G06F 3/013 - Eye tracking input arrangements
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/012 - Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention discloses a human-computer interaction method based on a VR headset that recognizes user operation actions, comprising: editing interaction events according to eye actions and/or head actions, and generating a configuration file that maps the interaction events to VR operations; acquiring the user's eye-action data with an eye-closure recognition component and the user's head-action data with a position sensor, and determining the current interaction event according to the eye-action data and/or head-action data; judging whether the current interaction event matches an interaction event in the pre-stored configuration file; and, when it does, querying the configuration file with the current interaction event, obtaining the VR operation that corresponds to the current interaction event in the configuration file, and executing that VR operation. The invention also discloses a VR headset device. The invention improves the effectiveness of the interaction method and enhances the operability of the VR headset device.

Description

Human-computer interaction method and device based on VR headset recognition of user operation actions
Technical field
The present invention relates to the technical field of VR headsets, and in particular to a human-computer interaction method based on a VR headset that recognizes user operation actions, and to a VR headset device.
Background technique
A virtual reality head-mounted display device, or VR headset for short, is a device that uses a head-mounted display to close off the wearer's vision and hearing from the outside world and guide the user into the sensation of being in a virtual environment. Its display principle is that the left-eye and right-eye screens each show the image intended for the corresponding eye; when the human eyes receive this slightly different information, the brain fuses it into a stereoscopic impression.
With the continuing development of 5G commercialization and screen technology, the application scenarios and scope of virtual reality head-mounted display devices are growing ever wider. A VR headset's advantages are that it is light and that its volume makes it easy to carry. There is therefore a need, without compromising this portability, to enhance the effectiveness of the VR headset's human-computer interaction methods, to add more human-computer interaction events, and to enhance operability.
Summary of the invention
The main purpose of the present invention is to provide a human-computer interaction method based on a VR headset that recognizes user operation actions, intended to enhance the effectiveness and operability of the VR headset's human-computer interaction.
To achieve the above object, the human-computer interaction method based on VR headset recognition of user operation actions provided by the invention is applied to a VR headset device comprising a VR headset, an eye-closure recognition component, and a position sensor. The method includes the following steps:
editing interaction events according to eye actions and/or head actions, and generating a configuration file that maps the interaction events to VR operations;
acquiring the user's eye-action data with the eye-closure recognition component and the user's head-action data with the position sensor, and determining the current interaction event according to the user's eye-action data and/or head-action data;
judging whether the current interaction event matches an interaction event in the pre-stored configuration file;
when the current interaction event matches an interaction event in the pre-stored configuration file, querying the configuration file with the current interaction event, obtaining the VR operation that corresponds to the current interaction event in the configuration file, and executing that VR operation.
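The four steps above amount to an event-dispatch loop: determine an interaction event, match it against a pre-stored configuration file, and execute the mapped operation. The sketch below is an illustrative reconstruction under assumed event and operation names, not code from the patent:

```python
# Hypothetical sketch of the claimed interaction loop: a detected
# interaction event is matched against a pre-stored configuration
# file and, on a match, the mapped VR operation is returned.

# Pre-stored configuration: (eye event, head event) -> VR operation.
# All names here are illustrative assumptions.
CONFIG = {
    ("eyes_closed_short", "head_turn_right"): "fast_forward",
    ("eyes_closed_short", None): "pause",
    (None, "head_turn_left"): "rewind",
}

def determine_event(eye_data, head_data):
    """Combine eye and/or head data into the current interaction event."""
    return (eye_data, head_data)

def handle(eye_data, head_data):
    event = determine_event(eye_data, head_data)
    if event in CONFIG:        # does the event match the config file?
        return CONFIG[event]   # look up the mapped VR operation
    return None                # unrecognised events are ignored

print(handle("eyes_closed_short", "head_turn_right"))  # fast_forward
print(handle("eyes_open", "head_nod"))                 # None
```

A real implementation would populate `CONFIG` from the generated configuration file rather than hard-coding it.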
Preferably, the step of acquiring the user's eye-action data with the eye-closure recognition component and the user's head-action data with the position sensor comprises:
acquiring, with the eye-closure recognition component, the user's eye-closure action and the closure time corresponding to that action, and determining the user's eye-action data from the eye-closure action and the eye-closure time;
acquiring, with the position sensor, the user's head rotation direction and the rotation angle corresponding to that direction.
Preferably, the step of acquiring the user's eye-closure action and its corresponding closure time with the eye-closure recognition component, and determining the user's eye-action data from the eye-closure action and the eye-closure time, comprises:
acquiring the user's eye-closure action with the eye-closure recognition component, and obtaining the duration of the eye-closure action;
judging whether the duration is greater than a first preset time;
when the duration is greater than the first preset time, determining the eye-closure action to be a valid closure action, and determining the user's eye-action data from the valid closure action;
when the duration is less than or equal to the first preset time, determining the eye-closure action to be an invalid closure action.
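The duration test in these two limbs can be illustrated with a minimal sketch; the 0.5 s default is an assumption taken from the description's example values:

```python
def classify_closure(duration_s, first_preset_s=0.5):
    """Distinguish an involuntary blink (typically 0.2-0.4 s) from a
    deliberate eye closure: closures longer than the first preset time
    are 'valid', shorter ones are 'invalid' and are ignored."""
    return "valid" if duration_s > first_preset_s else "invalid"

print(classify_closure(0.3))  # invalid (a normal blink)
print(classify_closure(1.2))  # valid (deliberate closure)
```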
Preferably, the step of acquiring the user's head rotation direction and the corresponding rotation angle with the position sensor comprises:
obtaining, through the position sensor, the initial angle of the VR headset when the eye-closure action occurs, and the termination angle of the VR headset when the eye-closure action ends, together with the headset's motion data between the initial angle and the termination angle;
determining the head rotation direction and the rotation angle according to the initial angle, the termination angle, and the motion data.
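One way to realize this determination, assuming the "angle" is a yaw reading in degrees and ignoring the intermediate motion data, is to take the wrapped difference between the initial and termination angles:

```python
def head_rotation(initial_yaw_deg, final_yaw_deg):
    """Derive a rotation direction and magnitude from the headset
    orientation at the start and end of the eye closure (yaw axis
    only, for brevity; the axis choice is an assumption)."""
    delta = final_yaw_deg - initial_yaw_deg
    # Wrap into (-180, 180] so that e.g. 350 deg -> 10 deg counts
    # as a +20 deg turn rather than a -340 deg one.
    delta = (delta + 180) % 360 - 180
    direction = "right" if delta > 0 else "left" if delta < 0 else "none"
    return direction, abs(delta)

print(head_rotation(0, 30))    # ('right', 30)
print(head_rotation(350, 10))  # ('right', 20)
```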
Preferably, after the step of determining the eye-closure action to be a valid closure action when the duration is greater than the first preset time and determining the user's eye-action data from the valid closure action, the method further comprises:
judging whether the VR headset is in a suspendable working mode;
when the VR headset is in the suspendable working mode, controlling the VR headset to pause the suspendable working mode;
while the VR headset has paused the suspendable working mode, executing the step of acquiring the user's head rotation direction and the corresponding rotation angle with the position sensor;
judging whether the VR headset has returned to the initial angle;
when the VR headset has returned to the initial angle, controlling the VR headset to resume the paused work.
Preferably, the step of acquiring the user's eye-closure action and its corresponding closure time with the eye-closure recognition component, and determining the user's eye-action data from the eye-closure action and the eye-closure time, further comprises:
judging whether the duration is greater than a second preset time, the first preset time being less than the second preset time;
when the duration is greater than the second preset time, controlling the VR headset to enter a standby state.
Preferably, after the step of controlling the VR headset to enter the standby state when the duration is greater than the second preset time, the method further comprises:
starting to record the standby time upon entering the standby state;
judging whether the standby time is greater than a third preset time;
when the standby time is greater than the third preset time, controlling the VR headset to enter a power-off state.
Preferably, the step of editing interaction events according to eye actions and/or head actions, and generating the configuration file that maps the interaction events to VR operations, comprises:
editing interaction events according to eye actions and/or head actions, and associating each interaction event with each working mode of the VR headset device, so as to generate, for each working mode, a base configuration file that maps the interaction events under that working mode to VR operations;
the step of judging whether the current interaction event matches an interaction event in the pre-stored configuration file comprises:
obtaining the current working mode of the VR headset;
judging whether the current interaction event matches an interaction event in the base configuration file corresponding to the current working mode;
the step of querying the configuration file with the current interaction event when it matches an interaction event in the pre-stored configuration file, obtaining the corresponding VR operation, and executing it, comprises:
when the current interaction event matches an interaction event in the base configuration file corresponding to the current working mode, querying that base configuration file with the current interaction event;
obtaining, from the base configuration file corresponding to the current working mode, the VR operation that corresponds to the current interaction event, and executing that VR operation.
In addition, to achieve the above object, the present invention also provides a VR headset device, comprising a processor module and, each in signal connection with the processor module, a VR headset, a communication module, a storage medium, a screen module, an eye-closure recognition component, and a position sensor; the processor module is connected to a computer through the communication module, and the storage medium stores the configuration file.
Preferably, the communication module comprises a USB interface and a Wi-Fi component, the USB interface and the Wi-Fi component each being connected to the processor module.
In the technical solution of the invention, interaction events are edited according to eye actions and/or head actions, and a configuration file mapping the interaction events to VR operations is generated; the user's eye-action data are acquired with the eye-closure recognition component and the user's head-action data with the position sensor, and the current interaction event is determined from the eye-action data and/or head-action data; whether the current interaction event matches an interaction event in the pre-stored configuration file is judged; when it does, the configuration file is queried with the current interaction event, the VR operation corresponding to the current interaction event is obtained from the configuration file, and that VR operation is executed.
Because eye actions and/or head actions are little affected by the surrounding environment during use, the effectiveness of the interaction method is improved. This solution adds eye actions and/or head actions to the human-computer interaction; compared with the key-press interaction of the prior art, it adds new interaction modes, such as a head action performed after an eye action, or an eye action and a head action performed simultaneously, thereby increasing the interaction events of the VR headset device and allowing more VR operations. Each VR operation corresponds directly to an interaction event, so the flow of a VR operation is simplified and its response is faster, enhancing the operability of the VR headset device.
Detailed description of the invention
Fig. 1 is a schematic flowchart of the first embodiment of the human-computer interaction method based on VR headset recognition of user operation actions according to the present invention;
Fig. 2 is a schematic flowchart of the third embodiment of the method;
Fig. 3 is a schematic flowchart of the fourth embodiment of the method;
Fig. 4 is a schematic flowchart of the fifth embodiment of the method;
Fig. 5 is a schematic flowchart of the sixth embodiment of the method;
Fig. 6 is a schematic flowchart of the seventh embodiment of the method;
Fig. 7 is a schematic flowchart of the eighth embodiment of the method;
Fig. 8 is a schematic diagram of the module structure of the VR headset device of the present invention.
The realization of the object of the invention, its functional characteristics, and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements serve only to aid the explanation of the invention and have no specific meaning in themselves; "module", "component", and "unit" may therefore be used interchangeably.
The present invention provides a human-computer interaction method and device based on a VR headset that recognizes head actions and/or eye actions.
Referring to Fig. 1, to achieve the above object, the first embodiment of the present invention provides a human-computer interaction method based on VR headset recognition of user operation actions, comprising the following steps:
Step S10: edit interaction events according to eye actions and/or head actions, and generate a configuration file mapping the interaction events to VR operations;
Step S20: acquire the user's eye-action data with the eye-closure recognition component and the user's head-action data with the position sensor, and determine the current interaction event according to the user's eye-action data and/or head-action data;
Step S30: judge whether the current interaction event matches an interaction event in the pre-stored configuration file;
Step S40: when the current interaction event matches an interaction event in the pre-stored configuration file, query the configuration file with the current interaction event, obtain the VR operation corresponding to the current interaction event in the configuration file, and execute that VR operation.
Specifically, in the technical solution of the invention, interaction events are edited according to eye actions and/or head actions, and a configuration file mapping them to VR operations is generated; the user's eye-action data are acquired with the eye-closure recognition component and the head-action data with the position sensor, and the current interaction event is determined from them; whether the current interaction event matches an interaction event in the pre-stored configuration file is judged; when it does, the configuration file is queried with the current interaction event, and the corresponding VR operation is obtained and executed.
Because eye actions and/or head actions are little affected by the surrounding environment during use, the effectiveness of the interaction method is improved. Adding eye actions and/or head actions to the human-computer interaction introduces more variables, such as a head action performed after an eye action, or an eye action and a head action performed simultaneously, thereby increasing the interaction events of the VR headset device and allowing more VR operations; each VR operation corresponds directly to an interaction event, which simplifies the flow of a VR operation and makes its response faster.
Based on the first embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the second embodiment of the method, step S20 comprises:
Step S21: acquire, with the eye-closure recognition component, the user's eye-closure action and the closure time corresponding to that action, and determine the user's eye-action data from the eye-closure action and the eye-closure time;
Step S22: acquire, with the position sensor, the user's head rotation direction and the rotation angle corresponding to that direction.
Specifically, the eye-closure recognition component acquires the eye-closure action either by analyzing images of the eyes or by detecting the actions of the eye muscles; it obtains the user's eye-closure action and the corresponding closure time, and the user's eye-action data are determined from the eye-closure action and the eye-closure time. The position sensor, for example an inertial measurement unit (IMU), is an all-in-one device integrating a magnetometer, an accelerometer, and a gyroscope; it acquires the user's head rotation direction and the corresponding rotation angle, from which the user's head data are determined.
Referring to Fig. 2, based on the second embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the third embodiment of the method, step S21 comprises:
Step S211: acquire the user's eye-closure action with the eye-closure recognition component, and obtain the duration of the eye-closure action;
Step S212: judge whether the duration is greater than the first preset time;
if so, execute step S213: determine the eye-closure action to be a valid closure action, and determine the user's eye-action data from the valid closure action;
if not, execute step S214: determine the eye-closure action to be an invalid closure action.
Specifically, because of the body's physiological needs the human eye blinks involuntarily: a normal person blinks more than ten times per minute, usually once every 2 to 6 seconds, with each blink lasting 0.2 to 0.4 seconds. Involuntary blinks must be distinguished from deliberate eye closures. This is done by obtaining the duration of the eye-closure action and judging whether it is greater than the first preset time, for example 0.5 or 1 second, thereby separating involuntary blinks from deliberate closure actions. Further, when a closure is determined to be valid, the VR headset gives feedback, such as a sound or a vibration. Further, the first preset value can be changed: a personal first preset value can be set according to each person's different blink time, and multiple different first preset values can be provided, each threshold reached yielding a different valid closure action.
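The final sentence above suggests several ascending preset times, with each threshold crossed yielding a different valid closure action. A hypothetical graded classifier (the tier names and preset values are assumptions, since the patent leaves them configurable):

```python
def classify_by_tiers(duration_s, presets=(0.5, 2.0, 5.0)):
    """Grade an eye closure by how many ascending preset times it
    exceeds: below the first preset it is an involuntary blink; each
    further preset crossed maps to a different valid closure action."""
    tiers = ["blink", "short_close", "long_close", "very_long_close"]
    idx = sum(duration_s > p for p in presets)  # count thresholds crossed
    return tiers[idx]

print(classify_by_tiers(0.3))  # blink
print(classify_by_tiers(3.0))  # long_close
```

A per-user calibration would replace the default `presets` tuple with values fitted to that person's measured blink times.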
Referring to Fig. 3, based on the second embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the fourth embodiment of the method, step S22 comprises:
Step S221: obtain, through the position sensor, the initial angle of the VR headset when the eye-closure action occurs, and, when the eye-closure action ends, the termination angle of the VR headset together with the motion data between the initial angle and the termination angle;
Step S222: determine the head rotation direction and the rotation angle according to the initial angle, the termination angle, and the motion data.
Specifically, the position sensor obtains the initial angle of the VR headset when the eye-closure action occurs and the termination angle of the VR headset when the eye-closure action ends. The angle of the VR headset refers to its rotational freedom along the three orthogonal axes x, y, and z, i.e. any rotational motion the helmet can make by following the head. Different rotation actions are made to correspond to different ranges of the headset angle, so that an action is determined by detecting which range the VR headset's angle falls in.
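Determining an action from the range the headset angle falls in can be sketched as follows, assuming a single yaw axis and an illustrative ±15° neutral band (both assumptions, not values from the patent):

```python
def action_from_yaw(yaw_deg):
    """Map ranges of the headset's yaw angle to discrete rotation
    actions: a neutral dead band around zero avoids triggering on
    small involuntary head movements."""
    if -15 <= yaw_deg <= 15:
        return "neutral"       # within the dead band: no action
    if yaw_deg > 15:
        return "turn_right"    # range (15, ...] maps to a right turn
    return "turn_left"         # range [..., -15) maps to a left turn

print(action_from_yaw(0))    # neutral
print(action_from_yaw(40))   # turn_right
```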
Referring to Fig. 4, based on the third embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the fifth embodiment of the method, after step S213 the method further comprises:
Step S215: judge whether the VR headset is in a suspendable working mode;
if so, execute step S216: control the VR headset to pause the suspendable working mode;
Step S217: execute step S22;
Step S218: judge whether the VR headset has returned to the initial angle;
if so, execute step S219: control the VR headset to resume the paused work.
Specifically, because the user's eyes are closed during the interaction, any work that continued while the eyes are closed would cause some content to be missed. To avoid this, when a valid eye-closure action is detected, the suspendable working mode is paused: for example, when watching a VR video, playback is paused, and when the eye closure is detected to have ended, playback continues. Further, since some VR headsets change their displayed content according to the headset's angle, a human-computer interaction that includes a head rotation would leave the displayed content after the operation different from before it. For continuity of the work, whether the VR headset device has returned to the initial angle is therefore judged, and the VR headset device is controlled to resume the paused work only then.
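The pause-and-resume behaviour described above can be sketched as a small state holder; the class name, the 5° tolerance, and the yaw-only angle model are assumptions for illustration:

```python
class Player:
    """Minimal sketch of pause-on-eye-close: a valid closure pauses a
    suspendable mode (e.g. video playback); playback resumes only after
    the eyes reopen AND the headset has returned to its initial angle,
    so the user misses no content and sees consistent content."""

    def __init__(self):
        self.playing = True
        self.initial_yaw = 0.0

    def on_valid_closure(self, current_yaw):
        self.initial_yaw = current_yaw   # remember the angle at closure
        self.playing = False             # suspend the suspendable mode

    def on_eyes_open(self, current_yaw, tolerance_deg=5.0):
        # Resume only once the headset is back near the initial angle.
        if abs(current_yaw - self.initial_yaw) <= tolerance_deg:
            self.playing = True
```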
Referring to Fig. 5, based on the third embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the sixth embodiment of the method, step S21 further comprises:
Step S51: judge whether the duration is greater than the second preset time, the first preset time being less than the second preset time;
if so, execute step S52: control the VR headset to enter the standby state.
Specifically, when using the VR headset device a person may, through fatigue, need to rest and forget to switch the device off; a VR headset device that keeps running disturbs the person's rest, for example through the screen light source, and consumes the device's battery. Therefore, when the duration of the eye closure is greater than the second preset value, the person is judged to have entered a resting state, and the device enters the standby state, e.g. switching off the screen and the music. Further, the second preset value can be changed, and different second preset values can be set according to the user.
Referring to Fig. 6, based on the sixth embodiment of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the seventh embodiment of the method, after step S52 the method further comprises:
Step S53: start recording the standby time upon entering the standby state;
Step S54: judge whether the standby time is greater than the third preset time;
if so, execute step S55: control the VR headset to enter the power-off state.
Specifically, a person may, through fatigue, need to rest and forget to switch off the VR headset device. When the duration of the eye closure is greater than the second preset value, the person is judged to have entered a resting state and the device enters the standby state, meaning that the screen, the music, and the like are switched off. When the recorded standby time then exceeds the third preset value, the person is judged to have entered a deep-sleep stage from which they will not quickly wake, and the VR headset device is controlled to enter the power-off state, saving battery and extending the device's service life. Further, the third preset value can be changed, and different third preset values can be set according to the user.
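This graded power management (second preset leading to standby, third preset leading to power-off) can be illustrated as a pure function; the 10 s and 300 s preset values are assumptions, since the patent leaves them configurable:

```python
def power_state(closure_s, standby_s, second_preset=10.0, third_preset=300.0):
    """Graded power management: an eye closure longer than the second
    preset puts the headset on standby; once the recorded standby time
    exceeds the third preset, the headset powers off."""
    if closure_s <= second_preset:
        return "active"    # closure too short to indicate rest
    if standby_s <= third_preset:
        return "standby"   # screen and audio off, quick wake possible
    return "off"           # deep sleep assumed; power off to save battery

print(power_state(3.0, 0.0))     # active
print(power_state(20.0, 60.0))   # standby
print(power_state(20.0, 400.0))  # off
```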
Referring to Fig. 7, based on the first to seventh embodiments of the human-computer interaction method based on VR headset recognition of user operation actions of the present invention, in the eighth embodiment of the method, step S10 comprises:
Step S11: edit interaction events according to eye actions and/or head actions, and associate each interaction event with each working mode of the VR headset device, so as to generate, for each working mode, a base configuration file mapping the interaction events under that mode to VR operations;
step S30 comprises:
Step S31: obtain the current working mode of the VR headset;
Step S32: judge whether the current interaction event matches an interaction event in the base configuration file corresponding to the current working mode;
if so, execute step S41: obtain, from the base configuration file corresponding to the current working mode, the VR operation corresponding to the current interaction event, and execute that VR operation.
Specifically, interaction events are edited according to eye actions and/or head actions, and each interaction event is associated with each working mode of the VR headset device, so as to generate, per working mode, a base configuration file mapping interaction events to VR operations. The VR operations required differ between working modes: when watching a VR video, operations such as fast-forwarding and adjusting the playback position are needed, while during a VR call, operations such as hanging up and recording are needed. The base configuration file mapping interaction events to VR operations therefore has to change with the working mode, and the same interaction event corresponds to different VR operations in different working modes. For example, the interaction event of turning the head to the right corresponds to fast-forwarding in the VR video playback mode, and to starting a recording in the VR call mode.
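The per-working-mode base configuration files described here, including the worked example (a right head-turn meaning fast-forward during video playback but recording during a call), can be sketched as nested lookup tables; the mode and event names are assumptions:

```python
# Hypothetical per-mode base configuration files: the same interaction
# event maps to different VR operations depending on the working mode.
MODE_CONFIGS = {
    "video_playback": {"head_turn_right": "fast_forward",
                       "head_turn_left": "rewind"},
    "vr_call":        {"head_turn_right": "start_recording",
                       "eyes_closed_long": "hang_up"},
}

def dispatch(current_mode, event):
    """Look up the event in the base configuration file for the
    current working mode; None means the event is not configured
    for this mode and is ignored."""
    config = MODE_CONFIGS.get(current_mode, {})
    return config.get(event)

print(dispatch("video_playback", "head_turn_right"))  # fast_forward
print(dispatch("vr_call", "head_turn_right"))         # start_recording
```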
In addition, to achieve the above object, the present invention also provides a kind of VR aobvious devices, referring to Fig. 8, described VR aobvious dress Set including processor module, and connect respectively with the processor die block signal communication module, storage medium, screen mould Block, eye closing recognizer component and position sensor;The processor module is connect by communication module with computer, and the storage is situated between Matter memory is placed with configuration file.
It will be understood by those skilled in the art that the VR head-mounted display device shown in Fig. 8 does not constitute a limitation on the device; it may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
In some specific embodiments, the communication module includes a USB interface and a Wi-Fi component, the USB interface and the Wi-Fi component each connected to the processor module.
Specifically, through the USB interface and the Wi-Fi component, the VR head-mounted display device connects to a computer and to the Internet by wire or wirelessly; the computer provides additional computing capability and a database for the VR head-mounted display device.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general-purpose hardware platform, and of course also by hardware alone, although in many cases the former is the preferable implementation. Based on this understanding, the part of the technical solution of the present invention that in essence contributes beyond the prior art can be embodied in the form of a software product, which is stored in a computer-readable storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions for causing a terminal device to perform the methods described in the embodiments of the present invention.
In the description of this specification, references to the terms "an embodiment", "another embodiment", "other embodiments", or "the first embodiment to the Xth embodiment" mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials, method steps, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the specification and accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. A human-computer interaction method based on a VR head-mounted display recognizing user operation actions, characterized in that it is applied to a VR head-mounted display device, the VR head-mounted display device comprising a VR head-mounted display, an eye-closure recognition component, and a position sensor; the method comprising the following steps:
editing interaction events according to eye actions and/or head actions, and generating a configuration file in which the interaction events correspond to VR operations;
obtaining user eye action data using the eye-closure recognition component and obtaining user head action data using the position sensor, and determining a current interaction event according to the user eye action data and/or the user head action data;
determining whether the current interaction event matches an interaction event in the prestored configuration file;
when the current interaction event matches an interaction event in the prestored configuration file, querying the configuration file according to the current interaction event, obtaining from the configuration file a current VR operation corresponding to the current interaction event, and executing the current VR operation.
2. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 1, characterized in that the step of obtaining user eye action data using the eye-closure recognition component and obtaining user head action data using the position sensor comprises:
obtaining a user eye-closure action and an eye-closure time corresponding to the eye-closure action using the eye-closure recognition component, and determining the user eye action data according to the eye-closure action and the eye-closure time;
obtaining a user head rotation direction and a rotation angle corresponding to the head rotation direction using the position sensor.
3. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 2, characterized in that the step of obtaining a user eye-closure action and an eye-closure time corresponding to the eye-closure action using the eye-closure recognition component, and determining the user eye action data according to the eye-closure action and the eye-closure time, comprises:
obtaining a user eye-closure action using the eye-closure recognition component, and obtaining the duration of the eye-closure action;
determining whether the duration is greater than a first preset time;
when the duration is greater than the first preset time, determining the eye-closure action to be a valid closure action, and determining the user eye action data according to the valid closure action;
when the duration is less than or equal to the first preset time, determining the eye-closure action to be an invalid closure action.
4. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 2, characterized in that the step of obtaining a user head rotation direction and a rotation angle corresponding to the head rotation direction using the position sensor comprises:
obtaining, through the position sensor, an initial angle of the VR head-mounted display when the eye-closure action occurs, and obtaining a terminal angle of the VR head-mounted display at the end of the eye-closure action and motion data of the VR head-mounted display from the initial angle to the terminal angle;
determining the head rotation direction and the rotation angle according to the initial angle, the terminal angle, and the motion data from the initial angle to the terminal angle.
5. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 4, characterized in that, after the step of obtaining a user eye-closure action and an eye-closure time corresponding to the eye-closure action using the eye-closure recognition component, and determining the user eye action data according to the eye-closure action and the eye-closure time, the method comprises:
determining whether the VR head-mounted display is in a pausable operating mode;
when the VR head-mounted display is in the pausable operating mode, controlling the VR head-mounted display to pause the pausable operating mode;
when the VR head-mounted display has paused the pausable operating mode, executing the step of obtaining a user head rotation direction and a rotation angle corresponding to the head rotation direction using the position sensor;
determining whether the VR head-mounted display has returned to the initial angle;
when the VR head-mounted display has returned to the initial angle, controlling the VR head-mounted display to resume the paused operation.
6. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 3, characterized in that the step of obtaining a user eye-closure action and an eye-closure time corresponding to the eye-closure action using the eye-closure recognition component, and determining the user eye action data according to the eye-closure action and the eye-closure time, further comprises:
determining whether the duration is greater than a second preset time, the first preset time being less than the second preset time;
when the duration is greater than the second preset time, controlling the VR head-mounted display to enter a standby mode.
7. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to claim 6, characterized in that, after the step of controlling the VR head-mounted display to enter the standby mode when the duration is greater than the second preset time, the method further comprises:
starting to record a standby time upon entering the standby mode;
determining whether the standby time is greater than a third preset time;
when the standby time is greater than the third preset time, controlling the VR head-mounted display to enter a shutdown state.
8. The human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to any one of claims 1 to 7, characterized in that the step of editing interaction events according to eye actions and/or head actions, and generating a configuration file in which the interaction events correspond to VR operations, comprises:
editing interaction events according to eye actions and/or head actions, and mapping each interaction event into each operating mode of the VR head-mounted display device, so as to generate base configuration files in which the interaction events correspond to VR operations under different operating modes;
the step of determining whether the current interaction event matches an interaction event in the prestored configuration file comprises:
obtaining the current operating mode of the VR head-mounted display;
determining whether the current interaction event matches an interaction event in the base configuration file corresponding to the current operating mode;
the step of, when the current interaction event matches an interaction event in the prestored configuration file, querying the configuration file according to the current interaction event, obtaining from the configuration file a current VR operation corresponding to the current interaction event, and executing the current VR operation, comprises:
when the current interaction event matches an interaction event in the base configuration file corresponding to the current operating mode, querying the base configuration file corresponding to the current operating mode according to the current interaction event;
obtaining, from the base configuration file corresponding to the current operating mode, the current VR operation corresponding to the current interaction event; and executing the current VR operation.
9. A VR head-mounted display device, characterized in that it applies the human-computer interaction method based on a VR head-mounted display recognizing user operation actions according to any one of claims 1 to 8, the VR head-mounted display device comprising a processor module, and a VR head-mounted display, a communication module, a storage medium, a screen module, an eye-closure recognition component, and a position sensor, each connected to the processor module by signal connection; the processor module is connected to a computer through the communication module, and the storage medium stores the configuration files.
10. The VR head-mounted display device according to claim 9, characterized in that the communication module includes a USB interface and a Wi-Fi component, the USB interface and the Wi-Fi component each connected to the processor module.
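The duration thresholds of claims 3, 6, and 7 can be sketched as a simple classifier. This is a minimal illustration; the threshold values, function names, and return labels are hypothetical and are not specified by the patent.

```python
# Minimal sketch of the three-threshold eye-closure logic of claims 3, 6 and 7.
# Threshold values are hypothetical example values in seconds.
FIRST_PRESET = 0.5    # at or below: invalid closure (claim 3)
SECOND_PRESET = 3.0   # above: enter standby mode (claim 6)

def classify_eye_closure(duration):
    """Classify an eye-closure action by its duration (claims 3 and 6)."""
    if duration > SECOND_PRESET:
        return "enter_standby"
    if duration > FIRST_PRESET:
        return "valid_closure"
    return "invalid_closure"

def standby_should_power_off(standby_time, third_preset=60.0):
    """Claim 7: after entering standby, shut down once the recorded
    standby time exceeds a third preset time (hypothetical 60 s)."""
    return standby_time > third_preset
```

Under these placeholder values, a 0.3 s blink is ignored, a 1 s closure counts as a valid closure action, and a 5 s closure puts the device into standby, from which it powers off after the third preset time elapses.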
CN201910823525.8A 2019-09-02 2019-09-02 Human-computer interaction method and device based on a VR head-mounted display recognizing user operation actions Withdrawn CN110531859A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910823525.8A CN110531859A (en) 2019-09-02 2019-09-02 Human-computer interaction method and device based on a VR head-mounted display recognizing user operation actions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910823525.8A CN110531859A (en) 2019-09-02 2019-09-02 Human-computer interaction method and device based on a VR head-mounted display recognizing user operation actions

Publications (1)

Publication Number Publication Date
CN110531859A true CN110531859A (en) 2019-12-03

Family

ID=68666252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910823525.8A Withdrawn CN110531859A (en) 2019-09-02 2019-09-02 Man-machine interaction method and device based on VR aobvious identification user's operation movements

Country Status (1)

Country Link
CN (1) CN110531859A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111158483A (en) * 2019-12-30 2020-05-15 联想(北京)有限公司 Display method and electronic equipment
CN111374677A (en) * 2020-03-26 2020-07-07 龚天逸 Apparatus and method for recognizing head motion
CN113467617A (en) * 2021-07-15 2021-10-01 北京京东方光电科技有限公司 Haptic feedback method, apparatus, device and storage medium
WO2022037355A1 (en) * 2020-08-21 2022-02-24 华为技术有限公司 Smart glasses, and interaction method and apparatus thereof

Similar Documents

Publication Publication Date Title
CN110531859A (en) Human-computer interaction method and device based on a VR head-mounted display recognizing user operation actions
CN112034977B (en) Method for MR intelligent glasses content interaction, information input and recommendation technology application
KR102196380B1 (en) Technology for controlling a virtual image generation system using user's emotional states
JP2024028390A (en) An electronic device that generates an image including a 3D avatar that reflects facial movements using a 3D avatar that corresponds to the face.
JP4481682B2 (en) Information processing apparatus and control method thereof
US9454220B2 (en) Method and system of augmented-reality simulations
Ormel et al. Prosodic correlates of sentences in signed languages: A literature review and suggestions for new types of studies
KR20230107399A (en) Automatic control of wearable display device based on external conditions
US20090131165A1 (en) Physical feedback channel for entertainment or gaming environments
EP2943854B1 (en) Leveraging physical handshaking in head mounted displays
JP2020039029A (en) Video distribution system, video distribution method, and video distribution program
CN108475507A (en) Information processing equipment, information processing method and program
CN109643163A (en) Information processing equipment, information processing method and program
JP7207468B2 (en) Output control device, output control method and program
CN104460955B (en) A kind of information processing method and wearable electronic equipment
CN109040462A (en) Stroke reminding method, apparatus, storage medium and wearable device
CN109461124A (en) A kind of image processing method and terminal device
JP7066115B2 (en) Public speaking support device and program
CN106292994A (en) The control method of virtual reality device, device and virtual reality device
KR20230162116A (en) Asynchronous brain-computer interface in AR using steady-state motion visual evoked potentials
CN110225196A (en) Terminal control method and terminal device
US11328187B2 (en) Information processing apparatus and information processing method
WO2017085963A1 (en) Information processing device and video display device
CN111783587A (en) Interaction method, device and storage medium
US20240069637A1 (en) Touch-based augmented reality experience

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Pan Shanrong

Inventor after: Zhu Jie

Inventor after: Dong Yuhan

Inventor before: Pan Shanrong

WW01 Invention patent application withdrawn after publication

Application publication date: 20191203