CN106200927A - Information processing method and head-mounted device - Google Patents
- Publication number
- CN106200927A (application CN201610506709.8A)
- Authority
- CN
- China
- Prior art keywords
- motion trajectory
- virtual object
- head-mounted device
- head
- instruction information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
Abstract
An embodiment of the present invention provides an information processing method and a head-mounted device. The method includes: when the head-mounted device is worn on a user's head, determining that a virtual scene is displayed on a display unit of the head-mounted device; determining that a positioning cursor in the virtual scene is located at a first position; detecting a first motion trajectory of the head; based on the first motion trajectory, controlling the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory; and, upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, performing a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory. This improves the efficiency with which a VR head-mounted device processes the virtual objects in its virtual scene, and thereby further improves the usability of the device.
Description
Technical field
Embodiments of the present invention relate to the field of electronic technology, and in particular to an information processing method and a head-mounted device.
Background technology
With the development of science and technology, more and more head-mounted devices appear in daily life. Among them, a VR-based head-mounted device, also known as a virtual-reality head-mounted device, applies virtual reality (VR) technology in a head-mounted device so that VR information is rendered as VR images on a display screen, allowing the user to be immersed in a three-dimensional dynamic visual environment and to interact with the machine as if physically present.
In the prior art, while a user is using a VR-based head-mounted device, if the user wants a certain virtual object in the virtual scene to trigger different events, this is generally achieved in one of two ways. In the first, after the virtual object is selected, the user performs an operation such as a click or slide on a touchpad on the device to make the object trigger the event. In the second, after the virtual object is selected, when the duration for which the user gazes at the object is detected to exceed a preset duration, the object is made to trigger the corresponding event. The first approach is relatively complex to operate and slow to respond; the second also responds slowly.
It can be seen that prior-art VR head-mounted devices process the virtual objects in their virtual scenes inefficiently, which in turn leads to the technical problem of poor device usability.
Summary of the invention
Embodiments of the present invention provide an information processing method and a head-mounted device, in order to solve the prior-art defect that a VR head-mounted device processes the virtual objects in its virtual scene inefficiently and is therefore poorly usable, thereby improving the processing efficiency of the VR head-mounted device with respect to the virtual objects in its virtual scene and further improving the usability of the device.
An embodiment of the present invention provides an information processing method, including:
when the head-mounted device is worn on a user's head, determining that a virtual scene is displayed on a display unit of the head-mounted device;
determining that a positioning cursor in the virtual scene is located at a first position;
detecting a first motion trajectory of the head;
based on the first motion trajectory, controlling the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory; and
upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, performing a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory.
An embodiment of the present invention provides a head-mounted device, including:
a display unit; and
a processor connected to the display unit,
wherein the processor is specifically configured to:
when the head-mounted device is worn on a user's head, determine that a virtual scene is displayed on the display unit;
determine that a positioning cursor in the virtual scene is located at a first position;
detect a first motion trajectory of the head;
based on the first motion trajectory, control the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory; and
upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, perform a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory.
With the information processing method and head-mounted device provided by the embodiments of the present invention, the head-mounted device may be a VR device capable of displaying a virtual scene. In a specific implementation, the movement of the positioning cursor is controlled by the movement of the user's head: based on the first motion trajectory of the head movement, the positioning cursor is determined to move in the virtual scene along the second motion trajectory. Further, upon determining that the positioning cursor has moved from its initial position to the second position at which a virtual object is located, the virtual object is processed; for example, when the virtual object is a treasure, a pick-up operation is performed on it, and when the virtual object is a fruit, a slicing operation is performed on it. This improves the processing efficiency of the VR head-mounted device with respect to the virtual objects in its virtual scene, and further improves the usability of the device.
Accompanying drawing explanation
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the accompanying drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an information processing method in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of step 103 of the information processing method in Embodiment 1;
Fig. 3 is a flowchart of step 105 of the information processing method in Embodiment 1;
Fig. 4 is a flowchart of the steps following step 103 of the information processing method in Embodiment 1;
Fig. 5 is a flowchart of a first implementation of step 401 of the information processing method in Embodiment 1;
Fig. 6 is a flowchart of a second implementation of step 401 of the information processing method in Embodiment 1;
Fig. 7 is a structural schematic diagram of a head-mounted device provided by Embodiment 2 of the present invention.
Detailed description of the invention
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the embodiments provided by the technical solution of the present invention, the head-mounted device may specifically be a VR device into which a mobile phone can be placed (i.e., a mobile-phone VR device), an all-in-one VR headset carrying its own processing system and display screen, or any other device capable of displaying a virtual scene.
Embodiment 1
Referring to Fig. 1, a flowchart of an information processing method provided in an embodiment of the present invention, the method is applied to a head-mounted device and includes:
101: when the head-mounted device is worn on a user's head, determining that a virtual scene is displayed on a display unit of the head-mounted device;
102: determining that a positioning cursor in the virtual scene is located at a first position;
103: detecting a first motion trajectory of the head;
104: based on the first motion trajectory, controlling the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory;
105: upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, performing a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory.
In a specific implementation, steps 101 to 105 proceed as follows.
First, when the head-mounted device is worn on the user's head, it is determined that a virtual scene is displayed on the display unit of the device. Taking an all-in-one VR headset as an example, its display screen shows a treasure-hunting virtual scene to the user. Then, it is determined that a positioning cursor in the virtual scene is located at a first position. In embodiments of the present invention, the positioning cursor may specifically be a crosshair in the VR virtual scene that moves with the movement of the user's head and is used to pick out a virtual object in the scene. Specifically, when the user's head moves, the first motion trajectory of the head movement is detected. Then the positioning cursor is controlled to move in the virtual scene along a second motion trajectory corresponding to the first. For example, based on the correspondence between head motion trajectories and crosshair motion trajectories, the crosshair is controlled to move along the second trajectory in a scene displaying various treasures. While the positioning cursor moves along the second trajectory, upon determining that it has moved from the initial first position to a second position at which a virtual object is located, the virtual object is operated on, wherein the second position is the end position of the cursor's movement along the second trajectory. For example, when the crosshair moves to the position of a "blue diamond" in the treasure scene, the "blue diamond" is picked up directly; alternatively, after the "blue diamond" has been selected by the crosshair as the target object to be processed, the device waits to receive the user's instruction information for processing the target object and then processes it accordingly. It can be seen that the above technical solution provided by the embodiment of the present invention achieves fast processing of a target object in a virtual scene.
In an embodiment of the present invention, referring to Fig. 2, step 103 is specifically implemented as follows:
201: obtaining motion parameters of the user's head through at least one motion sensor in the head-mounted device;
202: based on the motion parameters, determining the first motion trajectory of the head.
In a specific implementation, steps 201 and 202 proceed as follows. First, the motion parameters of the user's head are obtained through at least one motion sensor in the head-mounted device; specifically, they can be obtained through at least one of a gyroscope, a gravity sensor, a displacement sensor, and the like. Then, based on the obtained motion-parameter information characterizing the movement of the user's head, the first motion trajectory of the head is determined.
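Determining a trajectory from sensor motion parameters, as in steps 201 and 202, typically means integrating rate samples over time. The sketch below assumes gyroscope output as (yaw-rate, pitch-rate) pairs in degrees per second and a fixed sample interval; both the data layout and the simple Euler integration are illustrative assumptions, not the patent's specification.

```python
# Sketch: integrate angular-velocity samples from a gyroscope (deg/s)
# into a head trajectory of cumulative (yaw, pitch) angles in degrees.
# Fixed-step Euler integration; a real system would use sensor fusion.

def integrate_gyro(samples, dt=0.01):
    """samples: list of (yaw_rate, pitch_rate) in deg/s, sampled every dt s."""
    yaw = pitch = 0.0
    trajectory = []
    for w_yaw, w_pitch in samples:
        yaw += w_yaw * dt
        pitch += w_pitch * dt
        trajectory.append((yaw, pitch))
    return trajectory

# 10 samples of a steady 100 deg/s yaw rotation over 0.1 s ≈ 10 degrees.
traj = integrate_gyro([(100.0, 0.0)] * 10, dt=0.01)
```

The resulting trajectory is the "first motion trajectory" that step 104 then maps onto the cursor's second trajectory.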
In an embodiment of the present invention, after step 202, in order to process a given virtual object in the virtual scene in a timely manner, referring to Fig. 3, step 105 (performing the first operation on the virtual object upon determining that the positioning cursor has moved from the first position to the second position at which the virtual object is located) specifically includes:
301: determining that the second position lies within a preset region within a first region where the virtual object is located;
302: when the positioning cursor stays within the preset region for no less than a first preset duration, determining that the virtual object is the operation object;
303: performing the first operation on the virtual object.
In a specific implementation, steps 301 to 303 proceed as follows.
First, it is determined that the second position lies within a preset region within the first region where the virtual object is located. For example, when the virtual scene is a "red-packet grabbing" game and the virtual object is a "red packet", the region around the "red packet" may be a circular region centered on the center of gravity of the red-packet graphic with a radius of 50 pixels.
In a specific implementation, to avoid mis-selecting the virtual object and thus to improve the accuracy with which it is processed, the virtual object is determined to be the operation object only when the positioning cursor stays within the preset region for no less than the first preset duration. For example, when the crosshair is within the circular region above and remains there for 3 seconds, the virtual object corresponding to the "red packet" is determined to be the operation object. After the operation object is determined, the corresponding operation is performed on the virtual object; for example, the "red packet" is opened.
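The dwell condition of steps 301 and 302 can be sketched as a small state machine: the cursor must stay inside the circular region for at least the preset duration, and leaving the region resets the timer. The class name, method signature, and reset-on-exit behavior are illustrative assumptions consistent with, but not dictated by, the text above.

```python
# Sketch: dwell-based selection. A virtual object becomes the operation
# object once the cursor has stayed inside its circular preset region for
# at least `dwell_s` seconds (3 s in the "red packet" example).

class DwellSelector:
    def __init__(self, dwell_s=3.0):
        self.dwell_s = dwell_s
        self.entered_at = None  # time the cursor entered the region

    def update(self, cursor, region_center, radius, now):
        """Return True once the cursor has dwelt long enough in the region."""
        cx, cy = cursor
        ox, oy = region_center
        inside = (cx - ox) ** 2 + (cy - oy) ** 2 <= radius ** 2
        if not inside:
            self.entered_at = None  # leaving the region resets the timer
            return False
        if self.entered_at is None:
            self.entered_at = now
        return now - self.entered_at >= self.dwell_s
```

Once `update` returns True, the object is the operation object and step 303 (or the instruction-driven steps 401 and 402 below) can proceed.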
In an embodiment of the present invention, in order to further achieve precise operation on the virtual object, referring to Fig. 4, after step 302 (determining that the virtual object is the operation object when the positioning cursor stays within the preset region for no less than the first preset duration), the method further includes:
401: receiving instruction information sent by the user;
402: based on the instruction information, performing the first operation on the virtual object.
In a specific implementation, steps 401 and 402 proceed as follows. First, the instruction information sent by the user is received. Specifically, the instruction information may be generated, after the operation object has been determined, from movement of the user's head that in turn moves the positioning cursor, based on a correspondence between the cursor's current movement and processing operations. Alternatively, the instruction information may be a command issued by the user specifying which operation is to be performed on the virtual object. Then, based on the instruction information, the first operation is performed on the virtual object.
In an embodiment of the present invention, step 401 (receiving the instruction information sent by the user) can be realized in, but is not limited to, the following two implementations.
First implementation
Referring to Fig. 5, the first implementation specifically includes:
501: obtaining a third motion trajectory of the head;
502: based on the third motion trajectory, obtaining a fourth motion trajectory along which the positioning cursor moves from the second position to a third position;
503: determining the instruction information corresponding to the fourth motion trajectory, so that the head-mounted device performs, on the virtual object, the first operation corresponding to the instruction information.
In a specific implementation, steps 501 to 503 proceed as follows.
First, after the virtual object has been determined to be the operation object, a third motion trajectory of the user's head is obtained. For example, the crosshair is at a fixed position, such as the second position, within the preset region where an "apple" is located, and at this moment the head of user A, who is wearing the head-mounted device, is not moving. Then, when user A's head moves diagonally downward in the plane of the face, the motion trajectory of the head is the third trajectory corresponding to that diagonal-downward movement. Next, based on the third trajectory, the fourth trajectory along which the positioning cursor moves from the second position to a third position is obtained; that is, based on the correspondence between head motion trajectories and cursor movement, the cursor's current trajectory is obtained. Then the instruction information corresponding to the cursor's current trajectory is determined, so that the head-mounted device performs on the virtual object the first operation corresponding to that instruction information. In a specific implementation, a correspondence list between cursor motion trajectories and instruction information can be preset in the head-mounted device; when the cursor's current trajectory is determined to be the fourth trajectory, the first instruction information corresponding to the fourth trajectory can be determined from this list. Of course, those skilled in the art may also design the instruction information corresponding to the cursor's current trajectory as needed; the possibilities are not enumerated one by one here. As a concrete example, while the user's head moves diagonally downward as described above, the crosshair moves diagonally downward across the region where the "apple" is located; since, in the preset correspondence list, the instruction information corresponding to a diagonal-downward crosshair movement is an instruction to cut fruit diagonally downward, the diagonal-downward trajectory corresponds to a fruit-cutting instruction, and a cutting operation is therefore performed on the "apple". That is, after the operation object is chosen, if a further processing operation is to be performed on it via steps 501 to 503, the corresponding processing can be carried out through the correspondence between the positioning cursor's new trajectory and the operations. Of course, those skilled in the art may also design the correspondence between cursor motion trajectories and instruction information according to users' actual habits, which will not be repeated here.
Second implementation
Referring to Fig. 6, the second implementation specifically includes:
601: receiving an instruction operation performed by the user on a touch area of the head-mounted device;
602: determining the instruction information corresponding to the instruction operation.
In a specific implementation, steps 601 and 602 proceed as follows.
After the virtual object has been determined to be the operation object, if the user wants to perform further processing on it, first an instruction operation performed by the user on a touch area of the head-mounted device is received. For example, an all-in-one VR device is provided with a touchpad that makes it convenient for the user to input control instructions; the user can click, slide, and so on, on this touchpad. Then the instruction information corresponding to the instruction operation is determined. For example, when the user performs a click operation on the touch area of the touchpad, this indicates a "pick up" operation on the operation object; that is, instruction information associated with the "pick up" operation is generated, and based on it the corresponding "pick up" operation is performed on the operation object. As another example, when the user performs a slide operation on the touch area, this indicates a "drag" operation on the operation object; that is, instruction information associated with the "drag" operation is generated, and based on it the corresponding "drag" operation is performed on the operation object.
Embodiment 2
Based on the same inventive concept as Embodiment 1, referring to Fig. 7, Embodiment 2 of the present invention provides a head-mounted device, including:
a display unit 10, wherein, in a specific implementation, when the head-mounted device is a mobile-phone VR device, the display unit 10 is the display screen of the smartphone inserted into the device, and when the head-mounted device is an all-in-one VR device, the display unit 10 is the display device carried by the head-mounted device itself; and
a processor 20 connected to the display unit 10,
wherein the processor 20 is specifically configured to:
when the head-mounted device is worn on a user's head, determine that a virtual scene is displayed on the display unit 10;
determine that a positioning cursor in the virtual scene is located at a first position;
detect a first motion trajectory of the head;
based on the first motion trajectory, control the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory; and
upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, perform a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory.
In Embodiment 2, the head-mounted device further includes at least one motion sensor connected to the processor 20, wherein the at least one motion sensor is specifically configured to obtain motion parameters of the user's head, and the processor 20 is specifically configured to determine the first motion trajectory of the head based on the motion parameters. Specifically, the at least one motion sensor may be a gyroscope, a gravity sensor, a displacement sensor, or the like; the possibilities are not enumerated one by one here.
In Embodiment 2, in order to determine the operation object in the virtual scene, the processor 20 is specifically configured to:
determine that the second position lies within a preset region within a first region where the virtual object is located;
when the positioning cursor stays within the preset region for no less than a first preset duration, determine that the virtual object is the operation object; and
perform the first operation on the virtual object.
In Embodiment 2, after the virtual object is determined to be the operation object, in order to improve the accuracy of processing the operation object, the processor 20 is further configured to:
receive instruction information sent by the user; and
based on the instruction information, perform the first operation on the virtual object.
In Embodiment 2, to process the virtual object precisely, the processor 20 is specifically configured to:
obtain a third motion trajectory of the head;
based on the third motion trajectory, obtain a fourth motion trajectory along which the positioning cursor moves from the second position to a third position; and
determine the instruction information corresponding to the fourth motion trajectory, so that the head-mounted device performs, on the virtual object, the first operation corresponding to the instruction information.
In addition, the processor 20 may further be configured to:
receive an instruction operation performed by the user on a touch area of the head-mounted device; and
determine the instruction information corresponding to the instruction operation.
The device embodiment described above is merely schematic; the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be realized by software plus a necessary general hardware platform, and of course also by hardware. Based on this understanding, the above technical solution, or the part of it that contributes over the prior art, can in essence be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a ROM/RAM, magnetic disk, or optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the method described in each embodiment or in some part of an embodiment.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (12)
1. An information processing method, applied to a head-mounted device, characterized by including:
when the head-mounted device is worn on a user's head, determining that a virtual scene is displayed on a display unit of the head-mounted device;
determining that a positioning cursor in the virtual scene is located at a first position;
detecting a first motion trajectory of the head;
based on the first motion trajectory, controlling the positioning cursor to move in the virtual scene along a second motion trajectory corresponding to the first motion trajectory; and
upon determining that the positioning cursor has moved from the first position to a second position at which a virtual object is located, performing a first operation on the virtual object, wherein the second position is the end position of the positioning cursor's movement along the second motion trajectory.
2. The method according to claim 1, characterized in that detecting the first motion trajectory of the head includes:
obtaining motion parameters of the user's head through at least one motion sensor in the head-mounted device; and
based on the motion parameters, determining the first motion trajectory of the head.
3. The method according to claim 1, characterized in that performing the first operation on the virtual object upon determining that the positioning cursor has moved from the first position to the second position at which the virtual object is located includes:
determining that the second position lies within a preset region within a first region where the virtual object is located;
when the positioning cursor stays within the preset region for no less than a first preset duration, determining that the virtual object is the operation object; and
performing the first operation on the virtual object.
4. The method according to claim 3, characterized in that after determining that the virtual object is the operation object, the method further includes:
receiving instruction information sent by the user; and
based on the instruction information, performing the first operation on the virtual object.
5. The method according to claim 4, wherein receiving the instruction information sent by the user comprises:
acquiring a third movement trajectory of the head;
based on the third movement trajectory, obtaining a fourth movement trajectory along which the positioning cursor moves from the second position to a third position;
determining the instruction information corresponding to the fourth movement trajectory, so that the headset device performs on the virtual object the first operation corresponding to the instruction information.
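One way claim 5's mapping from a fourth movement trajectory to instruction information could work is to reduce the trajectory to its net cursor displacement and match that against a small gesture table. The direction classification and the gesture-to-operation table below are hypothetical examples, not operations defined by the patent.

```python
def classify_instruction(second_pos, third_pos, gestures):
    """Map the cursor's net displacement (second position -> third position)
    to the instruction whose direction it most closely matches."""
    dx = third_pos[0] - second_pos[0]
    dy = third_pos[1] - second_pos[1]
    if abs(dx) >= abs(dy):
        direction = "right" if dx >= 0 else "left"
    else:
        direction = "up" if dy >= 0 else "down"
    return gestures.get(direction)

# Hypothetical mapping from gesture direction to the operation to perform.
GESTURES = {"right": "open", "left": "cancel", "up": "zoom_in", "down": "zoom_out"}
print(classify_instruction((0.5, 0.5), (0.8, 0.55), GESTURES))  # open
```

A production system would likely match the full trajectory shape (for example with a template matcher) rather than only the endpoints, but endpoint displacement is enough to illustrate the claim's trajectory-to-instruction step.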
6. The method according to claim 4, wherein receiving the instruction information sent by the user comprises:
receiving an instruction operation performed by the user on a touch area of the headset device;
determining the instruction information corresponding to the instruction operation.
7. A headset device, comprising:
a display unit;
a processor connected to the display unit;
wherein the processor is configured to:
when the headset device is worn on a user's head, determine and display a virtual scene on the display unit;
determine that a positioning cursor in the virtual scene is located at a first position;
detect a first movement trajectory of the head;
based on the first movement trajectory, control the positioning cursor to move in the virtual scene along a second movement trajectory corresponding to the first movement trajectory;
when it is determined that the positioning cursor has moved from the first position to a second position at which a virtual object is located, perform a first operation on the virtual object, wherein the second position is the end position reached by the positioning cursor moving along the second movement trajectory.
8. The headset device according to claim 7, further comprising at least one motion sensor connected to the processor, wherein the at least one motion sensor is configured to acquire motion parameters of the user's head, and the processor is configured to determine the first movement trajectory of the head based on the motion parameters.
9. The headset device according to claim 7, wherein the processor is further configured to:
determine that the second position is within a preset area range, the preset area range being a first area range in which the virtual object is located;
when the positioning cursor remains within the preset area range for no less than a first preset duration, determine the virtual object as the operation target;
perform the first operation on the virtual object.
10. The headset device according to claim 9, wherein, after determining the virtual object as the operation target, the processor is further configured to:
receive instruction information sent by the user;
perform the first operation on the virtual object based on the instruction information.
11. The headset device according to claim 10, wherein the processor is further configured to:
acquire a third movement trajectory of the head;
based on the third movement trajectory, obtain a fourth movement trajectory along which the positioning cursor moves from the second position to a third position;
determine the instruction information corresponding to the fourth movement trajectory, so that the headset device performs on the virtual object the first operation corresponding to the instruction information.
12. The headset device according to claim 10, wherein the processor is further configured to:
receive an instruction operation performed by the user on a touch area of the headset device;
determine the instruction information corresponding to the instruction operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610506709.8A CN106200927A (en) | 2016-06-30 | 2016-06-30 | A kind of information processing method and headset equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106200927A true CN106200927A (en) | 2016-12-07 |
Family
ID=57463988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610506709.8A Pending CN106200927A (en) | 2016-06-30 | 2016-06-30 | A kind of information processing method and headset equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106200927A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102981626A (en) * | 2012-12-12 | 2013-03-20 | 紫光股份有限公司 | Wearing-on-head type computer |
US20130335321A1 (en) * | 2012-06-13 | 2013-12-19 | Sony Corporation | Head-mounted video display device |
CN103543843A (en) * | 2013-10-09 | 2014-01-29 | 中国科学院深圳先进技术研究院 | Man-machine interface equipment based on acceleration sensor and man-machine interaction method |
CN105260017A (en) * | 2015-09-28 | 2016-01-20 | 南京民办致远外国语小学 | Glasses mouse and working method therefor |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106873767A (en) * | 2016-12-30 | 2017-06-20 | 深圳超多维科技有限公司 | The progress control method and device of a kind of virtual reality applications |
CN106873767B (en) * | 2016-12-30 | 2020-06-23 | 深圳超多维科技有限公司 | Operation control method and device for virtual reality application |
WO2018149042A1 (en) * | 2017-02-16 | 2018-08-23 | 全球能源互联网研究院有限公司 | Method and device for controlling virtual operation interface, and storage medium |
US11287875B2 (en) | 2017-02-23 | 2022-03-29 | Samsung Electronics Co., Ltd. | Screen control method and device for virtual reality service |
CN110325953A (en) * | 2017-02-23 | 2019-10-11 | 三星电子株式会社 | Screen control method and equipment for virtual reality service |
CN110325953B (en) * | 2017-02-23 | 2023-11-10 | 三星电子株式会社 | Screen control method and device for virtual reality service |
CN107168517A (en) * | 2017-03-31 | 2017-09-15 | 北京奇艺世纪科技有限公司 | A kind of control method and device of virtual reality device |
CN107451799A (en) * | 2017-04-21 | 2017-12-08 | 阿里巴巴集团控股有限公司 | A kind of Risk Identification Method and device |
CN107451799B (en) * | 2017-04-21 | 2020-07-07 | 阿里巴巴集团控股有限公司 | Risk identification method and device |
CN107977083B (en) * | 2017-12-20 | 2021-07-23 | 北京小米移动软件有限公司 | Operation execution method and device based on VR system |
CN107977083A (en) * | 2017-12-20 | 2018-05-01 | 北京小米移动软件有限公司 | Operation based on VR systems performs method and device |
CN109683765A (en) * | 2018-12-27 | 2019-04-26 | 张家港康得新光电材料有限公司 | A kind of resource allocation methods, device and 3D display terminal |
CN111104042A (en) * | 2019-12-27 | 2020-05-05 | 惠州Tcl移动通信有限公司 | Human-computer interaction system and method and terminal equipment |
CN112370795A (en) * | 2020-10-21 | 2021-02-19 | 潍坊歌尔电子有限公司 | Head-mounted equipment-based swing ball game method, device and equipment |
US11327630B1 (en) | 2021-02-04 | 2022-05-10 | Huawei Technologies Co., Ltd. | Devices, methods, systems, and media for selecting virtual objects for extended reality interaction |
WO2022166448A1 (en) * | 2021-02-04 | 2022-08-11 | Huawei Technologies Co.,Ltd. | Devices, methods, systems, and media for selecting virtual objects for extended reality interaction |
Similar Documents
Publication | Title |
---|---|
CN106200927A (en) | A kind of information processing method and headset equipment | |
US9405404B2 (en) | Multi-touch marking menus and directional chording gestures | |
CN108245888A (en) | Virtual object control method, device and computer equipment | |
CN106325835B (en) | 3D application icon interaction method applied to touch terminal and touch terminal | |
CN106975219A (en) | Display control method and device, storage medium, the electronic equipment of game picture | |
US11068155B1 (en) | User interface tool for a touchscreen device | |
US20130234957A1 (en) | Information processing apparatus and information processing method | |
EP0816998A3 (en) | Method and apparatus for grouping graphic objects on a computer based system having a graphical user interface | |
EP3414645A1 (en) | Limited field of view in virtual reality | |
CN110493018B (en) | Group chat creating method and device | |
CN110448904B (en) | Game view angle control method and device, storage medium and electronic device | |
WO2017000917A1 (en) | Positioning method and apparatus for motion-stimulation button | |
CN109697002A (en) | A kind of method, relevant device and the system of the object editing in virtual reality | |
CN107038032A (en) | The switching method of mobile terminal application running status, device and system | |
CN107102802A (en) | Overlay target system of selection and device, storage medium, electronic equipment | |
CN105068653A (en) | Method and apparatus for determining touch event in virtual space | |
CN106201271A (en) | Horizontal/vertical screen method for handover control and device | |
CN105975158A (en) | Virtual reality interaction method and device | |
CN109656493A (en) | Control method and device | |
CN105094327B (en) | Adjust the method and device of virtual article attitude angle in Virtual Space | |
CN110111411A (en) | A kind of browse processing method and device of threedimensional model | |
CN111872928B (en) | Obstacle attribute distinguishing method and system and intelligent robot | |
CN103744608B (en) | A kind of information processing method and electronic equipment | |
CN111068325B (en) | Method, device, equipment and storage medium for collecting articles in game scene | |
CN108536276A (en) | Virtual hand grasping algorithm in a kind of virtual reality system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | | Application publication date: 20161207 |