CN107168530A - Object processing method and device in virtual scene - Google Patents
Object processing method and device in virtual scene
- Publication number
- CN107168530A CN107168530A CN201710292385.7A CN201710292385A CN107168530A CN 107168530 A CN107168530 A CN 107168530A CN 201710292385 A CN201710292385 A CN 201710292385A CN 107168530 A CN107168530 A CN 107168530A
- Authority
- CN
- China
- Prior art keywords
- menu
- virtual scene
- scene
- target
- response
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
Abstract
The invention discloses an object processing method and device in a virtual scene. The method includes: detecting a first operation performed on a first target object in a real scene; generating, in response to the first operation, a first menu object corresponding to a second target object in the virtual scene, where the second target object is the virtual object corresponding to the first target object in the virtual scene; detecting a second operation performed on the first target object, where the second operation instructs the virtual scene to move the second target object to the position of a target menu object among the first menu objects; and performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, each first menu object corresponding to one processing operation. This solves the technical problem in the related art that menu options on 2D menu panels in a virtual scene are selected by casting a ray, which makes menu selection in the virtual scene relatively cumbersome.
Description
Technical field
The present invention relates to the computer field, and in particular to an object processing method and device in a virtual scene.
Background art
Existing interaction schemes in VR environments are all implemented by casting a ray from the hand to interact with a 2D menu panel in 3D space. For example, Fig. 1 is a schematic diagram of a SteamVR menu interaction interface in the prior art. As shown in Fig. 1, a ray is projected from the hand, and the intersection of the ray with the 2D panel marks the position the user wishes to interact with, much like a mouse pointer. The buttons on the handheld controller then act like mouse buttons, and pressing them triggers interaction with, and a response from, the 2D menu panel in the 3D space.
The conventional interaction scheme above mainly borrows from the original mouse interaction scheme, which is at least fairly intuitive for users: the experience of selecting menus with a mouse is grafted directly onto menu selection in VR. However, this selection scheme gains nothing from true 3D input. The controller's input is no longer the 2D data of the mouse era but real 3D spatial coordinates, yet this approach takes the 3D coordinate position of the hand, maps it via a ray back to a virtual 2D position, and only then produces a menu response. In effect, the ray implements a coordinate mapping (3D controller coordinates → 2D menu panel position) that wastes the controller's 3D input data; this additional dimension even increases complexity, because this mode of operation is not as quick as a mouse.
For the above problem in the related art, in which menu options on 2D menu panels in a virtual scene are selected by casting a ray, making menu selection in the virtual scene relatively cumbersome, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide an object processing method and device in a virtual scene, at least to solve the technical problem in the related art that menu options on 2D menu panels in a virtual scene are selected by casting a ray, making menu selection in the virtual scene relatively cumbersome.
According to one aspect of the embodiments of the present invention, an object processing method in a virtual scene is provided, including: detecting a first operation performed on a first target object in a real scene; generating, in response to the first operation, at least one first menu object corresponding to a second target object in the virtual scene, where the second target object is the virtual object corresponding to the first target object in the virtual scene; detecting a second operation performed on the first target object, where the second operation instructs the virtual scene to move the second target object to the position of a target menu object among the at least one first menu object; and performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
According to another aspect of the embodiments of the present invention, an object processing device in a virtual scene is further provided, including: a first detection unit, for detecting a first operation performed on a first target object in a real scene; a first response unit, for generating, in response to the first operation, at least one first menu object corresponding to a second target object in the virtual scene, where the second target object is the virtual object corresponding to the first target object in the virtual scene; a second detection unit, for detecting a second operation performed on the first target object, where the second operation instructs the virtual scene to move the second target object to the position of a target menu object among the at least one first menu object; and a second response unit, for performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
In the embodiments of the present invention, a first operation performed on a first target object is detected; according to the detected first operation, multiple first menu objects corresponding to a second target object (which corresponds to the first target object) are generated in the virtual scene; a second operation performed on the first target object is then detected, and according to the detected second operation, the second target object in the virtual scene is instructed to move to the position of a target menu object among the first menu objects; once the second target object has moved to that position, the target processing operation is performed in the virtual scene. There is no need to simulate a mouse or to convert 3D spatial coordinates into 2D positions before performing an operation. This solves the technical problem in the related art that menu options on 2D menu panels in a virtual scene are selected by casting a ray, making menu selection in the virtual scene relatively cumbersome, and achieves the technical effect of performing operations directly with 3D spatial coordinates, so that menu selection in the virtual scene becomes simpler.
Brief description of the drawings
The accompanying drawings described here are provided for a further understanding of the present invention and constitute a part of this application. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an inappropriate limitation of it. In the drawings:
Fig. 1 is a schematic diagram of a SteamVR menu interaction interface in the prior art;
Fig. 2 is a schematic diagram of the hardware environment of the object processing method in a virtual scene according to an embodiment of the present invention;
Fig. 3 is a flow chart of an optional object processing method in a virtual scene according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a hand menu in an optional virtual reality environment according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a hand menu in another optional virtual reality environment according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a hand menu in yet another optional virtual reality environment according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of optional menu control logic according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of an optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of another optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of another optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of another optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 12 is a schematic diagram of another optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of another optional object processing device in a virtual scene according to an embodiment of the present invention;
Fig. 14 is a structural block diagram of a terminal according to an embodiment of the present invention.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the scope of protection of the present invention.
It should be noted that the terms "first", "second", and so on in the description, claims, and drawings of this specification are used to distinguish similar objects, not to describe a specific order or sequence. It should be understood that data used in this way may be interchanged where appropriate, so that the embodiments of the invention described here can be implemented in orders other than those illustrated or described. In addition, the terms "comprising" and "having", and any variations of them, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to such a process, method, product, or device.
First, some of the nouns and terms that appear in the description of the embodiments of the present invention are explained as follows:
VR: Virtual reality (abbreviated VR), also called virtual technology or a virtual environment, uses computer simulation to produce a three-dimensional virtual world, providing the user with a simulation of senses such as vision, so that the user feels as if personally present and can observe things in the three-dimensional space in real time, without limitation.
Steam: A digital distribution, digital rights management, and social system released by the American company Valve on September 12, 2003. It is used for the distribution, sale, and subsequent updating of digital software and games, supports operating systems such as Windows, OS X, and Linux, and is currently the world's largest PC digital game platform.
SteamVR: A 360° full room-scale virtual reality experience. The development kit contains a head-mounted display, two single-hand controllers, and a positioning system that can simultaneously track the display and the controllers in space.
Oculus: An American virtual reality technology company, founded by Palmer Luckey and Brendan Iribe. Their initial product, the Oculus Rift, is a realistic virtual reality head-mounted display.
Oculus Touch: The motion-capture controller of the Oculus Rift. Used with a spatial positioning system, Oculus Touch adopts a bracelet-like design that allows cameras to track the user's hand; its sensors can also track finger motion, while providing a convenient grip for the user.
Embodiment 1
According to an embodiment of the present invention, a method embodiment of the object processing method in a virtual scene is provided.
Optionally, in this embodiment, the object processing method in a virtual scene described above can be applied to the hardware environment shown in Fig. 2, composed of a server 102 and a terminal 104. As shown in Fig. 2, the server 102 is connected to the terminal 104 via a network, which includes but is not limited to a wide area network, a metropolitan area network, or a local area network; the terminal 104 is not limited to a PC, mobile phone, tablet computer, etc. The object processing method in a virtual scene of the embodiment of the present invention may be performed by the server 102, by the terminal 104, or jointly by the server 102 and the terminal 104. Where the terminal 104 performs the method, it may also be performed by a client installed on the terminal.
Fig. 3 is a flow chart of an optional object processing method in a virtual scene according to an embodiment of the present invention. As shown in Fig. 3, the method may comprise the following steps:
Step S402: detect a first operation performed on a first target object in a real scene.
Step S404: in response to the first operation, generate in the virtual scene at least one first menu object corresponding to a second target object, where the second target object is the virtual object corresponding to the first target object in the virtual scene.
Step S406: detect a second operation performed on the first target object, where the second operation instructs the virtual scene to move the second target object to the position of a target menu object among the at least one first menu object.
Step S408: in response to the second operation, perform a target processing operation in the virtual scene, where the target processing operation is the processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
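The patent itself contains no code, but the step flow S402–S408 can be sketched as a minimal event sequence. All class and method names below are illustrative assumptions, not part of the disclosure:

```python
# Minimal sketch of steps S402-S408: the first operation spawns first menu
# objects; the second operation moves the virtual controller onto one of
# them, triggering its processing operation.
class VirtualSceneMenu:
    def __init__(self):
        self.menus = []        # first menu objects currently shown
        self.executed = None   # last target processing operation performed

    def on_first_operation(self):
        # S404: generate the first menu objects around the second target object
        self.menus = ["reload", "switch weapon", "use skill"]

    def on_second_operation(self, target):
        # S406/S408: the second target object was moved onto `target`;
        # perform the processing operation bound to that menu object
        if target in self.menus:
            self.executed = f"performed: {target}"
            self.menus = []    # the menu closes after selection
        return self.executed

scene = VirtualSceneMenu()
scene.on_first_operation()                  # S402 detected the first operation
result = scene.on_second_operation("reload")
```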
Through steps S402 to S408, a first operation performed on a first target object is detected; according to the detected first operation, multiple first menu objects corresponding to the second target object (which corresponds to the first target object) are generated in the virtual scene; a second operation performed on the first target object is then detected, and according to the detected second operation, the second target object in the virtual scene is instructed to move to the position of a target menu object among the first menu objects; once the second target object has moved to that position, the target processing operation is performed in the virtual scene. There is thus no need to simulate a mouse or to convert 3D spatial coordinates into 2D positions before performing an operation. This solves the technical problem in the related art that menu options on 2D menu panels in a virtual scene are selected by casting a ray, making menu selection in the virtual scene relatively cumbersome, and achieves the technical effect of performing operations directly with 3D spatial coordinates, so that menu selection in the virtual scene becomes simpler.
In the technical solution provided by step S402, the first target object may be a device object in the real scene used to control the virtual scene; for example, the first target object may be a game controller, a remote control, or the like in the real scene. In the real scene, the user can perform a first operation on the first target object, where the first operation may include but is not limited to: clicking, long-pressing, a gesture, or shaking. The embodiment of the present invention can detect the first operation performed by the user on the first target object in the real scene and obtain the control instruction corresponding to the first operation, where the control instruction can be used to control the virtual scene. For example, in a VR game application, the user can press a button on the game controller to generate a menu option in the virtual game picture; or, in a VR video application, the user can press a button on the remote control to control playback of the virtual video picture.
Optionally, the embodiment of the present invention can detect the first operation performed on the first target object in the real scene in real time, so that the first operation can be responded to promptly. Object processing in the virtual scene then becomes more timely, improving the user's experience of the virtual scene.
In the technical solution provided by step S404, the second target object may be the virtual object in the virtual scene corresponding to the first target object in the real scene. For example, if the first target object is a game controller in the real scene, the second target object may be the game controller in the virtual scene, and the position of the controller in the virtual scene can correspond to the position of the controller in the real scene. For example, when the user moves the game controller in the real scene, the game controller in the virtual scene moves with it, and its moving direction and displacement are identical to those of the controller in the real scene.
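The 1:1 mapping described above can be sketched as follows, under the assumption that the tracking system reports the real controller's displacement as a 3D vector (the function name is illustrative):

```python
def update_virtual_pose(virtual_pos, real_delta):
    """Apply the real controller's displacement to its virtual counterpart,
    keeping direction and distance identical (a 1:1 mapping)."""
    return tuple(v + d for v, d in zip(virtual_pos, real_delta))

# The real controller moved 0.1 m right and 0.2 m forward (toward -z);
# the virtual controller performs exactly the same displacement.
pos = (0.0, 1.0, 0.5)
pos = update_virtual_pose(pos, (0.1, 0.0, -0.2))
assert all(abs(a - b) < 1e-9 for a, b in zip(pos, (0.1, 1.0, 0.3)))
```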
When the user performs the first operation on the first target object, the embodiment of the present invention can respond to the first operation. The specific response process may include generating, in the virtual scene, at least one first menu object corresponding to the second target object, where the first menu object may be a virtual menu for controlling the virtual scene. For example, after the user presses a menu control button on the game controller in the real scene, the game controller in the virtual scene correspondingly performs the operation of pressing the menu control button; responding to that operation in the virtual scene can then generate a menu in the virtual scene, for the user to select menu objects and thereby invoke the functions they correspond to.
It should be noted that the embodiment of the present invention does not specifically limit the functions corresponding to the first menu objects: a first menu object may correspond to generating a drop-down menu, to performing some action, or to completing some task.
In an optional embodiment, step S404, generating in the virtual scene at least one first menu object corresponding to the second target object in response to the first operation, may comprise the following steps:
Step S4042: obtain the target scene that is current in the virtual scene when the first operation is detected.
Step S4044: according to a predetermined correspondence between virtual scenes and menu objects, generate around the second target object in the virtual scene at least one first menu object corresponding to the target scene.
With this embodiment of the present invention, when the first operation is detected, the process of responding to it may include first obtaining the target scene current in the virtual scene, and then using the predetermined correspondence between virtual scenes and menu objects to determine the menu objects corresponding to that target scene. That is, the virtual scene that is current when the first operation is detected is the target scene, and the menu objects corresponding to it under the predetermined relationship are the first menu objects. The first menu objects corresponding to the target scene can then be generated around the second target object in the virtual scene for the user to select, so that the user selects the desired menu option from the generated first menu objects.
Optionally, different menu objects can be generated for different target scenes. For example, in a shooting game the generated menu objects may form a weaponry selection menu, while in a fighting game the generated menu objects may form a skill selection menu. The menu objects corresponding to other target scenes are not enumerated here.
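Steps S4042/S4044 amount to a lookup from the current target scene to its menu objects. A minimal sketch, with a hypothetical scene-to-menu table (the scene names and menu labels are illustrative assumptions):

```python
# Hypothetical predetermined correspondence between virtual scenes and the
# first menu objects generated around the second target object.
SCENE_MENUS = {
    "shooting_game": ["rifle", "pistol", "grenade"],          # weaponry menu
    "fighting_game": ["punch combo", "counter", "ultimate"],  # skill menu
}

def menus_for_scene(scene_name):
    # S4042: the target scene was obtained when the first operation was
    # detected; S4044: look up the menu objects corresponding to it.
    return SCENE_MENUS.get(scene_name, [])

weapon_menu = menus_for_scene("shooting_game")
```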
As an optional embodiment, the embodiment of the present invention does not specifically limit the arrangement of the at least one first menu object generated around the second target object in response to the first operation. The arrangement of the first menu objects around the second target object may include at least one of the following:
(1) The at least one first menu object is generated at predetermined intervals on a predetermined circle, where the predetermined circle may be the circumference centered at the position of the second target object with a preset distance as its radius. The preset distance and the predetermined interval can be set according to actual requirements and are not specifically limited here.
(2) The at least one first menu object is generated in a predetermined arrangement order in a predetermined direction from the second target object, where the predetermined direction includes at least one of: above, below, left, right; and the predetermined arrangement order includes at least one of: a straight-line arrangement, a curved arrangement, etc.
It should be noted that the above merely lists some arrangements; the embodiment of the present invention may also adopt other arrangements, such as a random arrangement, which are not enumerated here.
With this embodiment of the present invention, in response to the first operation, multiple first menu objects can be arranged uniformly around the second target object on a predetermined circle centered on it, where the preset distance between each first menu object and the second target object is the radius of the circle, and adjacent first menu objects are spaced at the predetermined interval. Alternatively, the first menu objects can be arranged in a straight line or a curve above, below, to the left of, or to the right of the second target object. By arranging multiple first menu objects around the second target object, the scheme lets the user, while watching the second target object, easily move it toward the first menu object to be selected and complete the menu selection.
Optionally, the first menu objects can be arranged around the second target object in the form of 3D balls, 3D cubes, or other shapes, which are not illustrated one by one here. Likewise, the first menu objects can be arranged around the second target object on a circle or in other forms, which are also not enumerated here.
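The two arrangement modes above are simple geometry. A sketch of both, assuming positions are (x, y, z) tuples and the radius/spacing values are arbitrary examples:

```python
import math

def circular_layout(center, labels, radius=0.25):
    """Arrangement mode (1): menu objects at equal angular intervals on a
    circle centered at the second target object's position."""
    n = len(labels)
    return {
        label: (center[0] + radius * math.cos(2 * math.pi * i / n),
                center[1] + radius * math.sin(2 * math.pi * i / n),
                center[2])
        for i, label in enumerate(labels)
    }

def linear_layout(center, labels, direction=(0.0, 1.0, 0.0), spacing=0.1):
    """Arrangement mode (2): menu objects in a straight line in a
    predetermined direction (here: above the second target object)."""
    return {
        label: tuple(c + (i + 1) * spacing * d
                     for c, d in zip(center, direction))
        for i, label in enumerate(labels)
    }

ring = circular_layout((0.0, 0.0, 1.0), ["a", "b", "c", "d"])
line = linear_layout((0.0, 0.0, 1.0), ["a", "b"])
# every ring item sits exactly `radius` away from the center
assert all(abs(math.dist(p, (0.0, 0.0, 1.0)) - 0.25) < 1e-9
           for p in ring.values())
```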
In the technical solution provided by step S406, after responding to the first operation performed on the first target object, the embodiment of the present invention can also detect in real time the second operation performed by the user on the first target object in the real scene, where the second operation may include but is not limited to operations such as moving or sliding. Optionally, the second operation can be another operation performed on the game controller after the first operation; for example, where the first operation is holding down a controller button, the second operation is moving the controller while the button is held. The second operation also has many other implementations, which are not repeated here.
After detecting the second operation performed on the first target object, the embodiment of the present invention can respond to it, specifically by controlling the second target object in the virtual scene to move according to the moving direction and displacement of the first target object in the real scene, so that the second target object can be moved to the position of the target menu object in the virtual scene, where the target menu object can be any one of the at least one first menu object around the second target object. By performing the second operation on the first target object, the user controls the second target object in the virtual scene to move, under the control of the second operation, to some target menu object among the at least one first menu object.
For example, when the user needs to select some target menu object, the user moves the handheld game controller in the real scene; the game controller in the virtual scene then moves in the corresponding direction. By controlling the moving direction of the controller in the real scene, the user controls the moving direction of the controller in the virtual scene, moving the virtual controller onto the target menu object to be selected and completing the selection of the target menu object.
In the technical solution provided by step S408, the embodiment of the present invention responds to the second operation by controlling the second target object in the virtual scene to move to the target menu object, and triggers the target processing operation corresponding to the target menu object when the second target object reaches it. It should be noted that each first menu object among the at least one first menu object arranged around the second target object in the virtual scene corresponds to one processing operation, and the processing operation corresponding to the target menu object is the target processing operation. It should also be noted that the processing operation corresponding to a first menu object may include but is not limited to generating the drop-down menu objects of that first menu object, realizing a certain function, etc.
For example, in a shooting game, i.e., where the virtual scene is a shooting game environment, the user controls the virtual game controller to move to the position of the "change magazine" target menu object; in response, the target processing operation corresponding to the target menu object selected by the user is performed, and the game character in the virtual scene changes the magazine of the firearm. It should be noted that the target processing operation can take many forms, for example performing the function corresponding to the menu object, or triggering generation of the menu option's drop-down menu; its many implementations are not enumerated here.
As an optional embodiment, step S408, performing the target processing operation in the virtual scene in response to the second operation, can include at least one of the following:
Generating at least one second menu object in the virtual scene, where the at least one second menu object is the drop-down menu objects of the target menu object.
Switching a first scene in the virtual scene to a second scene, such as switching game scenes.
Setting an attribute of an operation object in the virtual scene to a target attribute, such as updating a game skin, weaponry, or skill.
Controlling an operation object in the virtual scene to perform a target task, for example performing a monster-hunting task in a game.
It should be noted that the target processing operation is not limited to the above operations; it may also include other operations, which are not enumerated here. With this embodiment of the present invention, in performing the target processing operation in the virtual scene in response to the second operation, different target processing operations can be selected according to the user's needs and the functions represented by the different first menu objects, so that the menu in the virtual scene can satisfy a variety of use demands.
As an optional embodiment, after the at least one first menu object corresponding to the second target object is generated in the virtual scene in response to the first operation, the embodiment may further include: detecting a third operation performed on the first target object; and deleting the at least one first menu object in the virtual scene in response to the third operation.
With the above embodiment of the present invention, after the first menu objects are generated around the second target object in the virtual scene, the user may also control the first target object to perform the third operation, where the third operation may include, but is not limited to, releasing a button, clicking, long-pressing, moving, and so on. When the third operation performed on the first target object is detected, the embodiment of the present invention can control the second target object in the virtual scene to perform the corresponding third operation and delete the first menu objects in the virtual scene, that is, cancel the display of the menu content in the virtual scene.
For example, after the first menu objects are generated around the game handle in the virtual scene, the user controls the game handle in the real scene to perform the third operation, for example pressing a button on the game handle, shaking a joystick on the game handle, or releasing a button on the game handle. The game handle in the virtual scene then also performs the corresponding operation, and the first menu objects are deleted in the virtual scene, so that the display of the menu content in the virtual scene is cancelled.
As an optional embodiment, before the target processing operation is performed in the virtual scene in response to the second operation, the embodiment may further include: setting a mark for the target menu object in the virtual scene in response to the second operation, where the mark is used to indicate that the second target object is moving to the position of the target menu object.
With the above embodiment, a mark can be set on the target menu object, so that while the second target object in the virtual scene moves toward the position of the target menu object, the user can clearly see the mark and can move more easily under its guidance.
For example, the user may control the game handle in the virtual scene to point at a certain target menu object; the target menu object may then enlarge, light up, flicker, or rotate under the effect of the mark. It should be noted that the target menu object may also take other forms of expression under the effect of the mark, which are not illustrated one by one here.
The present invention further provides a preferred embodiment, which provides an interaction scheme for a hand menu in a virtual reality environment.
The application scenario described in the present invention is a VR environment. For console games, and for games in all 2D display environments such as mobile phone games, whether the scene is 3D or 2D, the menu is implemented using a 2D interface. In the case where the final display is 2D, the menu is not made as content that should exist in the game scene itself; it serves only as a connecting medium between the user and the game content. Making the menu with a 2D interface allows the 2D panel of the menu to face the display direction of the screen directly (that is, the direction of the player's camera in the virtual world), so that the user can make selections more quickly and conveniently without interfering with the game logic of the virtual world, keeping the menu relatively independent.
In a VR environment, the interaction mode between the user and the host is no longer one in which changes of mouse and keyboard position are mapped back through a 2D interface into 3D space to operate the virtual world. Instead, the user's position in real 3D space is acquired directly, and the 3D position in the virtual space corresponds directly to this position; thus the original correspondence underlying mouse operation, such as the mapping between 2D screen space and 3D virtual space, no longer exists.
By displaying the menu in a 3D manner, the 3D objects of the menu can be integrated directly into the virtual world scene; the user can operate more conveniently and intuitively, and the sense of immersion of the VR game also becomes stronger.
The present invention mainly describes the presentation and the logic implementation. When the user calls up the menu, each option of the menu appears around the hand as a 3D object according to a certain arrangement algorithm; the user then triggers a menu option through a pre-defined behavior, the chosen menu item triggers the corresponding function, and the whole menu disappears upon triggering.
The invention thus provides a hand 3D-object menu in a VR environment: in the case where the user calls up the menu, each option in the menu appears around the hand as a 3D object according to a certain arrangement algorithm, the user triggers a menu option through a pre-defined behavior, the chosen menu item triggers the corresponding function, and the whole menu disappears upon triggering.
The hand 3D-object menu provided by the present invention can be applied as a shortcut menu in VR, particularly in a game scene: each menu option can genuinely exist within the game scene, so that the appearance of the menu does not affect the user's immersive experience of the game, while still allowing the user to quickly select the corresponding function.
Fig. 4 is a schematic diagram of a hand menu in an optional virtual reality environment according to an embodiment of the present invention. As shown in Fig. 4, a custom handle model is displayed; the handle model exists in the virtual 3D space of the game environment, where the position of the handle in virtual space corresponds to the position of the handle in real space. The user controls the handle in virtual space by controlling the handle held in the hand in real space.
Alternatively, the game handle may be a Vive handle, an Oculus Touch handle, or a corresponding two-handed separate VR handle.
It should be noted that no matter where the handle is in the virtual space, when the user presses the menu key of the physical handle (for example, pressing the Menu button on a Vive handle, or pressing the A/X button on an Oculus Touch handle), the menu objects ejected in the virtual space need to be born from the handle position and then moved to a target position computed in advance.
Alternatively, the target position may be a relative position around the handle position.
Fig. 5 is a schematic diagram of a hand menu in another optional virtual reality environment according to an embodiment of the present invention. As shown in Fig. 5, for the target position, the position of the handle in virtual space at the moment the button is pressed may be taken as the origin position; with this origin position as the center and 20 cm as the radius, a disk is established, and multiple option objects are arranged equidistantly around the disk, where the normal vector of the disk faces the camera of the virtual world, that is, the normal vector of the disk faces the direction in which the user is watching.
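The arrangement just described can be sketched as follows (a minimal Python sketch under the stated 20 cm radius; the function name, vector math, and the camera-direction argument are illustrative assumptions, not taken from the patent):

```python
import math

def menu_positions(origin, to_camera, n_options, radius=0.2):
    """Place n_options points equidistantly on a circle of `radius` (metres)
    around `origin`, in the plane whose normal points toward the camera."""
    # Normalize the disk normal (handle origin -> camera direction).
    nx, ny, nz = to_camera
    norm = math.sqrt(nx*nx + ny*ny + nz*nz)
    nx, ny, nz = nx/norm, ny/norm, nz/norm
    # Build two unit vectors u, v spanning the disk plane:
    # u = up x n, v = n x u (use world-up unless the normal is near-vertical).
    up = (0.0, 1.0, 0.0) if abs(ny) < 0.99 else (1.0, 0.0, 0.0)
    ux, uy, uz = (up[1]*nz - up[2]*ny, up[2]*nx - up[0]*nz, up[0]*ny - up[1]*nx)
    ulen = math.sqrt(ux*ux + uy*uy + uz*uz)
    ux, uy, uz = ux/ulen, uy/ulen, uz/ulen
    vx, vy, vz = (ny*uz - nz*uy, nz*ux - nx*uz, nx*uy - ny*ux)
    ox, oy, oz = origin
    positions = []
    for i in range(n_options):
        a = 2.0 * math.pi * i / n_options  # equidistant angles on the circle
        positions.append((ox + radius*(math.cos(a)*ux + math.sin(a)*vx),
                          oy + radius*(math.cos(a)*uy + math.sin(a)*vy),
                          oz + radius*(math.cos(a)*uz + math.sin(a)*vz)))
    return positions
```

Because every point lies in the plane perpendicular to `to_camera`, the disk of options always faces the direction in which the user is watching.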
In the scheme provided in the present invention, the handle in virtual space is a 3D model, and each menu option appearing around the handle is also a 3D model. The handle position in virtual space corresponds one-to-one with the handle position in real space, and the positions of the menu options in virtual space also correspond one-to-one with positions in real space.
In the scheme provided in the present invention, pressing the menu button instantaneously triggers the multiple menu options, making them appear at fixed positions in the virtual world; the positions of the virtual menu represented by these 3D models will not change again until this round of interaction ends and the menu disappears.
Alternatively, after the user presses the Menu key to pop up the menu, the user needs to keep the Menu key held down; if the user releases the Menu key, the whole menu interaction process ends.
It should be noted that during the interaction, the triggering of menu options is unrelated to other operation behaviors such as the handle buttons; the menu key controls only the appearance and disappearance of the whole menu in the interaction.
In the scheme provided by the present invention, at any moment, the position of the handle in virtual space corresponds one-to-one with the position of the handle in real space. Therefore, once these virtual menu options have appeared, the situation in which the user's handle touches one of these virtual menu objects is exactly what triggers the function represented by that menu option: in the case where the moving position of the handle collides with a menu option, the menu function is triggered.
Fig. 6 is a schematic diagram of a hand menu in an optional virtual reality environment according to an embodiment of the present invention. As shown in Fig. 6, while the handle moves, the design may also let a menu option grow larger as the distance between the handle and the menu option decreases, and shrink as that distance increases, so that this prompt guides the user to touch the menu options.
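The distance-dependent sizing can be sketched as a monotonic mapping from handle-to-option distance to display scale (a Python sketch; the distance thresholds and scale range are illustrative assumptions, since the patent only states that options grow as the handle approaches and shrink as it recedes):

```python
def option_scale(distance, near=0.05, far=0.3, min_scale=1.0, max_scale=1.5):
    """Map the handle-to-option distance (metres) to a display scale:
    the closer the handle, the larger the option is drawn."""
    if distance <= near:
        return max_scale
    if distance >= far:
        return min_scale
    t = (far - distance) / (far - near)  # 1 when near, 0 when far
    return min_scale + t * (max_scale - min_scale)
```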
Alternatively, in the case where the user touches a menu object, the function corresponding to the menu is triggered; that function may be a preset piece of game logic together with closing the menu, or it may open the next level of menu.
Fig. 7 is a schematic diagram of an optional menu control logic according to an embodiment of the present invention. As shown in Fig. 7, after a [Menu Triggered] request is issued, control enters [Single-layer Menu Logic] and the corresponding menu logic is executed according to [Single-layer Menu Logic]; the result of execution is given as [Feedback], and according to the [Feedback] result, either [Menu Disappears] is executed or [Second-level Menu] is opened. If [Second-level Menu] is executed, control returns to [Single-layer Menu Logic] to execute a second round of menu logic.
Alternatively, the ways of opening a menu interaction include [Menu Triggered] and [Second-level Menu]. [Menu Triggered] means that the user presses the menu key; [Second-level Menu] means that the menu option triggered by the user further opens a new second-level menu option, in which case the upper-level menu options disappear, completing the previous menu interaction, while the newly generated menu options open the current menu interaction.
Alternatively, the ways of closing a menu interaction likewise include [Menu Triggered] and [Menu Disappears]. Here [Menu Triggered] means that the user moves the handle and touches one of the menu options; in this case, whether the next-level menu is opened or the preset logic of the menu option is executed, the first step is always to destroy all menu options of this round. [Menu Disappears] means that the user did not collide with any option in this round of menu interaction but instead released the menu button, which triggers the behavior of ending the current [Single-layer Menu Logic].
It should be noted that the interior of [Single-layer Menu Logic] is the execution logic of one menu interaction.
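The execution logic of one menu interaction can be sketched as a small event loop over per-frame events (a Python sketch; the event names, the event-stream shape, and the `option_opens_submenu` predicate are illustrative assumptions layered on the control flow of Fig. 7):

```python
from enum import Enum

class MenuEvent(Enum):
    OPTION_HIT = 1      # the handle collided with a menu option
    KEY_RELEASED = 2    # the user released the menu button
    NONE = 3            # nothing happened this frame

def single_layer_menu(events, option_opens_submenu):
    """One round of [Single-layer Menu Logic]: consume per-frame events until
    a termination condition is met, then report how the round ended."""
    for ev, option in events:
        if ev is MenuEvent.KEY_RELEASED:
            return ("menu_disappears", None)           # [Menu Disappears]
        if ev is MenuEvent.OPTION_HIT:
            # All options of this round are destroyed first in either case.
            if option_opens_submenu(option):
                return ("open_second_level", option)   # re-enters the logic
            return ("run_option_logic", option)        # preset game logic
    return ("menu_disappears", None)
```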
Alternatively, [Single-layer Menu Logic] comprises an initialization phase and an execution phase.
After a menu interaction is opened, the initialization phase is entered first, and the menu options are generated.
Alternatively, the content of the hand menu to be ejected is determined according to the current game environment. One way to implement this is to store a variable in the hand menu identifying the type of the current menu, and to pre-define for each type the content the hand menu needs to eject. In the case where the user presses the menu button, the current game environment is checked, and the value of the menu type variable for this round is determined and used as the parameter for creating the current set of menu options, thereby achieving [Menu Content Generation].
The second step of initialization is [Determine Menu Spatial Position]. One implementation is to use an algorithm, according to the current [Handle Position], to arrange all menu options around the position of the handle at the moment the menu button was pressed.
Alternatively, the algorithm may be: take the position of the handle in virtual space at the moment the button is pressed as the origin position; with this origin position as the center and 20 cm as the radius, establish a disk, and arrange multiple option objects equidistantly around the disk, where the normal vector of the disk faces the camera of the virtual world, that is, the direction in which the user is watching. The collision body and position of each menu option are then stored.
Then the execution phase is entered. The logic to be executed for each frame in the virtual space is: acquire the current virtual handle's [Handle Position], then judge whether the current virtual handle's [Handle Position] satisfies a termination condition; if no termination condition is satisfied, this step continues to loop.
It should be noted that the termination conditions may include, for example: the user releases the menu button; or the current virtual handle's [Handle Position] collides with any one of the menu options. As soon as any one of the above conditions is met, the current menu interaction ends and the [Single-layer Menu Logic] is completed.
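The per-frame execution phase with its two termination conditions can be sketched as follows (a Python sketch; `poll_handle` and `key_held` are assumed stand-ins for the engine's per-frame input queries, and the hit radius is an assumed value):

```python
def execution_phase(poll_handle, key_held, option_positions, hit_radius=0.05):
    """Per-frame execution loop of [Single-layer Menu Logic]: loops until
    the menu key is released or the handle collides with an option."""
    while True:
        if not key_held():
            return ("released", None)        # termination: key released
        hx, hy, hz = poll_handle()           # current [Handle Position]
        for i, (ox, oy, oz) in enumerate(option_positions):
            if (hx-ox)**2 + (hy-oy)**2 + (hz-oz)**2 <= hit_radius**2:
                return ("hit", i)            # termination: option collision
```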
With the above embodiment of the present invention, the presentation agrees better with the user: touching a virtual target object with the virtual hand gives the user immediate audiovisual feedback, making the user's sensation in a VR environment more realistic.
With the above embodiment of the present invention, the user can quickly select a menu in a VR environment.
In the above embodiment of the present invention, any two-handed separate handle whose position in 3D space can be acquired can obtain the support of the scheme provided by the present invention.
In the above embodiment of the present invention, the menu key can be adjusted arbitrarily.
In the above embodiment of the present invention, the algorithm determining where the menu options appear may take various forms; using the algorithm in the above embodiment, option selection can better conform to the user's selection habits.
In the above embodiment of the present invention, the scheme in which the button must be held down for the menu to be shown can also be realized in other ways, for example: click the button to start the menu interaction, and click the button again to end the menu interaction.
In the above embodiment of the present invention, the menu response animations during handle movement are also various, and are not described one by one here.
It should be noted that for foregoing each method embodiment, in order to be briefly described, therefore it is all expressed as a series of
Combination of actions, but those skilled in the art should know, the present invention is not limited by described sequence of movement because
According to the present invention, some steps can be carried out sequentially or simultaneously using other.Secondly, those skilled in the art should also know
Know, embodiment described in this description belongs to preferred embodiment, involved action and module is not necessarily of the invention
It is necessary.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be realized by software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical scheme of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product: the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes instructions to cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
Embodiment 2
According to an embodiment of the present invention, an object processing device in a virtual scene for implementing the object processing method in the above virtual scene is also provided. Fig. 8 is a schematic diagram of an object processing device in an optional virtual scene according to an embodiment of the present invention; as shown in Fig. 8, the device may include:
a first detection unit 91 for detecting a first operation performed on a first target object in a real scene; a first response unit 93 for generating, in response to the first operation, at least one first menu object corresponding to a second target object in a virtual scene, where the second target object is the virtual object corresponding to the first target object in the virtual scene; a second detection unit 95 for detecting a second operation performed on the first target object, where the second operation is used to indicate that the second target object in the virtual scene is to be moved to the position of a target menu object in the at least one first menu object; and a second response unit 97 for performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each first menu object in the at least one first menu object corresponds to one processing operation.
It should be noted that the first detection unit 91 in this embodiment may be used to perform step S402 in Embodiment 1 of the present application; the first response unit 93 in this embodiment may be used to perform step S404 in Embodiment 1; the second detection unit 95 in this embodiment may be used to perform step S406 in Embodiment 1; and the second response unit 97 in this embodiment may be used to perform step S408 in Embodiment 1.
It should be noted here that the examples and application scenarios realized by the above modules are the same as those of the corresponding steps, but are not limited to what is disclosed in Embodiment 1 above. It should also be noted that the above modules, as part of the device, may operate in the hardware environment shown in Fig. 2, and may be realized by software or by hardware.
As an optional embodiment, as shown in Fig. 9, the embodiment may further include: a third detection unit 98 for detecting, after the at least one first menu object corresponding to the second target object is generated in the virtual scene in response to the first operation, a third operation performed on the first target object; and a third response unit 99 for deleting the at least one first menu object in the virtual scene in response to the third operation.
As an optional embodiment, as shown in Fig. 10, the embodiment may further include: a fourth response unit 96 for setting, before the target processing operation is performed in the virtual scene in response to the second operation, a mark for the target menu object in the virtual scene in response to the second operation, where the mark is used to indicate that the second target object is moved to the position of the target menu object.
As an optional embodiment, as shown in Fig. 11, the first response unit 93 may include: an acquisition module 931 for acquiring the target scene current in the virtual scene when the first operation is detected; and a first generation module 932 for generating, according to a predetermined correspondence between virtual scenes and menu objects, at least one first menu object corresponding to the target scene around the second target object in the virtual scene.
As an optional embodiment, as shown in Fig. 12, the first response unit 93 may include at least one of the following modules: a second generation module 933 for generating the at least one first menu object at predetermined intervals on a predetermined circle, where the predetermined circle is a circumference with the position of the second target object as the center and a preset distance as the radius; and a third generation module 934 for sequentially generating the at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of: above, below, left, right, and the predetermined arrangement order includes at least one of: a straight-line arrangement order, a curved arrangement order.
As an optional embodiment, as shown in Fig. 13, the second response unit 97 may include at least one of the following modules: a fourth generation module 951 for generating at least one second menu object in the virtual scene, where the at least one second menu object is a drop-down menu object of the target menu object; a switching module 952 for switching the first scene in the virtual scene to the second scene; a setting module 953 for setting the attribute of the operation object in the virtual scene to the target attribute; and a control module 954 for controlling the operation object in the virtual scene to perform the target task.
With the above modules, the first operation performed on the first target object is detected; according to the detected first operation, multiple first menu objects are generated around the second target object corresponding to the first target object in the virtual scene; the second operation performed on the first target object is then detected, and according to the detected second operation, the second target object in the virtual scene is instructed to move to the position of the target menu object among the first menu objects; when the second target object in the virtual scene moves to the position of the target menu object, the target processing operation is performed in the virtual scene. No mouse simulation is required, and no conversion of 3D space coordinates into 2D spatial positions is needed to perform operations. This solves the technical problem of the related art that positioning menu options on a 2D menu board in a virtual scene by means of a divergent ray makes menu selection operations in the virtual scene more complicated, thereby achieving the technical effect of performing operations directly using 3D space coordinates and making menu selection operations in the virtual scene easier.
Embodiment 3
According to an embodiment of the present invention, a terminal for implementing the object processing method in the above virtual scene is also provided.
Fig. 14 is a structural block diagram of a terminal according to an embodiment of the present invention. As shown in Fig. 14, the terminal may include: one or more processors 201 (only one is shown in the figure), a memory 203, and a transmission device 205; as shown in Fig. 14, the terminal may also include an input/output device 207.
The memory 203 may be used to store software programs and modules, such as the program instructions/modules corresponding to the object processing method and device in the virtual scene in the embodiment of the present invention; the processor 201 executes various functional applications and data processing by running the software programs and modules stored in the memory 203, that is, realizes the object processing method in the above virtual scene. The memory 203 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 203 may further include memory located remotely relative to the processor 201, and these remote memories may be connected to the terminal through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The above transmission device 205 is used to receive or send data via a network. Specific examples of the above network may include wired networks and wireless networks. In one example, the transmission device 205 includes a network adapter (Network Interface Controller, NIC), which may be connected to other network equipment and a router by a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 205 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
Specifically, the memory 203 is used to store the application program.
The processor 201 may call the application program stored in the memory 203 to perform the following steps: detecting a first operation performed on a first target object in a real scene; generating, in response to the first operation, at least one first menu object corresponding to a second target object in a virtual scene, where the second target object is the virtual object corresponding to the first target object in the virtual scene; detecting a second operation performed on the first target object, where the second operation is used to indicate that the second target object in the virtual scene is to be moved to the position of a target menu object in the at least one first menu object; and performing a target processing operation in the virtual scene in response to the second operation, where the target processing operation is the processing operation corresponding to the target menu object, and each first menu object in the at least one first menu object corresponds to one processing operation.
The processor 201 is also used to perform the following steps: detecting a third operation performed on the first target object; and deleting the at least one first menu object in the virtual scene in response to the third operation.
The processor 201 is also used to perform the following step: setting a mark for the target menu object in the virtual scene in response to the second operation, where the mark is used to indicate that the second target object is moved to the position of the target menu object.
The processor 201 is also used to perform the following steps: acquiring the target scene current in the virtual scene when the first operation is detected; and generating, according to the predetermined correspondence between virtual scenes and menu objects, at least one first menu object corresponding to the target scene around the second target object in the virtual scene.
The processor 201 is also used to perform the following steps: generating the at least one first menu object at predetermined intervals on a predetermined circle, where the predetermined circle is a circumference with the position of the second target object as the center and a preset distance as the radius; and sequentially generating the at least one first menu object in a predetermined arrangement order in a predetermined direction of the second target object, where the predetermined direction includes at least one of: above, below, left, right, and the predetermined arrangement order includes at least one of: a straight-line arrangement order, a curved arrangement order.
The processor 201 is also used to perform the following steps: generating at least one second menu object in the virtual scene, where the at least one second menu object is a drop-down menu object of the target menu object; switching the first scene in the virtual scene to the second scene; setting the attribute of the operation object in the virtual scene to the target attribute; and controlling the operation object in the virtual scene to perform the target task.
With the embodiment of the present invention, an object processing scheme in a virtual scene is provided. The first operation performed on the first target object is detected; according to the detected first operation, multiple first menu objects are generated around the second target object corresponding to the first target object in the virtual scene; the second operation performed on the first target object is then detected, and according to the detected second operation, the second target object in the virtual scene is instructed to move to the position of the target menu object among the first menu objects; in the case where the second target object in the virtual scene moves to the position of the target menu object, the target processing operation is performed in the virtual scene. No mouse simulation is required, and no conversion of 3D space coordinates into 2D spatial positions is needed to perform operations. This solves the technical problem of the related art that positioning menu options on a 2D menu board in a virtual scene by means of a divergent ray makes menu selection operations in the virtual scene more complicated, thereby achieving the technical effect of performing operations directly using 3D space coordinates and making menu selection operations in the virtual scene easier.
Alternatively, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above, which will not be repeated here.
Those skilled in the art can understand that the structure shown in Fig. 14 is only schematic; the terminal may be a terminal device such as a smartphone (e.g., an Android phone or an iOS phone), a tablet computer, a palmtop computer, a mobile Internet device (Mobile Internet Devices, MID), or a PAD. Fig. 14 does not limit the structure of the above electronic device. For example, the terminal may also include more or fewer components than shown in Fig. 14 (such as a network interface, a display device, etc.), or have a configuration different from that shown in Fig. 14.
One of ordinary skill in the art can appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing hardware related to the terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include: a flash disk, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and so on.
Embodiment 4
An embodiment of the present invention also provides a storage medium. Alternatively, in this embodiment, the above storage medium may be used to store the program code for performing the object processing method in the virtual scene.
Alternatively, in this embodiment, the above storage medium may be located on at least one of multiple network devices in the network shown in the above embodiment.
Alternatively, in this embodiment, the storage medium is arranged to store program code for performing the following steps:
S1: detect a first operation performed on a first target object in a real scene;
S2: in response to the first operation, generate at least one first menu object corresponding to a second target object in a virtual scene, wherein the second target object is a virtual object corresponding to the first target object in the virtual scene;
S3: detect a second operation performed on the first target object, wherein the second operation is used to instruct the second target object in the virtual scene to move to a position of a target menu object among the at least one first menu object;
S4: in response to the second operation, perform a target processing operation in the virtual scene, wherein the target processing operation is the processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
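Steps S1 to S4 can be sketched as a minimal menu controller. This is a hedged illustration only: the class and method names, the use of strings for menu objects, and the callable-per-menu mapping are assumptions for clarity, not structures taken from the patent.

```python
# Illustrative sketch of steps S1-S4. All names here (VirtualSceneMenu,
# on_first_operation, on_second_operation) are invented for illustration.

class VirtualSceneMenu:
    def __init__(self, menu_actions):
        # menu_actions maps each first menu object to exactly one
        # processing operation (per S4, a one-to-one correspondence).
        self.menu_actions = menu_actions
        self.active_menus = []

    def on_first_operation(self, second_target_object):
        # S1/S2: the first operation on the real-world first target object
        # generates the first menu objects around its virtual counterpart.
        self.active_menus = list(self.menu_actions)
        return self.active_menus

    def on_second_operation(self, target_menu):
        # S3/S4: the second operation moved the second target object to the
        # target menu object; execute the corresponding processing operation.
        if target_menu not in self.active_menus:
            raise ValueError("menu object not currently displayed")
        return self.menu_actions[target_menu]()
```

For example, a menu built with `{"switch_scene": lambda: "scene_switched"}` would return `"scene_switched"` when the second operation lands on the `switch_scene` menu object.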
Optionally, the storage medium is further configured to store program code for performing the following steps: detecting a third operation performed on the first target object; and deleting the at least one first menu object in the virtual scene in response to the third operation.
Optionally, the storage medium is further configured to store program code for performing the following step: setting a marker for the target menu object in the virtual scene in response to the second operation, wherein the marker is used to indicate that the second target object has moved to the position of the target menu object.
Optionally, the storage medium is further configured to store program code for performing the following steps: obtaining a target scene currently presented in the virtual scene when the first operation is detected; and generating, around the second target object in the virtual scene, the at least one first menu object corresponding to the target scene according to a predetermined correspondence between virtual scenes and menu objects.
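The predetermined correspondence between virtual scenes and menu objects amounts to a lookup table keyed by the current target scene. A minimal sketch, with invented scene names and menu entries (the patent does not specify any particular scenes):

```python
# Hypothetical scene-to-menu correspondence table. The scene names and
# menu entries are illustrative assumptions, not from the patent.
SCENE_MENUS = {
    "battle": ["attack", "defend", "retreat"],
    "lobby": ["start_game", "settings"],
}

def menus_for_current_scene(target_scene):
    # Return the first menu objects to generate around the second target
    # object for the scene that is current when the first operation arrives.
    return list(SCENE_MENUS.get(target_scene, ()))
```

A scene not present in the table simply yields no menu objects, which matches the "corresponding relation" being predetermined rather than computed.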
Optionally, the storage medium is further configured to store program code for performing at least one of the following steps: generating the at least one first menu object at predetermined intervals on a predetermined circle, wherein the predetermined circle is a circle centered on the position of the second target object with a predetermined distance as its radius; and generating the at least one first menu object in a predetermined arrangement order in a predetermined direction relative to the second target object, wherein the predetermined direction includes at least one of: above, below, left, and right, and the predetermined arrangement order includes at least one of: a linear arrangement order and a curved arrangement order.
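The two layout strategies can be expressed as simple coordinate computations. This is a sketch under assumed conventions (2D coordinates, direction given as a unit vector); the function names are illustrative.

```python
import math

def circle_layout(center, radius, count):
    # Place `count` menu objects at equal predetermined intervals on a
    # predetermined circle centered on the second target object's position.
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * i / count),
         cy + radius * math.sin(2 * math.pi * i / count))
        for i in range(count)
    ]

def linear_layout(origin, direction, spacing, count):
    # Place menu objects in a straight line in a predetermined direction
    # (e.g. "right" as the unit vector (1, 0)) at a predetermined spacing.
    ox, oy = origin
    dx, dy = direction
    return [(ox + dx * spacing * i, oy + dy * spacing * i)
            for i in range(1, count + 1)]
```

A curved arrangement order would replace the straight-line step in `linear_layout` with points sampled along an arc, much as `circle_layout` does over a full circle.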
Optionally, the storage medium is further configured to store program code for performing at least one of the following steps: generating at least one second menu object in the virtual scene, wherein the at least one second menu object is a drop-down menu object of the target menu object; switching a first scene in the virtual scene to a second scene; setting an attribute of an operation object in the virtual scene to a target attribute; and controlling the operation object in the virtual scene to perform a target task.
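The four kinds of target processing operation listed above can be modeled as a dispatch over a small scene object. The `Scene` model and operation names below are deliberately minimal assumptions for illustration; the patent does not prescribe any particular data model.

```python
# Illustrative dispatch of the four target processing operations.
# Scene, its fields, and the operation names are invented for this sketch.

class Scene:
    def __init__(self, name):
        self.name = name            # current (first/second) scene
        self.operation_attrs = {}   # attributes of the operation object
        self.submenus = []          # generated second menu objects
        self.tasks = []             # tasks performed by the operation object

def perform_target_operation(scene, operation, **kwargs):
    if operation == "open_submenu":
        # Generate second menu objects: drop-down entries of the target menu.
        scene.submenus.extend(kwargs["entries"])
    elif operation == "switch_scene":
        # Switch the first scene in the virtual scene to the second scene.
        scene.name = kwargs["to"]
    elif operation == "set_attribute":
        # Set an attribute of the operation object to the target attribute.
        scene.operation_attrs[kwargs["key"]] = kwargs["value"]
    elif operation == "perform_task":
        # Control the operation object to perform the target task.
        scene.tasks.append(kwargs["task"])
    else:
        raise ValueError(f"unknown operation: {operation}")
    return scene
```

Which branch runs is determined by the target menu object the second target object was moved onto, since each first menu object corresponds to exactly one processing operation.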
Optionally, for specific examples in this embodiment, reference may be made to the examples described in Embodiment 1 and Embodiment 2 above; details are not repeated here.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, an optical disc, or any other medium capable of storing program code.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on this understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing one or more computer devices (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division of the units is merely a division of logical functions, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections between units or modules through some interfaces, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place, or may be distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
The above descriptions are merely preferred embodiments of the present invention. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.
Claims (12)
1. An object processing method in a virtual scene, comprising:
detecting a first operation performed on a first target object in a real scene;
generating, in a virtual scene in response to the first operation, at least one first menu object corresponding to a second target object, wherein the second target object is a virtual object corresponding to the first target object in the virtual scene;
detecting a second operation performed on the first target object, wherein the second operation is used to instruct the second target object in the virtual scene to move to a position of a target menu object among the at least one first menu object; and
performing a target processing operation in the virtual scene in response to the second operation, wherein the target processing operation is a processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
2. The method according to claim 1, wherein after the generating, in the virtual scene in response to the first operation, the at least one first menu object corresponding to the second target object, the method further comprises:
detecting a third operation performed on the first target object; and
deleting the at least one first menu object in the virtual scene in response to the third operation.
3. The method according to claim 1, wherein before the performing the target processing operation in the virtual scene in response to the second operation, the method further comprises:
setting a marker for the target menu object in the virtual scene in response to the second operation, wherein the marker is used to indicate that the second target object has moved to the position of the target menu object.
4. The method according to any one of claims 1 to 3, wherein the generating, in the virtual scene in response to the first operation, the at least one first menu object corresponding to the second target object comprises:
obtaining a target scene currently presented in the virtual scene when the first operation is detected; and
generating, around the second target object in the virtual scene, the at least one first menu object corresponding to the target scene according to a predetermined correspondence between virtual scenes and menu objects.
5. The method according to any one of claims 1 to 3, wherein the generating, in the virtual scene in response to the first operation, the at least one first menu object corresponding to the second target object comprises at least one of:
generating the at least one first menu object at predetermined intervals on a predetermined circle, wherein the predetermined circle is a circle centered on a position of the second target object with a predetermined distance as a radius; and
generating the at least one first menu object in a predetermined arrangement order in a predetermined direction relative to the second target object, wherein the predetermined direction comprises at least one of: above, below, left, and right, and the predetermined arrangement order comprises at least one of: a linear arrangement order and a curved arrangement order.
6. The method according to any one of claims 1 to 3, wherein the performing the target processing operation in the virtual scene in response to the second operation comprises at least one of:
generating at least one second menu object in the virtual scene, wherein the at least one second menu object is a drop-down menu object of the target menu object;
switching a first scene in the virtual scene to a second scene;
setting an attribute of an operation object in the virtual scene to a target attribute; and
controlling the operation object in the virtual scene to perform a target task.
7. An object processing apparatus in a virtual scene, comprising:
a first detection unit, configured to detect a first operation performed on a first target object in a real scene;
a first response unit, configured to generate, in a virtual scene in response to the first operation, at least one first menu object corresponding to a second target object, wherein the second target object is a virtual object corresponding to the first target object in the virtual scene;
a second detection unit, configured to detect a second operation performed on the first target object, wherein the second operation is used to instruct the second target object in the virtual scene to move to a position of a target menu object among the at least one first menu object; and
a second response unit, configured to perform a target processing operation in the virtual scene in response to the second operation, wherein the target processing operation is a processing operation corresponding to the target menu object, and each first menu object among the at least one first menu object corresponds to one processing operation.
8. The apparatus according to claim 7, further comprising:
a third detection unit, configured to detect a third operation performed on the first target object after the at least one first menu object corresponding to the second target object is generated in the virtual scene in response to the first operation; and
a third response unit, configured to delete the at least one first menu object in the virtual scene in response to the third operation.
9. The apparatus according to claim 7, further comprising:
a fourth response unit, configured to, before the target processing operation is performed in the virtual scene in response to the second operation, set a marker for the target menu object in the virtual scene in response to the second operation, wherein the marker is used to indicate that the second target object has moved to the position of the target menu object.
10. The apparatus according to any one of claims 7 to 9, wherein the first response unit comprises:
an obtaining module, configured to obtain a target scene currently presented in the virtual scene when the first operation is detected; and
a first generation module, configured to generate, around the second target object in the virtual scene, the at least one first menu object corresponding to the target scene according to a predetermined correspondence between virtual scenes and menu objects.
11. The apparatus according to any one of claims 7 to 9, wherein the first response unit comprises at least one of the following modules:
a second generation module, configured to generate the at least one first menu object at predetermined intervals on a predetermined circle, wherein the predetermined circle is a circle centered on a position of the second target object with a predetermined distance as a radius; and
a third generation module, configured to generate the at least one first menu object in a predetermined arrangement order in a predetermined direction relative to the second target object, wherein the predetermined direction comprises at least one of: above, below, left, and right, and the predetermined arrangement order comprises at least one of: a linear arrangement order and a curved arrangement order.
12. The apparatus according to any one of claims 7 to 9, wherein the second response unit comprises at least one of the following modules:
a fourth generation module, configured to generate at least one second menu object in the virtual scene, wherein the at least one second menu object is a drop-down menu object of the target menu object;
a switching module, configured to switch a first scene in the virtual scene to a second scene;
a setting module, configured to set an attribute of an operation object in the virtual scene to a target attribute; and
a control module, configured to control the operation object in the virtual scene to perform a target task.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710292385.7A CN107168530A (en) | 2017-04-26 | 2017-04-26 | Object processing method and device in virtual scene |
PCT/CN2018/081258 WO2018196552A1 (en) | 2017-04-25 | 2018-03-30 | Method and apparatus for hand-type display for use in virtual reality scene |
US16/509,038 US11194400B2 (en) | 2017-04-25 | 2019-07-11 | Gesture display method and apparatus for virtual reality scene |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710292385.7A CN107168530A (en) | 2017-04-26 | 2017-04-26 | Object processing method and device in virtual scene |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107168530A true CN107168530A (en) | 2017-09-15 |
Family
ID=59813899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710292385.7A Pending CN107168530A (en) | 2017-04-25 | 2017-04-26 | Object processing method and device in virtual scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107168530A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107870672A (en) * | 2017-11-22 | 2018-04-03 | 腾讯科技(成都)有限公司 | Virtual reality scenario realizes the method, apparatus and readable storage medium storing program for executing of menuboard |
CN108536295A (en) * | 2018-03-30 | 2018-09-14 | 腾讯科技(深圳)有限公司 | Object control method, apparatus in virtual scene and computer equipment |
CN108595010A (en) * | 2018-04-27 | 2018-09-28 | 网易(杭州)网络有限公司 | The exchange method and device of dummy object in virtual reality |
CN108922300A (en) * | 2018-07-24 | 2018-11-30 | 杭州行开科技有限公司 | Surgical simulation 3D system based on digitized humans |
CN109064817A (en) * | 2018-07-18 | 2018-12-21 | 杭州行开科技有限公司 | Surgery simulation system based on CT Three-dimension Reconstruction Model |
CN109557998A (en) * | 2017-09-25 | 2019-04-02 | 腾讯科技(深圳)有限公司 | Information interacting method, device, storage medium and electronic device |
CN109644181A (en) * | 2017-12-29 | 2019-04-16 | 腾讯科技(深圳)有限公司 | A kind of method, relevant apparatus and system that multimedia messages are shared |
CN110163976A (en) * | 2018-07-05 | 2019-08-23 | 腾讯数码(天津)有限公司 | A kind of method, apparatus, terminal device and the storage medium of virtual scene conversion |
CN110192169A (en) * | 2017-11-20 | 2019-08-30 | 腾讯科技(深圳)有限公司 | Menu treating method, device and storage medium in virtual scene |
CN110308841A (en) * | 2019-06-14 | 2019-10-08 | 广州世峰数字科技有限公司 | A kind of VR menu displaying method and system |
CN110325965A (en) * | 2018-01-25 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Object processing method, equipment and storage medium in virtual scene |
CN110377150A (en) * | 2019-06-11 | 2019-10-25 | 中新软件(上海)有限公司 | The method, apparatus and computer equipment of application entity component in virtual scene |
CN110866940A (en) * | 2019-11-05 | 2020-03-06 | 广东虚拟现实科技有限公司 | Virtual picture control method and device, terminal equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102023706A (en) * | 2009-09-15 | 2011-04-20 | 帕洛阿尔托研究中心公司 | System for interacting with objects in a virtual environment |
CN105867599A (en) * | 2015-08-17 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Gesture control method and device |
CN106020633A (en) * | 2016-05-27 | 2016-10-12 | 网易(杭州)网络有限公司 | Interaction control method and device |
CN106445118A (en) * | 2016-09-06 | 2017-02-22 | 网易(杭州)网络有限公司 | Virtual reality interaction method and apparatus |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102023706A (en) * | 2009-09-15 | 2011-04-20 | 帕洛阿尔托研究中心公司 | System for interacting with objects in a virtual environment |
CN105867599A (en) * | 2015-08-17 | 2016-08-17 | 乐视致新电子科技(天津)有限公司 | Gesture control method and device |
CN106020633A (en) * | 2016-05-27 | 2016-10-12 | 网易(杭州)网络有限公司 | Interaction control method and device |
CN106445118A (en) * | 2016-09-06 | 2017-02-22 | 网易(杭州)网络有限公司 | Virtual reality interaction method and apparatus |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11809685B2 (en) | 2017-09-25 | 2023-11-07 | Tencent Technology (Shenzhen) Company Limited | Information interaction method and apparatus, storage medium, and electronic apparatus |
US11226722B2 (en) | 2017-09-25 | 2022-01-18 | Tencent Technology (Shenzhen) Company Limited | Information interaction method and apparatus, storage medium, and electronic apparatus |
CN109557998B (en) * | 2017-09-25 | 2021-10-15 | 腾讯科技(深圳)有限公司 | Information interaction method and device, storage medium and electronic device |
CN109557998A (en) * | 2017-09-25 | 2019-04-02 | 腾讯科技(深圳)有限公司 | Information interacting method, device, storage medium and electronic device |
CN110192169B (en) * | 2017-11-20 | 2020-10-02 | 腾讯科技(深圳)有限公司 | Menu processing method and device in virtual scene and storage medium |
US11449196B2 (en) | 2017-11-20 | 2022-09-20 | Tencent Technology (Shenzhen) Company Limited | Menu processing method, device and storage medium in virtual scene |
CN110192169A (en) * | 2017-11-20 | 2019-08-30 | 腾讯科技(深圳)有限公司 | Menu treating method, device and storage medium in virtual scene |
CN107870672A (en) * | 2017-11-22 | 2018-04-03 | 腾讯科技(成都)有限公司 | Virtual reality scenario realizes the method, apparatus and readable storage medium storing program for executing of menuboard |
CN107870672B (en) * | 2017-11-22 | 2021-01-08 | 腾讯科技(成都)有限公司 | Method and device for realizing menu panel in virtual reality scene and readable storage medium |
CN109644181A (en) * | 2017-12-29 | 2019-04-16 | 腾讯科技(深圳)有限公司 | A kind of method, relevant apparatus and system that multimedia messages are shared |
US10965783B2 (en) | 2017-12-29 | 2021-03-30 | Tencent Technology (Shenzhen) Company Limited | Multimedia information sharing method, related apparatus, and system |
CN110325965A (en) * | 2018-01-25 | 2019-10-11 | 腾讯科技(深圳)有限公司 | Object processing method, equipment and storage medium in virtual scene |
CN108536295A (en) * | 2018-03-30 | 2018-09-14 | 腾讯科技(深圳)有限公司 | Object control method, apparatus in virtual scene and computer equipment |
CN108536295B (en) * | 2018-03-30 | 2021-08-10 | 腾讯科技(深圳)有限公司 | Object control method and device in virtual scene and computer equipment |
CN108595010A (en) * | 2018-04-27 | 2018-09-28 | 网易(杭州)网络有限公司 | The exchange method and device of dummy object in virtual reality |
CN110163976A (en) * | 2018-07-05 | 2019-08-23 | 腾讯数码(天津)有限公司 | A kind of method, apparatus, terminal device and the storage medium of virtual scene conversion |
CN110163976B (en) * | 2018-07-05 | 2024-02-06 | 腾讯数码(天津)有限公司 | Virtual scene conversion method, device, terminal equipment and storage medium |
CN109064817A (en) * | 2018-07-18 | 2018-12-21 | 杭州行开科技有限公司 | Surgery simulation system based on CT Three-dimension Reconstruction Model |
CN108922300A (en) * | 2018-07-24 | 2018-11-30 | 杭州行开科技有限公司 | Surgical simulation 3D system based on digitized humans |
CN110377150B (en) * | 2019-06-11 | 2023-01-24 | 中新软件(上海)有限公司 | Method and device for operating entity component in virtual scene and computer equipment |
CN110377150A (en) * | 2019-06-11 | 2019-10-25 | 中新软件(上海)有限公司 | The method, apparatus and computer equipment of application entity component in virtual scene |
CN110308841A (en) * | 2019-06-14 | 2019-10-08 | 广州世峰数字科技有限公司 | A kind of VR menu displaying method and system |
CN110866940A (en) * | 2019-11-05 | 2020-03-06 | 广东虚拟现实科技有限公司 | Virtual picture control method and device, terminal equipment and storage medium |
CN110866940B (en) * | 2019-11-05 | 2023-03-10 | 广东虚拟现实科技有限公司 | Virtual picture control method and device, terminal equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107168530A (en) | Object processing method and device in virtual scene | |
CN109557998B (en) | Information interaction method and device, storage medium and electronic device | |
CN107132917B (en) | For the hand-type display methods and device in virtual reality scenario | |
US11194400B2 (en) | Gesture display method and apparatus for virtual reality scene | |
Billinghurst et al. | Advanced interaction techniques for augmented reality applications | |
WO2018192394A1 (en) | Interaction method and apparatus for virtual reality scene, storage medium and electronic apparatus | |
CN113826058A (en) | Artificial reality system with self-tactile virtual keyboard | |
CN107209582A (en) | The method and apparatus of high intuitive man-machine interface | |
CN108579089A (en) | Virtual item control method and device, storage medium, electronic equipment | |
Smith et al. | Digital foam interaction techniques for 3D modeling | |
CN113785262A (en) | Artificial reality system with finger mapping self-touch input method | |
CN108159697A (en) | Virtual objects transfer approach and device, storage medium, electronic equipment | |
CN109529340A (en) | Virtual object control method, device, electronic equipment and storage medium | |
CN110420456A (en) | The method and device of selecting object, computer storage medium, electronic equipment | |
Sun et al. | Phonecursor: Improving 3d selection performance with mobile device in ar | |
CN108543308B (en) | Method and device for selecting virtual object in virtual scene | |
CN111399632A (en) | Operating method for interacting with virtual reality by means of a wearable device and operating device thereof | |
US11656687B2 (en) | Method for controlling interaction interface and device for supporting the same | |
WO2018043693A1 (en) | Game program, method, and information processing device | |
US20130296049A1 (en) | System and Method for Computer Control | |
JP7370721B2 (en) | Game program, method, and information processing device | |
CN107728811A (en) | Interface control method, apparatus and system | |
KR101962464B1 (en) | Gesture recognition apparatus for functional control | |
CN117369649B (en) | Virtual reality interaction system and method based on proprioception | |
CN111522429A (en) | Interaction method and device based on human body posture and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170915 |