CN109697002A - Method, related device, and system for object editing in virtual reality - Google Patents
Method, related device, and system for object editing in virtual reality
- Publication number
- CN109697002A CN109697002A CN201711005203.XA CN201711005203A CN109697002A CN 109697002 A CN109697002 A CN 109697002A CN 201711005203 A CN201711005203 A CN 201711005203A CN 109697002 A CN109697002 A CN 109697002A
- Authority
- CN
- China
- Prior art keywords
- target
- target object
- plane
- equipment
- processing equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
Embodiments of the present invention disclose a method, related device, and system for object editing in virtual reality. The method comprises the steps of: a processing device receives an object-selection operation detected by an input device; determines the target object to be edited according to the object-selection operation; determines the target adsorbing plane of the target object in a space editing region; receives an object-moving operation detected by the input device; determines the target position of the target object according to the object-moving operation; and moves the target object to the target position, which is displayed by a display device. Embodiments of the invention also provide a processing device and a system, used for quickly editing an object and providing a simple object-editing method.
Description
Technical field
The present invention relates to the computer field, and in particular to a method, related device, and system for object editing in virtual reality.
Background technique
Virtual reality (VR) technology is a computer simulation system that can create a virtual world and let users experience it. It uses a computer to generate a simulated environment: an interactive, multi-source-information-fusion simulation of three-dimensional dynamic scenes and entity behaviour that immerses the user in that environment.
In virtual reality, it is often necessary to edit the objects in the virtual scene, for example to move, rotate, or scale them. Conventionally, object editing is implemented through the Unity3D engine. Unity3D is a multi-platform, fully integrated game development tool for creating interactive content such as 3D video games, architectural visualisation, and real-time 3D animation. Although Unity3D in the prior art can edit objects under VR, it is difficult to operate: to edit an object with Unity3D, the user must understand Unity3D's menus and view interface, the coordinate systems in the scene, the input system, and the basic elements involved in importing resources, such as meshes, materials, textures, and animations.
Because editing objects in VR is so complicated to operate in the conventional approach, it is hard for ordinary users to master.
Summary of the invention
Embodiments of the present invention provide a method, related device, and system for object editing in virtual reality, used for quickly editing an object and providing a simple object-editing method.
In a first aspect, an embodiment of the present invention provides a method of object editing in virtual reality, comprising:
receiving an object-selection operation detected by an input device;
determining a target object to be edited according to the object-selection operation;
determining a target adsorbing plane of the target object in a space editing region;
receiving an object-moving operation detected by the input device;
determining a target position of the target object according to the object-moving operation;
moving the target object to the target position and displaying it through a display device.
In a second aspect, an embodiment of the present invention provides a processing device, comprising:
a first receiving module, configured to receive an object-selection operation;
an object determining module, configured to determine a target object to be edited according to the object-selection operation received by the first receiving module;
an adsorbing-plane determining module, configured to determine the target adsorbing plane, in a space editing region, of the target object determined by the object determining module;
a second receiving module, configured to receive an object-moving operation detected by the input device;
a position determining module, configured to determine the target position of the target object from the intersection point with the target adsorbing plane according to the object-moving operation received by the second receiving module;
a moving module, configured to move the target object to the target position determined by the position determining module and display it through a display device.
In a third aspect, an embodiment of the present invention provides a processing device, comprising:
a memory for storing computer-executable program code;
a transceiver; and
a processor coupled with the memory and the transceiver;
wherein the program code comprises instructions which, when executed by the processor, cause the processing device to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a virtual reality system, comprising a display device, an input device, and a processing device, the display device and the input device being connected to the processing device;
the input device detects an object-selection operation;
the processing device receives the object-selection operation detected by the input device;
the processing device determines a target object to be edited according to the object-selection operation;
the processing device determines the target adsorbing plane of the target object in a space editing region;
the processing device receives an object-moving operation detected by the input device;
the processing device generates a ray according to the object-moving operation and sends the data of the ray to the display device;
the display device displays the ray;
the processing device determines the target position of the target object according to the intersection point of the ray and the target adsorbing plane;
the processing device moves the target object to the target position;
the display device shows the target object moved to the target position.
As can be seen from the above technical solutions, embodiments of the present invention have the following advantage: the processing device receives an object-selection operation detected by the input device; determines the target object to be edited according to that operation; determines the target adsorbing plane of the target object in the space editing region; receives an object-moving operation detected by the input device; determines the target position of the target object according to that operation; and moves the target object to the target position, which is displayed by the display device. Embodiments of the invention can edit an object quickly and provide a simple object-editing method: the amount of calculation is small, the editing method is simple, and an ordinary user can easily master it.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; those skilled in the art may obtain other drawings based on these drawings.
Fig. 1 is a schematic diagram of the virtual reality system in an embodiment of the present invention;
Fig. 2 is a schematic diagram of adsorbing planes in an embodiment of the present invention;
Fig. 3 is a step flowchart of an embodiment of a method of object editing in virtual reality in an embodiment of the present invention;
Fig. 4 is a schematic diagram of a scene shown on the display device in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the space editing region in an embodiment of the present invention;
Fig. 6 is a schematic diagram of a scene in an embodiment of the present invention;
Fig. 7 is a side-view schematic diagram of a scene in which an intersection point is determined in an embodiment of the present invention;
Fig. 8 is another side-view schematic diagram of a scene in which an intersection point is determined in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the center of the grid cell in which the intersection point is located in an embodiment of the present invention;
Fig. 10 is a schematic diagram of the offset vector in an embodiment of the present invention;
Fig. 11 is a schematic side view of the target grid in an embodiment of the present invention;
Fig. 12 is a schematic side view of the offset vector in an embodiment of the present invention;
Fig. 13 is a schematic diagram of a scene in which the target object is moved to the target position in an embodiment of the present invention;
Fig. 14 is a schematic side view of the target object being moved to the target position in an embodiment of the present invention;
Fig. 15 is a schematic diagram of a scene in which placement of the target object is confirmed in an embodiment of the present invention;
Fig. 16 is another schematic side view of the target object being moved to the target position in an embodiment of the present invention;
Fig. 17 is a schematic diagram of the translation direction in an embodiment of the present invention;
Fig. 18 is a schematic diagram of a scene in the edit mode in an embodiment of the present invention;
Fig. 19 is a schematic diagram of the rotation direction of an object in an embodiment of the present invention;
Figs. 20 to 27 are structural schematic diagrams of eight embodiments of a processing device in an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention provide a method, related device, and system for object editing in virtual reality, used for quickly editing an object and providing a simple object-editing method.
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention shall fall within the scope of the present invention.
The terms "first", "second", "third", "fourth", and so on (if any) in the specification, the claims, and the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described herein can be implemented in an order other than the one illustrated or described here. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.
An embodiment of the present invention provides a method of object editing in virtual reality, applied to a virtual reality (VR) system; it can be understood with reference to Fig. 1, a schematic diagram of the VR system. The system includes a display device 101, an input device 102, and a processing device 103; the input device 102 and the display device 101 are connected to the processing device 103. In one application scenario, the processing device 103 may be a computer, a mobile phone, a palmtop computer, and so on. The display device 101 may be a VR head-mounted display (also called VR glasses or a VR helmet): a kind of head-mounted display that closes off the user's vision and hearing from the outside world and guides the user into the feeling of being in a virtual environment. The input device 102 is a device that maps environmental data of the real world into the virtual world and through which the user inputs instructions to the VR system; it may be a sensing glove, a sensing handle, and so on.
An embodiment of the present invention provides a method for editing a virtual object in the VR system; editing in the embodiments of the invention may include moving, scaling, rotating, and so on. The input device detects an object-selection operation: for example, the object-selection operation selects an object in VR through the input device, and the object may be a bed, a chest, a desk, and so on in VR. The input device detects the object-selection operation input by the user, generates the corresponding operation data, and transfers the data of the operation to the processing device, which determines the target object to be edited according to the data of the operation. The processing device calculates the space editing region of the target object and determines the target adsorbing plane of the target object in the space editing region. When the position of the input device in the real environment changes, the input device detects an object-moving operation and transfers the data of that operation to the processing device. The processing device generates a ray according to the received data of the object-moving operation and sends the data of the ray to the display device, which displays the ray. The processing device determines the target position of the target object according to the intersection point of the ray and the target adsorbing plane and moves the target object to the target position; the display device shows the target object moved to the target position.
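The overall flow above can be sketched in a few lines. This is a minimal illustration under our own assumptions (the region size, coordinate convention with Z vertical, and all names are hypothetical, not from the patent): select an object, look up its adsorbing plane, cast the input device's ray, and take the intersection as the target position.

```python
# Hypothetical edit-loop sketch; Z is treated as the vertical axis.
EDIT_REGION = {"x": (-2.0, 2.0), "y": (-2.0, 2.0), "z": (0.0, 3.0)}  # cuboid, metres

# Table 1 as a lookup: object -> default adsorbing plane ("-Z" = floor, "+Z" = ceiling)
DEFAULT_PLANE = {"bed": "-Z", "desk": "-Z", "pendant_lamp": "+Z", "mural": "+X"}

def plane_coord(plane):
    """Return (axis index, fixed coordinate) of an adsorbing plane such as "-Z"."""
    axis_name = plane[1].lower()
    lo, hi = EDIT_REGION[axis_name]
    return "xyz".index(axis_name), (hi if plane[0] == "+" else lo)

def intersect_ray(origin, direction, plane):
    """Intersection of the ray origin + t*direction (t >= 0) with plane, or None."""
    axis, c = plane_coord(plane)
    if abs(direction[axis]) < 1e-9:
        return None
    t = (c - origin[axis]) / direction[axis]
    if t < 0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

def place(obj, origin, direction):
    """Target position for obj given the input device's ray."""
    return intersect_ray(origin, direction, DEFAULT_PLANE[obj])

# Pointing the controller downward drops the bed onto the floor (z = 0):
print(place("bed", (0.0, 0.0, 1.5), (0.5, 0.0, -1.0)))  # -> (0.75, 0.0, 0.0)
```

Note that a ray pointing away from the object's adsorbing plane yields no intersection here; the mirror-image fallback the patent describes for that case is handled separately.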
For ease of understanding, the terms involved in the embodiments of the present invention are explained first.
Adsorbing plane: the plane on which, as in a real scene, an object should be placed according to its attribute; the adsorbing plane can be preset according to the attribute of the object. For example, a bed or a desk is placed on the ground, a mural is hung on a wall, and a pendant lamp is attached to the ceiling.
In the virtual scene, the six faces of the editable region can be denoted by X, Y, and Z together with a sign. The editable region can be understood as a cuboid region, whose six faces can serve as adsorbing planes; this can be understood with reference to Fig. 2, a schematic diagram of adsorbing planes, and Table 1 below:
Table 1
Object         Attribute                Adsorbing plane
bed, desk      adsorbs to the ground    (-Z)
pendant lamp   adsorbs to the ceiling   (+Z)
mural          adsorbs to a wall        (+X), (-X), (+Y), or (-Y)
An object, which can be understood as an object in the virtual space, can have N adsorbing planes, where N is a positive integer greater than or equal to 1; different objects may have different numbers of adsorbing planes. As Table 1 shows, the attribute of the bed and the desk is adsorbing to the ground, and the adsorbing plane corresponding to the ground is the (-Z) plane; the attribute of the pendant lamp is adsorbing to the ceiling, whose corresponding adsorbing plane is the (+Z) plane; and the attribute of the mural is adsorbing to a wall, whose corresponding adsorbing plane may be one of the four planes (+X), (-X), (+Y), and (-Y). That is, the attribute of an object and its adsorbing plane have a correspondence: once the attribute of the object is determined, its adsorbing plane can be determined from the attribute and the correspondence. As the example in Table 1 shows, the mural can correspond to 4 adsorbing planes. When an object corresponds to more than one adsorbing plane, a preset adsorbing plane can be set for the object in advance; as long as the user performs no rotation operation on the object, its adsorbing plane is the preset one. For example, the preset adsorbing plane of the mural may be set to (-X); if the mural is rotated by 90 degrees about the vertical axis, its corresponding adsorbing plane becomes (+Y).
It should be noted that denoting the 6 adsorbing planes by X, Y, and Z with signs in Table 1 above is only an example and does not limit the embodiments of the present invention. For example, different adsorbing planes could equally be denoted by different letters such as A, B, C, D, E, and F; any labelling that distinguishes the different planes of the editing region will do, and the many possible forms are not enumerated here one by one.
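The effect of rotation on a multi-plane object's adsorbing plane can be sketched as follows. This is a hedged illustration: the patent specifies only that each rotation is 90 degrees and changes the plane; the cycle order among wall planes and the treatment of the vertical axis are our assumptions.

```python
# Assumed counter-clockwise order of the wall planes, viewed from above.
WALL_CYCLE = ["+X", "+Y", "-X", "-Y"]

def rotated_plane(preset, quarter_turns):
    """Adsorbing plane after quarter_turns 90-degree rotations about the vertical axis."""
    if preset in ("+Z", "-Z"):
        # Floor/ceiling planes are unaffected by rotation about the vertical axis.
        return preset
    i = WALL_CYCLE.index(preset)
    return WALL_CYCLE[(i + quarter_turns) % 4]

print(rotated_plane("+X", 1))   # a wall object rotated once -> +Y
print(rotated_plane("-Z", 2))   # a bed stays on the floor -> -Z
```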
Referring to Fig. 3, an embodiment of the method of object editing in virtual reality provided by an embodiment of the present invention is described below, with the processing device as the executing subject. The specific step flow is as follows:
Step 301: the processing device receives an object-selection operation detected by the input device.
The input device detects the object-selection operation input by the user and sends the data of the operation to the processing device, which receives the object-selection operation detected by the input device.
This can be understood with reference to Fig. 4, a schematic diagram of a scene shown on the display device. In one application scenario, the display device shows the objects included in a menu, and the user sees them on the display device; for example, the menu includes a bed, glasses, a desk, a mural, and so on. The user operates the input device (such as a sensing handle); the processing device receives the object-selection operation detected by the input device and generates a ray according to it, and the display device shows the ray.
Step 302: the processing device determines the target object to be edited according to the object-selection operation.
In the virtual environment, the ray picks the target object; for example, the ray points at the "bed", so the target object is the "bed". After the processing device has determined the target object, it enters the edit mode for that object.
Step 303: the processing device calculates the space editing region according to the target object and divides the space editing region into multiple grid cells.
After the processing device has determined the target object, it calculates the space editing region of the target object. This can be understood with reference to Fig. 5, a schematic diagram of the space editing region, and Fig. 6, a schematic diagram of a scene. In one application scenario, a cuboid can be set as the space editing region; its length, width, and height are then divided equally, forming a grid in three-dimensional space in which each cell may have a side length of 0.5 m. It should be noted that the cell size of the grid in the space editing region is given here only for convenience of illustration and does not limit the invention.
It should also be noted that step 303 is optional and is not executed every time the target object is edited: it is executed once during initialization, and after initialization is complete — that is, once the space editing region of the target object has been calculated — it need not be executed in subsequent editing.
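The grid subdivision of step 303 amounts to snapping positions to cell centers. A small sketch, under the assumptions (ours) that the grid origin sits at the region corner and the cell side length is the 0.5 m of the example above:

```python
import math

CELL = 0.5  # grid cell side length from the example (metres)

def cell_center(p, cell=CELL):
    """Center of the grid cell containing point p, with the grid origin at 0."""
    return tuple((math.floor(c / cell) + 0.5) * cell for c in p)

# Any point inside a cell maps to that cell's center:
print(cell_center((0.8, 1.6, 0.0)))  # -> (0.75, 1.75, 0.25)
```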
Step 304: the processing device determines the target adsorbing plane of the target object in the space editing region.
To move the target object and place it in a proper position, the adsorbing plane of the target must be determined. It can be understood that a "bed" needs to be placed on the ground and cannot be placed on the ceiling, while a "pendant lamp" needs to be placed on the ceiling and cannot be placed on a wall; hence the adsorbing plane of the target object in the space editing region must be determined.
The processing device determines the attribute of the target object and then determines the target adsorbing plane of the target object according to the correspondence between attributes and adsorbing planes.
The target adsorbing plane is a preset adsorbing plane; it can be understood that the number of adsorbing planes corresponding to the target object is at least one.
In one case, the target object has only one adsorbing plane. For example, when the target object is the "bed", its only adsorbing plane is the (-Z) plane, and this (-Z) plane is the preset adsorbing plane.
In another case, the target object has at least two adsorbing planes. For example, when the target object is the "mural", it has 4 adsorbing planes, which may be the (+X), (-X), (+Y), or (-Y) plane, and the preset adsorbing plane of the "mural" is set in advance to the (+X) plane. That is, after the target object has been selected, if the user does not rotate the "mural", the target adsorbing plane of the target object is the preset adsorbing plane.
In yet another case, the target object corresponds to at least two adsorbing planes, the target adsorbing plane is the adsorbing plane after the target object has been rotated, and the angle of each rotation of the target object can be set to 90 degrees. First, the processing device determines the preset adsorbing plane according to the attribute of the target object and the correspondence between attributes and adsorbing planes; for example, it determines that the preset adsorbing plane of the "mural" is the (+X) plane. The processing device receives the object-rotation operation detected by the input device and then determines, according to that operation, the target adsorbing plane corresponding to the target object after it has been rotated from the preset adsorbing plane. For example, the input device detects the user's operation of rotating the "mural"; the "mural" is correspondingly rotated by 90 degrees about the vertical axis, and after the rotation its target adsorbing plane is the (-Y) plane.
Step 305: the processing device receives an object-moving operation detected by the input device.
The user performs the object-moving operation; for example, the user holds the input device (such as a handle) and changes its position in the real environment. The input device detects the object-moving operation, generates the data of the operation, and sends the data to the processing device, which receives the object-moving operation detected by the input device.
Step 306: the processing device generates a ray according to the object-moving operation, and the ray is displayed by the display device.
The processing device generates a ray according to the object-moving operation and then displays it through the display device; that is, the display device shows the ray while showing the current virtual scene, so the user sees a ray through the display device.
In one application scenario, the user is in a real space; for example, the user is in a room wearing a VR head-mounted display and holding a sensing handle. When the user points the handle at the ground of the real room, the ray displayed in the headset points at the ground of the virtual space; when the user points the handle at the ceiling of the real room, the displayed ray points at the ceiling of the virtual space.
Step 307: the processing device determines the target position of the target object according to the intersection point of the ray and the target adsorbing plane.
In the virtual space shown by the display device, the target position is the position where the user wishes to place the target object.
Depending on the user's operation, there may be two situations:
1. This can be understood with reference to Fig. 7, a side-view schematic diagram of a scene in which the intersection point is determined. The user operates correctly: the input device is pointed at the target adsorbing plane and the ray has an intersection point with it. For example, the target adsorbing plane of the "bed" is the (-Z) plane; the user operates the input device so that the ray points at the ground, the (-Z) plane, and the intersection point of ray a and the (-Z) plane is o.
2. This can be understood with reference to Fig. 8, another side-view schematic diagram of a scene in which the intersection point is determined. The user operates inaccurately: the input device is not pointed at the target adsorbing plane and the ray has no intersection point with it; for example, the ray points at the ceiling, the (+Z) plane. In such a situation, since the ray and the target adsorbing plane have no intersection point, a mirror image of the ray can be taken; as shown in Fig. 8, the mirror image of ray b is ray c, and the intersection point of mirror ray c is d.
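The mirror-image fallback for case 2 can be sketched as follows. The geometry here is an assumption for illustration (we mirror the ray's component along the plane's axis, with the target adsorbing plane taken as the floor z = 0); the patent states only that a mirror image of the ray is used when there is no intersection.

```python
def intersect_z_plane(origin, direction, z_plane):
    """Intersection of the ray with the horizontal plane z = z_plane, or None."""
    if abs(direction[2]) < 1e-9:
        return None
    t = (z_plane - origin[2]) / direction[2]
    if t < 0:
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))

def intersect_with_mirror(origin, direction, z_plane=0.0):
    """Case 1: direct hit. Case 2: mirror the Z component and try again."""
    hit = intersect_z_plane(origin, direction, z_plane)
    if hit is None:
        mirrored = (direction[0], direction[1], -direction[2])
        hit = intersect_z_plane(origin, mirrored, z_plane)
    return hit

# A ray aimed at the ceiling still yields a floor position:
print(intersect_with_mirror((0.0, 0.0, 1.0), (1.0, 0.0, 1.0)))  # -> (1.0, 0.0, 0.0)
```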
In the first possible implementation, processing equipment determines the intersection point of ray Yu target affinity plane, and processing is set
Standby that intersection point is determined as target position, which is the position that target object is placed.
In a second possible implementation, the processing equipment determines the intersection of the ray and the target adsorption plane; in this embodiment, the situation shown in Fig. 7 is taken as an example. The processing equipment then calculates the center position of the grid cell in which the intersection is located. This can be understood with reference to Fig. 9, a schematic diagram of the center of the grid cell containing the intersection: the grid cell containing intersection o is shown in the target adsorption plane, the center of grid cell g is denoted f, and the center position of the cell g containing the intersection is the point f.
Then, the processing equipment calculates the target position of the target object according to the center position of the grid cell and the offset vector. The target position of the target object is calculated by the following formula 1:
P_center = P_exp + V_offset, where P_center is the target position, V_offset is the offset vector, and P_exp is the center position of grid cell g.
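A minimal sketch of formula 1 (the cell size, the snapping of only the x/y coordinates for a (-Z) adsorption plane, and the function names are assumptions for illustration): the intersection o is snapped to the center f of its grid cell, and the offset vector then carries the preset point of the object to the target position.

```python
import math

def snap_to_cell_center(intersection, cell=1.0):
    """P_exp: the center f of the grid cell containing intersection o,
    computed in a (-Z) adsorption plane (x/y snapped, z kept)."""
    x, y, z = intersection
    fx = (math.floor(x / cell) + 0.5) * cell
    fy = (math.floor(y / cell) + 0.5) * cell
    return (fx, fy, z)

def target_position(p_exp, v_offset):
    """Formula 1: P_center = P_exp + V_offset."""
    return tuple(e + o for e, o in zip(p_exp, v_offset))
```

With a unit cell, an intersection at (2.3, 4.7, 0) snaps to the cell center (2.5, 4.5, 0); the offset vector then lifts the object's preset point above the adsorption plane.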
Here, the offset vector is the vector from the central point of the side of the target grid corresponding to the target adsorption plane to a preset point on the target object, and the target grid is the minimum grid region that accommodates the bounding box of the target object. The offset vector in the embodiment of the present invention can be understood with reference to Fig. 10, a schematic diagram of the offset vector.
The offset vector can be calculated in either of two ways:
1. Pre-calculation: before step 307, the method of calculating the offset vector is as follows.
Step i: the processing equipment determines, according to the attribute of each object, the N adsorption planes of that object, where N is a positive integer greater than or equal to 1. For example, in one scenario the article menu includes multiple objects such as a bed, a desk, a mural, and a pendant lamp, and the processing equipment determines the adsorption planes of each object according to its attribute: the adsorption plane of the bed is the (-Z) plane, the adsorption plane of the desk is the (-Z) plane, the adsorption plane of the mural can be the (+X), (-X), (+Y), or (-Y) plane, and the adsorption plane of the pendant lamp is the (+Z) plane. It should be noted that the multiple objects in this embodiment are examples given for convenience of explanation and do not limit the invention.
Step j: the processing equipment calculates the bounding box of each of the multiple objects to be selected. This can be understood with reference to Fig. 10: bounding box 1002 is the minimum rectangular-parallelepiped space region that accommodates object 1001. The processing equipment calculates the bounding box of the bed, the bounding box of the mural, the bounding box of the desk, and the bounding box of the pendant lamp.
Step k: the processing equipment calculates the target grid according to the bounding box; target grid 1101 is the minimum grid region that accommodates bounding box 1002. This can be understood with reference to Fig. 11, a schematic side view of the target grid. The processing equipment calculates the target grid of each of the multiple objects.
Step l: the processing equipment determines, for each of the N adsorption planes of each object, the corresponding side of the target grid. Referring to Fig. 11: the "bed" has one adsorption plane, namely the (-Z) plane, and in target grid 1101 the side 11011 corresponding to the (-Z) plane is the bottom face. It should be noted that in this embodiment a side of the target grid can be understood as one face of the target grid. In this embodiment, the "bed" among the multiple objects is taken as an example; the other objects can be understood with reference to the "bed".
Step m: the processing equipment calculates the offset vector, which is the vector from the central point of the side of the target grid to a preset point on the object. Referring to Fig. 12, a schematic side view of the offset vector: when the target face of the object's bounding box is aligned with the side of the target grid, the offset vector is the vector 1203 from the central point 1201 of that side (e.g. the bottom face) of the target grid to the preset point 1202 on the object. In the embodiment of the present invention, the preset point can be any pre-set point on the object, for example the central point of the object; in this embodiment, the central point is taken as an example.
The offset vectors corresponding to the multiple objects are thus calculated in advance, and the offset vector corresponding to the target object is then selected from among them.
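The offset vector of steps i–m can be sketched as below. This is an illustration under stated assumptions: the target grid is approximated by the bounding box itself, only the (-Z) and (+Z) adsorption planes are handled, and the function name and axis labels are not from the patent.

```python
def offset_vector(bbox_min, bbox_max, preset_point, adsorb_axis="-Z"):
    """Vector from the center of the target-grid side corresponding to
    the adsorption plane to the preset point on the object (step m)."""
    cx = (bbox_min[0] + bbox_max[0]) / 2
    cy = (bbox_min[1] + bbox_max[1]) / 2
    if adsorb_axis == "-Z":
        side_center = (cx, cy, bbox_min[2])   # bottom face, as for the "bed"
    elif adsorb_axis == "+Z":
        side_center = (cx, cy, bbox_max[2])   # top face, e.g. a pendant lamp
    else:
        raise ValueError("only the Z faces are sketched here")
    return tuple(p - s for p, s in zip(preset_point, side_center))
```

For a bed with bounding box corners (0, 0, 0) and (2, 1, 0.5) and its central point as the preset point, the offset vector simply raises the placement by half the bed's height.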
2. Real-time calculation:
In this second possible implementation, it is not necessary to calculate the offset vectors of all objects before step 307; instead, the offset vector of the target object is calculated directly in this step, as follows.
Step o: the processing equipment determines the target adsorption plane of the target object according to the attribute of the target object. For example, in one scenario the target object is a bed, and the processing equipment determines that the adsorption plane of the "bed" is the (-Z) plane.
Step p: the processing equipment calculates the bounding box of the target object.
Step q: the processing equipment calculates the target grid according to the bounding box of the target object.
Step r: the processing equipment determines the side of the target grid corresponding to the target adsorption plane of the target object. Referring to Fig. 11: the "bed" has one adsorption plane, namely the (-Z) plane, and in target grid 1101 the side 11011 corresponding to the (-Z) plane is the bottom face. It should be noted that in this embodiment a side of the target grid can be understood as one face of the target grid.
Step s: the processing equipment calculates the offset vector corresponding to the target object, which is the vector from the central point of the side of the target grid to the preset point on the object.
Step 308: the processing equipment moves the target object to the target position, and the target object is displayed by the display equipment.
In a first implementation, understood with reference to Fig. 13, a schematic scenario diagram of moving the target object to the target position: the processing equipment calculates the bounding box 1301 of the target object 1306, determines the target side 1303 corresponding to the target adsorption plane 1302, makes the target side 1303 coincide with the target adsorption plane, and makes the central point 1304 of the target side 1303 correspond to the intersection 1305 of the ray and the target adsorption plane.
In a second implementation, understood with reference to Fig. 14 and Fig. 15 (Fig. 14 is a schematic side view of moving the target object to the target position, and Fig. 15 is a schematic scenario diagram of confirming placement of the target object): the processing equipment moves the preset point (e.g. the central point) on the target object to the target position 1501.
It should be noted that in this implementation the grid in the space editing area provides the alignment standard. Calculating the offset vector requires determining the coordinates of two points, a starting point and an end point: the end point is the preset point on the object, and the starting point is the central point of a side of the target grid. The target grid consists of at least one cubic grid cell, so the central point of a side of the target grid is exactly the central point of a face of a cube. Therefore, in the scenario corresponding to Fig. 9, the center position of the grid cell containing intersection o, namely point f, must be calculated from o.
It should also be noted that the center position calculated according to formula 1 above may be illegal, i.e. the target grid accommodating the object may lie partly outside the editing area. In that case the position of the object is moved to the nearest legal position, i.e. a position that guarantees that the target grid accommodating the target object lies entirely within the editing area. This can be understood with reference to Fig. 16, a schematic side view of moving the target object to the target position.
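The "nearest legal position" correction can be sketched as a per-axis clamp of the target grid into the editing area. This is a sketch under the assumption that both regions are axis-aligned boxes; the function name is illustrative, not from the patent.

```python
def clamp_to_editing_area(grid_min, grid_max, area_min, area_max):
    """Return the smallest per-axis shift that moves the target grid
    (grid_min..grid_max) entirely inside the editing area (Fig. 16)."""
    shift = []
    for i in range(3):
        s = 0.0
        if grid_min[i] < area_min[i]:
            s = area_min[i] - grid_min[i]   # push inward from the low side
        elif grid_max[i] > area_max[i]:
            s = area_max[i] - grid_max[i]   # pull inward from the high side
        shift.append(s)
    return tuple(shift)
```

A target grid protruding one unit past the low X boundary is shifted by (+1, 0, 0), the minimal move that makes the placement legal.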
Finally, the intersection situation of the target object with the other, already placed objects is determined. If they intersect (i.e. the object models intersect, and/or the target grid of the target object intersects the grid of a placed object), the target object is displayed with a red bounding frame and the player is not allowed to confirm the placement. If they do not intersect, the object is displayed with a green bounding frame and confirmation of the placement is allowed.
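The red/green placement check above can be sketched as an axis-aligned bounding-box overlap test. This is an illustrative sketch (the patent also allows a model-level intersection test; function names are assumptions):

```python
def aabbs_intersect(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box overlap test: boxes overlap only if
    their intervals overlap on every axis."""
    return all(a_min[i] < b_max[i] and b_min[i] < a_max[i] for i in range(3))

def frame_color(target_box, placed_boxes):
    """Red frame (placement blocked) if the target intersects any
    placed object's box, otherwise green (placement allowed)."""
    t_min, t_max = target_box
    for b_min, b_max in placed_boxes:
        if aabbs_intersect(t_min, t_max, b_min, b_max):
            return "red"
    return "green"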
In the embodiment of the present invention, the user can edit objects quickly, the amount of calculation is small, and the object editing method is simple: an ordinary user can easily master it, and the mode of operation matches the user's intuition.
Further, the above embodiment describes the method of moving and placing an object. On the basis of the above embodiment, an embodiment of the present invention additionally provides another embodiment that describes methods of finely editing the target object, such as translation, rotation, and scaling.
1. Translation:
The present embodiment can use a fine edit mode based on a Gizmo. In the fine edit mode, the user can freely adjust the position, rotation, and scale of the object. Referring to Fig. 17 and Fig. 18 (Fig. 17 is a schematic diagram of translation directions, and Fig. 18 is a schematic scenario diagram of the edit mode): the object can be translated in the positive and negative directions along each of the X, Y, and Z axes, so there are 6 moving directions in total: +X, -X, +Y, -Y, +Z, and -Z. The embodiment of the present invention provides six arrows pointing along these different axes as the translation Gizmo 1801 for translating the target object.
The input equipment detects an operation of selecting to translate the object input by the user, generates operation data according to that operation, and sends the operation data to the processing equipment. The processing equipment then generates a ray according to the operation data; for example, the ray points at the "upward" (+Z direction) arrow, so the processing equipment determines that the translation direction of the target object is the +Z direction, and generates the first position coordinate according to the operation data. The first position coordinate is the coordinate of the translation Gizmo (the coordinate of the "upward" arrow, denoted "P_Gizmo"). For example, if the user selects the upward arrow, the processing equipment determines that the target object (e.g. a chessboard) is to be moved in the positive direction of the Z axis, and the X and Y coordinates will not change.
The processing equipment calculates the second position coordinate (denoted "P_Center") of the preset point on the target object; the preset point can be the central point of the target object.
When the user moves the input equipment in the true environment, for example translating it upward, the input equipment detects the translation operation input by the user, generates operation data, and sends the operation data to the processing equipment. The processing equipment generates the ray for translating the target object according to the operation data, and calculates the third position coordinate according to the operation data. The third position coordinate lies on the coordinate axis corresponding to the translation (e.g. the X axis). The coordinate axis can be regarded as a space line, and the ray can also be regarded as a space line; there is a shortest distance between the two lines. For example, the distance between the third position coordinate on the X axis (denoted "P_Near") and the target point on the ray is the shortest distance between the two space lines; the third position coordinate is determined accordingly.
The processing equipment calculates, according to the first position coordinate, the second position coordinate, and the third position coordinate, the fourth position coordinate (denoted "P_New") of the preset point after the target object is translated. The formula 2 for calculating the fourth position coordinate is as follows:
P_New = P_Center + P_Near - P_Gizmo;
The processing equipment moves the preset point of the target object to the fourth position coordinate along the translation direction, and the result is displayed by the display equipment.
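The computation of P_Near (the point on the translation axis closest to the controller ray, i.e. the closest point between two space lines) and formula 2 can be sketched as follows. The vector helpers and function names are assumptions for illustration; the closest-point formula is the standard one for two lines p1 + t·d1 and p2 + s·d2.

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(v, s): return tuple(x * s for x in v)

def closest_point_on_axis(p1, d1, p2, d2):
    """P_Near: the point on line (p1, d1) nearest to line (p2, d2)."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    t = 0.0 if abs(denom) < 1e-9 else (b * e - c * d) / denom
    return add(p1, scale(d1, t))

def translated_position(p_center, p_near, p_gizmo):
    """Formula 2: P_New = P_Center + P_Near - P_Gizmo."""
    return add(p_center, sub(p_near, p_gizmo))
```

For the X axis and a ray passing over the point (2, 1, 0), the nearest axis point is (2, 0, 0); the object's preset point is then displaced by the same amount the grab point moved relative to the Gizmo.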
2. Rotation:
The object can be rotated around each of the X, Y, and Z axes; this can be understood with reference to Fig. 19, a schematic diagram of the rotation directions of the object. A rotation Gizmo for rotating the object is arranged outside each of the four corresponding edges of the bounding box for each axis (such as the ring-shaped Gizmo 1802 in Fig. 18): the X axis corresponds to rotation Gizmo 1901, the Y axis to rotation Gizmo 1902, and the Z axis to rotation Gizmo 1903, for 12 rotation Gizmos in total.
This mode allows the player to edit a single attribute of the object independently.
The processing equipment receives the operation, detected by the input equipment, of selecting to rotate the object; the player can select any one of the 12 rotation Gizmos with the ray and then rotate the handle to perform targeted editing.
The processing equipment determines the rotation direction of the target object according to the operation of selecting to rotate the object, and records, according to that operation, the current first rotational component of the target object (denoted "R") and the current first Euler angles of the input equipment. The first Euler angles include a Pitch value and a Yaw value: the Pitch value is the initial value of the angle by which the input equipment is rotated around the X axis, denoted "R_Pitch", and the Yaw value is the initial value of the angle by which the input equipment is rotated around the Y axis, denoted "R_Yaw".
The processing equipment then receives the rotation operation detected by the input equipment. For example, when the user selects the rotation Gizmo corresponding to the Y axis and rotates the input equipment (e.g. the handle) up and down, the object is rotated around the Y axis while the rotation angles on the other axes remain unchanged.
The processing equipment calculates the second Euler angles after the rotation of the input equipment according to the rotation operation. The second Euler angles include the Pitch and Yaw values after the rotation: the Pitch value after rotation is denoted "R_NewPitch" and the Yaw value after rotation is denoted "R_NewYaw".
The processing equipment calculates the second rotational component of the target object after rotation according to the first rotational component, the first Euler angles, and the second Euler angles. The calculation formula 3 for the second rotational component is as follows:
R_New = R + k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw); where k is a sensitivity coefficient that can be set by the user, and k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw) is the rotation value.
The processing equipment rotates the target object by the second rotational component on the basis of the first rotational component, and the result is displayed by the display equipment.
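Formula 3 reduces to a one-line update; the sketch below (function name and default sensitivity are assumptions) shows how the controller's combined pitch/yaw delta, scaled by the sensitivity coefficient k, is added to the recorded rotational component.

```python
def rotated_component(r, pitch0, yaw0, pitch1, yaw1, k=1.0):
    """Formula 3: R_New = R + k * (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw).
    r: first rotational component; (pitch0, yaw0): first Euler angles;
    (pitch1, yaw1): second Euler angles; k: user-set sensitivity."""
    return r + k * (pitch1 + yaw1 - pitch0 - yaw0)
```

With k = 0.5, turning the handle by a combined 10 degrees rotates the object by 5 degrees on top of its recorded component.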
3. Scaling:
Scaling is similar in principle to rotation, but only one scaling Gizmo needs to be provided (the cube Gizmo 1803 on the left side of Fig. 18).
The processing equipment receives the operation, detected by the input equipment, of selecting to scale the object; for example, in one application scenario the user selects the scaling Gizmo, which can also be understood as the user inputting a scaling instruction.
The processing equipment records, according to the operation of selecting to scale the object, the current first space size of the target object (denoted "M"), which is the volume of the target object before scaling, and records the current third Euler angles of the input equipment. The third Euler angles include a Pitch value and a Yaw value: the Pitch value is the initial value of the angle by which the input equipment is rotated around the X axis, denoted "R_Pitch", and the Yaw value is the initial value of the angle by which the input equipment is rotated around the Y axis, denoted "R_Yaw".
The processing equipment receives the scaling operation detected by the input equipment; for example, the user rotates the input equipment, and the input equipment detects the operation of scaling the target object.
The processing equipment calculates the fourth Euler angles after the rotation of the input equipment according to the scaling operation. The fourth Euler angles include the Pitch and Yaw values after the rotation: the Pitch value after rotation is denoted "R_NewPitch" and the Yaw value after rotation is denoted "R_NewYaw".
The processing equipment calculates the second space size of the target object after scaling according to the first space size, the third Euler angles, and the fourth Euler angles. The calculation formula 4 for the second space size is as follows:
M_New = M × k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw); where k is a sensitivity coefficient that can be set by the user, and k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw) is the scale value calculated from the user's rotation of the input equipment.
The processing equipment scales the target object to the second space size on the basis of the first space size, and the result is displayed by the display equipment.
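Formula 4, taken literally as stated in the patent, multiplies the first space size by the sensitivity-scaled angle delta; the sketch below mirrors that formula exactly (function name and default k are assumptions, and whether a practical implementation would instead use a form like M × (1 + k·Δ) is not specified by the source).

```python
def scaled_size(m, pitch0, yaw0, pitch1, yaw1, k=1.0):
    """Formula 4: M_New = M * k * (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw).
    m: first space size; (pitch0, yaw0): third Euler angles;
    (pitch1, yaw1): fourth Euler angles; k: user-set sensitivity."""
    return m * k * (pitch1 + yaw1 - pitch0 - yaw0)
```

With k = 1, a combined 2-degree handle rotation applied to a size-2 object yields a second space size of 4 under this formula.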
In the embodiment of the present invention, the convenience of performing edit operations on objects in VR can be substantially improved. The editing methods provided in the embodiments of the present invention allow a regular player to place objects quickly and easily, and also to adjust the pose and scale attributes of an object meticulously, so that the player can freely edit objects in VR.
Referring to Fig. 20, the embodiment of the present invention provides an embodiment of a processing equipment 2000 applied to a virtual reality (VR) system; the VR system includes a display equipment, an input equipment, and the processing equipment. The processing equipment includes:
a first receiving module 2001, configured to receive the operation of selecting an object detected by the input equipment;
an object determining module 2002, configured to determine the target object to be edited according to the operation of selecting an object received by the first receiving module 2001;
an adsorption-plane determining module 2003, configured to determine the target adsorption plane, in the space editing area, of the target object determined by the object determining module 2002;
a second receiving module 2004, configured to receive the operation of moving the object detected by the input equipment;
a first generation module 2005, configured to generate a ray according to the operation of moving the object received by the second receiving module 2004 and display it through the display equipment;
a position determining module 2006, configured to determine the target position of the target object according to the intersection of the ray generated by the first generation module 2005 and the target adsorption plane determined by the adsorption-plane determining module 2003; and
a mobile module 2007, configured to move the target object determined by the object determining module 2002 to the target position determined by the position determining module 2006 and display it through the display equipment.
Optionally, the space editing area includes multiple space grid cells, and the position determining module 2006 is further configured to: determine the intersection of the ray and the target adsorption plane; calculate the center position of the grid cell in which the intersection is located; and calculate the target position of the target object according to the center position of the grid cell and the offset vector. The offset vector is the vector from the central point of the side of the target grid corresponding to the target adsorption plane to the preset point on the target object, and the target grid is the minimum grid region accommodating the bounding box of the target object.
Referring to Fig. 21, on the basis of the embodiment corresponding to Fig. 20, the embodiment of the present invention also provides an embodiment of another processing equipment 2100, in which the position determining module 2006 further includes an intersection determining unit 20061 and a position determining unit 20062: the intersection determining unit 20061 is configured to determine the intersection of the ray and the target adsorption plane, and the position determining unit 20062 is configured to take the intersection determined by the intersection determining unit 20061 as the target position.
Optionally, the mobile module 2007 is further configured to: calculate the bounding box of the target object; determine the target side corresponding to the target adsorption plane; and make the target side coincide with the target adsorption plane, with the central point of the target side corresponding to the intersection.
Referring to Fig. 22, on the basis of the embodiment corresponding to Fig. 20, the embodiment of the present invention also provides an embodiment of another processing equipment 2200, which further includes a first computing module 2008 configured to calculate the space editing area according to the target object, the space editing area being divided into multiple grid regions.
Optionally, the target object corresponds to at least one adsorption plane, and the adsorption-plane determining module 2003 is further configured to: determine a preset adsorption plane according to the attribute of the target object and the correspondence between attributes and adsorption planes; receive the operation of rotating the object detected by the input equipment; and determine the target adsorption plane corresponding to the target object after it is rotated on the basis of the preset adsorption plane according to the rotation operation.
Referring to Fig. 23, on the basis of the embodiment corresponding to Fig. 22, the embodiment of the present invention also provides an embodiment of another processing equipment 2300, which further includes a first determining module 2009, a second computing module 2010, a third computing module 2011, a fourth computing module 2012, and a second determining module 2013:
the first determining module 2009 is configured to determine, according to the attribute of each object, the N adsorption planes of that object, N being a positive integer greater than or equal to 1;
the second computing module 2010 is configured to calculate the bounding box of each of the multiple objects to be selected;
the third computing module 2011 is configured to calculate the target grid according to the bounding box calculated by the second computing module 2010, the target grid being the minimum grid region accommodating the bounding box;
the second determining module 2013 is configured to determine, for each of the N adsorption planes of each object determined by the first determining module 2009, the corresponding side of the target grid calculated by the third computing module 2011;
the fourth computing module 2012 is configured to calculate the offset vector, which is the vector from the central point of the side of the target grid determined by the second determining module 2013 to the preset point on the target object; and
the position determining module 2006 is further configured to determine the target position of the target object according to the intersection of the ray generated by the first generation module 2005 with the target adsorption plane determined by the adsorption-plane determining module 2003 and the offset vector calculated by the fourth computing module 2012.
Referring to Fig. 24, on the basis of the embodiment corresponding to Fig. 20, the embodiment of the present invention also provides an embodiment of another processing equipment 2400, which further includes a third receiving module 2014, a translation-direction determining module 2015, a second generation module 2016, a fifth computing module 2017, a fourth receiving module 2018, a sixth computing module 2019, a seventh computing module 2020, and a translation module 2021:
the third receiving module 2014 is configured to receive the operation of selecting to translate the object detected by the input equipment;
the translation-direction determining module 2015 is configured to determine the translation direction of the target object according to the operation of selecting to translate the object received by the third receiving module 2014;
the second generation module 2016 is configured to generate the first position coordinate according to the operation of selecting to translate the object received by the third receiving module 2014;
the fifth computing module 2017 is configured to calculate the second position coordinate of the preset point on the target object;
the fourth receiving module 2018 is configured to receive the operation of translating the object detected by the input equipment;
the sixth computing module 2019 is configured to calculate the third position coordinate according to the operation of translating the object received by the fourth receiving module 2018;
the seventh computing module 2020 is configured to calculate the fourth position coordinate of the preset point after the target object is translated, according to the first position coordinate generated by the second generation module 2016, the second position coordinate calculated by the fifth computing module 2017, and the third position coordinate calculated by the sixth computing module 2019; and
the translation module 2021 is configured to move the preset point of the target object, along the translation direction determined by the translation-direction determining module 2015, to the fourth position coordinate calculated by the seventh computing module 2020, and display it through the display equipment.
Referring to Fig. 25, on the basis of the embodiment corresponding to Fig. 20, the embodiment of the present invention also provides an embodiment of another processing equipment 2500, which further includes a fifth receiving module 2022, a rotation-direction determining module 2023, a first recording module 2024, a sixth receiving module 2025, an eighth computing module 2026, a rotational-component computing module 2027, and a rotation module 2028:
the fifth receiving module 2022 is configured to receive the operation of selecting to rotate the object detected by the input equipment;
the rotation-direction determining module 2023 is configured to determine the rotation direction of the target object according to the operation of selecting to rotate the object received by the fifth receiving module 2022;
the first recording module 2024 is configured to record, according to the operation of selecting to rotate the object received by the fifth receiving module 2022, the current first rotational component of the target object and the current first Euler angles of the input equipment;
the sixth receiving module 2025 is configured to receive the operation of rotating the object detected by the input equipment;
the eighth computing module 2026 is configured to calculate the second Euler angles after the rotation of the input equipment according to the operation of rotating the object received by the sixth receiving module 2025;
the rotational-component computing module 2027 is configured to calculate the second rotational component of the target object after rotation according to the first rotational component recorded by the first recording module 2024, the first Euler angles, and the second Euler angles calculated by the eighth computing module 2026; and
the rotation module 2028 is configured to rotate the target object by the second rotational component on the basis of the first rotational component and display it through the display equipment.
Referring to Fig. 26, on the basis of the embodiment corresponding to Fig. 20, the embodiment of the present invention also provides an embodiment of another processing equipment 2600, which further includes:
a seventh receiving module 2029, configured to receive the operation of selecting to scale the object detected by the input equipment;
a second recording module 2030, configured to record, according to the operation of selecting to scale the object received by the seventh receiving module 2029, the current first space size of the target object and the current third Euler angles of the input equipment;
an eighth receiving module 2031, configured to receive the operation of scaling the object detected by the input equipment;
a ninth computing module 2033, configured to calculate the fourth Euler angles after the rotation of the input equipment according to the scaling operation;
a tenth computing module 2034, configured to calculate the second space size of the target object after scaling according to the first space size recorded by the second recording module 2030, the third Euler angles, and the fourth Euler angles calculated by the ninth computing module 2033; and
a zoom module 2035, configured to scale the target object to the second space size calculated by the tenth computing module 2034 on the basis of the first space size recorded by the second recording module 2030, and display it through the display equipment.
Further, the processing equipment in Fig. 20 to Fig. 26 is presented in the form of functional modules. A "module" here can refer to an application-specific integrated circuit (application-specific integrated circuit, ASIC), a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the above functions. In a simple embodiment, the processing equipment in Fig. 20 to Fig. 26 can take the form shown in Fig. 27, and each module can be realized by the processor, transceiver, and memory of Fig. 27.
The embodiment of the present invention also provides another processing equipment, as shown in Fig. 27. For ease of description, only the parts relevant to the embodiment of the present invention are shown; for specific technical details not disclosed, please refer to the method part of the present invention. The processing equipment can be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a computer, or the like.
Fig. 27 shows a block diagram of part of the structure of the processing equipment related to the terminal provided in the embodiment of the present invention. Referring to Fig. 27, the processing equipment includes components such as a transceiver 2710, a memory 2720, an input unit 2730, a display unit 2740, an audio circuit 2760, a wireless fidelity (WiFi) module 2770, a processor 2780, and a power supply 2790. Those skilled in the art will understand that the processing equipment structure shown in Fig. 27 does not limit the processing equipment, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
Each component part of the processing equipment is introduced below with reference to Fig. 27:
The transceiver 2710 can be used to receive and send signals in the course of receiving and sending messages or during a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 2780 for processing, and it sends uplink data to the base station. In general, the transceiver 2710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier (Low Noise Amplifier, LNA), a duplexer, and the like. In addition, the transceiver 2710 can also communicate with networks and other equipment by wireless communication.
The memory 2720 may be used to store software programs and modules. By running the software programs and modules stored in the memory 2720, the processor 2780 executes the various functional applications and data processing of the processing device. The memory 2720 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound-playing function or an image-playing function), and the data storage area may store data created according to the use of the processing device (such as audio data or a phone book). In addition, the memory 2720 may include high-speed random access memory and may also include non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 2730 may be used to receive input numeric or character information and to generate key-signal input related to user settings and function control of the processing device. Specifically, the input unit 2730 may include a touch panel 2731 and other input devices 2732. The touch panel 2731, also referred to as a touch screen, collects touch operations by the user on or near it (for example, operations performed on or near the touch panel 2731 with a finger, a stylus, or any other suitable object or accessory) and drives the corresponding connected apparatus according to a preset program. Optionally, the touch panel 2731 may include a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 2780, and receives and executes commands sent by the processor 2780. The touch panel 2731 may be implemented as a resistive, capacitive, infrared, surface-acoustic-wave, or other type of panel. Besides the touch panel 2731, the input unit 2730 may also include other input devices 2732, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power key), a trackball, a mouse, and a joystick.
The display unit 2740 may be used to display information input by the user, information provided to the user, and the various menus of the processing device. The display unit 2740 may include a display panel 2741, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 2731 may cover the display panel 2741. When the touch panel 2731 detects a touch operation on or near it, it reports the operation to the processor 2780 to determine the type of the touch event, and the processor 2780 then provides the corresponding visual output on the display panel 2741 according to the type of the touch event. Although in Figure 27 the touch panel 2731 and the display panel 2741 are shown as two independent components implementing the input and output functions of the processing device, in some embodiments the touch panel 2731 and the display panel 2741 may be integrated to implement both the input and output functions of the processing device.
The audio circuit 2760, a loudspeaker 2761, and a microphone 2762 may provide an audio interface between the user and the processing device. The audio circuit 2760 may convert received audio data into an electric signal and transmit it to the loudspeaker 2761, which converts it into a sound signal for output.
The processor 2780 is the control center of the processing device. It connects the parts of the entire processing device through various interfaces, and executes the various functions of the processing device and processes data by running or executing the software programs and/or modules stored in the memory 2720 and invoking the data stored in the memory 2720, thereby monitoring the processing device as a whole. Optionally, the processor 2780 may include one or more processing units; preferably, the processor 2780 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, and application programs, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may also not be integrated into the processor 2780.
WiFi is a short-range wireless transmission technology. Through the WiFi module 2770, the processing device may receive data sent by the input device or send data to the display device.
The Bluetooth module 2750 likewise uses short-range wireless transmission; through the Bluetooth module 2750, the processing device may also receive data sent by the input device or send data to the display device.
The processing device further includes a power supply 2790 (such as a battery) that powers the components. Preferably, the power supply is logically connected to the processor 2780 through a power-management system, so that functions such as charging, discharging, and power-consumption management are implemented through the power-management system.
In the embodiments of the present invention, the processor 2780 included in the processing device is further configured to cause the processing device to perform the steps of the method embodiments described above.
An embodiment of the present invention also provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the method described in the method embodiments above.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, devices, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, device, and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative. The division into units is only a division by logical function; in actual implementation there may be other division manners, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through interfaces; the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent replacements of some of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (14)
1. A method of editing an object in virtual reality, comprising:
receiving an operation of selecting an object;
determining, according to the operation of selecting an object, a target object to be edited;
determining a target adsorption plane of the target object in a space editing area;
receiving an operation of moving the object detected by an input device;
determining a target position of the target object according to the operation of moving the object; and
moving the target object to the target position and displaying it by a display device.
2. The method according to claim 1, wherein determining the target position of the target object according to the operation of moving the object comprises:
generating a ray according to the operation of moving the object; and
determining the target position of the target object according to the intersection point of the ray with the target adsorption plane.
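The ray-to-adsorption-plane computation of this step can be sketched as follows. The sketch is illustrative only and not part of the claimed subject matter; the function name, the parameters, and the use of NumPy are assumptions made for the illustration:

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the pointing ray with the target adsorption plane.

    Returns the intersection point, or None when the ray is parallel to
    the plane or the plane lies behind the ray origin."""
    denom = float(np.dot(plane_normal, ray_dir))
    if abs(denom) < 1e-9:        # ray parallel to the plane: no usable hit
        return None
    t = float(np.dot(plane_normal, plane_point - ray_origin)) / denom
    if t < 0.0:                  # plane behind the controller: ignore
        return None
    return ray_origin + t * ray_dir
```

In a typical frame loop, the input device supplies `ray_origin` and `ray_dir`, and a non-None return value is the candidate target position described in the subsequent claims.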
3. The method according to claim 1, wherein the space editing area comprises multiple space grids, and calculating the target position of the target object according to the intersection point of the ray with the target adsorption plane comprises:
determining the intersection point of the ray with the target adsorption plane;
calculating the center of the grid in which the intersection point is located; and
calculating the target position of the target object according to the center of the grid and an offset vector, the offset vector being the vector from the center point of the side of a target grid corresponding to the target adsorption plane to a preset point on the target object, the target grid being the smallest grid region that accommodates the bounding box of the target object.
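The grid-snapping step of this claim can be sketched as follows, working in 2-D coordinates on the adsorption plane. This is a non-normative sketch; the names and the cell-indexing convention (floor division with uniform cell size) are assumptions:

```python
import numpy as np

def snap_to_cell_center(hit_2d, cell_size, offset_vector):
    """Snap a ray/plane hit point (2-D coordinates on the adsorption plane)
    to the center of the grid cell containing it, then apply the offset
    vector that carries the cell center to the object's preset point."""
    cell_index = np.floor(np.asarray(hit_2d, float) / cell_size)
    cell_center = (cell_index + 0.5) * cell_size
    return cell_center + np.asarray(offset_vector, float)
```

The offset vector is precomputed per object (see claim 8) so that, after snapping, the face of the object's bounding box lands flush on the grid cell rather than its center.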
4. The method according to claim 1, wherein determining the target position of the target object according to the intersection point of the ray with the target adsorption plane comprises:
determining the intersection point of the ray with the target adsorption plane; and
determining the intersection point as the target position.
5. The method according to claim 4, wherein moving the target object to the target position comprises:
calculating the bounding box of the target object;
determining a target side corresponding to the target adsorption plane; and
overlapping the target side with the target adsorption plane, the center point of the target side corresponding to the intersection point.
6. The method according to claim 1, wherein after determining, according to the operation of selecting the target object, the target object to be edited shown on the display device, and before determining the adsorption plane of the target object according to the attribute of the target object and the correspondence between the attribute and adsorption planes, the method further comprises:
calculating the space editing area according to the target object, the space editing area being divided into multiple grid regions.
7. The method according to claim 1, wherein the number of adsorption planes corresponding to the target object is at least one, and determining the target adsorption plane of the target object according to the attribute of the target object and the correspondence between the attribute and adsorption planes comprises:
determining a preset adsorption plane according to the attribute of the target object and the correspondence between the attribute and adsorption planes;
receiving an operation of rotating the object detected by the input device; and
determining, according to the operation of rotating the object, the target adsorption plane corresponding to the target object after it is rotated with the preset adsorption plane as reference.
8. The method according to claim 6, wherein before receiving the operation of selecting the target object detected by the input device, the method further comprises:
determining N adsorption planes of each object according to the attribute of each object, N being a positive integer greater than or equal to 1;
calculating the bounding box of each of multiple objects to be selected;
calculating a target grid according to the bounding box, the target grid being the smallest grid region that accommodates the bounding box;
determining the side of the target grid corresponding to each of the N adsorption planes of each object; and
calculating an offset vector, the offset vector being the vector from the center point of the side of the target grid to a preset point on the target object;
and wherein calculating the target position of the target object according to the intersection point of the ray with the target adsorption plane comprises:
calculating the target position of the target object according to the intersection point of the ray with the target adsorption plane and the offset vector.
9. The method according to any one of claims 1 to 8, wherein after moving the target object to the target position and displaying it by the display device, the method further comprises:
receiving an operation of selecting object translation detected by the input device;
determining a translation direction for the target object according to the operation of selecting object translation, and generating a first position coordinate according to the operation of selecting object translation;
calculating a second position coordinate of a preset point on the target object;
receiving an operation of translating the object detected by the input device;
calculating a third position coordinate according to the operation of translating the object;
calculating, according to the first position coordinate, the second position coordinate, and the third position coordinate, a fourth position coordinate of the preset point after the target object is translated; and
moving the preset point of the target object to the fourth position coordinate along the translation direction and displaying the result by the display device.
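One way the fourth position coordinate of this claim could be computed is to project the controller's displacement onto the constrained translation direction. This is an illustrative sketch under that assumption; the claim itself does not fix the formula, and all names are hypothetical:

```python
import numpy as np

def translated_anchor(first_pos, anchor_pos, third_pos, direction):
    """Displace the object's preset point (second coordinate, anchor_pos)
    along the chosen translation direction by the component, in that
    direction, of the controller's motion from the first to the third
    recorded coordinate."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    delta = np.asarray(third_pos, float) - np.asarray(first_pos, float)
    return np.asarray(anchor_pos, float) + np.dot(delta, d) * d
```

Constraining the displacement to `direction` keeps the object on the axis the user selected, however the controller wanders off-axis.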
10. The method according to any one of claims 1 to 8, wherein after moving the target object to the target position and displaying it by the display device, the method further comprises:
receiving an operation of selecting object rotation detected by the input device;
determining a rotation direction for the target object according to the operation of selecting object rotation, and recording, according to the operation of selecting object rotation, the current first rotational component of the target object and the current first Euler angle of the input device;
detecting an operation of rotating the object detected by the input device;
calculating a second Euler angle of the input device after rotation according to the operation of rotating the object;
calculating a second rotational component of the target object after rotation according to the first rotational component, the first Euler angle, and the second Euler angle; and
rotating the target object to the second rotational component with the first rotational component as reference and displaying the result by the display device.
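For a single rotation axis, the second rotational component of this claim can be sketched as the recorded component plus the controller's Euler-angle change. This is a simplified, non-normative sketch (one axis, degrees, wrap to [0, 360)); the names are hypothetical:

```python
def second_rotational_component(first_component_deg, first_euler_deg, second_euler_deg):
    """Add the controller's change in Euler angle about the chosen axis
    (second minus first recorded angle) to the object's recorded first
    rotational component, wrapped into [0, 360) degrees."""
    return (first_component_deg + (second_euler_deg - first_euler_deg)) % 360.0
```

Working with the delta relative to the angle recorded at selection time, rather than the absolute controller pose, prevents the object from jumping when rotation mode is entered.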
11. The method according to any one of claims 1 to 8, wherein after moving the target object to the target position and displaying it by the display device, the method further comprises:
receiving an operation of selecting object scaling detected by the input device;
recording the current first space size of the target object according to the operation of selecting object scaling, and recording the current third Euler angle of the input device;
receiving an operation of scaling the object detected by the input device;
calculating a fourth Euler angle of the input device after rotation according to the operation of scaling the object;
calculating a second space size of the target object after scaling according to the first space size, the third Euler angle, and the fourth Euler angle; and
scaling the target object to the second space size with the first space size as reference and displaying the result by the display device.
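The claim maps a controller twist (fourth minus third Euler angle) to a new space size but does not fix the mapping. One plausible choice, shown here purely as an assumption, is a linear gain with a lower clamp so the object cannot collapse to zero size; the `gain` constant and all names are hypothetical:

```python
def second_space_size(first_size, third_euler_deg, fourth_euler_deg, gain=0.01):
    """Map the controller's twist (fourth minus third Euler angle, in
    degrees) linearly to a scale factor, clamp it below at 0.1, and
    apply it to the recorded first space size."""
    factor = max(0.1, 1.0 + gain * (fourth_euler_deg - third_euler_deg))
    return first_size * factor
```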
12. A processing device, comprising:
a first receiving module, configured to receive an operation of selecting an object;
an object determining module, configured to determine a target object to be edited according to the operation of selecting an object received by the first receiving module;
an adsorption-plane determining module, configured to determine a target adsorption plane, in a space editing area, of the target object determined by the object determining module;
a second receiving module, configured to receive an operation of moving the object detected by an input device;
a position determining module, configured to determine a target position of the target object according to the operation of moving the object received by the second receiving module and the intersection point with the target adsorption plane; and
a moving module, configured to move the target object to the target position determined by the position determining module and display it by a display device.
13. A processing device, comprising:
a memory, configured to store computer-executable program code;
a transceiver; and
a processor coupled to the memory and the transceiver;
wherein the program code comprises instructions which, when executed by the processor, cause the processing device to execute the method according to any one of claims 1 to 11.
14. A virtual reality system, comprising a display device, an input device, and a processing device, the display device and the input device being connected to the processing device, wherein:
the input device detects an operation of selecting an object;
the processing device receives the operation of selecting an object detected by the input device;
the processing device determines a target object to be edited according to the operation of selecting an object;
the processing device determines a target adsorption plane of the target object in a space editing area;
the processing device receives an operation of moving the object detected by the input device;
the processing device generates a ray according to the operation of moving the object and sends the data of the ray to the display device;
the display device displays the ray;
the processing device determines a target position of the target object according to the intersection point of the ray with the target adsorption plane;
the processing device moves the target object to the target position; and
the display device displays the target object moved to the target position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711005203.XA CN109697002B (en) | 2017-10-23 | 2017-10-23 | Method, related equipment and system for editing object in virtual reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109697002A true CN109697002A (en) | 2019-04-30 |
CN109697002B CN109697002B (en) | 2021-07-16 |
Family
ID=66229367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711005203.XA Active CN109697002B (en) | 2017-10-23 | 2017-10-23 | Method, related equipment and system for editing object in virtual reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109697002B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105264461A (en) * | 2013-05-13 | 2016-01-20 | 微软技术许可有限责任公司 | Interactions of virtual objects with surfaces |
CN105912110A (en) * | 2016-04-06 | 2016-08-31 | 北京锤子数码科技有限公司 | Method, device and system for performing target selection in virtual reality space |
CN106055090A (en) * | 2015-02-10 | 2016-10-26 | 李方炜 | Virtual reality and augmented reality control with mobile devices |
CN106575153A (en) * | 2014-07-25 | 2017-04-19 | 微软技术许可有限责任公司 | Gaze-based object placement within a virtual reality environment |
US20170185160A1 (en) * | 2015-12-24 | 2017-06-29 | Samsung Electronics Co., Ltd. | Electronic device and method of controlling the same |
WO2017139509A1 (en) * | 2016-02-12 | 2017-08-17 | Purdue Research Foundation | Manipulating 3d virtual objects using hand-held controllers |
CN107111979A (en) * | 2014-12-19 | 2017-08-29 | 微软技术许可有限责任公司 | The object of band auxiliary in three-dimension visible sysem is placed |
CN107229393A (en) * | 2017-06-02 | 2017-10-03 | 三星电子(中国)研发中心 | Real-time edition method, device, system and the client of virtual reality scenario |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322571A (en) * | 2019-05-30 | 2019-10-11 | 腾讯科技(上海)有限公司 | A kind of page processing method, device and medium |
CN110322571B (en) * | 2019-05-30 | 2023-08-11 | 腾讯科技(上海)有限公司 | Page processing method, device and medium |
CN110381111A (en) * | 2019-06-03 | 2019-10-25 | 华为技术有限公司 | A kind of display methods, location determining method and device |
CN112039937A (en) * | 2019-06-03 | 2020-12-04 | 华为技术有限公司 | Display method, position determination method and device |
CN112039937B (en) * | 2019-06-03 | 2022-08-09 | 华为技术有限公司 | Display method, position determination method and device |
CN111429580A (en) * | 2020-02-17 | 2020-07-17 | 浙江工业大学 | Space omnibearing simulation system and method based on virtual reality technology |
CN113554724A (en) * | 2020-04-24 | 2021-10-26 | 西安诺瓦星云科技股份有限公司 | Method and device for zooming and adsorbing graph |
CN111757081A (en) * | 2020-05-27 | 2020-10-09 | 海南车智易通信息技术有限公司 | Movement limiting method for virtual scene, client, server and computing equipment |
CN111757081B (en) * | 2020-05-27 | 2022-07-08 | 海南车智易通信息技术有限公司 | Movement limiting method for virtual scene, client, server and computing equipment |
CN111782053B (en) * | 2020-08-10 | 2023-04-28 | Oppo广东移动通信有限公司 | Model editing method, device, equipment and storage medium |
CN111782053A (en) * | 2020-08-10 | 2020-10-16 | Oppo广东移动通信有限公司 | Model editing method, device, equipment and storage medium |
CN112203076A (en) * | 2020-09-16 | 2021-01-08 | 青岛小鸟看看科技有限公司 | Alignment method and system for exposure center points of multiple cameras in VR system |
CN112203076B (en) * | 2020-09-16 | 2022-07-29 | 青岛小鸟看看科技有限公司 | Alignment method and system for exposure center points of multiple cameras in VR system |
US11962749B2 (en) | 2020-09-16 | 2024-04-16 | Qingdao Pico Technology Co., Ltd. | Virtual reality interaction method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN109697002B (en) | 2021-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109697002A (en) | A kind of method, relevant device and the system of the object editing in virtual reality | |
CN102830795B (en) | Utilize the long-range control of motion sensor means | |
CN108604121A (en) | Both hands object manipulation in virtual reality | |
Akaoka et al. | DisplayObjects: prototyping functional physical interfaces on 3d styrofoam, paper or cardboard models | |
EP1709609B1 (en) | Advanced control device for home entertainment utilizing three dimensional motion technology | |
CN103513894B (en) | Display device, remote control equipment and its control method | |
CN102763422A (en) | Projectors and depth cameras for deviceless augmented reality and interaction | |
CN110517319A (en) | A kind of method and relevant apparatus that camera posture information is determining | |
CN108027653A (en) | haptic interaction in virtual environment | |
CN110109512A (en) | The system and method for three-dimensional virtual object are controlled on portable computing device | |
CN107291266A (en) | The method and apparatus that image is shown | |
CN107038455A (en) | A kind of image processing method and device | |
CN109544663A (en) | The virtual scene of application program identifies and interacts key mapping matching process and device | |
CN108694073A (en) | Control method, device, equipment and the storage medium of virtual scene | |
KR20050102803A (en) | Apparatus, system and method for virtual user interface | |
CN103324400A (en) | Method and device for displaying menus in 3D model | |
CN109646944A (en) | Control information processing method, device, electronic equipment and storage medium | |
CN207008556U (en) | Intelligent wireless location tracking manipulates pen | |
CN204945943U (en) | For providing the remote control equipment of remote control signal for external display device | |
CN107390922A (en) | virtual touch control method, device, storage medium and terminal | |
CN103207677A (en) | System and method for realizing virtual-real somatosensory interaction of digital Zenghouyi bells | |
JPWO2018025511A1 (en) | INFORMATION PROCESSING APPARATUS, METHOD, AND COMPUTER PROGRAM | |
CN108888954A (en) | A kind of method, apparatus, equipment and storage medium picking up coordinate | |
CN111445563A (en) | Image generation method and related device | |
CN205594583U (en) | Virtual impression system of VR based on BIM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||