CN111784850A - Unreal-Engine-based object grabbing simulation method and related device - Google Patents


Info

Publication number
CN111784850A
CN111784850A (application CN202010630560.0A; granted publication CN111784850B)
Authority
CN
China
Prior art keywords
rotation
target object
collision body
collision
collider
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010630560.0A
Other languages
Chinese (zh)
Other versions
CN111784850B (en)
Inventor
罗威
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruilishi Technology Kunming Co ltd
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Ruilishi Technology Kunming Co ltd
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruilishi Technology Kunming Co ltd, Shenzhen Realis Multimedia Technology Co Ltd filed Critical Ruilishi Technology Kunming Co ltd
Priority to CN202010630560.0A priority Critical patent/CN111784850B/en
Publication of CN111784850A publication Critical patent/CN111784850A/en
Application granted granted Critical
Publication of CN111784850B publication Critical patent/CN111784850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of computer vision recognition, and discloses an Unreal-Engine-based object grabbing simulation method and related device. The method comprises the following steps: acquiring a first position rotation of a target object and a second position rotation of a collider at the moment the collider performs a grabbing action, wherein the collider comprises a first collider and a second collider; when the collider grabs the target object and moves, acquiring the current third position rotation of the collider, and calculating a rotation change value of the collider from the second position rotation and the third position rotation; calculating, from the first position rotation and the rotation change value, the fourth position rotation to which the target object is currently expected to move; and controlling the target object to follow the first collider and the second collider based on the fourth position rotation. The invention realizes, in the Unreal Engine, the simulation of grabbing an object with both hands simultaneously, bringing object-grabbing operations closer to real scene requirements and expanding the application scenarios of grabbing and moving objects.

Description

Unreal-Engine-based object grabbing simulation method and related device
Technical Field
The invention relates to the technical field of computer vision recognition, and in particular to an Unreal-Engine-based object grabbing simulation method and related device.
Background
In VR (Virtual Reality) technology, a virtual system simulates a real system to form a virtual space in which articles can be freely assembled and operated. With the development of VR simulation technology, the degree to which articles can be freely assembled in virtual space keeps rising, and the behaviors articles can perform grow ever more diverse.
In the prior art, object grabbing is basically implemented by attachment: one object is attached to another object or component, and the child object moves and rotates with its parent. This can only simulate one-handed grabbing and cannot handle two-handed grabbing logic, because a child object can have only one parent; that is, an object can only be held by one hand and cannot follow two hands at once. If grabbing is instead implemented with the physics-hinge operation of UE4 (Unreal Engine 4), it fails as well, because a UE4 physics hinge cannot link one object to two hinge constraints at the same time. It is therefore difficult to grab an object with both hands simultaneously in virtual reality space.
Disclosure of Invention
The invention mainly aims to provide an Unreal-Engine-based object grabbing simulation method and related device, so as to solve the technical problem of grabbing an object with both hands in a virtual reality world.
The first aspect of the present invention provides an Unreal-Engine-based object grabbing simulation method, which comprises the following steps:
acquiring a first position rotation of a target object and a second position rotation of a collision body when a collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
when the collision body grabs the target object to move, acquiring the rotation of the current third position of the collision body, and calculating the rotation change value of the collision body according to the rotation of the second position and the rotation of the third position;
calculating a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation change value;
controlling the target object to move with the first collider and the second collider based on the fourth position rotation.
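Taken together, the four steps track a rigid transform between the collider pair and the grabbed object. The following Python sketch illustrates the arithmetic with yaw-only rotations in two dimensions; all function and variable names here are illustrative, not taken from the patent's UE4 implementation:

```python
import math

def rotation_from_vector(v):
    # yaw angle (degrees) of the vector from the first collider to the second,
    # analogous to a look-at rotation restricted to the XY plane
    return math.degrees(math.atan2(v[1], v[0]))

def grab(first_collider, second_collider, target_pos, target_rot):
    # step 1: record the object and collider state at the moment of grabbing
    center = [(a + b) / 2 for a, b in zip(first_collider, second_collider)]
    vec = [b - a for a, b in zip(first_collider, second_collider)]
    return {"first_rot": target_rot, "first_pos": target_pos,
            "second_rot": rotation_from_vector(vec), "second_pos": center}

def update(state, first_collider, second_collider):
    # steps 2-4: compute the rotation change value, then the expected
    # fourth rotation and fourth position of the target object
    center = [(a + b) / 2 for a, b in zip(first_collider, second_collider)]
    vec = [b - a for a, b in zip(first_collider, second_collider)]
    delta = rotation_from_vector(vec) - state["second_rot"]  # rotation change
    fourth_rot = state["first_rot"] + delta                  # new object rotation
    # rotate the grab-time offset (collider center -> object) by delta,
    # then re-attach it to the current collider center
    off = [p - c for p, c in zip(state["first_pos"], state["second_pos"])]
    rad = math.radians(delta)
    fourth_pos = [center[0] + off[0] * math.cos(rad) - off[1] * math.sin(rad),
                  center[1] + off[0] * math.sin(rad) + off[1] * math.cos(rad)]
    return fourth_pos, fourth_rot
```

For example, grabbing an object at (1, 1) with hands at (0, 0) and (2, 0), then swinging the hands to (0, 0) and (0, 2), rotates the pair by 90 degrees, so the object is expected at (-1, 1) with a 90-degree rotation.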
Optionally, in a first implementation manner of the first aspect of the present invention, before the acquiring the first position rotation of the target object and the second position rotation of the collision volume when the collision volume performs the grabbing motion, the method further includes:
judging whether the collision body does not grab any article;
if the collision body does not grab any article, detecting whether the collision body contacts a collision frame where the target object is located;
and if the collision body is contacted with the collision frame, performing hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object.
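The pre-grab checks above amount to a small gating routine. This Python sketch is a simplified stand-in for the UE4 hinge (physics-constraint) operation; the field names are hypothetical:

```python
def overlaps(pos, box):
    # axis-aligned 2-D collision-frame test
    (lo_x, lo_y), (hi_x, hi_y) = box
    return lo_x <= pos[0] <= hi_x and lo_y <= pos[1] <= hi_y

def try_grab(collider, target):
    # a collider that is already holding something cannot grab again
    if collider["held_item"] is not None:
        return False
    # the collider must contact the collision frame of the target object
    if not overlaps(collider["pos"], target["box"]):
        return False
    # "hinge" the collider to the target: the grab action is formed
    collider["held_item"] = target
    target["grabbers"].append(collider)
    return True
```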
Optionally, in a second implementation manner of the first aspect of the present invention, the performing a hinge operation on the collider and the target object to determine that the collider forms a grabbing action on the target object, if the collider contacts the collision frame, includes:
if the collision body contacts the collision frame, judging whether the target object is not grabbed by other collision bodies;
if the target object is not grabbed by other collision volumes, detecting whether the target object is not grabbed by the first collision volume or the second collision volume;
if the target object is not grabbed by the first collision body or the second collision body, respectively performing hinge operations on the first collision body, the second collision body and the target object according to the contact sequence of the first collision body, the second collision body and the target object;
if the target object has been grabbed by the first collider or the second collider, determining whether the collider that has grabbed the target object is different from the collider that again grabbed the target object;
if the collision body which has grabbed the target object is different from the collision body which again grabs the target object, respectively performing hinge operation on the first collision body, the second collision body and the target object according to the grabbing sequence;
determining that the collider constitutes a grabbing action on the target object when the first collider, the second collider and the target object complete the hinge operation.
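The ordering rule above (the earlier-touching collider is hinged first, the later one second) might be sketched as follows; the `touch_time` field and the function name are illustrative assumptions, not the patent's API:

```python
def attach_both_hands(target, first_hit, second_hit):
    # hinge the two colliders to the target in the order they touched it,
    # so the earlier one becomes HandCollision and the later one
    # WeightsOtherHandCollision (names taken from the patent's description)
    order = sorted([first_hit, second_hit], key=lambda h: h["touch_time"])
    target["HandCollision"] = order[0]["name"]
    target["WeightsOtherHandCollision"] = order[1]["name"]
    return target["HandCollision"], target["WeightsOtherHandCollision"]
```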
Optionally, in a third implementation manner of the first aspect of the present invention, the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collider, and the third position rotation includes a third rotation value and a third position coordinate of the center point of the collider at present;
wherein the second rotation value is calculated by a first vector from the first collision volume to the second collision volume, and the third rotation value is calculated by a second vector from the current first collision volume to the second collision volume.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the calculating a rotation variation value of the collision volume according to the second position rotation and the third position rotation includes:
constructing a relative rotation conversion variable of the position rotation change when the collision body moves according to the second rotation value and the third rotation value;
and calculating the rotation change value of the collision body by adopting a preset relative rotation conversion algorithm according to the relative rotation conversion variable.
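One plausible reading of the "relative rotation conversion" is a signed angular difference between the grab-time rotation and the current rotation. A yaw-only Python sketch (the function name is illustrative; UE4's InverseTransformRotation operates on full rotators):

```python
def relative_rotation(second_rot, third_rot):
    # rotation change value: how far the collider pair has rotated since
    # the grabbing moment, normalized to the range (-180, 180] degrees
    delta = (third_rot - second_rot) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta
```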
Optionally, in a fifth implementation manner of the first aspect of the present invention, the calculating, according to the first position rotation and the rotation variation value, a fourth position rotation to which the target object is currently expected to move includes:
adding the first rotation value and the rotation change value to obtain a fourth rotation value of the current target object after the target object is expected to move;
setting a third vector from the central point to the target object according to the first position coordinate and the second position coordinate, and constructing a relative coordinate transformation variable of the target object when the collider grabs the target object to move according to the second position coordinate and the rotation change value;
calculating a fourth position coordinate to which the target object is expected to move currently by adopting a preset relative coordinate conversion algorithm according to the third vector and the relative coordinate conversion variable;
and obtaining the fourth position rotation to which the target object is expected to move currently according to the fourth position coordinate and the fourth rotation value.
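The fourth position coordinate amounts to rotating the grab-time offset vector (collider center point to target object) by the rotation change value and re-attaching it to the current collider center. A 2-D Python sketch with illustrative names:

```python
import math

def expected_target_position(center_now, grab_offset, delta_deg):
    # rotate the grab-time vector (collider center -> object) by the
    # rotation change value, then translate to the current collider center
    r = math.radians(delta_deg)
    ox, oy = grab_offset
    return (center_now[0] + ox * math.cos(r) - oy * math.sin(r),
            center_now[1] + ox * math.sin(r) + oy * math.cos(r))
```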
Optionally, in a sixth implementation manner of the first aspect of the present invention, after the controlling the target object to move along with the first collision volume and the second collision volume based on the fourth position rotation, the method further includes:
detecting whether the first collider or the second collider is separated from the collision frame of the target object;
if the first collider or the second collider is separated from the collision frame of the target object, setting the current first position rotation at the moment of separation as a final position rotation;
and taking the final position rotation as a starting point, and performing falling simulation on the target object.
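The drop handling can be sketched as freezing the object's state at the moment of separation and then simulating free fall from that starting point. Python sketch with an assumed gravity constant and hypothetical field names:

```python
GRAVITY = -9.8  # m/s^2; assumed value for this sketch

def release(target, collider_in_box):
    # when a collider has left the target's collision frame, freeze the
    # target's current position as the final position (drop starting point)
    if not collider_in_box:
        target["final_pos"] = target["pos"]
        target["falling"] = True
    return target

def fall_position(final_pos, t):
    # free fall from the final position recorded at the moment of release
    x, y, z = final_pos
    return (x, y, z + 0.5 * GRAVITY * t * t)
```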
The second aspect of the present invention provides an Unreal-Engine-based object grabbing simulation apparatus, comprising:
the acquisition module is used for acquiring the first position rotation of a target object and the second position rotation of a collision body when a collision body performs grabbing action, wherein the collision body comprises a first collision body and a second collision body;
the first calculating module is used for acquiring the rotation of the current third position of the collision body when the collision body grabs the target object to move, and calculating the rotation change value of the collision body according to the rotation of the second position and the rotation of the third position;
the second calculation module is used for calculating the rotation of a fourth position to which the target object is expected to move currently according to the first position rotation and the rotation change value;
and the control module is used for controlling the target object to move along with the first collision body and the second collision body based on the fourth position rotation.
Optionally, in a first implementation manner of the second aspect of the present invention, the object grabbing simulation apparatus further includes a grabbing action detection module, where the grabbing action detection module is configured to:
judging whether the collision body does not grab any article;
if the collision body does not grab any article, detecting whether the collision body contacts a collision frame where the target object is located;
and if the collision body is contacted with the collision frame, performing hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object.
Optionally, in a second implementation manner of the second aspect of the present invention, the grabbing action detection module includes a hinge operation unit, and the hinge operation unit is configured to:
if the collision body contacts the collision frame, judging whether the target object is not grabbed by other collision bodies;
if the target object is not grabbed by other collision volumes, detecting whether the target object is not grabbed by the first collision volume or the second collision volume;
if the target object is not grabbed by the first collision body or the second collision body, respectively performing hinge operations on the first collision body, the second collision body and the target object according to the contact sequence of the first collision body, the second collision body and the target object;
if the target object has been grabbed by the first collider or the second collider, determining whether the collider that has grabbed the target object is different from the collider that again grabbed the target object;
if the collision body which has grabbed the target object is different from the collision body which again grabs the target object, respectively performing hinge operation on the first collision body, the second collision body and the target object according to the grabbing sequence;
determining that the collider constitutes a grabbing action on the target object when the first collider, the second collider and the target object complete the hinge operation.
Optionally, in a third implementation manner of the second aspect of the present invention, the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collider, and the third position rotation includes a third rotation value and a third position coordinate of the center point of the collider at present;
wherein the second rotation value is calculated by a first vector from the first collision volume to the second collision volume, and the third rotation value is calculated by a second vector from the current first collision volume to the second collision volume.
Optionally, in a fourth implementation manner of the second aspect of the present invention, the first calculating module further includes:
a construction unit configured to construct a relative rotation conversion variable of a positional rotation change when the collision body moves, according to the second rotation value and the third rotation value;
and the first algorithm calculating unit is used for calculating the rotation change value of the collision body by adopting a preset relative rotation conversion algorithm according to the relative rotation conversion variable.
Optionally, in a fifth implementation manner of the second aspect of the present invention, the second calculating module further includes:
the adding unit is used for adding the first rotation value and the rotation change value to obtain a fourth rotation value of the current target object after the target object is expected to move;
a setting and constructing unit, configured to set a third vector from the central point to the target object according to the first position coordinate and the second position coordinate, and construct a relative coordinate transformation variable of the target object when the collider captures the movement of the target object according to the second position coordinate and the rotation variation value;
the second algorithm calculation unit is used for calculating a fourth position coordinate to which the target object is expected to move currently by adopting a preset relative coordinate conversion algorithm according to the third vector and the relative coordinate conversion variable;
and the combination unit is used for obtaining the fourth position rotation to which the current target object is expected to move according to the fourth position coordinate and the fourth rotation value.
Optionally, in a sixth implementation manner of the second aspect of the present invention, the object grabbing simulation apparatus further includes a drop simulation module, and the drop simulation module further includes:
a detection unit that detects whether the first collider or the second collider is separated from the collision frame of the target object;
a positioning unit configured to set the current first position rotation at the moment of separation as a final position rotation if the first collider or the second collider is separated from the collision frame of the target object;
and the falling simulation unit is used for performing falling simulation on the target object by taking the final position rotation as a starting point.
The third aspect of the present invention provides an Unreal-Engine-based object grabbing simulation device, comprising: a memory and at least one processor, the memory storing instructions, and the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the Unreal-Engine-based object grabbing simulation device to perform the Unreal-Engine-based object grabbing simulation method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to execute the above Unreal-Engine-based object grabbing simulation method.
According to the technical scheme provided by the invention, a first position rotation of a target object and a second position rotation of a collider are acquired when the collider performs a grabbing action, the collider comprising a first collider and a second collider; when the collider grabs the target object and moves, the current third position rotation of the collider is acquired, and a rotation change value of the collider is calculated from the second position rotation and the third position rotation; the fourth position rotation to which the target object is currently expected to move is calculated from the first position rotation and the rotation change value; and the target object is controlled to follow the first collider and the second collider based on the fourth position rotation. The invention realizes, in the Unreal Engine, the simulation of grabbing an object with both hands simultaneously, bringing object-grabbing operations closer to real scene requirements and expanding the application scenarios of grabbing and moving objects.
Drawings
FIG. 1 is a schematic diagram of a first embodiment of the Unreal-Engine-based object grabbing simulation method of the invention;
FIG. 2 is a diagram of a second embodiment of the Unreal-Engine-based object grabbing simulation method of the invention;
FIG. 3 is a diagram of a third embodiment of the Unreal-Engine-based object grabbing simulation method of the invention;
FIG. 4 is a diagram of a fourth embodiment of the Unreal-Engine-based object grabbing simulation method of the invention;
FIG. 5 is a schematic diagram of an embodiment of the Unreal-Engine-based object grabbing simulation apparatus of the invention;
FIG. 6 is a schematic diagram of another embodiment of the Unreal-Engine-based object grabbing simulation apparatus of the invention;
FIG. 7 is a schematic diagram of an embodiment of the Unreal-Engine-based object grabbing simulation device of the invention.
Detailed Description
The embodiment of the invention provides an Unreal-Engine-based object grabbing simulation method and related device. The method acquires the first position rotation of a target object and the second position rotation of a collider at the moment the collider grabs the target object; determines the current third position rotation of the collider when the collider grabs the target object and moves; calculates, from the second and third position rotations, the rotation change value of the collider between the grabbing moment and the moving moment; and finally converts the rotation change value into the fourth position rotation of the target object, so that the target object follows the collider.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The core thought of the technical scheme is that the position rotation of the target object and the position rotation between two hands are determined at the moment when the two hands grab the target object, then the position rotation change value of the two hands relative to the grabbing moment is calculated when the two hands grab the target object to move each time, and then the position rotation change value is converted into the current position rotation of the target object.
First, an object class InteractableObject is defined for the target object in UE4. It contains an FPickupInfo struct variable, a rotation-type (FRotator) variable, and FVector-type variables. The following important variables are declared in the InteractableObject class:
1. The FPickupInfo struct variable PickupInfo holds three members: (1) a bool variable bPickUp, set to true if the target object is grabbed by a collider and false otherwise; (2) a component pointer to the first collider, HandCollision; and (3) a component pointer to the second collider, WeightsOtherHandCollision. The pointers record the grabbing colliders: (2) is the collider that grabbed earlier, and (3) the collider that grabbed later.
2. The FRotator-type variable WeightsBaseRotation records the first rotation value of the target object at the moment the collider performs the grabbing action.
3. The FVector-type variable WeightsBaseTwoCollisionVector records the first vector from the first collider to the second collider at the moment the collider performs the grabbing action.
4. The FVector-type variable WeightsBaseCollisionToThisVector records the third vector from the collider center point to the target object at the moment the collider performs the grabbing action.
Next, several key functions are added in the UE 4:
1. A grab detection function PickUp is declared on the virtual character class, to be called when a collider performs a grabbing action.
2. A grab function WeightEnableOnePlayer is declared on the InteractableObject class, to be called when a collider grabs the target object; its pointer parameter InputHandCollision passes in a pointer to the grabbing collider.
3. A drop function WeightsDropOnePlayer is declared on the InteractableObject class, to be called when a collider drops the target object; its pointer parameter InputHandCollision passes in a pointer to the dropping collider.
4. An update function UpdateWeightTransform is declared, which calculates the position to which the target object moves while it is being grabbed.
For convenience of understanding, a detailed flow of an embodiment of the present invention is described below, and referring to fig. 1, a first embodiment of an object capture simulation method based on a ghost engine according to an embodiment of the present invention includes:
101. acquiring a first position rotation of a target object and a second position rotation of a collision body when a collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
It is to be understood that the executing entity of the present invention may be an Unreal-Engine-based object grabbing simulation apparatus, or a terminal or a server, which is not limited here. The embodiment of the present invention is described with a server as the executing entity.
In this embodiment, the first collider and the second collider may be the two hands of a virtual character in the virtual space, or props controlled by the two hands respectively; the description below takes the two hands as the colliders.
A grabbing state means that the first collider and the second collider have grabbed the target object; merely touching the target object, for example brushing against it accidentally, does not count as grabbing.
the first position rotation comprises a first rotation value and a first position coordinate of the target object and is used for determining the space display state and the display position of the target object; the second position rotation includes a second rotation value and a second position coordinate of the center point of the collision volume for determining the spatial display state of the collision volume and the position of the center point thereof, and the second rotation value is calculated from a first vector from the first collision volume to the second collision volume; and the first position rotation and the second position rotation are both global position rotations in the virtual space.
Specifically, for example, when both hands grasp the target item, the following variable settings are performed in the UE 4:
1. Call the grab function WeightEnableOnePlayer;
2. Judge whether PickupInfo.bPickUp is false; if so, continue executing the function;
3. Grab the target object with both hands: using the pointer parameter InputHandCollision, pass the first collider HandCollision and the second collider WeightsOtherHandCollision into the grab function WeightEnableOnePlayer in turn;
4. Set the first position rotation: set WeightsBaseRotation to the first rotation value and WeightsBaseLocation to the first position coordinate, the first position rotation comprising the first rotation value and the first position coordinate;
5. Set the second position rotation: set WeightsBaseTwoCollisionVector to the first vector, record the second position coordinate of the collider center point, and determine the second rotation value of the two hands via the FindLookAtRotation function, the second position rotation comprising the second rotation value and the second position coordinate;
6. Execution of the grab function WeightEnableOnePlayer ends, and the PickupInfo.bPickUp variable is set to true.
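Steps 1-6 can be condensed into one function. This Python sketch reuses the patent's variable names on a plain dictionary, with 2-D geometry and a yaw angle standing in for UE4's FindLookAtRotation; it is a simplified illustration, not the actual UE4 code:

```python
import math

def weight_enable_one_player(obj, hand, other_hand):
    # step 2: if the object is already grabbed, do nothing
    if obj["PickupInfo"]["bPickUp"]:
        return False
    # step 3: record the two grabbing colliders
    obj["PickupInfo"]["HandCollision"] = hand
    obj["PickupInfo"]["WeightsOtherHandCollision"] = other_hand
    # step 4: first position rotation of the target object
    obj["WeightsBaseRotation"] = obj["rotation"]
    obj["WeightsBaseLocation"] = obj["location"]
    # step 5: first vector between the hands and the second rotation value
    vec = (other_hand[0] - hand[0], other_hand[1] - hand[1])
    obj["WeightsBaseTwoCollisionVector"] = vec
    obj["second_rotation"] = math.degrees(math.atan2(vec[1], vec[0]))
    # step 6: mark the object as grabbed
    obj["PickupInfo"]["bPickUp"] = True
    return True
```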
102. When the collision body grabs the target object to move, acquiring the rotation of the current third position of the collision body, and calculating the rotation change value of the collision body according to the rotation of the second position and the rotation of the third position;
In this embodiment, in each preset period, the third position rotation of the collider grabbing the moving target object is acquired, and the fourth position rotation of the target object is derived from it so as to adjust the spatial form of the target object. The third position rotation comprises a third rotation value and the third position coordinate of the current collider center point, used to determine the current spatial display state of the collider and the position of its center point; the third rotation value is calculated from the second vector from the current first collider to the second collider. The third position rotation is a global position rotation in the virtual space. It should be noted that the preset period is generally so short that the movement intervals cannot be perceived visually, so the position rotation of the target object can be adjusted in real time according to the position rotation of the collider.
Relative to the grabbing moment, the change in the spatial form of the collider and the target object after moving can be described by a position rotation change value, which comprises a position coordinate change and a rotation change value; however, the rotation change of the collider alone suffices to convert into the position coordinate and rotation value of the moved target object and thereby determine its spatial form.
Specifically, for example, when the two hands grab the target object and move, the third position rotation is set in UE4 as follows:
1. setting the CurrentTwoCollisionVec as a second vector, storing the second vector as a temporary variable, and determining a third rotation value through a FindLookAtRotation function;
2. setting CurrentCollisionlocation as a third position coordinate, and storing the third position coordinate as a temporary variable, wherein the third position rotation comprises a third rotation value and the third position coordinate;
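UE4's FindLookAtRotation returns the rotation that orients one point toward another; applied to the hand-to-hand vector it yields the rotation value used above. A minimal sketch of that idea, restricted to the horizontal plane (the function and tuple layout are illustrative assumptions, not the UE4 implementation, which returns a full FRotator):

```python
import math

def find_look_at_yaw(start, target):
    """Yaw angle (degrees) of the vector from `start` to `target`, a
    plane-restricted analogue of FindLookAtRotation; both points are
    (x, y) tuples in world space."""
    dx = target[0] - start[0]
    dy = target[1] - start[1]
    return math.degrees(math.atan2(dy, dx))

# The patent's "third rotation value": the yaw of the second vector,
# running from the current first collider to the current second collider.
third_rotation_value = find_look_at_yaw((0.0, 0.0), (0.0, 2.0))
```

With the second collider straight ahead on the y-axis, the yaw is 90 degrees; the third position coordinate would then simply be stored alongside it as in step 2.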
in UE4, the rotation change value of the two hands while they grab the target object and move is calculated as follows:
1. call the update function UpdateWeightTransform;
2. input the second rotation value and the third rotation value into the relative rotation conversion function InverseTransformRotation inside the update function UpdateWeightTransform;
3. calculate the rotation from the first vector WeightsBaseTwoCollisionVector to the second vector CurrentTwoCollisionVector, i.e. the rotation change value after the two hands have moved.
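For a single yaw axis, the three steps above reduce to subtracting the grab-moment rotation from the current one and normalizing the result. A hypothetical scalar sketch (UE4's InverseTransformRotation operates on full transforms; this only illustrates the relative-rotation idea):

```python
def rotation_delta(second_yaw, third_yaw):
    """Rotation change value of the two hands: relative yaw (degrees) from
    the grab-moment second rotation value to the current third rotation
    value, normalized to the interval (-180, 180]."""
    delta = (third_yaw - second_yaw) % 360.0
    return delta - 360.0 if delta > 180.0 else delta
```

The normalization matters at the wrap-around: rotating the hands from 350 to 10 degrees is a change of +20 degrees, not -340.
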
103. Calculating a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation change value;
in this embodiment, the rotation change value of the collider is converted into the fourth position rotation of the target object, so that the target object follows the hands just as a real object would. The fourth position rotation comprises the fourth position coordinate of the position to which the target object is expected to move and a fourth rotation value, which together determine the spatial form and display position of the target object after it has moved.
Specifically, in UE4, the fourth position rotation of the target object is calculated as follows:
1. input the first rotation value WeightsBaseRotation and the rotation change value into the rotation composition function ComposeRotators inside the update function UpdateWeightTransform;
2. calculate the fourth rotation value;
3. input the third position coordinate CurrentCollisionLocation, the second position coordinate WeightsBaseCollisionToThisVector and the rotation change value into the relative coordinate conversion function TransformLocation inside the update function UpdateWeightTransform;
4. calculate the fourth position coordinate;
5. when the update function UpdateWeightTransform finishes executing, the fourth rotation value and the fourth position coordinate together constitute the fourth position rotation.
104. Controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
In this embodiment, when the collider grabs the target object and moves, the fourth position rotation of the moved target object is determined from the rotation change value of the collider, and the target object is displayed at the spatial form and position given by that fourth position rotation. It should be noted that although the fourth position rotation is derived from the rotation change value of the collider, the interval between successive position rotation calculations is extremely short and invisible to the naked eye, so visually the collider appears to carry the target object along, rather than producing the effect of the collider leading and the target object trailing behind. In UE4, the rotation of the target object is applied with the rotation execution function SetActorRotation.
In addition, when the collider releases the target object, the following steps are performed:
detecting whether the first collider or the second collider is separated from the collision frame of the target object;
if the first collider or the second collider is separated from the collision frame of the target object, setting the current first position rotation at the moment of separation as a final position rotation;
and taking the final position rotation as a starting point, and performing falling simulation on the target object.
Specifically, in UE4, the fall simulation for the target object released by the two hands is completed through the following steps:
1. when one hand leaves the collision frame of the target object, call the drop function WeightsDropOnePlayer;
2. input the first collider HandCollision or the second collider WeightsOtherHandCollision that left the collision frame into the drop function WeightsDropOnePlayer;
3. judge whether PickupInfo.bPickup is true in the drop function WeightsDropOnePlayer;
4. if so, judge whether the input pointer parameter InputHandCollision in the drop function WeightsDropOnePlayer is the same as PickupInfo.HandCollision or PickupInfo.WeightsOtherHandCollision;
5. if so, set PickupInfo.bPickup to false and set the matching PickupInfo.HandCollision or PickupInfo.WeightsOtherHandCollision from step 4 to null, so that the fall of the target object is simulated.
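The five steps above amount to a guarded state reset. A minimal sketch, assuming PickupInfo is a plain record and that lowering bPickup and clearing the matching hand is what hands the object over to the fall simulation (the class and return convention are hypothetical; the field names mirror the patent's):

```python
class PickupInfo:
    """Minimal stand-in for the patent's PickupInfo record."""
    def __init__(self):
        self.b_pickup = False
        self.hand_collision = None
        self.other_hand_collision = None

def weights_drop_one_player(info, input_hand_collision):
    """Steps 3-5: only when a grab is active and the leaving hand matches a
    recorded grabbing hand is bPickup lowered and that hand cleared; returns
    True when the fall simulation should start."""
    if not info.b_pickup:
        return False
    if input_hand_collision == info.hand_collision:
        info.hand_collision = None
    elif input_hand_collision == info.other_hand_collision:
        info.other_hand_collision = None
    else:
        return False
    info.b_pickup = False
    return True
```
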
In the embodiment of the invention, the first position rotation of the target object and the second position rotation of the collider are acquired at the moment the collider grabs the target object; the current third position rotation of the collider is then determined while the collider grabs the target object and moves, and the rotation change value of the collider from the grab moment to after the movement is calculated from the second and third position rotations; finally, the rotation change value is converted into the fourth position rotation of the target object so that it moves with the collider. This realizes Unreal Engine-based simulation of grabbing an article with both hands simultaneously, brings the article-grabbing operation closer to real-scene requirements, and expands the application scenarios of grabbing and moving articles.
Referring to fig. 2, a second embodiment of the object capture simulation method based on the Unreal Engine according to the embodiment of the invention includes:
201. judging whether the collision body does not grab any article;
in this embodiment, before the collider grabs the target object, it is confirmed that neither the first collider nor the second collider has grabbed any article, so that the target object can be grabbed.
In UE4, the grab detection function PickUp is called and its true/false value is checked: if true, both hands are already grabbing an article; if false, the collider has not grabbed any article.
202. If the collision body does not grab any article, detecting whether the collision body contacts a collision frame where the target object is located;
in this embodiment, if the collider has not grabbed any article, collision detection is performed to screen, one by one, the collision frames of all articles the collider contacts, in order to determine whether the target article is among them.
Specifically, in UE4, whether the two hands touch the collision frame of the target item is detected through the following steps:
1. execute the collision detection function GetOverlapComponents:
(1) acquire all articles in contact with both hands;
(2) detect whether the object class of the article currently being examined is the target article's object class InteractableObject;
(3) if so, determine that this article is the target article and stop the collision detection;
(4) if not, jump back to (2) until an article whose object class is the target article's InteractableObject is detected, determine that article to be the target article, and stop the collision detection.
2. stop the collision detection function GetOverlapComponents and determine the collision frame corresponding to the target object.
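The screening in step 1 can be sketched as a linear scan over the overlap results; the item dictionaries and the class-name string below are assumptions for illustration, not the UE4 API:

```python
def screen_overlaps(overlapping_items, target_class="InteractableObject"):
    """Walk the articles whose collision frames the hands overlap, in order,
    and stop at the first one whose object class matches the target item's
    class; return None when no target item is among them."""
    for item in overlapping_items:
        if item.get("object_class") == target_class:
            return item
    return None
```
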
203. If the collision body contacts the collision frame, performing hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object;
in this embodiment, the target object cannot be hinged to both hands at the same time; the two hands are therefore treated as a whole, and the target object is hinged one-to-one to the center point between the two hands. Once the hinge operation between the two hands and the target object has been performed, it can be determined that the collider and the target object constitute a grabbing action.
Specifically, the hinge operation of the collider and the target object includes:
judging whether the target object is not captured by other collision bodies;
if the target object is not grabbed by other collision volumes, detecting whether the target object is not grabbed by the first collision volume or the second collision volume;
if the target object is not grabbed by the first collision body or the second collision body, respectively performing hinge operations on the first collision body, the second collision body and the target object according to the contact sequence of the first collision body, the second collision body and the target object;
if the target object has been grabbed by the first collider or the second collider, determining whether the collider that has grabbed the target object is different from the collider that again grabbed the target object;
if the collision body which has grabbed the target object is different from the collision body which again grabs the target object, respectively performing hinge operation on the first collision body, the second collision body and the target object according to the grabbing sequence;
determining that the collider constitutes a grabbing action on the target object when the first collider, the second collider and the target object complete the hinge operation.
In addition, in UE4, a concrete example of the two hands grabbing the target object is as follows:
1. call the grab function WeightsEnableOnePlayer;
2. judge whether PickupInfo.bPickup is false in the object class InteractableObject;
3. if so, judge whether PickupInfo.HandCollision is a null pointer;
4. if so, set PickupInfo.HandCollision to the input InputHandCollision pointer; otherwise, judge whether the newly input hand pointer differs from PickupInfo.HandCollision;
5. if it differs, set the second collider WeightsOtherHandCollision to the InputHandCollision pointer, so that both hands constitute the grabbing action on the target object.
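The branching in steps 2-5 records the first free hand in HandCollision and a different second hand in WeightsOtherHandCollision. A hedged sketch (the point at which bPickup flips once both hands are recorded is an assumption; the patent leaves it implicit):

```python
class PickupInfo:
    """Minimal stand-in for the patent's PickupInfo record."""
    def __init__(self):
        self.b_pickup = False
        self.hand_collision = None
        self.other_hand_collision = None

def weights_enable_one_player(info, input_hand_collision):
    """Steps 2-5: if no grab is active, the first hand to touch the item is
    recorded; a different second hand completes the two-hand grabbing action."""
    if info.b_pickup:
        return False
    if info.hand_collision is None:
        info.hand_collision = input_hand_collision
        return True
    if (input_hand_collision != info.hand_collision
            and info.other_hand_collision is None):
        info.other_hand_collision = input_hand_collision
        info.b_pickup = True  # assumption: the grab completes with both hands
        return True
    return False
```

Note that the same hand touching the item twice changes nothing; only a second, different hand completes the grab.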
204. Acquiring a first position rotation of a target object and a second position rotation of a collision body when a collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
205. when the collision body grabs the target object to move, acquiring the rotation of the current third position of the collision body, and calculating the rotation change value of the collision body according to the rotation of the second position and the rotation of the third position;
206. calculating a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation change value;
207. controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
In the embodiment of the invention, the detailed process by which the collider detects the target object before grabbing is described, so that the collider only acquires articles it is authorized or explicitly intended to grab, and does not actively or passively pick up unwanted articles after contacting other items, which increases grabbing efficiency.
Referring to fig. 3, a third embodiment of the object capture simulation method based on the Unreal Engine according to the embodiment of the invention includes:
301. acquiring a first position rotation of a target object and a second position rotation of a collision body when a collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
302. when the collision body grabs the target object to move, acquiring the third position rotation of the current collision body;
specifically, the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collider, and the third position rotation includes a third rotation value and a third position coordinate of the center point of the collider at present;
wherein the second rotation value is calculated by a first vector from the first collision volume to the second collision volume, and the third rotation value is calculated by a second vector from the current first collision volume to the second collision volume.
303. Constructing a relative rotation conversion variable of the position rotation change when the collision body moves according to the second rotation value and the third rotation value;
in this embodiment, in UE4, a global rotation value A is first set in the relative rotation conversion variable Transform; the relative rotation conversion function InverseTransformRotation then takes a global rotation value B as input and converts it into the rotation change from A to B, which is a local relative rotation value. Here, the relative rotation conversion variable Transform is initialized, the global rotation value A is set to the second rotation value corresponding to the first vector WeightsBaseTwoCollisionVector, and the third rotation value corresponding to the second vector CurrentTwoCollisionVector is input as the global rotation value B, preliminarily constructing the relative rotation conversion variable.
304. Calculating a rotation change value of the collision body by adopting a preset relative rotation conversion algorithm according to the relative rotation conversion variable;
in this embodiment, the relative rotation conversion algorithm uses the relative rotation conversion function InverseTransformRotation; according to the relative rotation conversion variable, the second rotation value corresponding to the first vector WeightsBaseTwoCollisionVector set in the previous step and the third rotation value corresponding to the second vector CurrentTwoCollisionVector are input, and the incremental rotation from the first vector to the second vector, i.e. the rotation change value of the two hands' position rotation from the grab moment to after the movement, is calculated.
305. Calculating a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation change value;
306. controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
In the embodiment of the invention, the calculation of the collider's rotation change value in UE4 is described in detail, so that the fourth position rotation of the target object after movement can subsequently be calculated from the rotation change value, faithfully simulating the spatial form of a real object moving with the two hands.
Referring to fig. 4, a fourth embodiment of the object capture simulation method based on the Unreal Engine according to the embodiment of the invention includes:
401. acquiring a first position rotation of a target object and a second position rotation of a collision body when a collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
402. when the collision body grabs the target object to move, acquiring the third position rotation of the current collision body;
specifically, the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collider, and the third position rotation includes a third rotation value and a third position coordinate of the center point of the collider at present;
wherein the second rotation value is calculated by a first vector from the first collision volume to the second collision volume, and the third rotation value is calculated by a second vector from the current first collision volume to the second collision volume.
403. Constructing a relative rotation conversion variable of the position rotation change when the collision body moves according to the second rotation value and the third rotation value;
404. calculating a rotation change value of the collision body by adopting a preset relative rotation conversion algorithm according to the relative rotation conversion variable;
405. adding the first rotation value and the rotation change value to obtain a fourth rotation value of the current target object after the target object is expected to move;
in this embodiment, since the relative position relationship between the target object and the center point of the two hands is fixed, and the rotation change value of the two hands equals that of the center point, the fourth rotation value of the target object after its expected movement is obtained simply by adding the first rotation value and the rotation change value. It should be noted that after the two hands grab the target object and move, the third and fourth position rotations of the hands and the target object are both calculated relative to the first and second position rotations at the grab moment.
Specifically, in UE4, the fourth rotation value after the target object has moved is obtained by adding the first rotation value WeightsBaseRotation to the rotation change value of the two hands through the rotation composition function ComposeRotators.
406. Setting a third vector from the center point to the target object according to the first position coordinate and the second position coordinate, and constructing, according to the second position coordinate and the rotation change value, a relative coordinate conversion variable for the target object while the collider grabs it and moves;
in this embodiment, the third vector between the first position coordinate of the target object and the second position coordinate of the center point of the user's two hands is determined and set from those two coordinates. In UE4, the second position coordinate WeightsBaseCollisionToThisVector is set as the third vector; the relative coordinate conversion variable Transform T is then initialized, its rotation value set to the rotation change value and its position coordinate set to the third position coordinate CurrentCollisionLocation, yielding the relative coordinate conversion variable Transform T.
407. Calculating a fourth position coordinate to which the target object is expected to move currently by adopting a preset relative coordinate conversion algorithm according to the third vector and the relative coordinate conversion variable;
in this embodiment, in UE4, the relative coordinate conversion function TransformLocation is called, and the second position coordinate WeightsBaseCollisionToThisVector and the variable T are input into it, obtaining the fourth position coordinate of the target object; the fourth position coordinate and the fourth rotation value are both global position rotations.
408. Obtaining a fourth position rotation to which the target object is expected to move currently according to the fourth position coordinate and the fourth rotation value;
in this embodiment, the fourth position coordinate and the fourth rotation value together form the fourth position rotation of the target object moving with the two hands, so as to determine the spatial form display and the position display of the target object after moving.
409. Calculating a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation change value;
410. controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
In the embodiment of the invention, the fourth position rotation of the target object is calculated from the rotation change value of the collider; because the relative position rotation between the target object and the collider's center point is fixed, the fourth position rotation of the moved target object changes with the collider's rotation change value, which simulates the real situation more faithfully and displays, in spatial form, an article being grabbed and moved by both hands.
Having described the object capture simulation method based on the Unreal Engine in the embodiment of the present invention above, the object capture simulation apparatus based on the Unreal Engine in the embodiment of the present invention is described below with reference to fig. 5; an embodiment of the object capture simulation apparatus based on the Unreal Engine in the embodiment of the present invention includes:
an acquiring module 501, configured to acquire a first position rotation of a target object and a second position rotation of a collision volume when a capture action occurs in the collision volume, where the collision volume includes a first collision volume and a second collision volume;
a first calculating module 502, configured to obtain a third position rotation of the current collision volume when the collision volume grabs the target object to move, and calculate a rotation variation value of the collision volume according to the second position rotation and the third position rotation;
a second calculating module 503, configured to calculate a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation variation value;
a control module 504 for controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
In the embodiment of the invention, the first position rotation of the target object and the second position rotation of the collider are acquired at the grab moment; the current third position rotation of the collider is determined while it grabs the target object and moves, and the rotation change value of the collider from the grab moment to after the movement is calculated from the second and third position rotations; finally, the rotation change value is converted into the fourth position rotation of the target object so that it moves with the collider, realizing the simulation of grabbing an article with both hands simultaneously in the virtual reality world.
Referring to fig. 6, another embodiment of the object capture simulation apparatus based on the Unreal Engine according to the embodiment of the invention includes:
an acquiring module 501, configured to acquire a first position rotation of a target object and a second position rotation of a collision volume when a capture action occurs in the collision volume, where the collision volume includes a first collision volume and a second collision volume;
a first calculating module 502, configured to obtain a third position rotation of the current collision volume when the collision volume grabs the target object to move, and calculate a rotation variation value of the collision volume according to the second position rotation and the third position rotation;
a second calculating module 503, configured to calculate a fourth position rotation to which the target object is expected to move currently according to the first position rotation and the rotation variation value;
a control module 504 for controlling the target item to move with the first collider and the second collider based on the fourth position rotation.
Specifically, the object grabbing simulation apparatus further includes a grabbing action detection module 505, where the grabbing action detection module 505 is configured to:
judging whether the collision body does not grab any article;
if the collision body does not grab any article, detecting whether the collision body contacts a collision frame where the target object is located;
and if the collision body is contacted with the collision frame, performing hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object.
Specifically, the grabbing action detection module 505 includes a hinge operation unit, and the hinge operation unit is configured to:
if the collision body contacts the collision frame, judging whether the target object is not grabbed by other collision bodies;
if the target object is not grabbed by other collision volumes, detecting whether the target object is not grabbed by the first collision volume or the second collision volume;
if the target object is not grabbed by the first collision body or the second collision body, respectively performing hinge operations on the first collision body, the second collision body and the target object according to the contact sequence of the first collision body, the second collision body and the target object;
if the target object has been grabbed by the first collider or the second collider, determining whether the collider that has grabbed the target object is different from the collider that again grabbed the target object;
if the collision body which has grabbed the target object is different from the collision body which again grabs the target object, respectively performing hinge operation on the first collision body, the second collision body and the target object according to the grabbing sequence;
determining that the collider constitutes a grabbing action on the target object when the first collider, the second collider and the target object complete the hinge operation.
Specifically, the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collider, and the third position rotation includes a third rotation value and a third position coordinate of the center point of the collider at present;
wherein the second rotation value is calculated by a first vector from the first collision volume to the second collision volume, and the third rotation value is calculated by a second vector from the current first collision volume to the second collision volume.
Specifically, the first calculating module 502 further includes:
a constructing unit 5021, configured to construct a relative rotation conversion variable of the position rotation change when the collision body moves according to the second rotation value and the third rotation value;
the first algorithm calculating unit 5022 is configured to calculate a rotation variation value of the collision volume by using a preset relative rotation conversion algorithm according to the relative rotation conversion variable.
Specifically, the second calculating module 503 further includes:
an adding unit 5031, configured to add the first rotation value and the rotation variation value to obtain a fourth rotation value after the current target object is expected to move;
a setting and constructing unit 5032, configured to set a third vector from the center point to the target object according to the first position coordinate and the second position coordinate, and construct, according to the second position coordinate and the rotation change value, a relative coordinate conversion variable for the target object while the collider grabs it and moves;
a second algorithm calculation unit 5033, configured to calculate, according to the third vector and the relative coordinate conversion variable, a fourth position coordinate to which the target object is expected to move currently by using a preset relative coordinate conversion algorithm;
a combining unit 5034, configured to obtain, according to the fourth position coordinate and the fourth rotation value, a fourth position rotation to which the current target object is expected to move.
Specifically, the object grabbing simulation apparatus further includes a drop simulation module 506, and the drop simulation module 506 further includes:
a detection unit 5061 for detecting whether the first collider or the second collider is separated from the collision frame of the target object;
a positioning unit 5062, configured to set the current first position rotation at the moment of separation as a final position rotation if the first collision volume or the second collision volume is separated from the collision frame of the target object;
a drop simulation unit 5063, configured to perform drop simulation on the target object with the final position rotation as a starting point.
In the embodiment of the invention, the first position rotation of the target object and the second position rotation of the collider are acquired at the moment the collider grabs the target object; then, while the collider grabs the target object and moves, the current third position rotation of the collider is determined, and the rotation change value of the collider from the grab moment to after the movement is calculated from the second and third position rotations; finally, the rotation change value is converted into the fourth position rotation of the target object so that it moves with the collider, realizing the simulation of grabbing an article with both hands simultaneously in the virtual reality world. The detailed process by which the collider detects the target object to be grabbed is described, so that the collider only acquires articles it is authorized or explicitly intended to grab and does not actively or passively pick up unwanted articles after contacting other items, increasing grabbing efficiency. The calculation of the collider's rotation change value in UE4 is then described in detail, so that the fourth position rotation of the target object after movement can be calculated from the rotation change value, faithfully simulating the spatial form of a real object moving with the two hands. Finally, the calculation of the target object's fourth position rotation from the collider's rotation change value is described in detail: because the relative position rotation between the target object and the collider's center point is fixed, the fourth position rotation of the moved target object changes with the collider's rotation change value, which simulates the real situation more faithfully and displays, in spatial form, an article being grabbed and moved by both hands.
Fig. 5 and fig. 6 describe the object capture simulation apparatus based on the Unreal Engine in the embodiment of the present invention in detail from the perspective of the modular functional entity; the following describes the object capture simulation apparatus based on the Unreal Engine in the embodiment of the present invention in detail from the perspective of hardware processing.
Fig. 7 is a schematic structural diagram of an Unreal Engine based object grabbing simulation device according to an embodiment of the present invention. The object grabbing simulation device 700 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 710, a memory 720, and one or more storage media 730 (e.g., one or more mass storage devices) storing an application 733 or data 732. The memory 720 and the storage medium 730 may be transient storage or persistent storage. The program stored on the storage medium 730 may include one or more modules (not shown in the figure), each of which may include a series of instruction operations for the device 700. Further, the processor 710 may be configured to communicate with the storage medium 730 and execute the series of instruction operations in the storage medium 730 on the device 700.
The Unreal Engine based object grabbing simulation device 700 may also include one or more power supplies 740, one or more wired or wireless network interfaces 750, one or more input/output interfaces 760, and/or one or more operating systems 731, such as Windows Server, Mac OS X, Unix, Linux, or FreeBSD. Those skilled in the art will appreciate that the structure shown in Fig. 7 does not constitute a limitation of the object grabbing simulation device, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The present invention also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium, having instructions stored therein which, when run on a computer, cause the computer to perform the steps of the Unreal Engine based object grabbing simulation method.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An object grabbing simulation method based on Unreal Engine, characterized by comprising the following steps:
acquiring a first position rotation of a target object and a second position rotation of a collision body when the collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
when the collision body moves while grabbing the target object, acquiring the current third position rotation of the collision body, and calculating a rotation change value of the collision body according to the second position rotation and the third position rotation;
calculating a fourth position rotation to which the target object is currently expected to move according to the first position rotation and the rotation change value;
controlling the target object to move with the first collision body and the second collision body based on the fourth position rotation.
2. The object grabbing simulation method according to claim 1, characterized by further comprising, before the acquiring of the first position rotation of the target object and the second position rotation of the collision body when the collision body performs the grabbing action:
judging whether the collision body is not currently grabbing any article;
if the collision body is not grabbing any article, detecting whether the collision body contacts a collision frame where the target object is located;
and if the collision body contacts the collision frame, performing a hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object.
3. The object grabbing simulation method according to claim 2, wherein the performing, if the collision body contacts the collision frame, a hinge operation on the collision body and the target object to determine that the collision body forms a grabbing action on the target object comprises:
if the collision body contacts the collision frame, judging whether the target object has not been grabbed by another collision body;
if the target object has not been grabbed by another collision body, detecting whether the target object has not been grabbed by the first collision body or the second collision body;
if the target object has not been grabbed by the first collision body or the second collision body, performing hinge operations on the first collision body, the second collision body and the target object respectively according to the order in which the first collision body and the second collision body contact the target object;
if the target object has been grabbed by the first collision body or the second collision body, judging whether the collision body that has grabbed the target object is different from the collision body grabbing the target object again;
if the collision body that has grabbed the target object is different from the collision body grabbing the target object again, performing hinge operations on the first collision body, the second collision body and the target object respectively according to the grabbing order;
determining that the collision body forms a grabbing action on the target object when the first collision body, the second collision body and the target object complete the hinge operations.
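The eligibility checks of claims 2-3 can be sketched as a small state check before hinging. This is a hypothetical reading, not code from the patent: the dict fields (`held`, `touching`, `owner`, `grabbers`) and the interpretation of "other collision bodies" as colliders belonging to another hand pair are all assumptions for illustration.

```python
def try_grab(collider, target):
    """Decide whether to hinge `collider` to `target` (sketch of claims 2-3).

    Illustrative field names, not from the patent:
      collider["held"]     -- the article this collider currently grabs, or None
      collider["touching"] -- whether it contacts the target's collision frame
      collider["owner"]    -- which hand pair the collider belongs to (assumption)
      target["grabbers"]   -- colliders already hinged to the article
    """
    # A collider already holding an article grabs nothing new.
    if collider["held"] is not None:
        return False
    # Only articles whose collision frame was actually touched are candidates,
    # so nearby but untouched articles are never picked up.
    if not collider["touching"]:
        return False
    # An article held by a collision body of a different hand pair is skipped.
    if any(g["owner"] != collider["owner"] for g in target["grabbers"]):
        return False
    # The same collider cannot hinge to the same article twice.
    if collider in target["grabbers"]:
        return False
    # Hinge in contact order: record this collider as a grabber.
    target["grabbers"].append(collider)
    collider["held"] = target
    return True
```

With this sketch, the first hand hinges to a free article, a repeated attempt by the same hand is rejected, and the second hand of the same pair may join, yielding the two-handed grab of claim 3.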
4. The object grabbing simulation method according to claim 1, wherein the first position rotation includes a first rotation value and a first position coordinate of the target object, the second position rotation includes a second rotation value and a second position coordinate of the center point of the collision body, and the third position rotation includes a third rotation value and a third position coordinate of the current center point of the collision body;
wherein the second rotation value is calculated from a first vector from the first collision body to the second collision body, and the third rotation value is calculated from a second vector from the current first collision body to the second collision body.
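Claim 4 derives the collision-body rotation from the vector between the two colliders. A plausible construction, sketched below as an assumption (similar in spirit to building a rotation from a forward vector, as UE4's rotator utilities do, but not taken from the patent), reads yaw and pitch off that direction; roll is left at zero since a single vector cannot determine it.

```python
import math

def rotation_from_pair(first, second):
    """Yaw/pitch/roll (degrees) of the vector from the first to the second
    collision body. Hypothetical analogue of claims 4's rotation values."""
    dx, dy, dz = (b - a for a, b in zip(first, second))
    yaw = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return (yaw, pitch, 0.0)

def pair_center(first, second):
    # Center point of the collision body pair (the position part of the
    # second/third position rotation in claim 4).
    return tuple((a + b) / 2 for a, b in zip(first, second))
```

For instance, a left hand at the origin and a right hand one unit along +Y give a 90° yaw and a center point midway between the two hands.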
5. The object grabbing simulation method according to claim 4, wherein the calculating of the rotation change value of the collision body according to the second position rotation and the third position rotation comprises:
constructing a relative rotation conversion variable of the position rotation change as the collision body moves, according to the second rotation value and the third rotation value;
and calculating the rotation change value of the collision body by using a preset relative rotation conversion algorithm according to the relative rotation conversion variable.
6. The object grabbing simulation method according to claim 5, wherein the calculating of the fourth position rotation to which the target object is currently expected to move according to the first position rotation and the rotation change value comprises:
adding the first rotation value and the rotation change value to obtain a fourth rotation value of the target object after the expected movement;
setting a third vector from the center point to the target object according to the first position coordinate and the second position coordinate, and constructing a relative coordinate conversion variable of the target object as the collision body grabs the target object and moves, according to the second position coordinate and the rotation change value;
calculating a fourth position coordinate to which the target object is currently expected to move by using a preset relative coordinate conversion algorithm according to the third vector and the relative coordinate conversion variable;
and obtaining, according to the fourth position coordinate and the fourth rotation value, the fourth position rotation to which the target object is currently expected to move.
7. The object grabbing simulation method according to any one of claims 1-6, characterized by further comprising, after the controlling of the target object to move with the first collision body and the second collision body based on the fourth position rotation:
detecting whether the first collision body or the second collision body is separated from the collision frame of the target object;
if the first collision body or the second collision body is separated from the collision frame of the target object, setting the current first position rotation at the moment of separation as a final position rotation;
and taking the final position rotation as a starting point, performing a falling simulation on the target object.
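The falling simulation of claim 7 can be sketched as a minimal kinematic integration from the final position rotation. This is an illustrative stand-in, not the patent's implementation; in UE4 one would more likely re-enable physics simulation on the released actor, and the gravity constant, time step, and flat floor here are all assumptions.

```python
GRAVITY = 9.8  # m/s^2, assumed constant downward acceleration

def simulate_fall(final_pos, floor_z=0.0, dt=0.01):
    """Free fall of the released object from its final position.

    The object starts at rest (the hinge held it still) and falls under
    gravity via explicit Euler steps until it reaches the assumed floor.
    """
    x, y, z = final_pos
    vz = 0.0
    while z > floor_z:
        vz -= GRAVITY * dt              # accumulate downward velocity
        z = max(floor_z, z + vz * dt)   # step down, clamped at the floor
    return (x, y, z)
```

Releasing an article held five units above the floor thus drops it straight down while keeping its horizontal coordinates from the final position rotation.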
8. An object grabbing simulation apparatus based on Unreal Engine, characterized by comprising:
an acquisition module, configured to acquire a first position rotation of a target object and a second position rotation of a collision body when the collision body performs a grabbing action, wherein the collision body comprises a first collision body and a second collision body;
a first calculation module, configured to acquire the current third position rotation of the collision body when the collision body moves while grabbing the target object, and calculate a rotation change value of the collision body according to the second position rotation and the third position rotation;
a second calculation module, configured to calculate a fourth position rotation to which the target object is currently expected to move according to the first position rotation and the rotation change value;
and a control module, configured to control the target object to move with the first collision body and the second collision body based on the fourth position rotation.
9. An object grabbing simulation device based on Unreal Engine, characterized by comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line;
wherein the at least one processor invokes the instructions in the memory to cause the object grabbing simulation device to perform the object grabbing simulation method according to any one of claims 1-7.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the object grabbing simulation method according to any one of claims 1-7.
CN202010630560.0A 2020-07-03 2020-07-03 Object grabbing simulation method based on illusion engine and related equipment Active CN111784850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010630560.0A CN111784850B (en) 2020-07-03 2020-07-03 Object grabbing simulation method based on illusion engine and related equipment

Publications (2)

Publication Number Publication Date
CN111784850A true CN111784850A (en) 2020-10-16
CN111784850B CN111784850B (en) 2024-02-02

Family

ID=72758358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010630560.0A Active CN111784850B (en) 2020-07-03 2020-07-03 Object grabbing simulation method based on illusion engine and related equipment

Country Status (1)

Country Link
CN (1) CN111784850B (en)

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955295A (en) * 2014-04-17 2014-07-30 北京航空航天大学 Real-time grabbing method of virtual hand based on data glove and physical engine
US20160239080A1 (en) * 2015-02-13 2016-08-18 Leap Motion, Inc. Systems and methods of creating a realistic grab experience in virtual reality/augmented reality environments
CN107430437A (en) * 2015-02-13 2017-12-01 厉动公司 The system and method that real crawl experience is created in virtual reality/augmented reality environment
US10607413B1 (en) * 2015-09-08 2020-03-31 Ultrahaptics IP Two Limited Systems and methods of rerendering image hands to create a realistic grab experience in virtual reality/augmented reality environments
US20170329488A1 (en) * 2016-05-10 2017-11-16 Google Inc. Two-handed object manipulations in virtual reality
CN106582012A (en) * 2016-12-07 2017-04-26 腾讯科技(深圳)有限公司 Method and device for processing climbing operation in VR scene
US20190366539A1 (en) * 2017-02-28 2019-12-05 Siemens Product Lifecycle Management Software Inc. System and method for determining grasping positions for two-handed grasps of industrial objects
CN107515674A (en) * 2017-08-08 2017-12-26 山东科技大学 It is a kind of that implementation method is interacted based on virtual reality more with the mining processes of augmented reality
CN107773978A (en) * 2017-10-26 2018-03-09 广州市雷军游乐设备有限公司 Method, apparatus, terminal device and the storage medium of control crawl prop model
CN109690450A (en) * 2017-11-17 2019-04-26 腾讯科技(深圳)有限公司 Role playing method and terminal device under VR scene
CN107993545A (en) * 2017-12-15 2018-05-04 天津大学 Children's acupuncture training simulation system and emulation mode based on virtual reality technology
CN108227928A (en) * 2018-01-10 2018-06-29 三星电子(中国)研发中心 Pick-up method and device in a kind of virtual reality scenario
CN110431513A (en) * 2018-01-25 2019-11-08 腾讯科技(深圳)有限公司 Media content sending method, device and storage medium
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space
CN108983978A (en) * 2018-07-20 2018-12-11 北京理工大学 virtual hand control method and device
CN109358748A (en) * 2018-09-30 2019-02-19 深圳仓谷创新软件有限公司 A kind of device and method interacted with hand with mobile phone A R dummy object
CN109785420A (en) * 2019-03-19 2019-05-21 厦门市思芯微科技有限公司 A kind of 3D scene based on Unity engine picks up color method and system
CN110598297A (en) * 2019-09-04 2019-12-20 浙江工业大学 Virtual assembly method based on part geometric transformation information
CN111177888A (en) * 2019-12-09 2020-05-19 武汉光庭信息技术股份有限公司 Simulation scene collision detection method and system

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112121410A (en) * 2020-10-22 2020-12-25 深圳市瑞立视多媒体科技有限公司 Method for loading equipment into cabinet based on VR game
CN112121410B (en) * 2020-10-22 2024-04-12 深圳市瑞立视多媒体科技有限公司 VR game-based cabinet-entering method
CN113041621A (en) * 2021-03-02 2021-06-29 深圳市瑞立视多媒体科技有限公司 Method and device for recovering tools in virtual reality game and computer equipment

Also Published As

Publication number Publication date
CN111784850B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN111784850A (en) Object capture simulation method based on illusion engine and related equipment
CN107694093B (en) Method, device, equipment and storage medium for controlling grabbing of prop model in game
CN114972958B (en) Key point detection method, neural network training method, device and equipment
CN112206515B (en) Game object state switching method, device, equipment and storage medium
JP2019121388A (en) Systems and methods for long distance interactions of virtual reality
Hilman et al. Virtual hand: VR hand controller using IMU and flex sensor
CN110929422A (en) Robot cluster simulation method and device
CN113119104B (en) Mechanical arm control method, mechanical arm control device, computing equipment and system
CN107145706B (en) Evaluation method and device for performance parameters of virtual reality VR equipment fusion algorithm
CN113870418B (en) Virtual article grabbing method and device, storage medium and computer equipment
CN116978112A (en) Motion detection and virtual operation method and system thereof, storage medium and terminal equipment
CN115861496A (en) Power scene virtual human body driving method and device based on dynamic capture system
CN112276947B (en) Robot motion simulation method, device, equipment and storage medium
CN116069157A (en) Virtual object display method, device, electronic equipment and readable medium
CN111310347B (en) Method, device and equipment for loosening dry powder of simulated fire extinguisher and storage medium
Gordón et al. Autonomous robot KUKA YouBot navigation based on path planning and traffic signals recognition
Gomes et al. Deep Reinforcement learning applied to a robotic pick-and-place application
Han et al. Multi-sensors based 3D gesture recognition and interaction in virtual block game
CN116824014B (en) Data generation method and device for avatar, electronic equipment and medium
CN109948579B (en) Human body limb language identification method and system
Majeed et al. Trajectory Controller Building for the (KUKA, JACO) simulated manipulator robot arm Using ROS
Li et al. Application and research of kinect motion sensing technology on substation simulation training system
CN111009022B (en) Model animation generation method and device
CN116736976A (en) Virtual-real mapping method and device based on intention understanding, medium and electronic equipment
CN115062364A (en) Simulation demonstration method for aircraft, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant