CN114415827A - Method, system and device for providing entity interaction for virtual object - Google Patents

Method, system and device for providing entity interaction for virtual object

Info

Publication number
CN114415827A
CN114415827A CN202111564482.XA
Authority
CN
China
Prior art keywords
virtual
virtual object
basic body
entity
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111564482.XA
Other languages
Chinese (zh)
Inventor
翁冬冬
江海燕
东野啸诺
王涌天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202111564482.XA priority Critical patent/CN114415827A/en
Publication of CN114415827A publication Critical patent/CN114415827A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method, a system and a device for providing entity interaction for virtual objects. The technology can provide entity feedback for different virtual objects in real time: a mobile device automatically assembles basic bodies into non-prefabricated entities corresponding to the virtual objects, thereby providing entity interaction. A method of providing physical interaction for virtual objects comprises the following steps: acquiring virtual object characteristics according to a virtual object specified by a user; calculating the entity basic body composition information of the virtual object based on the virtual object characteristics; building the entity basic bodies in real time with the control device according to the virtual basic body information constituting the virtual object; and constructing a corresponding virtual scene according to the virtual scene and the real scene of the terminal equipment, realizing entity interaction between the user and the virtual object. The method can create, in real time, a physical entity module corresponding to the current virtual object, providing an interactive entity for the virtual object and rich tactile interaction.

Description

Method, system and device for providing entity interaction for virtual object
Technical Field
The invention relates to the technical field of virtual reality, in particular to a method, a system and a device for providing entity interaction for a virtual object.
Background
The article "Robotic Assembly of Haptic Proxy Objects for Tangible Interaction and Virtual Reality" discloses the use of small robots, assembled in real time, to provide physical interaction for virtual objects in a virtual environment. The small wheeled robots are driven over a 2.4 GHz radio link and optically tracked by a high-speed DLP structured-light projector, which detects their real-time position and orientation; the robots are joined by magnetic attraction. A 2.5D robot shell is introduced to form a solid body of a specific shape, and a ramp device is added to allow two layers of objects to be assembled. However, the small robots can only move and assemble within a planar space, and the ramp device supports at most two layers of objects. Moreover, the article only realizes the robot assembly mode: it can provide entity interaction for an object, but the robots' motion must be defined in advance according to the virtual object.
The article "TanGi: Tangible Proxies for Embodied Object Exploration and Manipulation in Virtual Reality" discloses that a user can provide physical interaction for objects in a virtual environment by manually assembling primitive shapes, including cubes, triangles, hemispheres and rods. Velcro strips can be used to assemble the primitive shapes into larger composite objects, up to basketball size. This technique requires the user to manually assemble the basic shapes according to the virtual object, requires the assembly to be completed before immersion in the virtual environment, and cannot achieve automatic assembly.
In a virtual environment, a virtual object generally has only a visual form and no physical entity. When a user interacts with it, the lack of force feedback reduces the object's interactivity and easily breaks the user's sense of immersion. Physical modules are therefore often used to provide force feedback for virtual objects. One approach provides force feedback through one-to-one physical props; since the number of props is limited while the variety of virtual objects and their physical properties (shape, size, etc.) change easily, this approach does not scale to a large number of diverse virtual objects. Another approach provides force feedback for different virtual objects by combining basic shapes, but at present users are generally required to assemble the preset shapes in advance; the real-time performance is poor, and the approach is difficult to adapt to virtual scenes with different virtual objects.
Disclosure of Invention
In view of this, the present invention provides a method, a system and a device for providing entity interaction for virtual objects, which can provide entity feedback for different virtual objects in real time by using the technology provided by the present invention, and automatically assemble a basic body by using a mobile device to form a non-prefabricated entity corresponding to a virtual object, thereby providing entity interaction.
In order to achieve the above object, a method for providing entity interaction for a virtual object according to a technical solution of the present invention includes the following steps:
step 1) acquiring virtual object characteristics according to a virtual object specified by a user.
step 2) calculating virtual basic body composition information of the virtual object based on the virtual object characteristics.
step 3) determining the entity basic bodies according to the virtual basic body information constituting the virtual object, and building them in real time using the control device.
step 4) constructing a corresponding virtual scene according to the virtual scene and the real scene of the terminal equipment, and realizing entity interaction between the user and the virtual object.
Further, acquiring the virtual object characteristics according to the virtual object specified by the user specifically includes:
the virtual object is from other terminal equipment or the server and is stored in the server, and after the terminal equipment receives the instruction of acquiring the virtual object, the virtual object is loaded into the terminal equipment from the server.
Alternatively, the virtual object is created by the user in real time: RGB images or depth image information of a remote article are acquired through a camera or a scanner, and the construction of the 3D virtual model is completed through a real-time reconstruction method.
The virtual object characteristics are acquired according to the virtual object model information; they include the size, direction and surface texture information of the virtual object, and the virtual object information is stored using voxel, three-dimensional point cloud, Mesh grid, octree representation or TSDF (truncated signed distance function) methods.
Further, calculating the entity basic body composition information of the virtual object based on the virtual object characteristics specifically includes:
calculating the category, size, quantity and pose information of the required basic bodies according to the virtual object characteristics, where the virtual object characteristics include the size, direction and surface texture information of the virtual object. The basic bodies include cubes, cuboids, spheres, hemispheres and cones; the entity corresponding to a virtual object is composed of one or more kinds of basic body; the basic bodies come in multiple sizes, and each entity basic body has the same size as its corresponding virtual basic body.
Further, the category, size, quantity and pose information of the required basic bodies are calculated from the virtual object characteristics using the following reinforcement learning approach:
Firstly, a combination library containing all virtual basic bodies is constructed from the existing basic bodies.
Then, when the virtual environment contains no virtual basic body, the agent executes an "add" action: it adds one virtual basic body from the combination library into the virtual environment and computes the reward or punishment. Thereafter the agent executes actions according to the virtual basic bodies in the current virtual environment; the actions are adding, deleting, and moving each virtual basic body left, right, up, down, forward or backward. The current virtual environment state consists of the number, type, pose and size of the existing virtual basic bodies. Each action yields a new state and simultaneously generates a reinforcement signal, i.e. a reward or a punishment, and the agent selects the next action according to the current environment and the reinforcement signal.
The reinforcement signal is computed as follows: the volume coincidence degree is calculated from the pose and size of the selected virtual basic bodies and the size of the virtual object; if the coincidence degree is higher than before the action was executed, the signal is a reward, otherwise a punishment. If the signal is a reward, the next action is one of add, move left, move right, move up, move down, move forward or move backward; if it is a punishment, the next action is one of delete, move left, move right, move up, move down, move forward or move backward. The "add" action draws a virtual basic body from the combination library; the other actions act on virtual basic bodies already in the environment.
Actions are executed until the volume coincidence between the selected and placed virtual basic bodies and the virtual object is maximized; the type, pose and other information of the selected and placed virtual basic bodies then constitute the virtual basic body information of the virtual object.
Further, the category, size, quantity and pose information of the required basic bodies can instead be calculated from the virtual object characteristics using a convolutional neural network:
Firstly, training samples are constructed. The training network takes voxel or point cloud information of various objects as input and predicts the type and pose of the required virtual basic bodies through a convolutional network; the loss is then computed from the prediction result, back-propagated, and the network parameters are updated. The losses include: the size coincidence degree between the predicted virtual basic bodies and the virtual object; and the physical rationality of the virtual basic body combination (for example, a virtual basic body cannot be suspended in the air; it must rest on the ground or on other virtual basic bodies). Training stops when the losses no longer decrease.
At prediction time, the virtual object characteristics, i.e. voxel or point cloud information, are input, and the convolutional network predicts the type and pose of the virtual basic bodies, which constitute the virtual basic body information of the virtual object.
Further, according to the virtual basic body information constituting the virtual object, the control device builds the entity basic bodies in real time, specifically as follows:
According to the existing virtual scene, the virtual object for which the entity is built is rendered into the virtual scene; the user sees the virtual object through the display device and interacts with the entity corresponding to the virtual object in the real environment, producing force feedback.
During the interaction, if the virtual object moves, the scene is updated in real time and part of the entity is re-assembled.
Interaction with the entity is detected by installing sensors on the entity or by acquiring images of the actual environment.
Another embodiment of the invention provides a system for providing physical interaction for virtual objects, comprising a computing module, a control module, a display module and an interaction module.
The computing module comprises a virtual object feature acquisition module, a virtual basic body extraction module, an entity basic body control information calculation module and a virtual scene control module. The virtual object feature acquisition module acquires the features of the virtual object specified by the user and transmits them to the virtual basic body extraction module. The virtual basic body extraction module calculates the virtual basic body information of the virtual object based on those features and transmits it to the entity basic body control information calculation module. The entity basic body control information calculation module calculates the selection, combination sequence and combination path planning of the entity basic bodies to assemble, according to the virtual basic body information constituting the virtual object, and transmits the result to the control module. The virtual scene control module calculates the information of the rest of the virtual scene and of the virtual objects with physical entities, and transmits it to the display module for rendering and display.
The control module assembles the entity basic body modules according to the selection, combination sequence and combination path planning information for the assembled entity basic bodies.
The display module is used for displaying virtual scene information, including the original virtual scene information and the virtual objects with entities.
The interaction module is used for acquiring the interaction information of the user and transmitting the interaction information to the computing module.
Another embodiment of the present invention provides a system apparatus for providing physical interaction for virtual objects, the system apparatus comprising a processor, a control apparatus, a display apparatus, an interaction apparatus, and a memory; the parts in the system device are connected through wires or wirelessly.
The processor is connected with the memory; the memory stores processor-readable instructions executable by the processor to perform the functions of the system described above.
The memory is a non-volatile computer readable medium having a computer program stored thereon; the memory is a local memory or a cloud memory, and the processor is a local processor or a cloud processor.
The display device is used for displaying the virtual environment and the virtual objects therein and is connected with the processor; the processor and memory may be located in the display device or separately.
The control device is used for controlling the combination of the entities; it can be an external fixed device or a mobile device.
The interaction device comprises at least one device for detecting the user's interaction intention; it is located on the display device or in the actual environment, and includes but is not limited to an eye movement detection device, a head movement detection device, a gesture detection device, a pressure sensor and a temperature sensor.
Beneficial effects:
1. The invention provides a method, a system and a device for providing interactive entities for virtual objects, which can use combinations of a limited set of basic body types to form the corresponding entities of different virtual objects, provide force feedback for virtual objects in a virtual environment, and improve the user's immersion, entertainment and interaction efficiency in the virtual environment. Entity construction cost is reduced: the entity basic bodies can be reused for different virtual objects, avoiding the problem of 3D-printing-like technologies, where a printed object can provide force feedback for only a single virtual object. Entity construction efficiency is improved: the processor automatically extracts the basic entity construction information from the virtual object information, and the construction process is completed automatically by the control device, without manual construction for each different virtual object. The method can create, in real time, a physical entity module corresponding to the current virtual object, providing an interactive entity for the virtual object and rich tactile interaction.
2. The invention can abstract the virtual object into a form which can be formed by a basic module through the neural network according to the virtual object information in real time, and the mobile device controls the assembly of the entity basic module according to the form information, thereby providing entity feedback for the virtual object.
3. The invention uses different combinations of a limited set of basic body types to provide force feedback for different virtual objects. The basic body construction information is computed automatically by an algorithm rather than by the traditional manual construction mode, improving combination efficiency; and the combination of basic bodies is completed automatically by control devices such as robotic arms and robots, reducing manual combination cost and further improving combination efficiency.
Drawings
FIG. 1 is a schematic diagram of an example of a basic body in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for providing physical interaction for virtual objects according to the present invention;
FIG. 3 is a flow chart of a system for providing physical interaction for virtual objects according to the present invention;
FIG. 4 is an exemplary schematic diagram of a single basic body constitution in the embodiment of the present invention; wherein: fig. 4(a) is a schematic diagram of the robot arm completing the assembly of the single entity corresponding to a virtual object; fig. 4(b) is a schematic diagram of a virtual object with single-entity feedback;
FIG. 5 is an exemplary illustration of various basic body configurations in an embodiment of the present invention; wherein: fig. 5(a) is a schematic diagram of the robot arm completing the assembly of the entities corresponding to multiple virtual objects; fig. 5(b) is a schematic diagram of virtual objects with various physical feedbacks.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention can abstract the virtual object into a form which can be formed by a basic module through the neural network according to the virtual object information in real time, and the mobile device controls the assembly of the entity basic module according to the form information, thereby providing entity feedback for the virtual object.
The invention provides a method, a system and a device for providing entity interaction for virtual objects in a virtual environment.
The specific method comprises the following steps: 1) acquiring virtual object characteristics according to a virtual object specified by a user; 2) calculating the entity basic body composition information of the virtual object based on the virtual object characteristics; 3) building the entity basic bodies in real time with the control device according to the virtual basic body information constituting the virtual object; 4) constructing a corresponding virtual scene according to the virtual scene and the real scene of the terminal equipment, and realizing entity interaction between the user and the virtual object.
1) Acquiring the characteristics of the virtual object according to the specified virtual object.
The virtual object can come from other terminal equipment or a server and be stored in the server; after receiving the instruction to acquire the virtual object, the terminal equipment loads it from the server. The virtual object can also be created by the user in real time, for example by acquiring RGB images or depth image information of a remote article with a camera or scanner and completing the construction of the 3D virtual model through a real-time reconstruction method.
The characteristic information of the virtual object is acquired from the virtual object model information and used in the next step to generate the basic bodies corresponding to the virtual object. The characteristic information may include the size, direction and surface texture of the virtual object, and the virtual object information may be stored using voxel, three-dimensional point cloud, Mesh grid, octree representation, TSDF (truncated signed distance function) or similar methods.
Virtual objects may have very different sizes and shapes, for example a shooting-game gun and a desk.
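As a minimal sketch of one of the storage options named above (a voxel occupancy grid; point clouds, meshes, octrees and TSDFs are the alternatives), the following illustrative code voxelizes an axis-aligned box. The function name and resolution parameter are assumptions for illustration, not taken from the patent.

```python
def voxelize_box(size, resolution=1.0):
    """Return the set of occupied voxel cells for an axis-aligned box
    of the given (x, y, z) size, with one box corner at the origin."""
    nx, ny, nz = (max(1, int(round(s / resolution))) for s in size)
    return {(i, j, k) for i in range(nx) for j in range(ny) for k in range(nz)}

# A 2x1x1 "virtual object" at unit resolution occupies two cells.
desk = voxelize_box((2.0, 1.0, 1.0))
```

A real reconstruction pipeline would voxelize an arbitrary mesh or point cloud rather than a box, but the resulting occupancy set is the same kind of structure the later steps consume.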
2) Virtual basic body information of the virtual object is calculated based on the virtual object characteristics.
The category, size, quantity, pose and other information of the required basic bodies are calculated from the size, direction, texture and other information of the virtual object. The basic bodies include cubes, cuboids, spheres, hemispheres, cones and the like, and an entity may contain one or more kinds of basic body in actual use, as shown in fig. 1. The basic bodies come in multiple sizes and are divided into entity basic bodies and virtual basic bodies; an entity basic body and its corresponding virtual basic body have the same size.
In addition, further modules can be added to a basic body module; for example, wheels added underneath allow the module to move in the real environment, and a vibration module can provide active force feedback.
The calculation can use a traditional method, or a neural-network-based reinforcement learning or deep learning method. The entity basic bodies are actual entity modules in the real scene, so the number, sizes and other properties of the existing entity modules must be input into the calculation as constraints, ensuring that every required virtual basic body corresponds to an existing entity basic body. In addition, the physical properties of the entity combination must be considered and constrained; for example, a basic body cannot float in the air. The computed virtual basic body composition information includes the types and numbers of the basic bodies to be used and the six-degree-of-freedom pose of each basic body.
One way is for the user to directly specify the type and number of physical modules to be assembled and the position and orientation of each basic body.
In one conventional calculation method, a certain number of small basic virtual cubes are intersected with the virtual object to obtain the intersecting virtual cubes, which are then merged to compute the different basic virtual bodies that can be formed.
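The intersect-then-merge step above can be sketched as follows. This illustrative code greedily merges adjacent intersecting unit cubes along the x axis only; a real implementation would merge in all three axes and handle multiple primitive types. All names are assumptions for illustration.

```python
def merge_into_cuboids(occupied):
    """occupied: set of (x, y, z) unit cells that intersect the object.
    Returns a list of (origin, run_length_along_x) pairs, each
    representing one merged cuboid basic body."""
    cuboids = []
    remaining = set(occupied)
    while remaining:
        x, y, z = min(remaining)          # deterministic starting cell
        length = 0
        while (x + length, y, z) in remaining:
            remaining.discard((x + length, y, z))
            length += 1
        cuboids.append(((x, y, z), length))
    return cuboids

# A 3x1x1 row of intersecting cubes merges into a single cuboid of length 3.
runs = merge_into_cuboids({(0, 0, 0), (1, 0, 0), (2, 0, 0)})
```

Two cells in different rows stay separate, which matches the idea that only mergeable neighbours form a larger basic body.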
Another way is based on reinforcement learning, a decision framework in which an agent interacts with the environment by performing actions: each action updates the environment, yields a new state, and earns a reward or penalty. Here, the reward and penalty are computed from the characteristics of the virtual object and of the combined basic bodies.
In the invention, the method is specifically as follows. A combination library containing all virtual basic bodies is first constructed from the existing basic bodies. At the beginning, the virtual environment contains no virtual basic body; the agent executes an "add" action, adding one virtual basic body from the combination library into the virtual environment, and the reward or punishment is computed. Thereafter the agent executes actions according to the virtual basic bodies in the current environment; the actions are adding, deleting, and moving each virtual basic body left, right, up, down, forward or backward. The current virtual environment state consists of the number, type, pose and size of the existing virtual basic bodies. Each action yields a new state and simultaneously generates a reinforcement signal (a reward or punishment), and the agent selects its next action according to the current virtual environment and the reinforcement signal.
The reward and punishment are computed as follows: the volume coincidence degree is calculated from the pose and size of the selected virtual basic bodies and the size of the virtual object; if the coincidence degree is higher than before the action was executed, the signal is a reward, otherwise a punishment. If the signal is a reward, the next action is one of add, move left, move right, move up, move down, move forward or move backward; if it is a punishment, the next action is one of delete, move left, move right, move up, move down, move forward or move backward. The "add" action draws a virtual basic body from the combination library; the other actions act on virtual basic bodies already in the environment.
Actions are executed until the volume coincidence between the selected and placed virtual basic bodies and the virtual object is maximized; the type, pose and other information of the selected and placed virtual basic bodies then constitute the virtual basic body information of the virtual object.
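The reward signal described above (volume coincidence rises: reward; falls: punishment) can be sketched with a toy voxel model. This is only an illustration of the reward computation driving the search: a greedy loop keeps any "add" action that increases the overlap, stopping when no action is rewarded. A real agent would learn a policy over the full action set; all names here are assumptions.

```python
def overlap(placed, target):
    """Volume coincidence degree: fraction of target voxels covered."""
    return len(placed & target) / len(target)

def greedy_place(target, max_steps=100):
    """Repeatedly try candidate 'add' actions; keep the one with the
    largest positive overlap gain (the rewarded action)."""
    placed = set()
    for _ in range(max_steps):
        best, best_gain = None, 0.0
        for cell in target - placed:          # candidate "add" actions
            gain = overlap(placed | {cell}, target) - overlap(placed, target)
            if gain > best_gain:
                best, best_gain = cell, gain
        if best is None:                      # no action earns a reward: stop
            break
        placed.add(best)                      # rewarded action is kept
    return placed
```

For a two-voxel target the loop places both cells and terminates when the coincidence is maximal, mirroring the stopping rule in the text.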
One deep learning method is based on a convolutional neural network.
Firstly, training samples are constructed. The training network takes voxel or point cloud information of various objects as input and predicts the type and pose of the required virtual basic bodies through a convolutional network; the loss is then computed from the prediction result, back-propagated, and the network parameters are updated. The losses include: the size coincidence degree between the predicted virtual basic bodies and the virtual object; and the physical rationality of the virtual basic body combination (for example, a virtual basic body cannot be suspended in the air; it must rest on the ground or on other virtual basic bodies). Training stops when the losses no longer decrease.
At prediction time, the voxel or point cloud information of an object is input, and the convolutional network predicts the type and pose of the virtual basic bodies, which constitute the virtual basic body information of the virtual object.
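The two loss terms named above can be sketched on a toy voxel model: (1) a size-coincidence term (here 1 minus the intersection-over-union of predicted and target voxels) and (2) a physical-rationality penalty counting predicted cells that would hang in mid-air, i.e. are above ground level with nothing directly underneath. The weighting and all names are illustrative assumptions; the patent does not specify the exact formulas.

```python
def overlap_loss(predicted, target):
    """1 - IoU of two voxel sets; 0 means a perfect size coincidence."""
    union = predicted | target
    return 1.0 - len(predicted & target) / len(union) if union else 0.0

def suspension_penalty(predicted):
    """Count cells above the ground (z > 0) with nothing directly below,
    i.e. bodies that would be physically suspended."""
    return sum(
        1 for (x, y, z) in predicted
        if z > 0 and (x, y, z - 1) not in predicted
    )

def total_loss(predicted, target, w=0.1):
    """Combined training loss: coincidence term plus weighted rationality term."""
    return overlap_loss(predicted, target) + w * suspension_penalty(predicted)
```

In an actual network these terms would be differentiable surrogates computed on the predicted primitive parameters rather than hard voxel sets.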
3) The entity basic bodies are determined according to the virtual basic body information constituting the virtual object, and the control device builds the combined entity from them in real time; the control information for the entity basic bodies is derived from the virtual basic body information.
The control device determines the entity basic bodies to assemble according to the types, numbers and six-degree-of-freedom poses of the virtual basic bodies generated in the previous step, plans the combination sequence and combination path, and controls the movement and combination of the entity basic bodies.
Each step (determining the entity basic bodies to assemble, the combination sequence, and the combination path planning) can be specified by the user or computed automatically. In the user-specified mode, the user directly designates the basic bodies to assemble, the construction order of each basic body, and the construction path. In the automatic mode, the basic bodies can be determined by acquiring RGB or RGBD images (or other information) of all entity basic bodies, computing the size and other properties of each, and selecting the bodies to use from the information generated in the previous step. The combination sequence in the automatic mode can be computed from that information and certain rules, for example placing the lowest-layer object first and, within the lowest layer, the largest object first. The path planning in the automatic mode can be computed from the current pose of each entity basic body and its target pose in the combination.
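The ordering rule quoted above (lowest layer first, then largest first) reduces to a two-key sort. In this illustrative sketch each body is a hypothetical (name, base_height, volume) record; the record layout is an assumption, not the patent's data format.

```python
def assembly_order(bodies):
    """bodies: list of (name, base_height, volume) tuples.
    Sort lowest layer first; within a layer, largest volume first."""
    return sorted(bodies, key=lambda b: (b[1], -b[2]))

plan = assembly_order([
    ("small_cube", 0, 1),
    ("big_cuboid", 0, 8),
    ("sphere", 1, 4),
])
# big_cuboid (layer 0, largest) comes first, then small_cube, then the
# sphere that rests on top of the first layer.
```

A full planner would additionally check reachability of each placement along the planned path, which this sketch omits.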
The entity basic bodies must be able to connect to one another effectively; for example, magnetic basic bodies can form stable connections.
4) The corresponding virtual scene is constructed according to the existing virtual scene and the real scene of the terminal equipment, realizing entity interaction between the user and the virtual object; when the virtual object moves, the basic bodies can be rebuilt to follow it.
According to the existing virtual scene, the virtual object for which the entity is built is rendered into the virtual scene; the user sees the virtual object through the display device and interacts with the entity corresponding to the virtual object in the real environment, producing force feedback. During the interaction, if the virtual object moves, the scene is updated in real time and part of the entity is re-assembled.
Further, user interaction may result in a change of the virtual object or entity. Such as changing the color of the virtual object when the user interacts with a specific object location, or causing the virtual object to move, when causing the virtual object to move, re-assembly of the entity needs to be performed in real time. The user interaction may be detected by virtual interaction information or entity interaction information. The virtual interaction information comprises information of virtual hands, head rays, eye movements and the like of the user. The physical interaction may be detected by a user's interaction by installing a sensor (e.g., a pressure sensor, a temperature sensor, etc.) on the physical body or by acquiring an image of the actual environment, etc.
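The interaction handling described above can be sketched as a simple event dispatch; the event types, field names and handler are hypothetical illustrations:

```python
def handle_interaction(event, scene):
    """Route a detected interaction event to a virtual-scene update.

    event: dict with a 'type' (e.g. 'touch' from a pressure sensor,
    'move' from hand tracking) and a 'target' virtual-object id.
    scene: dict mapping object id -> object state.
    """
    obj = scene[event["target"]]
    if event["type"] == "touch":
        obj["color"] = "highlight"          # e.g. recolor on touch
    elif event["type"] == "move":
        obj["pose"] = event["new_pose"]     # update the virtual pose...
        obj["needs_reassembly"] = True      # ...and flag the entity for rebuild
    return obj

scene = {"cup": {"color": "white", "pose": (0, 0, 0), "needs_reassembly": False}}
handle_interaction({"type": "move", "target": "cup", "new_pose": (1, 0, 0)}, scene)
print(scene["cup"]["needs_reassembly"])  # True
```

The key point mirrored from the text is that a move event both updates the virtual pose and triggers real-time reassembly of the entity, while other interactions only change the virtual appearance.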
The embodiment of the invention also provides a system for providing entity interaction for the virtual object in the virtual environment, which comprises: a calculation module, a control module, a display module and an interaction module, as shown in fig. 2.
The computing module comprises a virtual object feature acquisition module, a virtual basic body extraction module, an entity basic body control information computation module and a virtual scene control module. The virtual object feature acquisition module acquires the features of the virtual object specified by the user and transmits them to the virtual basic body extraction module. The virtual basic body extraction module computes the virtual basic body information of the virtual object based on the virtual object features and transmits it to the entity basic body control information computation module. The entity basic body control information computation module computes, from the virtual basic body information composing the virtual object, the determination of the entity basic bodies to assemble, their assembly order and the assembly path plan, and transmits the result to the control module. The virtual scene control module computes the information of the rest of the virtual scene and of the virtual objects having physical entities, and transmits it to the display module for rendering and display.
The control module assembles the entity basic bodies according to the determination, assembly order and assembly path planning information of the entity basic bodies to be assembled.
The display module displays the virtual scene information, including the original virtual scene information and the virtual objects that have entities.
The interaction module is used for acquiring the interaction information of the user and transmitting the interaction information to the computing module.
The system workflow is shown in FIG. 3. First, a virtual object requiring entity interaction is selected from the virtual environment. Its features are then extracted, and from the extracted information the required virtual basic body information is computed. Next, the category, number, assembly order and assembly path of the corresponding entity basic bodies are computed, and the entity is assembled according to the result; the assembled entity corresponds to the selected virtual object in the virtual environment, and the virtual object and the virtual environment are displayed to the user through the display device. The user can interact with the selected virtual object in the virtual environment, or with the assembled entity corresponding to it, and the interaction result acts on the virtual environment, including the selected virtual object.
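The workflow above can be sketched as a pipeline of stages; every function and the toy decomposition rule are illustrative placeholders, not the claimed method:

```python
def extract_features(virtual_object):
    # Stage 2: extract virtual object features (here just a bounding size)
    return {"size": virtual_object["size"]}

def plan_basic_bodies(features):
    # Stage 3: decompose the object into unit cubes (toy rule: one cube
    # per unit of size; a real system would solve a fitting problem)
    n = int(features["size"])
    return [{"type": "cube", "index": i} for i in range(n)]

def assemble(bodies):
    # Stage 4: the control device builds the bodies in order
    return {"assembled": len(bodies)}

def pipeline(virtual_object):
    # Select object -> features -> basic bodies -> physical assembly
    return assemble(plan_basic_bodies(extract_features(virtual_object)))

print(pipeline({"size": 3}))  # {'assembled': 3}
```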
The system apparatus comprises a processor, a control device, a display device, an interaction device and a memory, connected to one another by wire or wirelessly.
The processor is connected with the memory. The memory stores processor-readable instructions executable by the processor to perform the functions of the system described above. The memory is a non-volatile computer-readable medium having a computer program stored thereon. The memory may be a local or cloud memory, and the processor may be a local or cloud processor.
The display device is used for displaying the virtual environment and the virtual objects therein and is connected with the processor. The processor and memory may be located on the display device or may be separate.
The control device controls the assembly of the entities; external fixed or mobile devices, such as mobile robots and robotic arms, can be used.
The interaction device comprises one or more detection devices for detecting the user's interaction intention. These may be located on the display device or in the actual environment, and include but are not limited to eye movement detection devices, head movement detection devices, gesture detection devices, pressure sensors and temperature sensors.
Fig. 4 shows an example using a single type of basic body, in this case a cube. In Fig. 4(a), a robotic arm serves as the control device and assembles the available entity basic bodies into the entity corresponding to the virtual object, according to the information computed from the virtual object. As shown in Fig. 4(b), the user can interact with the entity in the real environment, and the entity provides force feedback for the corresponding virtual object.
Fig. 5 shows an example using multiple types of basic bodies, in this case cubes, cuboids and hemispheres.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for providing physical interaction for a virtual object, comprising the steps of:
step 1) acquiring virtual object characteristics according to a virtual object specified by a user;
step 2) calculating virtual basic body composition information of the virtual object based on the characteristics of the virtual object;
step 3) determining an entity basic body according to the virtual basic body information forming the virtual object, and building the entity basic body in real time by using a control device;
step 4) constructing a corresponding virtual scene according to the virtual scene and the real scene of the terminal equipment, and realizing the entity interaction between the user and the virtual object.
2. The method according to claim 1, wherein the obtaining of the virtual object feature according to the virtual object specified by the user includes:
the virtual object is from other terminal equipment or a server and is stored in the server, and after receiving an instruction for acquiring the virtual object, the terminal equipment loads the virtual object from the server into the terminal equipment;
the virtual object is created by a user in real time, RGB images or victory reading image information of a far-end article is obtained through a camera or a scanner, and the construction of a 3D virtual model is completed through a real-time reconstruction method;
and acquiring virtual object characteristics according to the virtual object model information, wherein the virtual object characteristics comprise the size, the direction and the surface texture information of the virtual object, and storing the virtual object information by using a voxel, three-dimensional point cloud, a Mesh grid, octree representation and a TSDF (time dependent dynamic distribution) method.
3. The method according to claim 1 or 2, wherein the calculating of the virtual basic body composition information of the virtual object based on the virtual object features comprises:
calculating the category, size, quantity and pose information of the required basic bodies according to the virtual object features, wherein the virtual object features comprise the size, orientation and surface texture information of the virtual object; the basic bodies include cubes, cuboids, spheres, hemispheres and cones; the entity corresponding to a virtual object is composed of one or more kinds of basic bodies; the basic bodies come in multiple sizes, and each entity basic body has the same size as the corresponding basic body of the virtual object.
4. The method according to claim 3, wherein the category, size, quantity and pose information of the required basic bodies are calculated from the virtual object features using reinforcement learning, as follows:
firstly, a combination library containing all virtual basic bodies is constructed from the existing basic bodies;
then, when the virtual environment contains no virtual basic body, the agent executes an add action: it takes a virtual basic body of any kind from the combination library, adds it to the virtual environment, and computes the reward or punishment; thereafter, the agent executes actions according to the virtual basic bodies in the current virtual environment, the actions being adding, deleting, and moving each virtual basic body left, right, up, down, forward or backward, and the current environment state being the number, type, pose and size of the existing virtual basic bodies; each action yields a new state and simultaneously generates a reinforcement signal, which is a reward or a punishment, and the agent selects the next action according to the current environment and the reinforcement signal;
the reinforcement signal, i.e., the reward or punishment, is computed as follows: the volume overlap is calculated from the pose and size of the selected virtual basic bodies and the size of the virtual object; if the overlap is higher than before the action was executed, the signal is recorded as a reward, otherwise as a punishment; after a reward, the next action is one of add, move left, move right, move up, move down, move forward and move backward; after a punishment, the next action is one of delete, move left, move right, move up, move down, move forward and move backward; the add action takes a virtual basic body from the combination library, while the other actions act on virtual basic bodies already in the environment;
the actions are executed until the volume overlap between the selected and placed virtual basic bodies and the virtual object is maximal, and the type, pose and other information of the selected and placed virtual basic bodies constitute the virtual basic body information of the virtual object.
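A toy sketch of the volume-overlap reinforcement signal on a voxel grid; a greedy placement loop stands in for the agent, and the grid, shapes and reward proxy are illustrative assumptions only:

```python
import numpy as np

def overlap(occupancy, target):
    """Reinforcement signal proxy: voxels covered by both the placed
    bodies and the target object, minus voxels placed outside it."""
    return int(np.sum(occupancy & target)) - int(np.sum(occupancy & ~target))

# Target object: a 2x2x1 slab inside a 4x4x4 voxel grid.
target = np.zeros((4, 4, 4), dtype=bool)
target[0:2, 0:2, 0] = True

occupancy = np.zeros_like(target)
score = overlap(occupancy, target)
# Greedy "agent": try adding a 1x1x1 cube at every cell; keep it only
# if the signal improves (the "reward" branch), else undo the action
# (the "punishment" branch).
for x, y, z in np.ndindex(target.shape):
    occupancy[x, y, z] = True
    new_score = overlap(occupancy, target)
    if new_score > score:
        score = new_score           # reward: keep the added body
    else:
        occupancy[x, y, z] = False  # punishment: remove it again

print(score, np.array_equal(occupancy, target))  # 4 True
```

The loop terminates with the placed cubes exactly tiling the target, i.e., maximal overlap, which is the stopping condition named in the claim.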
5. The method according to claim 3, wherein the category, size, quantity and pose information of the required basic bodies are calculated from the virtual object features using a convolutional neural network, as follows:
firstly, training samples are constructed: voxel or point cloud information of various objects is input to the network, the type and pose of the required virtual basic bodies are predicted by the convolutional network, the loss is computed from the prediction and back-propagated, and the network parameters are updated; the loss comprises the size overlap between the predicted virtual basic bodies and the virtual object, and the physical plausibility of the virtual basic body combination (e.g., a virtual basic body cannot be suspended in mid-air and must rest on the ground or on other virtual basic bodies); the network is trained until the loss no longer decreases;
at inference time, the virtual object features, i.e., the voxel or point cloud information, are input, and the type and pose of the virtual basic bodies are predicted by the convolutional network, yielding the virtual basic body information of the virtual object.
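The two loss terms of this claim, size overlap and physical plausibility (no suspended basic body), can be sketched on a voxel grid; the encoding is an illustrative assumption:

```python
import numpy as np

def overlap_loss(pred, target):
    """1 - IoU between predicted basic-body voxels and object voxels."""
    inter = np.sum(pred & target)
    union = np.sum(pred | target)
    return 1.0 - inter / union

def floating_penalty(pred):
    """Count occupied voxels with nothing directly below them (neither
    the ground at z=0 nor another body): the plausibility term."""
    penalty = 0
    for x, y, z in np.argwhere(pred):
        if z > 0 and not pred[x, y, z - 1]:
            penalty += 1
    return penalty

target = np.zeros((3, 3, 3), dtype=bool)
target[1, 1, 0] = True                   # object sits on the ground

good = target.copy()                     # grounded, exact prediction
bad = np.zeros_like(target)
bad[1, 1, 2] = True                      # floating prediction

print(overlap_loss(good, target), floating_penalty(good))  # 0.0 0
print(floating_penalty(bad))                               # 1
```

In training, a weighted sum of these two terms would be minimized; the weighting and the voxel encoding are not specified by the claim.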
6. The method according to claim 3, 4 or 5, wherein the entity base body is determined according to the virtual base body information constituting the virtual object, and the entity base body is built in real time by using the control device, specifically:
rendering a virtual object for constructing an entity into a virtual scene according to the existing virtual scene, and enabling a user to see the virtual object through display equipment and interact with the entity corresponding to the virtual object in a real environment to form force feedback;
in the interaction process, if the virtual object moves, updating in real time, and re-assembling part of the entity;
the interaction with the entity is detected by installing a sensor on the entity or by acquiring an image of the actual environment.
7. A system for providing physical interaction for virtual objects is characterized by comprising a computing module, a control module, a display module and an interaction module;
the computing module comprises a virtual object feature acquisition module, a virtual basic body extraction module, an entity basic body control information computation module and a virtual scene control module; the virtual object feature acquisition module acquires the features of the virtual object specified by the user and transmits them to the virtual basic body extraction module; the virtual basic body extraction module computes the virtual basic body information of the virtual object based on the virtual object features and transmits it to the entity basic body control information computation module; the entity basic body control information computation module computes, from the virtual basic body information composing the virtual object, the determination of the entity basic bodies to assemble, their assembly order and the assembly path plan, and transmits the result to the control module; the virtual scene control module computes the information of the rest of the virtual scene and of the virtual objects having physical entities, and transmits it to the display module for rendering and display;
the control module assembles the entity basic bodies according to the determination, assembly order and assembly path planning information of the entity basic bodies to be assembled;
the display module is used for displaying the virtual scene information, including the original virtual scene information and the virtual objects having entities;
the interaction module is used for acquiring the interaction information of the user and transmitting the interaction information to the computing module.
8. A system apparatus for providing physical interaction for a virtual object, the system apparatus comprising a processor, a control apparatus, a display apparatus, an interaction apparatus, and a memory; all parts in the system device are connected through wires or wirelessly;
the processor is connected with the memory; the memory stores processor-readable instructions executable by the processor to perform the functions of the system described above;
the memory is a non-volatile computer readable medium having a computer program stored thereon; the memory is a local memory or a cloud memory, and the processor is a local processor or a cloud processor;
the display device is used for displaying the virtual environment and the virtual objects therein and is connected with the processor; the processor and the memory are located in the display device or separately;
the control device is used for controlling the combination of the entities and can use an external fixed device and a mobile device;
the interaction device comprises one or more detection devices for detecting the interaction intention of the user; the interaction device is located on the display device or in the actual environment, and includes but is not limited to an eye movement detection device, a head movement detection device, a gesture detection device, a pressure sensor and a temperature sensor.
CN202111564482.XA 2021-12-20 2021-12-20 Method, system and device for providing entity interaction for virtual object Pending CN114415827A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111564482.XA CN114415827A (en) 2021-12-20 2021-12-20 Method, system and device for providing entity interaction for virtual object


Publications (1)

Publication Number Publication Date
CN114415827A true CN114415827A (en) 2022-04-29

Family

ID=81266553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111564482.XA Pending CN114415827A (en) 2021-12-20 2021-12-20 Method, system and device for providing entity interaction for virtual object

Country Status (1)

Country Link
CN (1) CN114415827A (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150057424A (en) * 2013-11-19 2015-05-28 한국전자통신연구원 A system and method for interaction with augmented reality avatar
CN106648038A (en) * 2015-10-30 2017-05-10 北京锤子数码科技有限公司 Method and apparatus for displaying interactive object in virtual reality
CN107643820A (en) * 2016-07-20 2018-01-30 郎焘 The passive humanoid robots of VR and its implementation method
CN107728778A (en) * 2017-09-14 2018-02-23 北京航空航天大学 A kind of active force/haptic feedback system and its method of work based on servo control mechanism
CN108874123A (en) * 2018-05-07 2018-11-23 北京理工大学 A kind of general modular virtual reality is by active haptic feedback system
CN110478892A (en) * 2018-05-14 2019-11-22 彼乐智慧科技(北京)有限公司 A kind of method and system of three-dimension interaction
CN111612917A (en) * 2020-04-02 2020-09-01 清华大学 Augmented reality interaction method based on real scene feedback and touchable prop
CN112181135A (en) * 2020-08-31 2021-01-05 南京信息工程大学 6-DOF visual touch interaction method based on augmented reality
CN113181631A (en) * 2021-04-06 2021-07-30 北京电影学院 Active and passive force feedback-based virtual pet system and interaction control method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination