CN116048273A - Virtual object simulation method and related equipment

Virtual object simulation method and related equipment

Info

Publication number
CN116048273A
CN116048273A
Authority
CN
China
Prior art keywords
scene
virtual object
information
determining
target position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310084842.9A
Other languages
Chinese (zh)
Inventor
郭嘉 (Guo Jia)
方迟 (Fang Chi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310084842.9A priority Critical patent/CN116048273A/en
Publication of CN116048273A publication Critical patent/CN116048273A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure provides a virtual object simulation method and related equipment. Specifically, the simulation method includes: acquiring scene information of a first scene, wherein the first scene includes at least one object; determining feature information of each object according to the scene information; and, in response to a user determining a virtual object and a target position of the virtual object in a second scene, determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position, wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene. In this way, the virtual object can adapt to the objects in the first scene, which optimizes XR human-machine interaction and gives the user a better XR experience.

Description

Virtual object simulation method and related equipment
Technical Field
The disclosure relates to the technical field of extended reality, and in particular to a virtual object simulation method and related equipment.
Background
Extended Reality (XR) refers to a virtual environment, created by a computer combining the real and the virtual, in which human-machine interaction is possible. It is also an umbrella term for technologies such as Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR).
XR allows users to interact with a virtual digital world, shifting interaction from 2D to more efficient 3D; in the future, digital technology will also enable real-time interaction with complex environments. However, the simulation effect of virtual objects generated by XR remains unsatisfactory, which breaks immersion in the XR environment and degrades the user experience.
Disclosure of Invention
In view of the above, the present disclosure is directed to a virtual object simulation method and related equipment.
Based on the above object, in a first aspect, an embodiment of the present disclosure provides a virtual object simulation method, including:
acquiring scene information of a first scene; wherein the first scene includes at least one object;
determining feature information of each object according to the scene information;
in response to a user determining a virtual object and a target position of the virtual object in a second scene, determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
In a second aspect, an embodiment of the present disclosure provides a virtual object simulation apparatus, including:
an acquisition module configured to acquire scene information of a first scene; wherein the first scene includes at least one object;
a feature extraction module configured to determine feature information of each object according to the scene information;
a simulation matching module configured to, in response to a user determining a virtual object and a target position of the virtual object in a second scene, determine a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
In a third aspect, embodiments of the present disclosure provide an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the simulation method as in the first aspect when executing the program.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the simulation method of the first aspect.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the simulation method according to the first aspect.
As can be seen from the foregoing, the virtual object simulation method, apparatus, electronic device, storage medium and program product provided by the present disclosure determine the number of objects in a first scene and the feature information of each object by acquiring scene information of the first scene; in response to a user determining a virtual object and a target position of the virtual object in a second scene, a matching simulation effect is determined according to the virtual object and the feature information of the object corresponding to the target position, where the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene. In this way, the virtual object can adapt to the objects in the first scene, which optimizes XR human-machine interaction and gives the user a better XR experience.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure or related art, the drawings required for the embodiments or related art description will be briefly described below, and it is apparent that the drawings in the following description are only embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort to those of ordinary skill in the art.
Fig. 1 is a flow chart of a virtual object simulation method according to an embodiment of the disclosure;
Fig. 2 is a schematic diagram of a display scene of a virtual object according to an embodiment of the disclosure;
Fig. 3 is a schematic diagram illustrating a display effect of some of the virtual objects of the display scene in Fig. 2;
Fig. 4 is a schematic diagram of a display scene of yet another virtual object according to an embodiment of the disclosure;
Fig. 5 is a schematic diagram illustrating a display effect of a virtual object of the display scene in Fig. 4;
Fig. 6 is a schematic diagram of a display effect of another virtual object of the display scene in Fig. 4;
Fig. 7 is a flow chart of determining a display effect of a virtual object according to an embodiment of the disclosure;
Fig. 8 is a schematic structural diagram of a virtual object simulation apparatus according to an embodiment of the present disclosure;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of promoting an understanding of the principles and advantages of the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same.
It should be noted that, unless otherwise defined, technical or scientific terms used in the embodiments of the present disclosure shall have the ordinary meaning understood by those of ordinary skill in the art to which the present disclosure pertains. The terms "first," "second," and the like, as used in embodiments of the present disclosure, do not denote any order, quantity, or importance, but are used to distinguish one element from another. The word "comprising," "comprises," or the like means that the element or step preceding the word encompasses the elements or steps listed after the word and their equivalents, without excluding others. "Upper," "lower," "left," "right," and the like are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes.
As described in the Background section, the simulation effect of virtual objects generated by XR is still unsatisfactory. For example, the proportions of a virtual object and a real object shown in the extended reality device may be unbalanced, and the user is sometimes required to adjust the virtual object so that it sits in a correct position relative to the real object.
In view of this, the embodiments of the present disclosure provide a virtual object simulation method that lets a virtual object adapt to the objects in a first scene, thereby optimizing XR human-machine interaction and giving the user a better XR experience.
Fig. 1 shows a flow chart of a virtual object simulation method according to an embodiment of the disclosure. Specifically, the simulation method includes:
Step S101: acquiring scene information of a first scene; wherein the first scene includes at least one object.
Here, the first scene is the real environment in which the user uses the XR device, such as a bedroom, a living room, or a beach.
As will be appreciated by those skilled in the art, a first scene may include a plurality of objects, and different first scenes contain different objects. Illustratively, referring to Fig. 2, the first scene image contained in the display scene 200 is a bedroom, and the objects include, but are not limited to, a bed, curtains, a skylight, a cupboard, and a bed sheet.
In some embodiments, the scene information is point cloud data.
Note that point cloud data (a point cloud) is a massive set of points characterizing the surface of a target, typically obtained by laser scanning or photogrammetry.
In the embodiments of the disclosure, the point cloud data may be a laser point cloud, a photogrammetric point cloud, or a combination of the two, which is not specifically limited. A laser point cloud typically includes three-dimensional coordinates (XYZ) and laser reflection intensity; a photogrammetric point cloud typically includes three-dimensional coordinates (XYZ) and color information.
Here, the point cloud data may be acquired by a sensor. Optionally, the sensor is selected from one or more of a lidar, a 3D image sensor, a depth sensor, and a Time-of-Flight (ToF) sensor. The above list is merely exemplary and does not limit the sensors; those skilled in the art may select other suitable sensors, which are not limited here.
Alternatively, the sensor may be disposed on an XR device, such as a head mounted display device, or may be disposed in the first scene, without limitation.
In some alternative embodiments, the point cloud data of the first scene is collected by an intelligent terminal, such as a mobile phone configured with a ToF sensor, and then transmitted to the XR device via a network; the XR device thereby acquires the point cloud data of the first scene.
That is, the point cloud data may be acquired by the XR device itself, or may be acquired by other terminals, which is not limited in this disclosure.
In some alternative embodiments, the scene information is image data. The image data may be acquired by an image acquisition device (e.g., a camera). As with the sensors described above, the image acquisition device may be disposed on the XR device or in the first scene to acquire the image data. Of course, the image data may also be acquired by a smart terminal (e.g., a cell phone, an iPad, etc.) and transmitted to the XR device.
In some embodiments, the scene information includes point cloud data and image data. The method for acquiring the point cloud data and the image data is as described above and will not be described in detail.
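By way of illustration only, the following non-limiting Python sketch shows one way the scene information described above might be bundled; the names (PointCloud, SceneInfo, acquire_scene_info) and field layout are assumptions of this illustration, not the disclosed data format.

    # Illustrative only: container types for the scene information of a first scene.
    from dataclasses import dataclass, field
    from typing import List, Optional
    import numpy as np

    @dataclass
    class PointCloud:
        xyz: np.ndarray                          # (N, 3) three-dimensional coordinates
        intensity: Optional[np.ndarray] = None   # laser reflection intensity (lidar)
        rgb: Optional[np.ndarray] = None         # per-point color (photogrammetry)

    @dataclass
    class SceneInfo:
        point_cloud: Optional[PointCloud] = None
        images: List[np.ndarray] = field(default_factory=list)  # camera frames

    def acquire_scene_info(point_cloud=None, images=None) -> SceneInfo:
        # Bundle whatever sensors are available (lidar, ToF, camera) into one record;
        # the data may come from the XR device itself or from another terminal.
        return SceneInfo(point_cloud=point_cloud, images=list(images or []))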
Step S103: determining feature information of each object according to the scene information.
Here, the feature information may indicate the physical characteristics of an object, such as size, hardness, material, and color, and may also indicate the relative positional relationships between objects, among other things, which are not exhaustively illustrated here.
It should be noted that the feature information may be represented in the form of a semantic tag, or may be any other manner capable of indicating the object characteristics, which is not limited in this disclosure.
In some optional embodiments, the step of determining feature information of each of the objects according to the scene information includes:
determining the number of objects and the feature information of each object from the scene information by using a feature extraction model; wherein the feature extraction model is a machine learning model.
The feature extraction model is a machine learning model trained in advance. Illustratively, the machine learning model may be based on a deep neural network, a convolutional neural network, or the like. By way of example, the training of the feature extraction model may employ supervised learning, semi-supervised learning, or unsupervised learning, which is not limited here.
Those skilled in the art will appreciate that the feature extraction model may employ one algorithm, or may be a combination of algorithms, which is not limited in this disclosure.
For example, if the scene information is point cloud data, the training data of the machine learning model is point cloud data, and the adopted algorithms may include a minimum spanning tree algorithm, a connected graph algorithm, and the like.
For example, if the scene information is image data, the training data of the machine learning model is image data, and the adopted algorithms may include an image recognition algorithm, a three-dimensional reconstruction algorithm, and the like. As will be appreciated by those skilled in the art, the three-dimensional size of an object in an image may be calculated from image data taken from different angles, for example using a multi-camera variant of a simultaneous localization and mapping algorithm (Simultaneous Localization and Mapping, abbreviated SLAM).
Of course, if the scene information includes point cloud data and image data, a person skilled in the art may combine various algorithms and train to obtain the feature extraction model based on the point cloud data and the image data. The present disclosure is not particularly limited thereto.
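As a non-limiting illustration of step S103, the sketch below assumes a pretrained segmentation model exposing a predict() method that returns one record per detected object; the model interface and the ObjectFeatures fields are assumptions of this sketch, since the disclosure deliberately leaves the concrete algorithm open.

    # Illustrative only: normalize the output of an assumed feature extraction
    # model into per-object feature information (semantic tag, size, material,
    # position). Any point-cloud or image segmentation network could stand in.
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class ObjectFeatures:
        label: str                            # semantic tag, e.g. "bed", "wall"
        size: Tuple[float, float, float]      # (width, height, depth) in meters
        material: str                         # e.g. "fabric", "marble"
        position: Tuple[float, float, float]  # object center in scene coordinates

    def extract_features(scene_info, model) -> List[ObjectFeatures]:
        raw = model.predict(scene_info)       # hypothetical call on an assumed model
        return [ObjectFeatures(r["label"], tuple(r["size"]),
                               r["material"], tuple(r["position"]))
                for r in raw]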
Step S105: in response to a user determining a virtual object and a target position of the virtual object in a second scene, determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
For example, the extended reality device may present the second scene via a head-mounted display device, VR glasses, a projector, or the like. In some embodiments, the extended reality device includes a near-eye display device, and the display scene of the near-eye display device is the second scene.
The second scene is a scene in which a real scene (corresponding to the first scene) and a virtual scene are fused.
Here, the user may select the virtual object (e.g., the virtual pillar 201 or the virtual apple 202 in Fig. 2) and the target position of the virtual object in the second scene by hand, with a handle, or the like. As will be appreciated by those skilled in the art, when a user determines a target position, the distance between the virtual object and the object corresponding to the target position, and the like, can generally be considered to be less than or equal to zero; that is, the two are in contact or overlapping.
Determining a matching simulation effect from the virtual object and the feature information of the object corresponding to the target position allows the virtual object and the objects in the second scene to present a more realistic effect, increasing the user's immersion.
In some embodiments, the feature information includes size information; the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
determining a proportional relationship between the virtual object and the object according to preset size information of the virtual object and the size information of the object corresponding to the target position. Here, the preset size information may be determined based on the size of the real counterpart of the virtual object; for example, the size of the virtual apple (the virtual apple 202 in Fig. 2) is determined based on the size of a real apple, and the size of the virtual dart (the virtual dart 403 in Fig. 4) is determined according to the size of a real dart.
Based on the proportional relationship, a display size of the virtual object that matches the display size of the object in the second scene is determined. Since objects appear larger when near and smaller when far in the displayed image, working with display sizes keeps the proportions of the virtual object and the object coordinated in the second scene.
The display size of the object may be obtained from the extended reality device (e.g., a near-eye display device) or from the scene information, which is not limited here.
In this way, the size proportions of the virtual object and the object remain coordinated, more closely approximating the real state.
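As a rough, non-limiting sketch of this scaling step (function and parameter names are assumptions of the illustration): the ratio of the virtual object's preset real-world size to the target object's real size is applied to the object's current on-screen size.

    # Illustrative only: keep the virtual object's on-screen size in the same
    # proportion to the object's on-screen size as their real-world sizes.
    def matched_display_size(preset_size_m: float,
                             object_size_m: float,
                             object_display_size_px: float) -> float:
        ratio = preset_size_m / object_size_m    # proportional relationship
        return ratio * object_display_size_px    # display size in the second scene

    # e.g. a 0.08 m apple placed on a 2.0 m bed that currently spans 600 px:
    # matched_display_size(0.08, 2.0, 600) -> 24.0 px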
In some optional embodiments, after the step of determining the display size of the virtual object that matches the display size of the object in the second scene, the method further includes:
determining a relative positional relationship between the virtual object and the object in the second scene based on the target position, the display size of the virtual object, and the display size of the object. Here, the relative positional relationship may be complete overlap, for example the virtual object resting entirely on the object (the virtual apple 202 lying on the bed sheet, as shown in Fig. 2) or the object being entirely covered, for example a virtual tablecloth draped over a table; the relative positional relationship may also be partial overlap, for example a virtual cup resting partly on a table with the rest suspended over the edge;
determining an overlap area of the virtual object and the object in the second scene in response to the relative positional relationship being a partial overlap; optionally, the overlap area may be calculated from the target position, the size of the virtual object in the second scene, and the size of the object; and
determining that the simulation effect includes an animation of the virtual object separating from the object, in response to the overlap area being smaller than half of the projected area of the virtual object in the direction of the object.
In this way, the realism of placing a virtual object is increased, and the interaction becomes more diverse.
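A minimal sketch of the partial-overlap test follows, assuming axis-aligned rectangular footprints purely for brevity; the disclosure does not prescribe this geometry.

    # Illustrative only: intersect the 2D footprints of the virtual object and
    # the supporting object, and trigger the separation animation when less
    # than half of the virtual object's projected area is supported.
    def footprint_overlap(ax, ay, aw, ah, bx, by, bw, bh):
        # Area of intersection of two axis-aligned rectangles given as (x, y, w, h).
        w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        return w * h

    def should_separate(virtual_rect, object_rect) -> bool:
        overlap = footprint_overlap(*virtual_rect, *object_rect)
        projected_area = virtual_rect[2] * virtual_rect[3]  # w * h of the footprint
        return overlap < 0.5 * projected_area

    # A cup mostly off the table edge falls:
    # should_separate((0.95, 0.0, 0.2, 0.2), (0.0, 0.0, 1.0, 1.0)) -> True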
In some embodiments, the feature information further includes position information. Here, the position information includes the relative positions between objects, such as a bed sheet on a bed, a table in the middle of the floor, or a cabinet standing on the floor against a wall, which are not limited here.
As shown in Fig. 7, after the step of determining that the simulation effect includes an animation of the virtual object separating from the object, the method further includes:
Step S701: determining a moving direction in which the virtual object separates from the object, based on a preset gravity direction.
It should be noted that the preset gravity direction may coincide with the real gravity direction or may differ from it; an unconventional preset gravity direction can add interest to the interaction.
Step S703: determining the object corresponding to the movement end point of the virtual object according to the moving direction and the position information.
Step S705: determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the movement end point.
The simulation effect includes at least one of a display effect and a sound effect. When the object corresponding to the target position differs from the object corresponding to the movement end point, the corresponding simulation effects differ as well.
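For illustration, steps S701 to S705 might be realized by casting a ray along the preset gravity direction and taking the first object encountered as the movement end point; the scene representation below (objects exposing a position attribute) is an assumption of this sketch, not the disclosed implementation.

    # Illustrative only: find the object at the movement end point by searching
    # along the preset gravity direction from the virtual object's position.
    import numpy as np

    def find_move_endpoint(start, gravity_dir, objects, tolerance=0.5):
        g = np.asarray(gravity_dir, dtype=float)
        g = g / np.linalg.norm(g)                 # unit preset-gravity direction
        best, best_t = None, float("inf")
        for obj in objects:                       # objects expose .position (assumed)
            to_obj = np.asarray(obj.position, dtype=float) - np.asarray(start, dtype=float)
            t = float(to_obj @ g)                 # signed distance along the ray
            lateral = float(np.linalg.norm(to_obj - t * g))
            if t > 0 and lateral < tolerance and t < best_t:
                best, best_t = obj, t
        return best                               # None means no end point (free fall)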
In some embodiments, the feature information includes material information; the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment display effect of the virtual object placed on the object.
It should be noted that the preset material of the virtual object also affects the attachment display effect after the virtual object is placed on the object.
The preset display-effect library contains different virtual objects and their possible material information, different objects and their corresponding material information, and the attachment display effects corresponding to the various combinations. Once the virtual object and its material information are determined, the attachment display effect can be obtained by matching.
Based on the attachment display effect, the corresponding display effect can be realized with existing image processing tools, which those skilled in the art can select flexibly; this is not specifically limited here.
Here, the attachment display effect includes object wrinkling, object rippling, object cracking, object soaking, and the like.
Illustratively, referring to Fig. 2, the virtual object is the virtual apple 202. If its material matches that of a real apple, then when the virtual apple is placed on the bed sheet, the sheet exhibits the wrinkle effect shown in Fig. 3 under the weight of the virtual apple 202. If the material is plastic, the virtual apple 202 is lighter, and the sheet shows few or no wrinkles.
Illustratively, referring to Fig. 2, the virtual object is the virtual pillar 201. Given that the virtual pillar 201 is larger than the room and is made of a hard, incompressible material, a crack effect appears at the corresponding positions in the room.
Illustratively, referring to the display scene 400 of Fig. 4, the virtual object is the dart 403; if its material is steel, the wall exhibits the crack effect shown in Fig. 6.
Illustratively, referring to the display scene 400 of Fig. 4, the virtual object is the duck 402; if there is water in the pond 401, the water exhibits the ripple effect shown in Fig. 5.
Further, in response to the target position corresponding to a plurality of objects, the step of matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the objects corresponding to the target position to obtain the attachment display effect of the virtual object placed on the objects includes:
matching, using the preset display-effect library, the preset material information of the virtual object against the material information of each object corresponding to the target position to obtain the corresponding attachment display effects.
Illustratively, referring to the display scene 400 of Fig. 4, the virtual object is the pond 401, and the target position corresponds to both the floor 405 and the carpet 404. If the floor 405 is marble, the pond 401 produces a marble-crack display effect; and since the pond 401 holds water and the carpet 404 is cashmere, the carpet shows a soaked attachment display effect.
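One way to picture the preset display-effect library is as a lookup keyed by material pairs, with one attachment effect per object under the target position; the table contents and the fallback value below are invented for this illustration and are not from the disclosure.

    # Illustrative only: a preset display-effect library keyed by
    # (virtual-object material, scene-object material) pairs.
    DISPLAY_EFFECT_LIBRARY = {
        ("water", "marble"):   "crack",
        ("water", "cashmere"): "soaked",
        ("steel", "plaster"):  "crack",
        ("apple", "fabric"):   "wrinkle",
        ("duck",  "water"):    "ripple",
    }

    def match_display_effects(virtual_material, object_materials):
        # One attachment display effect per object corresponding to the target
        # position; "none" is an assumed fallback for unlisted combinations.
        return {m: DISPLAY_EFFECT_LIBRARY.get((virtual_material, m), "none")
                for m in object_materials}

    # The pond example of Fig. 4:
    # match_display_effects("water", ["marble", "cashmere"])
    # -> {"marble": "crack", "cashmere": "soaked"}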
In some embodiments, the simulation method further comprises:
and matching by utilizing a preset sound effect library according to the preset material information of the virtual object and the material information of the object corresponding to the target position to obtain the attached sound effect of the virtual object placed on the object.
Here, the preset sound-effect library is similar to the preset display-effect library, except that it matches sound effects rather than display effects.
Illustratively, a plastic cup falling on a marble floor has a bouncing sound as its corresponding sound effect, whereas a ceramic cup falling on a marble floor has a shattering sound.
It will be appreciated by those skilled in the art that the playing of the attached sound effects may be accomplished by playing pre-recorded sound.
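The preset sound-effect library can be pictured the same way, mapping material pairs to pre-recorded clips; the file names and the player interface below are placeholders of this sketch, not assets or APIs from the disclosure.

    # Illustrative only: attachment sound effects resolved from material pairs
    # and played back as pre-recorded audio.
    SOUND_EFFECT_LIBRARY = {
        ("plastic", "marble"): "bounce.wav",   # plastic cup on marble: bounce
        ("ceramic", "marble"): "shatter.wav",  # ceramic cup on marble: breakage
    }

    def play_attachment_sound(virtual_material, object_material, player):
        clip = SOUND_EFFECT_LIBRARY.get((virtual_material, object_material))
        if clip is not None:
            player.play(clip)                  # assumed audio player interface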
It should be noted that the method of the embodiments of the present disclosure may be performed by a single device, such as a computer or a server. The method of the embodiment can also be applied to a distributed scene, and is completed by mutually matching a plurality of devices. In the case of such a distributed scenario, one of the devices may perform only one or more steps of the methods of embodiments of the present disclosure, the devices interacting with each other to accomplish the methods.
It should be noted that the foregoing describes some embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments described above and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Based on the same inventive concept, and corresponding to the method of any of the above embodiments, the present disclosure also provides a virtual object simulation apparatus.
Referring to fig. 8, the simulation apparatus includes:
an acquisition module 801 configured to acquire scene information of a first scene; wherein the first scene includes at least one object;
a feature extraction module 803 configured to determine feature information of each object from the scene information;
a simulation matching module 805 configured to, in response to a user determining a virtual object and a target position of the virtual object in a second scene, determine a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
In some embodiments, the feature extraction module 803 is further configured to:
determine the number of objects and the feature information of each object from the scene information by using a feature extraction model; wherein the feature extraction model is a machine learning model.
In some embodiments, the scene information is selected from at least one of point cloud data and image data.
In some embodiments, the feature information includes size information, and the simulation matching module 805 is further configured to: determine a proportional relationship between the virtual object and the object according to preset size information of the virtual object and the size information of the object corresponding to the target position; and
determine a display size of the virtual object that matches the display size of the object in the second scene based on the proportional relationship; and/or
the feature information includes material information, and the simulation matching module 805 is further configured to:
match, using a preset display-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment display effect of the virtual object placed on the object.
In some embodiments, the simulation matching module 805 is further configured to:
determine a relative positional relationship between the virtual object and the object in the second scene based on the target position, the display size of the virtual object, and the display size of the object;
determine an overlap area of the virtual object and the object in the second scene in response to the relative positional relationship being a partial overlap; and
determine that the simulation effect includes an animation of the virtual object separating from the object, in response to the overlap area being smaller than half of the projected area of the virtual object in the direction of the object.
In some embodiments, the feature information further includes position information, and the simulation matching module 805 is further configured to:
determine a moving direction in which the virtual object separates from the object, based on a preset gravity direction;
determine the object corresponding to the movement end point of the virtual object according to the moving direction and the position information; and
determine a matching simulation effect according to the virtual object and the feature information of the object corresponding to the movement end point.
In some embodiments, the target position corresponds to a plurality of objects, and the simulation matching module 805 is further configured to:
match, using a preset display-effect library, the preset material information of the virtual object against the material information of each object corresponding to the target position to obtain the corresponding attachment display effects.
In some embodiments, the simulation matching module 805 is further configured to:
and matching by utilizing a preset sound effect library according to the preset material information of the virtual object and the material information of the object corresponding to the target position to obtain the attached sound effect of the virtual object placed on the object.
In some embodiments, the attachment display effect includes at least one of object wrinkling, object rippling, object cracking, and object soaking.
For convenience of description, the above apparatus is described as being divided into modules by function. Of course, when implementing the present disclosure, the functions of the modules may be implemented in one or more pieces of software and/or hardware.
The device of the foregoing embodiment is configured to implement the corresponding simulation method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
The embodiment of the disclosure discloses a virtual object simulation method, including:
acquiring scene information of a first scene; wherein the first scene includes at least one object;
determining feature information of each object according to the scene information;
in response to a user determining a virtual object and a target position of the virtual object in a second scene, determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
In some embodiments, the step of determining the feature information of each object according to the scene information includes:
determining the number of objects and the feature information of each object from the scene information by using a feature extraction model; wherein the feature extraction model is a machine learning model.
In some embodiments, the scene information is selected from at least one of point cloud data and image data.
In some embodiments, the feature information includes size information; the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
determining a proportional relationship between the virtual object and the object according to preset size information of the virtual object and the size information of the object corresponding to the target position;
determining a display size of the virtual object that matches the display size of the object in the second scene based on the proportional relationship; and/or
the feature information includes material information; the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment display effect of the virtual object placed on the object.
In some embodiments, after the step of determining the display size of the virtual object that matches the display size of the object in the second scene, the method further includes:
determining a relative positional relationship between the virtual object and the object in the second scene based on the target position, the display size of the virtual object, and the display size of the object;
determining an overlap area of the virtual object and the object in the second scene in response to the relative positional relationship being a partial overlap; and
determining that the simulation effect includes an animation of the virtual object separating from the object, in response to the overlap area being smaller than half of the projected area of the virtual object in the direction of the object.
In some embodiments, the feature information further includes position information;
after the step of determining that the simulation effect includes an animation of the virtual object separating from the object, the method further includes:
determining a moving direction in which the virtual object separates from the object, based on a preset gravity direction;
determining the object corresponding to the movement end point of the virtual object according to the moving direction and the position information; and
determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the movement end point.
In some embodiments, the target position corresponds to a plurality of objects;
the step of matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the objects corresponding to the target position to obtain the attachment display effect of the virtual object placed on the objects includes:
matching, using the preset display-effect library, the preset material information of the virtual object against the material information of each object corresponding to the target position to obtain the corresponding attachment display effects.
In some embodiments, the simulation method further includes:
matching, using a preset sound-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment sound effect of the virtual object placed on the object.
In some embodiments, the attachment display effect includes at least one of object wrinkling, object rippling, object cracking, and object soaking.
The embodiment of the disclosure also provides a virtual object simulation apparatus, including:
an acquisition module 801 configured to acquire scene information of a first scene; wherein the first scene includes at least one object;
a feature extraction module 803 configured to determine feature information of each object according to the scene information;
a simulation matching module 805 configured to, in response to a user determining a virtual object and a target position of the virtual object in a second scene, determine a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
Based on the same inventive concept, the present disclosure further provides an electronic device, such as a VR device or an XR device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the simulation method of any of the above embodiments when executing the program.
Fig. 9 shows a more specific hardware architecture of an electronic device according to this embodiment, where the device may include: a processor 1010, a memory 1020, an input/output interface 1030, a communication interface 1040, and a bus 1050. Wherein processor 1010, memory 1020, input/output interface 1030, and communication interface 1040 implement communication connections therebetween within the device via a bus 1050.
The processor 1010 may be implemented by a general-purpose CPU (Central Processing Unit ), microprocessor, application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc. for executing relevant programs to implement the technical solutions provided in the embodiments of the present disclosure.
The Memory 1020 may be implemented in the form of ROM (Read Only Memory), RAM (Random Access Memory ), static storage device, dynamic storage device, or the like. Memory 1020 may store an operating system and other application programs, and when the embodiments of the present specification are implemented in software or firmware, the associated program code is stored in memory 1020 and executed by processor 1010.
The input/output interface 1030 is used to connect with an input/output module for inputting and outputting information. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
Communication interface 1040 is used to connect communication modules (not shown) to enable communication interactions of the present device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 1050 includes a path for transferring information between components of the device (e.g., processor 1010, memory 1020, input/output interface 1030, and communication interface 1040).
It should be noted that although the above-described device only shows processor 1010, memory 1020, input/output interface 1030, communication interface 1040, and bus 1050, in an implementation, the device may include other components necessary to achieve proper operation. Furthermore, it will be understood by those skilled in the art that the above-described apparatus may include only the components necessary to implement the embodiments of the present description, and not all the components shown in the drawings.
The electronic device of the foregoing embodiment is configured to implement the corresponding simulation method in any of the foregoing embodiments, and has the beneficial effects of the corresponding method embodiment, which is not described herein.
Based on the same inventive concept, corresponding to any of the above embodiments of the method, the present disclosure further provides a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the simulation method as described in any of the above embodiments.
The computer readable media of the present embodiments, including both permanent and non-permanent, removable and non-removable media, may be used to implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device.
The storage medium of the foregoing embodiments stores computer instructions for causing the computer to execute the simulation method described in any of the foregoing embodiments, and has the advantages of the corresponding method embodiments, which are not described herein.
Based on the same inventive concept, the present disclosure also provides a computer program product, corresponding to the method of any of the embodiments described above, comprising a computer program. In some embodiments, the computer program is executed by one or more processors to cause the processors to perform the simulation method described in the above embodiments.
Those of ordinary skill in the art will appreciate that the discussion of any of the embodiments above is merely exemplary and is not intended to suggest that the scope of the disclosure, including the claims, is limited to these examples. Under the idea of the present disclosure, the technical features of the above embodiments or of different embodiments may also be combined, and the steps may be implemented in any order; there are many other variations of the different aspects of the embodiments of the present disclosure as described above, which are not provided in detail for the sake of brevity.
Additionally, well-known power/ground connections to Integrated Circuit (IC) chips and other components may or may not be shown within the provided figures, in order to simplify the illustration and discussion, and so as not to obscure the embodiments of the present disclosure. Furthermore, the devices may be shown in block diagram form in order to avoid obscuring the embodiments of the present disclosure, and this also accounts for the fact that specifics with respect to implementation of such block diagram devices are highly dependent upon the platform on which the embodiments of the present disclosure are to be implemented (i.e., such specifics should be well within purview of one skilled in the art). Where specific details (e.g., circuits) are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that embodiments of the disclosure can be practiced without, or with variation of, these specific details. Accordingly, the description is to be regarded as illustrative in nature and not as restrictive.
While the present disclosure has been described in conjunction with specific embodiments thereof, many alternatives, modifications, and variations of those embodiments will be apparent to those skilled in the art in light of the foregoing description. For example, other memory architectures (e.g., dynamic RAM (DRAM)) may use the embodiments discussed.
The disclosed embodiments are intended to embrace all such alternatives, modifications and variances which fall within the broad scope of the appended claims. Accordingly, any omissions, modifications, equivalents, improvements, and the like, which are within the spirit and principles of the embodiments of the disclosure, are intended to be included within the scope of the disclosure.

Claims (13)

1. A method for simulating a virtual object, comprising:
acquiring scene information of a first scene; wherein the first scene includes at least one object;
determining feature information of each object according to the scene information;
in response to a user determining a virtual object and a target position of the virtual object in a second scene, determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
2. The simulation method according to claim 1, wherein the step of determining the feature information of each object based on the scene information includes:
determining the number of objects and the feature information of each object from the scene information by using a feature extraction model; wherein the feature extraction model is a machine learning model.
3. The simulation method according to claim 1, wherein the scene information is selected from at least one of point cloud data and image data.
4. The simulation method according to claim 1, wherein
the feature information includes size information, and the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
determining a proportional relationship between the virtual object and the object according to preset size information of the virtual object and the size information of the object corresponding to the target position;
determining a display size of the virtual object that matches the display size of the object in the second scene based on the proportional relationship; and/or
the feature information includes material information, and the step of determining the matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position includes:
matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment display effect of the virtual object placed on the object.
5. The simulation method according to claim 4, further comprising, after the step of determining the display size of the virtual object that matches the display size of the object in the second scene:
determining a relative positional relationship between the virtual object and the object in the second scene based on the target position, the display size of the virtual object, and the display size of the object;
determining an overlap area of the virtual object and the object in the second scene in response to the relative positional relationship being a partial overlap; and
determining that the simulation effect includes an animation of the virtual object separating from the object, in response to the overlap area being smaller than half of the projected area of the virtual object in the direction of the object.
6. The simulation method according to claim 5, wherein the feature information further includes position information;
after the step of determining that the simulation effect includes an animation of the virtual object separating from the object, the method further includes:
determining a moving direction in which the virtual object separates from the object, based on a preset gravity direction;
determining the object corresponding to the movement end point of the virtual object according to the moving direction and the position information; and
determining a matching simulation effect according to the virtual object and the feature information of the object corresponding to the movement end point.
7. The simulation method according to claim 4, wherein the target position corresponds to a plurality of objects;
the step of matching, using a preset display-effect library, the preset material information of the virtual object against the material information of the objects corresponding to the target position to obtain the attachment display effect of the virtual object placed on the objects includes:
matching, using the preset display-effect library, the preset material information of the virtual object against the material information of each object corresponding to the target position to obtain the corresponding attachment display effects.
8. The simulation method according to claim 4, further comprising:
matching, using a preset sound-effect library, the preset material information of the virtual object against the material information of the object corresponding to the target position to obtain an attachment sound effect of the virtual object placed on the object.
9. The simulation method according to claim 4, wherein the attachment display effect includes at least one of object wrinkling, object rippling, object cracking, and object soaking.
10. A virtual object simulation apparatus, comprising:
an acquisition module configured to acquire scene information of a first scene; wherein the first scene includes at least one object;
a feature extraction module configured to determine feature information of each object according to the scene information;
a simulation matching module configured to, in response to a user determining a virtual object and a target position of the virtual object in a second scene, determine a matching simulation effect according to the virtual object and the feature information of the object corresponding to the target position;
wherein the second scene is a display scene that is presented by an extended reality device and contains an image of the first scene.
11. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 9 when executing the program.
12. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 9.
13. A computer program product comprising computer program instructions which, when run on a computer, cause the computer to perform the simulation method of any of claims 1 to 9.
CN202310084842.9A 2023-01-16 2023-01-16 Virtual object simulation method and related equipment Pending CN116048273A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310084842.9A 2023-01-16 2023-01-16 Virtual object simulation method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310084842.9A 2023-01-16 2023-01-16 Virtual object simulation method and related equipment

Publications (1)

Publication Number Publication Date
CN116048273A 2023-05-02

Family

ID=86129456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310084842.9A Pending CN116048273A (en) 2023-01-16 2023-01-16 Virtual object simulation method and related equipment

Country Status (1)

Country Link
CN (1) CN116048273A (en)

Similar Documents

Publication Publication Date Title
US10482674B1 (en) System and method for mobile augmented reality
CN102270275B (en) The method of selecting object and multimedia terminal in virtual environment
CN105637564B (en) Generate the Augmented Reality content of unknown object
US9659381B2 (en) Real time texture mapping for augmented reality system
US20180350145A1 (en) Augmented Reality Devices and Methods Thereof for Rendering Virtual Objects
US8660362B2 (en) Combined depth filtering and super resolution
JP2021192250A (en) Real time 3d capture using monocular camera and method and system for live feedback
US20150187108A1 (en) Augmented reality content adapted to changes in real world space geometry
US11276238B2 (en) Method, apparatus and electronic device for generating a three-dimensional effect based on a face
KR20210013655A (en) Multiple synchronization integrated model for device position measurement
CN105190703A (en) Using photometric stereo for 3D environment modeling
KR102433857B1 (en) Device and method for creating dynamic virtual content in mixed reality
Montero et al. Designing and implementing interactive and realistic augmented reality experiences
CN109144252B (en) Object determination method, device, equipment and storage medium
CN111373347B (en) Apparatus, method and computer program for providing virtual reality content
CN113721804A (en) Display method, display device, electronic equipment and computer readable storage medium
CN108694073A (en) Control method, device, equipment and the storage medium of virtual scene
CN114514493A (en) Reinforcing apparatus
CN109360275A (en) A kind of methods of exhibiting of article, mobile terminal and storage medium
US9261974B2 (en) Apparatus and method for processing sensory effect of image data
CN111083391A (en) Virtual-real fusion system and method thereof
EP4279157A1 (en) Space and content matching for augmented and mixed reality
CN116048273A (en) Virtual object simulation method and related equipment
CN114862997A (en) Image rendering method and apparatus, medium, and computer device
KR20140078083A (en) Method of manufacturing cartoon contents for augemented reality and apparatus performing the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination