WO2023071630A1 - Information interaction method, apparatus, device and medium based on augmented display - Google Patents

Information interaction method, apparatus, device and medium based on augmented display

Info

Publication number
WO2023071630A1
WO2023071630A1 (application PCT/CN2022/120156, CN2022120156W)
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
physics engine
interaction data
virtual
interaction
Prior art date
Application number
PCT/CN2022/120156
Other languages
English (en)
French (fr)
Inventor
高林森
黎小凤
韦祎
刘佳成
张羽鸿
Original Assignee
北京字节跳动网络技术有限公司 (Beijing ByteDance Network Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字节跳动网络技术有限公司
Publication of WO2023071630A1 publication Critical patent/WO2023071630A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures

Definitions

  • the present application belongs to the field of augmented reality technology, and in particular relates to an augmented reality-based information interaction method, apparatus, device and medium.
  • Augmented Reality (AR) is a technology that ingeniously integrates virtual information with the real world. Computer-generated virtual information such as text, images, three-dimensional models, audio and video is simulated and then applied to the real world, so that the two kinds of information complement each other.
  • the present disclosure provides an augmented reality-based information interaction method, apparatus, device and medium.
  • the present disclosure provides an augmented reality-based information interaction method, which is applied to a first client, and the method includes:
  • the present disclosure also provides an augmented reality-based information interaction method, which is applied to the first server, and the method includes:
  • first interaction data is generated by the first virtual object performing interactive operations in the virtual reality space
  • second interaction data is generated by the second virtual object performing interactive operations in the virtual reality space
  • the first virtual object and the second virtual object share the virtual reality space
  • the present disclosure also provides an augmented reality-based information interaction device configured on a client, and the device includes:
  • a first interaction data generating module configured to generate first interaction data in response to the interactive operation of the first virtual object in the virtual reality space, and send the first interaction data to the first server;
  • the second interaction data receiving module is configured to receive the second interaction data corresponding to the second virtual object sent by the first server; wherein the second virtual object shares the virtual reality space with the first virtual object;
  • an interactive rendering result display module configured to call a physics engine to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space based on the first interaction data and the second interaction data, and to generate and display an interactive rendering result.
  • the present disclosure also provides an augmented reality-based information interaction device configured on a first server, and the device includes:
  • the interaction data receiving module is configured to receive the first interaction data and the second interaction data respectively; wherein, the first interaction data is generated when the first virtual object performs an interactive operation in the virtual reality space, and the second interaction data is The second virtual object is generated by interactive operation in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space;
  • an interaction data sending module configured to send the first interaction data and the second interaction data to the first client corresponding to the first virtual object and the second client corresponding to the second virtual object, to Make the first client and the second client respectively invoke a physics engine to render the interactive operation based on the first interaction data and the second interaction data, and generate and display an interactive rendering result.
  • an electronic device which includes:
  • the processor is configured to read executable instructions from the memory, and execute the executable instructions to implement the augmented reality-based information interaction method applied to the first client provided by any embodiment of the present disclosure, or to implement the augmented reality-based information interaction method applied to the first server provided by any embodiment of the present disclosure.
  • the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the method applied to the first client provided by any embodiment of the present disclosure.
  • the augmented reality-based information interaction solution of the embodiments of the present disclosure enables the first virtual object corresponding to the first user and the second virtual object corresponding to the second user to share the same virtual reality space.
  • in response to the interactive operation of the first virtual object in the virtual reality space, the first client generates the first interaction data, sends the first interaction data to the first server, and receives the second interaction data corresponding to the second virtual object sent by the first server, so that the first interaction data and the second interaction data can be exchanged between the client corresponding to the first user and the client corresponding to the second user.
  • each client can call the 3D physics engine based on the same first interaction data and second interaction data, respectively render the interactive operations corresponding to the first virtual object and the second virtual object, and generate and display the interactive rendering results. This realizes an interaction process based on the actual interactive operations of different users in the virtual reality space, and improves how closely the virtual world and the real world are combined in augmented reality-based interactive applications, thereby improving the user experience.
  • FIG. 1 is an architecture diagram of an augmented reality-based information interaction system provided by an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of an augmented reality-based information interaction method applied to a first client provided by an embodiment of the present disclosure;
  • FIG. 3 is a schematic display diagram of a room list page provided by an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of displaying an object adding control in a virtual reality space page provided by an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of displaying furniture options in a virtual reality space page provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic flowchart of an augmented reality-based information interaction method applied to a first server provided by an embodiment of the present disclosure;
  • FIG. 7 is a schematic structural diagram of an augmented reality-based information interaction device configured on a client provided by an embodiment of the present disclosure;
  • FIG. 8 is a schematic structural diagram of an augmented reality-based information interaction device configured on a first server provided by an embodiment of the present disclosure;
  • FIG. 9 is a schematic structural diagram of an augmented reality-based information interaction device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • interactive applications based on augmented reality technology mainly superimpose virtual scenes and real scenes to generate a virtual reality space, provide some fixed virtual objects in the virtual reality space, and set preset types of interactive actions for the virtual objects.
  • the virtual object can only perform the above-mentioned preset type of interaction.
  • for example, user A and user B both use the same interactive application, and user B as presented in user A's client is only a virtual object constructed based on user B's basic information (such as health points, appearance, etc.), which can only perform some preset types of actions but cannot perform the interactive actions that user B actually performs through user B's client. In this way, an interaction process based on real interactive actions in the virtual reality space cannot be realized between different users.
  • the embodiments of the present disclosure provide an augmented reality-based information interaction scheme, which enables different users to share the same virtual reality space, so that the clients corresponding to different users can exchange interaction data and then process and render the same interaction data through the physics engine in each client. In this way, the interactive operation of each virtual object presented in the clients of different users is the same as the real interactive operation of the corresponding user, achieving an interactive effect between different users and thereby improving the user experience.
  • the augmented reality-based information interaction solution provided by the embodiments of the present disclosure can be applied to various interactive application programs developed based on augmented reality technology. For example, interactive home games based on virtual rooms, exhibition applications based on virtual exhibition halls, conference applications based on virtual conference halls, room escape games based on virtual secret rooms, and so on.
  • FIG. 1 is a structure diagram of an augmented reality-based information interaction system provided by an embodiment of the present disclosure.
  • the augmented reality-based information interaction system 100 at least includes a first client 11, a second client 12, a first server 13 and a second server 14 that are communicatively connected to each other.
  • the first server 13 is a server that executes background data processing of interactive application programs, and is at least used for creating and managing virtual reality space, processing and sending interactive data uploaded by each client, and so on.
  • the first client 11 is a client corresponding to the first user
  • the second client 12 is a client corresponding to the second user
  • each client runs an interactive application program.
  • the first server 13 can be implemented as an independent server, or as a server cluster.
  • the second server 14 is a server that performs address management and sharing of the virtual reality space. In the case that the first server 13 is implemented as a server cluster, the second server 14 is also used to dispatch a suitable server from the server cluster to the user.
  • the first client 11 sends a request for creating a virtual reality space to the first server 13.
  • the first server 13 queries whether there is a virtual reality space corresponding to the first client 11 in the historical virtual reality space based on the user information of the first client 11. If there is, then the information of the historical virtual reality space inquired is sent to the first client 11; if not, a new virtual reality space is created, and the information of the new virtual reality space is sent to the first client 11 .
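This lookup-or-create behavior can be sketched roughly as follows; the `FirstServer` class, its fields, and the use of UUIDs are illustrative assumptions, not details from the disclosure.

```python
# Hypothetical sketch of the first server's lookup-or-create logic for
# virtual reality spaces; names and fields are illustrative only.
import uuid


class FirstServer:
    def __init__(self):
        # Maps user id -> info of that user's historical virtual reality space.
        self.history = {}

    def get_or_create_space(self, user_id):
        """Return the user's historical space if one exists, else create one."""
        if user_id in self.history:
            return self.history[user_id]
        space = {"space_id": str(uuid.uuid4()), "owner": user_id}
        self.history[user_id] = space
        return space


server = FirstServer()
first = server.get_or_create_space("user_a")
second = server.get_or_create_space("user_a")  # the same space is returned
assert first["space_id"] == second["space_id"]
```

In either branch the client receives the space information in the same shape, so it does not need to know whether the space was found or freshly created.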
  • the first client 11 sends the information of its virtual reality space to the second server 14, so that the second server 14 sends the information of the virtual reality space to the second client 12.
  • both the first client 11 and the second client 12 send a request to the second server 14 to enter the same virtual reality space based on the information of the virtual reality space.
  • the second server 14 schedules a suitable server for each client according to the load of each server in the cluster, and sends the scheduled server information to the first client 11 and the second client 12 respectively, so that the first client 11 and the second client 12 enter the same virtual reality space.
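The load-based scheduling step might look like this minimal sketch; the `schedule_server` helper and the load-map data shape are assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: the second server picks the least-loaded server from
# the first server's cluster and hands the same address to both clients.
def schedule_server(cluster_load):
    """Pick the server with the lowest current load.

    cluster_load: dict mapping server address -> current load metric.
    """
    if not cluster_load:
        raise ValueError("no servers available in the cluster")
    return min(cluster_load, key=cluster_load.get)


cluster = {"srv-1": 0.72, "srv-2": 0.31, "srv-3": 0.55}
target = schedule_server(cluster)  # both clients are sent this same address
assert target == "srv-2"
```

Sending both clients the same scheduled address is what lets them enter the same virtual reality space on the same backend server.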
  • both the first client 11 and the second client 12 send requests to the first server 13 to enter the same virtual reality space based on the information of the virtual reality space.
  • the first server 13 sends its server information to the first client 11 and the second client 12, so that the first client 11 and the second client 12 enter the same virtual reality space.
  • the first user and the second user respectively perform interactive operations in the virtual reality space. After detecting the corresponding interactive operations, the first client 11 and the second client 12 respectively generate the first interaction data and the second interaction data.
  • the first client 11 sends the first interaction data to the first server 13;
  • the second client 12 sends the second interaction data to the first server 13.
  • the first server 13 transparently transmits the first interaction data and the second interaction data to the first client 11 and the second client 12 respectively.
  • the first client 11 and the second client 12 respectively invoke the 3D physics engine therein to process the first interaction data and the second interaction data, perform rendering, and generate and display an interactive rendering result.
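The key property of this step is that both clients feed identical interaction data into their local physics engines and therefore render identical results. A toy sketch under that assumption (the data shapes and the simplistic `physics_step` are illustrative, not a real 3D engine):

```python
# Illustrative sketch: each client applies the same pair of interaction data
# in its local physics step, so the resulting positions agree on both sides.
def physics_step(positions, interaction_data_list, dt=1.0 / 60):
    """Apply each object's velocity from its interaction data for one tick."""
    out = dict(positions)
    for data in interaction_data_list:
        obj = data["object_id"]
        x, y, z = out[obj]
        vx, vy, vz = data["velocity"]
        out[obj] = (x + vx * dt, y + vy * dt, z + vz * dt)
    return out


start = {"first_avatar": (0.0, 0.0, 0.0), "second_avatar": (1.0, 0.0, 0.0)}
shared = [
    {"object_id": "first_avatar", "velocity": (0.6, 0.0, 0.0)},
    {"object_id": "second_avatar", "velocity": (0.0, 0.0, 0.3)},
]
client_1 = physics_step(start, shared)  # first client 11
client_2 = physics_step(start, shared)  # second client 12
assert client_1 == client_2  # identical inputs give identical renders
```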
  • the augmented reality-based information interaction method applied to the first client provided by the embodiments of the present disclosure will first be described below with reference to FIGS. 2-5 .
  • the method may be executed by an augmented reality-based information interaction device configured on the first client; the device may be implemented by software and/or hardware, and may be integrated in an electronic device with position tracking and display functions.
  • the electronic device may include, but is not limited to, a mobile terminal such as a smartphone, a PDA (Personal Digital Assistant), a PAD (Tablet Computer), a wearable device, and the like.
  • Fig. 2 shows a schematic flowchart of an augmented reality-based information interaction method applied to a first client provided by an embodiment of the present disclosure.
  • the augmented reality-based information interaction method applied to the first client may include the following steps:
  • the virtual object refers to the virtual character of the user in the virtual reality space.
  • the first virtual object refers to a virtual object corresponding to the first user.
  • the first virtual object is constructed based on the character attribute information of the first user.
  • the character attribute information may include height, gender, hairstyle, clothing, and the like.
  • the first user can input character attribute information through the electronic device; or take a self-portrait image through the camera of the electronic device and obtain the character attribute information by performing target recognition and other processing on the image; or scan body parts through the radar sensor of the electronic device to generate point cloud data and obtain the character attribute information by processing the point cloud data.
  • the electronic device can upload the character attribute information to the first server.
  • the first server constructs a three-dimensional model of the character by using the character attribute information to obtain the first virtual object. This can increase the connection between the user and the virtual reality space, improve the user's visual effect, and further enhance the user experience.
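As a rough illustration of this construction step, the attribute fields and the `build_virtual_object` placeholder below are hypothetical; a real server would generate an actual 3D mesh rather than a string tag.

```python
# Minimal illustration of turning character attribute information into a
# virtual object record; all names here are assumptions for the sketch.
from dataclasses import dataclass


@dataclass
class CharacterAttributes:
    height_cm: float
    gender: str
    hairstyle: str
    clothing: str


def build_virtual_object(user_id, attrs):
    """Stand-in for the first server building a 3D character from attributes."""
    return {
        "user_id": user_id,
        "attributes": attrs,
        # Placeholder: a real implementation would produce a 3D model here.
        "model": f"mesh<{attrs.height_cm:.0f}cm,{attrs.hairstyle}>",
    }


avatar = build_virtual_object(
    "user_a", CharacterAttributes(175.0, "female", "short", "casual")
)
assert avatar["model"] == "mesh<175cm,short>"
```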
  • a virtual reality space refers to a virtual space generated based on a real environment (such as a room), and the virtual space has the same structure and layout as the real environment.
  • the virtual reality space is constructed based on the real space where the first virtual object is located.
  • the first user can generate point cloud data by three-dimensionally scanning the real space (such as a room) where the first user is located through the radar sensor of his electronic device, and upload the point cloud data to the first server.
  • the first server obtains the virtual reality space by processing the point cloud data.
  • some virtual reality spaces are preset in the first server, and the first user can choose a preset virtual reality space that has the same/similar structure and layout as the real space in which he is located.
  • Interaction data refers to relevant data generated by interactive operations, such as the amount of position change and movement speed generated by the mobile operation.
  • the first interaction data refers to interaction data corresponding to the first virtual object.
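One plausible shape for such interaction data, assuming the position change and movement speed mentioned above; the field names are illustrative, not taken from the disclosure.

```python
# A possible (assumed) structure for interaction data uploaded to the
# first server: which object acted, what it did, and the resulting motion.
from dataclasses import dataclass, asdict


@dataclass
class InteractionData:
    object_id: str        # which virtual object performed the operation
    operation: str        # e.g. "move", "grab", "throw"
    position_delta: tuple  # change in (x, y, z) caused by the operation
    speed: float          # movement speed while performing it


move = InteractionData("first_avatar", "move", (0.5, 0.0, 0.0), 1.2)
payload = asdict(move)    # what would be serialized and uploaded
assert payload["operation"] == "move"
```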
  • after the first user logs in to the client on his electronic device, the electronic device (unless otherwise specified, in each method embodiment applied to the first client, "the electronic device" refers to the electronic device corresponding to the first client) can display the information of each virtual reality space.
  • the first user may perform a trigger operation (for example, click, gesture control trigger, voice control trigger, eye movement control trigger, etc.) corresponding to the virtual reality space information that the user wants to enter.
  • when the electronic device detects the user's trigger operation, it displays the virtual reality space corresponding to the triggered virtual reality space information.
  • as shown in FIG. 3, a room list page 301 is displayed in the electronic device 300, and the page displays entries for a number of virtual reality spaces, including "My Room" 305.
  • the electronic device 300 displays the virtual reality space as shown in FIG. 4 .
  • the electronic device 400 displays the virtual reality space corresponding to "my room”.
  • the electronic device 400 may also display a return to room list control 401 , and the first user may return to the room list page 301 shown in FIG. 3 by performing a trigger operation on the return to room list control 401 .
  • the first user can carry the electronic device to perform some interactive operations.
  • since a client developed based on augmented reality technology is equipped with logic for detecting the user's position, posture, gestures, triggering of the screen and other interactive operations, the electronic device on which the client is installed can detect interactive operations according to the above logic.
  • the electronic device can detect the interactive operation of the first virtual object in the virtual reality space. Then, the electronic device generates first interaction data according to the detected interaction operation. Afterwards, the electronic device uploads the first interaction data to the first server to share the first interaction data.
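The detect-generate-upload sequence could be sketched as follows; the in-memory "server" list and the movement-derived fields are assumptions for illustration only.

```python
# Hypothetical sketch of the first client turning a detected movement into
# first interaction data and uploading it to the first server.
uploaded = []  # stands in for the first server's inbox


def upload_to_first_server(data):
    uploaded.append(data)


def on_interaction_detected(object_id, old_pos, new_pos, dt):
    """Generate first interaction data from a detected movement and upload it."""
    delta = tuple(n - o for n, o in zip(new_pos, old_pos))
    speed = sum(d * d for d in delta) ** 0.5 / dt
    data = {"object_id": object_id, "position_delta": delta, "speed": speed}
    upload_to_first_server(data)
    return data


first_data = on_interaction_detected("first_avatar", (0, 0, 0), (3, 4, 0), dt=2.0)
assert first_data["speed"] == 2.5   # |(3, 4, 0)| = 5 over 2 seconds
assert uploaded[0] is first_data
```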
  • S220 Receive second interaction data corresponding to the second virtual object sent by the first server; wherein, the second virtual object shares a virtual reality space with the first virtual object.
  • the second virtual object refers to a virtual object corresponding to the second user.
  • the second virtual object is constructed based on the character attribute information of the second user.
  • for the construction of the second virtual object, refer to the description of the construction of the first virtual object in S210.
  • the second interaction data is interaction data generated by the second virtual object performing interactive operations in the virtual reality space.
  • the first virtual object and the second virtual object simultaneously exist in the virtual reality space.
  • the electronic device corresponding to the second client will also detect the interactive operation of the second user, generate second interactive data based on the interactive operation, and upload the second interactive data to the first server , to share the second interaction data.
  • after receiving the first interaction data and the second interaction data, the first server judges whether there is any intersection (such as duplication, conflict, etc.) between the first interaction data and the second interaction data, and if there is none, transparently transmits the first interaction data and the second interaction data to the electronic device.
  • the electronic device can receive the first interaction data and the second interaction data; that is, the first client simultaneously obtains the interaction data generated after the first virtual object and the second virtual object perform interactive operations in the virtual reality space, so it has the data basis for displaying the interaction process of each corresponding virtual object according to the real interactive operation of each user.
  • the second client will also receive the first interaction data and the second interaction data transparently transmitted by the first server.
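The server-side pass-through described above might be sketched like this; the same-object check is a crude stand-in for the duplication/conflict analysis in the disclosure.

```python
# Hedged sketch: if the two pieces of interaction data do not intersect,
# the first server forwards both to both clients unchanged.
def has_intersection(first_data, second_data):
    """Crude stand-in for the server's duplication/conflict check."""
    return first_data["object_id"] == second_data["object_id"]


def forward(first_data, second_data, clients):
    if has_intersection(first_data, second_data):
        raise NotImplementedError("conflicting data must be fused first")
    for inbox in clients.values():
        inbox.extend([first_data, second_data])  # transparent transmission


clients = {"first_client": [], "second_client": []}
forward(
    {"object_id": "first_avatar", "op": "move"},
    {"object_id": "second_avatar", "op": "wave"},
    clients,
)
assert clients["first_client"] == clients["second_client"]
```

Both clients end up holding the same two pieces of interaction data, which is the precondition for rendering consistent results.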
  • in order to enable the first user and the second user to share the virtual reality space, before S220 the electronic device sends the space address of the virtual reality space to the second server, so that the second server sends the space address to the second client corresponding to the second virtual object and, in response to the space sharing operation of the second client, schedules the target server corresponding to the first server for the second client.
  • the second user can select the same virtual reality space as the first user through its electronic device.
  • the electronic device first sends the space address of its virtual reality space to the second server.
  • after the second server receives and stores the space address, it can forward the space address to the second client according to the space authorization information set by the first client (for example, everyone can see its virtual reality space, or only friends can see its virtual reality space).
  • the type of electronic device corresponding to the second client is not limited here.
  • the electronic device of the second user may also display a room list page, and the room list page displays the virtual reality space information corresponding to the first user.
  • the second user may perform a trigger operation (that is, a space sharing operation) on the virtual reality space information to request to join the virtual reality space corresponding to the first user.
  • the electronic device corresponding to the second user sends the information related to the space sharing operation to the second server.
  • the second server schedules a server corresponding to the first server with an appropriate load (ie, the target server) for the second client, and sends server information of the target server to the second client.
  • the second client can then connect to the target server based on the server information to enter the virtual reality space corresponding to the first user. In this way, it can ensure that the first client and the second client share the same virtual reality space, reduce the delay of data transmission, and improve the intercommunication efficiency of the first interactive data and the second interactive data.
  • the physics engine is used to calculate motion, interaction and dynamics between virtual objects and scenes and between virtual objects themselves in two-dimensional or three-dimensional scenes; it uses object properties (momentum, torque or elasticity) to simulate rigid body behavior.
  • the physics engine in the embodiments of the present disclosure refers to a three-dimensional (ie, 3D) physics engine, which is used to simulate virtual objects and rigid body behaviors of virtual objects in a three-dimensional scene.
  • the electronic device calls the physics engine to perform rigid body motion simulation processing on the first virtual object and its first interaction data and on the second virtual object and its second interaction data, and calls the rendering engine to render the processing result of the physics engine to generate the interactive rendering result. Afterwards, the electronic device displays the interactive rendering result.
  • the interactive rendering result is the presentation of the first virtual object and the second virtual object performing the same interaction process as the real interaction in the virtual reality space.
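A minimal toy simulation in the spirit of the rigid-body behavior the physics engine computes, e.g. a sphere falling under gravity and rebounding with an elastic restitution factor. This is purely illustrative; no real engine or disclosed algorithm is used.

```python
# Naive semi-implicit Euler integration of a bouncing sphere, standing in
# for the rigid-body simulation a physics engine would perform.
def simulate_drop(height, steps, dt=0.05, g=9.8, restitution=0.6):
    """Return the sphere's height after `steps` ticks of integration."""
    y, vy = height, 0.0
    for _ in range(steps):
        vy -= g * dt          # gravity updates velocity
        y += vy * dt          # velocity updates position
        if y < 0.0:           # contact with the ground
            y = 0.0
            vy = -vy * restitution  # elastic rebound, losing some energy
    return y


assert simulate_drop(2.0, steps=0) == 2.0
assert 0.0 <= simulate_drop(2.0, steps=400) < 2.0  # never exceeds the start height
```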
  • the first virtual object corresponding to the first user and the second virtual object corresponding to the second user can share the same virtual reality space.
  • in response to the interactive operation of the first virtual object in the virtual reality space, the first interaction data is generated and sent to the first server, and the second interaction data corresponding to the second virtual object sent by the first server is received.
  • this enables the client corresponding to the first user and the client corresponding to the second user to exchange the first interaction data and the second interaction data.
  • each client can call the 3D physics engine based on the same first interaction data and second interaction data, respectively render the interactive operations corresponding to the first virtual object and the second virtual object, and generate and display the interactive rendering results, realizing an interaction process based on the actual interactive operations of different users in the virtual reality space, improving how closely the virtual world and the real world are combined in augmented reality-based interactive applications, and thereby improving the user experience.
  • the above interaction data is state information of the target physics engine.
  • the state information of the physics engine refers to the state-related information of the rigid body motion generated by the physics engine based on the virtual object and its interactive operation, which is implicit data (as opposed to the explicit interaction data that directly describes the operation).
  • the state information of the physics engine may include the falling position, speed, direction and elastic force of the virtual sphere in contact with the ground, and so on.
  • the state information of the physics engine may include the displacement direction, size and displacement speed of the virtual object after the collision.
  • the state information of the target physics engine refers to the state information of the physics engine corresponding to the virtual object after the user performs an interactive operation.
  • the state information of the target physics engine can be directly uploaded to the first server, so that the electronic devices corresponding to the first client and the second client can also directly receive the state information of the target physics engine for rendering. This avoids the differences between simulation results that arise when each client separately invokes its physics engine to simulate explicit interaction data, thereby further improving the consistency of the interactive rendering results between the first client and the second client.
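The state-replication idea can be illustrated as follows: each client applies the received physics engine state snapshot directly instead of re-simulating explicit interaction data, so there is no per-client drift. The snapshot fields and helper name are assumptions.

```python
# Sketch of direct state replication: the uploaded physics engine state is
# applied verbatim on every client, so all clients agree exactly.
engine_state = {
    "object_id": "virtual_sphere",
    "position": (0.0, 0.35, 1.2),   # e.g. landing position after a bounce
    "velocity": (0.0, -1.4, 0.0),
    "tick": 1042,
}


def apply_state(world, state):
    """Overwrite the local object with the authoritative state snapshot."""
    world[state["object_id"]] = {k: v for k, v in state.items() if k != "object_id"}
    return world


world_1 = apply_state({}, engine_state)  # first client
world_2 = apply_state({}, engine_state)  # second client
assert world_1 == world_2  # no per-client simulation drift
```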
  • the target physics engine state information includes historical physics engine state information and current physics engine state information.
  • the current physics engine status information is the physics engine status information at the current moment, that is, the physics engine status information generated by the interactive operation at the current moment.
  • the historical physics engine status information refers to the physics engine status information at the time before the current time, that is, the physics engine status information generated by the interactive operation at the time before the current time.
  • the first server can judge the respective states of the first virtual object and the second virtual object before the current interactive operation according to their previous interactive operations, thereby judging whether the current interactive operations of the first virtual object and the second virtual object are duplicated, conflicting, and so on, and then determine whether to fuse the target physics engine state information uploaded for the first virtual object and the second virtual object respectively. This can avoid the problem of inconsistent interaction results between the two clients in some special cases; it not only improves the implementation logic of the continuous processing of interactive operations, but also further improves the consistency of the interactive rendering results of the first client and the second client.
  • as mentioned above, the first server transparently transmits the first interaction data and the second interaction data to the first client and the second client when judging that there is no intersection between the first interaction data and the second interaction data. Accordingly, if there is an intersection between the first interaction data and the second interaction data, the first server needs to process the two pieces of interaction data first, and then send the processed interaction data to each client.
  • the augmented reality-based information interaction method applied to the first client further includes: receiving the first fused physics engine state information corresponding to the first virtual object and the second fused physics engine state information corresponding to the second virtual object sent by the first server.
  • the fusion of the state information of the physics engine refers to the result obtained by integrating the state information of at least two physics engines (such as deduplication, conflict handling, etc.).
  • the fused state information of the physics engine is obtained by integrating the state information of the first target physics engine and the state information of the second target physics engine.
  • the information integration process please refer to the description of the following embodiments for details.
  • there is an intersection between the state information of the first target physics engine and the state information of the second target physics engine.
  • the first target physics engine state information and the second target physics engine state information may have duplicate content.
  • for example, the first virtual object performs an interactive operation of pulling the second virtual object while the second virtual object performs an interactive operation of leaving the virtual reality space; at this time, the two interactive operations cannot form an interactive process.
  • the state information of the first target physics engine and the state information of the second target physics engine have conflicting contents.
  • the first server analyzes the two pieces of target physics engine state information, determines that there is an intersection between them, and then integrates the first target physics engine state information with the second target physics engine state information. For example, for the above pull-and-leave interaction, the first server can process the first target physics engine state information into first fused physics engine state information in which the first virtual object performs the pulling action but does not move forward or backward with the pull, and retain the action of the second virtual object leaving the virtual reality space, that is, use the second target physics engine state information as the second fused physics engine state information.
  • the first server sends the first fused physics engine state information and the second fused physics engine state information to the electronic device corresponding to the first client and the electronic device corresponding to the second client.
  • the corresponding electronic device receives the status information of the two fusion physics engines.
  • S230 is implemented as: calling the physics engine to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space based on the first fused physics engine state information and the second fused physics engine state information, and generating and displaying the interactive rendering result. That is, the electronic device performs rendering using the two pieces of fused physics engine state information, obtaining an interactive rendering result better suited to the interactive scenario. Such a setting can further improve interaction consistency among the virtual objects, thereby further improving the user experience.
  • S210 may be implemented as: displaying an object attribute setting interface in response to the interactive operation of the first virtual object on a virtual object in the virtual reality space; and, in response to an input operation on the object attribute setting interface, obtaining object operation attribute information of the virtual object and generating the first interaction data based on the object operation attribute information.
  • the first user performs an interactive operation on an object in the virtual reality space.
  • the electronic device may display an interface for setting object attributes (that is, an object attribute setting interface).
  • the first user may input attribute values of each attribute in the object attribute setting interface.
  • the electronic device can obtain each attribute value input by the user (ie object operation attribute information), and then generate the first interaction data according to the object operation attribute information.
  • when the above interactive operation on the virtual object in the virtual reality space is a furniture adding operation on virtual furniture, the process of generating the first interaction data is: in response to the furniture adding operation of the first virtual object on the virtual furniture in the virtual reality space, displaying a furniture attribute setting interface of the virtual furniture; and, in response to an input operation on the furniture attribute setting interface, obtaining furniture adding attribute information of the virtual furniture and generating the first interaction data based on the furniture adding attribute information.
  • the electronic device 400 may further display a furniture adding control 402 in the virtual reality space.
  • furniture options that can be added appear in the interface of the virtual reality space, as shown in FIG. 5 .
  • a virtual reality space page 501 is displayed on the electronic device 500
  • a furniture option 502 is displayed on the virtual reality space page 501.
  • the furniture option 502 includes furniture icon controls such as a stool, a piano, a microwave oven, and a coffee machine.
  • the first user can trigger the furniture icon control that he wants to add, and the electronic device can present the furniture attribute setting interface.
  • the furniture attribute setting interface may be an interface that provides attribute fields and corresponding input boxes, or an interactive three-dimensional object model that provides functions such as dragging and dropping, modifying size, and the like.
  • the first user inputs furniture adding attribute information such as position, size, style, color, etc. of the added furniture through the furniture attribute setting interface.
  • after receiving the furniture adding attribute information, the electronic device generates the corresponding first interaction data. In this way, users can add furniture to the virtual reality space according to their real environment or preferences, improving the operability of the virtual reality space for the user and thereby enhancing the interest.
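As a rough illustration, the furniture adding attribute information collected from the setting interface might be packaged into first interaction data as shown below. The class name, field names, and payload layout are hypothetical; the embodiment does not specify a concrete data format.

```python
from dataclasses import dataclass, asdict

@dataclass
class FurnitureAddAttributes:
    furniture_type: str   # e.g. "stool", "piano", "microwave oven"
    position: tuple       # placement in the virtual reality space
    size: tuple
    style: str
    color: str

def build_first_interaction_data(actor_id, attrs):
    # Package the attribute info entered on the furniture attribute
    # setting interface into an uploadable interaction payload.
    return {
        "actor": actor_id,            # the first virtual object
        "operation": "add_furniture",
        "attributes": asdict(attrs),
    }
```

The same pattern would apply to furniture modification and deletion, with a different `operation` tag and attribute set.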
  • the aforementioned interactive operation on the virtual object in the virtual reality space is a furniture deletion operation on the virtual furniture.
  • the process of generating the first interaction data is: generating the first interaction data in response to the furniture removal operation of the first virtual object on the target virtual furniture in the virtual reality space.
  • the electronic device may display a furniture deletion control around the target virtual furniture.
  • when the first user triggers the furniture deletion control, the electronic device can detect the furniture removal operation of the first virtual object on the target virtual furniture; the electronic device then deletes the relevant data of the target virtual furniture from the virtual reality space and generates the first interaction data.
  • the user can delete some furniture in the virtual reality space according to his real environment or his preferences, and can also improve the operability of the user to the virtual reality space, thereby enhancing the interest.
  • when the above interactive operation on the virtual object in the virtual reality space is a furniture modification operation on virtual furniture, the process of generating the first interaction data is: in response to the furniture modification operation of the first virtual object on the target virtual furniture in the virtual reality space, displaying the furniture attribute setting interface; and, in response to an input operation on the furniture attribute setting interface, obtaining furniture modification attribute information of the target virtual furniture and generating the first interaction data based on the furniture modification attribute information.
  • the electronic device may also display furniture modification controls around the target virtual furniture.
  • the electronic device can detect the furniture modification operation of the first virtual object on the target virtual furniture.
  • the electronic device displays the furniture property setting interface on the basis of the virtual reality space page.
  • the first user inputs modified attribute information (namely furniture modified attribute information) for certain attributes of the target virtual furniture, such as furniture position, size, style, color, etc.
  • after receiving the furniture modification attribute information, the electronic device generates the corresponding first interaction data. In this way, the user can modify the furniture information in the virtual reality space according to his real environment or preferences, which also improves the operability of the virtual reality space for the user, thereby enhancing the interest.
  • when the second client implements the augmented reality-based information interaction method, it does not have the authority to add, modify, or delete furniture in this embodiment, because it is a visitor relative to the virtual reality space; that is, the augmented reality-based information interaction method applied to the second client does not have the functions of these embodiments.
  • when the above interactive operation on the virtual object in the virtual reality space is an item addition operation on a non-furniture virtual item, the process of generating the first interaction data is: in response to the item addition operation of the first virtual object in the virtual reality space, displaying an item attribute setting interface; and, in response to an input operation on the item attribute setting interface, obtaining item adding attribute information of the virtual item and generating the first interaction data based on the item adding attribute information.
  • the electronic device 400 may also display an item adding control 403 for non-furniture virtual items in the virtual reality space.
  • the electronic device detects the item adding operation corresponding to the first virtual user, and then displays the addable item option 404 in the interface of the virtual reality space.
  • the item option 404 may include item icon controls such as flowers, paper balls, toys (not shown in FIG. 4 ).
  • the first user can trigger the icon control of the item that he wants to add, and the electronic device can present the item attribute setting interface to prompt the first user to input some attribute information of the virtual item (ie item adding attribute information).
  • the item adding attribute information may include, for example, message information for virtual objects in the virtual reality space, the display time and display duration of the virtual item, and the like.
  • the electronic device may receive the item addition attribute information, and then generate corresponding first interaction data.
  • users can place items such as gifts in the virtual reality space, and can also add their message information, which further enhances the interactivity of different users in the same virtual reality space, thereby further enhancing the fun.
  • the item addition operation of the non-furniture virtual item may also be an interactive operation of the second user, that is, the augmented reality-based information interaction method applied to the second client has the functions of this embodiment.
  • the augmented reality-based information interaction method applied to the first client further includes: after displaying the interactive rendering result, if the item adding attribute information includes message information, displaying the message information of the virtual item in response to the interactive operation of the first virtual object on the virtual item in the virtual reality space.
  • the first user can see the virtual item.
  • the first user may perform interactive operations such as touching and picking up the virtual item.
  • when the electronic device detects the interactive operation, it can display the message information corresponding to the virtual item. In this way, interaction via message information can be realized, achieving the effect of a message prompt/reminder to other users, and the consistency between the interaction process of different users in the virtual reality space and the interaction process of real users can be further enhanced, thereby further improving the user experience.
  • the electronic device 400 may also display a text control 405 for leaving a message in text form and/or a voice control 406 for leaving a message/interaction in a voice form on the page of the virtual reality space.
  • the first user can leave a text message by triggering the text control 405 .
  • the first user can also perform voice message or voice interaction by triggering the voice control 406 .
  • the above text messages, voice messages, and interactive voice can all be generated as first interaction data for data exchange and rendering display between clients. This further increases the ways in which users can interact in the virtual reality space, thereby further enhancing the interest and the user experience.
  • the first interaction data can also be generated for passive collision/contact interactions between the virtual object and furniture, items, and so on in the virtual reality space (for example, the virtual object touches virtual furniture during movement). That is, as long as any of the virtual objects, virtual furniture, and virtual items in the virtual reality space changes, the first interaction data can be generated and the subsequent rendering and display steps performed, realizing a dynamic process for the virtual reality space and the content it contains.
  • the information communication between each client and the first server is implemented based on remote procedure call technology.
  • active or passive interactive operations of virtual objects in the virtual reality space require interaction data to be generated and synchronized, and many users may participate in interactions in the same virtual reality space, so communication between each client and the server is very frequent and carries a large amount of data, and each user and their interactions must be developed as independent, abstract virtual objects. Therefore, the embodiments of the present disclosure adopt a stable, low-latency communication framework that can reasonably perform abstract encapsulation of objects, such as a communication framework based on Remote Procedure Call (RPC).
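The essence of an RPC-style framework is that an interactive operation is abstracted into an ordinary-looking method call whose name and arguments are serialized into a request message. The minimal proxy below illustrates that idea only; it is not the framework the embodiments actually use, and `fake_server` merely stands in for the first server.

```python
import json

class RpcProxy:
    """Illustrative client-side stub: attribute access becomes a serialized remote call."""

    def __init__(self, transport):
        self._transport = transport  # callable that delivers the request payload

    def __getattr__(self, method):
        # Called only for attributes not found normally, so each unknown
        # name becomes a remote method.
        def call(**kwargs):
            request = json.dumps({"method": method, "params": kwargs})
            return self._transport(request)
        return call

# A fake transport standing in for the first server.
def fake_server(request):
    msg = json.loads(request)
    return {"ack": msg["method"]}

server = RpcProxy(fake_server)
reply = server.sync_interaction(actor="first_virtual_object", op="pull")
```

In a production framework the transport would be a network channel, and each virtual object would be encapsulated as a remotely addressable object.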
  • An embodiment of the present disclosure also provides an augmented reality-based information interaction method applied to a first server, and the method may be executed by an augmented reality-based information interaction device configured on the first server.
  • the device can be realized by software and/or hardware, and the device can be integrated in an electronic device with relatively large data processing capability.
  • the electronic equipment may include, but is not limited to, devices such as notebook computers, desktop computers, servers, and the like.
  • Fig. 6 shows a schematic flowchart of an augmented reality-based information interaction method applied to a first server provided by an embodiment of the present disclosure. Descriptions of terms and steps in each embodiment of the method that are the same as or similar to those in the above embodiments will not be repeated here.
  • the augmented reality-based information interaction method applied to the first server may include the following steps:
  • S610 Receive first interaction data and second interaction data respectively; wherein, the first interaction data is generated by the first virtual object performing interactive operations in the virtual reality space, and the second interaction data is generated by the second virtual object in the virtual reality space generated through interactive operations, and the first virtual object and the second virtual object share the virtual reality space.
  • the first server may receive the first interaction data and the second interaction data from the first client and the second client.
  • the first server determines that there is no intersection between the first interaction data and the second interaction data, it synchronously transparently transmits the first interaction data and the second interaction data to the first client and the second client.
  • the first client and the second client can perform rigid body motion simulation and rendering display based on the same interaction data.
  • the augmented reality-based information interaction method applied to the first server can serve as a bridge for exchanging interaction data between the first client and the second client: it aggregates the first interaction data and the second interaction data generated by the first client and the second client corresponding to the shared virtual reality space, and sends the first interaction data and the second interaction data to the first client and the second client respectively, so that the first client and the second client invoke the 3D physics engine to perform interactive rendering based on the same interaction data and display the same interactive rendering results. This realizes an interaction process based on the actual interactive operations of different users in the virtual reality space, improves the degree of integration of the virtual world and the real world in augmented reality-based interactive applications, and thereby improves the user experience.
  • the augmented reality-based information interaction method applied to the first server further includes: when the interaction data is target physics engine state information and it is determined that there is an intersection between the first target physics engine state information and the second target physics engine state information, generating, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object.
  • the first server judges that there is an intersection between the state information of the first target physics engine and the state information of the second target physics engine. For example, when the electronic devices corresponding to the two clients both detect the same interactive operation of the first virtual object, the first target physics engine state information and the second target physics engine state information may have duplicate content. For another example, the first virtual object executes the interactive operation of pulling the second virtual object, and the second virtual object executes the interactive operation of leaving the virtual reality space. At this time, the two interactive operations cannot form an interactive process.
  • at this time, the first target physics engine state information and the second target physics engine state information have conflicting content.
  • both the first virtual object and the second virtual object perform interactive operations on the same virtual item (such as a soccer ball), there may be cross content in the first target physics engine status information and the second target physics engine status information, and so on.
  • the first server will integrate the state information of the first target physics engine and the state information of the second target physics engine according to specific interactive operations, and generate the state information of the first fusion physics engine and the state information of the second fusion physics engine.
  • S620 may be implemented as: sending the first fusion physics engine state information and the second fusion physics engine state information to the first client and the second client. That is, the data synchronously delivered by the first server to the first client and the second client is the state information of the first fusion physics engine and the state information of the second fusion physics engine.
  • Such a setting can further improve the interaction consistency among the virtual objects, and improve the consistency between the interaction process of the virtual objects and the real interaction process, thereby further improving the user experience.
  • generating the fused physics engine state information may be implemented as: generating the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information according to the preset priorities of the first virtual object and the second virtual object.
  • the preset priority is an interaction priority set in advance for each virtual object in the same virtual reality space, and the higher the preset priority, the earlier the interaction operation will be responded.
  • the preset priority can be set according to the user's authority in the virtual reality space. For example, the preset priority of a first user (such as a room owner) is higher than that of a second user (such as a room visitor).
  • the preset priority may also be set according to the sequence in which users enter the virtual reality space. For example, the virtual object corresponding to the user who enters the virtual reality space first has a higher preset priority.
  • the preset priority of each virtual object is preset in the first server.
  • when the first server judges that there is an intersection between the first target physics engine state information and the second target physics engine state information, the first server retains the target physics engine state information of the virtual object with the higher preset priority, and uses the retained physics engine state information to modify the other target physics engine state information.
  • the technical solution of this example is applicable to the situation that both the first virtual object and the second virtual object perform interactive operations on the same virtual item (such as football).
  • in this case, the first server directly determines the first target physics engine state information as the first fused physics engine state information, and then modifies the second target physics engine state information according to the first fused physics engine state information to generate the second fused physics engine state information.
  • in this way, the first virtual object can realize the interaction of kicking the football according to the strength and direction of its kick, while the effect of the kicking operation of the second virtual object will be much smaller than that of the first virtual object.
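The preset-priority rule in this example can be sketched as follows, assuming a single scalar `kick_force` state quantity; the 0.2 attenuation factor and all field names are illustrative choices, not something the embodiment specifies.

```python
def fuse_by_priority(first_state, second_state, first_prio, second_prio,
                     attenuation=0.2):
    """Keep the higher-priority object's target state; modify the other against it."""
    if first_prio >= second_prio:
        kept, other = first_state, dict(second_state)
    else:
        kept, other = second_state, dict(first_state)
    # The lower-priority kick still happens, but with a much smaller effect.
    other["kick_force"] = other["kick_force"] * attenuation
    if first_prio >= second_prio:
        return kept, other   # (first fused state, second fused state)
    return other, kept
```

With the room owner (higher priority) as the first virtual object, the owner's kick state passes through unchanged while the visitor's kick is heavily attenuated.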
  • generating the fused physics engine state information may be realized as follows: in the case that the first virtual object and the second virtual object perform an interaction operation with an interaction sequence, according to the interaction sequence, based on the first target physics engine state information and The state information of the second target physics engine generates the state information of the first fusion physics engine and the state information of the second fusion physics engine.
  • the validity of the interaction operations triggered simultaneously by the two virtual objects can be determined according to the interaction sequence.
  • the order of interaction is determined by the order of moves.
  • the first server may determine that the next sub-operation to be performed is the second virtual object. Then, the first server may directly determine the state information of the second target physics engine as the state information of the second fusion physics engine, and set the state information of the first target physics engine as invalid.
  • for the first virtual object, the first server can use the first target physics engine state information as the main content of the first fused physics engine state information while deleting the move-related information from it, and add prompt information indicating that the first virtual object attempted to execute a move.
  • the first server may also directly ignore the state information of the first target physics engine, and set the state information of the first fusion physics engine to remain unchanged. This can ensure that the interactive game in the virtual reality space maintains consistent game rules and interactive effects with the real interactive game.
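A minimal sketch of the interaction-sequence rule for a turn-based game: only the virtual object whose turn it is yields a valid fused state, while the out-of-turn operation is invalidated and replaced with prompt information. All field names here are assumptions made for illustration.

```python
def fuse_by_turn(states, next_actor):
    """states: {actor_id: target physics engine state}; next_actor: whose turn it is."""
    fused = {}
    for actor, state in states.items():
        if actor == next_actor:
            fused[actor] = state  # the in-turn move is kept as the fused state
        else:
            # Out-of-turn move: set invalid, keep only a prompt for the client.
            fused[actor] = {"valid": False,
                            "prompt": f"{actor} moved out of turn"}
    return fused
```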
  • generating the fused physics engine state information may be implemented as: generating the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information according to the priorities of the state quantities triggered by the interactive operations.
  • the first server may pre-set the priority of each state quantity in the physical engine state information in the interactive operation. Then, when the interactive operations of the first virtual object and the second virtual object respectively trigger different state quantities, the first server can generate the first fusion physics engine state information and the second fusion physics engine state information according to the priority of each state quantity Engine status information.
  • the first server may directly determine the second target physics engine state information of the second virtual object as the second fused physics engine state information, so as to ensure that the second user exits the virtual reality space normally; the first server then modifies the first target physics engine state information on the basis that the second virtual object has left.
  • the first server may process the state information of the first target physics engine into the state information of the first fused physics engine that has a pulling action but does not move forward or backward after pulling.
  • the interaction of each virtual object in the virtual reality space can be more in line with the actual interaction logic, and the effectiveness and authenticity of the virtual object interaction can be further improved.
  • the first server can generate the first fused physics engine state information and the second fused physics engine state information based on the magnitude relationship between the values of the state quantities in the two pieces of target physics engine state information.
  • the kicking force of the first virtual object is greater than that of the second virtual object.
  • the first server can, based on the facts that the first virtual object kicks the ball more forcefully and that under real laws of motion the football's movement would mainly follow the first virtual object's kick while still being affected by the second virtual object's kick, perform a comprehensive calculation on state quantities such as kicking force and kicking direction in the first target physics engine state information and the second target physics engine state information to generate the first fused physics engine state information and the second fused physics engine state information. In this way, interactive operations in the virtual reality space remain consistent with the laws of motion of actual interactive operations, further improving the effectiveness and authenticity of virtual object interaction.
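The comprehensive calculation in the football example might, under a simple impulse-addition model (an assumption for illustration, not the patent's formula), combine the two kicks so that the stronger kick dominates the fused direction:

```python
import math

def combine_kicks(force_a, direction_a, force_b, direction_b):
    """Combine two kicks on the same ball; direction_* are (x, y) unit vectors."""
    # Impulses add component-wise, so the stronger kick dominates the result.
    vx = force_a * direction_a[0] + force_b * direction_b[0]
    vy = force_a * direction_a[1] + force_b * direction_b[1]
    magnitude = math.hypot(vx, vy)
    return magnitude, (vx / magnitude, vy / magnitude)
```

For a 10-unit kick along x and a 5-unit kick along y, the fused motion points mostly along x, matching the intuition that the weaker kick only perturbs the stronger one.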
  • Fig. 7 shows a schematic structural diagram of an augmented reality-based information interaction device configured on a first client provided by an embodiment of the present disclosure.
  • the augmented reality-based information interaction device 700 configured on the first client may include:
  • the first interaction data generation module 710 is configured to generate first interaction data in response to the interaction operation of the first virtual object in the virtual reality space, and send the first interaction data to the first server;
  • the second interaction data receiving module 720 is configured to receive the second interaction data corresponding to the second virtual object sent by the first server; wherein, the second virtual object shares the virtual reality space with the first virtual object;
  • An interactive rendering result display module 730 configured to call the physics engine to render the interactive operation of the first virtual object and the second virtual object in the virtual reality space based on the first interaction data and the second interaction data, and generate and display an interactive rendering result .
  • with the augmented reality-based information interaction device configured on the first client, on the basis that the first virtual object corresponding to the first user and the second virtual object corresponding to the second user share the same virtual reality space, it is possible to generate the first interaction data in response to the interactive operation of the first virtual object in the virtual reality space, send the first interaction data to the first server, and receive the second interaction data corresponding to the second virtual object sent by the first server, so that the first interaction data and the second interaction data are exchanged between the client corresponding to the first user and the client corresponding to the second user. Each client can then invoke the 3D physics engine based on the same first interaction data and second interaction data, respectively render the interactive operations corresponding to the first virtual object and the second virtual object, and generate and display the interactive rendering results, realizing an interaction process based on the actual interactive operations of different users in the virtual reality space and improving the degree of integration of the virtual world and the real world in augmented reality-based interactive applications, thereby improving the user experience.
  • the interaction data is target physics engine state information.
  • the target physics engine state information includes historical physics engine state information and current physics engine state information.
  • the augmented reality-based information interaction device 700 configured on the first client further includes a fusion information receiving module, configured to:
  • the fused physics engine state information is obtained based on the first target physics engine state information and the second target physics engine state information;
  • the interactive rendering result display module 730 is specifically used for:
  • the physics engine is called to render the interactive operation of the first virtual object and the second virtual object in the virtual reality space, and an interactive rendering result is generated and displayed.
  • the first interaction data generating module 710 is specifically configured to: display an object attribute setting interface in response to the interactive operation of the first virtual object on a virtual object in the virtual reality space; and, in response to an input operation on the object attribute setting interface, obtain object operation attribute information of the virtual object and generate the first interaction data based on the object operation attribute information.
  • the augmented reality-based information interaction device 700 configured on the first client further includes a spatial address sending module, configured to:
  • the virtual reality space is constructed based on the real space where the first virtual object is located, and the first virtual object and the second virtual object are constructed based on the character attribute information of the first user and the second user, respectively.
  • The augmented-reality-based information interaction apparatus 700 configured on the first client shown in FIG. 7 can execute the steps in the method embodiments shown in FIG. 2 to FIG. 5 and realize the corresponding processes and effects in those method embodiments, which will not be repeated here.
  • Fig. 8 shows a schematic structural diagram of an augmented reality-based information interaction device configured on a first server provided by an embodiment of the present disclosure.
  • the augmented reality-based information interaction device 800 configured on the first server may include:
  • an interaction data receiving module 810, configured to receive the first interaction data and the second interaction data respectively, where the first interaction data is generated by an interactive operation of the first virtual object in the virtual reality space, the second interaction data is generated by an interactive operation of the second virtual object in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space;
  • an interaction data sending module 820, configured to send the first interaction data and the second interaction data to the first client corresponding to the first virtual object and the second client corresponding to the second virtual object, so that the first client and the second client, based on the first interaction data and the second interaction data, call the physics engine to render the interactive operations and generate and display the interactive rendering results.
  • With the augmented-reality-based information interaction apparatus configured on the first server, the first server can act as a bridge for exchanging interaction data between the first client and the second client: it aggregates the first interaction data and the second interaction data generated by the first client and the second client corresponding to the shared virtual reality space, and delivers the first interaction data and the second interaction data to the first client and the second client respectively, so that the first client and the second client call the 3D physics engine based on the same interaction data to perform interactive rendering and display the same interactive rendering result. This realizes an interaction process based on actual interactive operations between different users in the virtual reality space, improves the degree of integration between the virtual world and the real world in augmented-reality-based interactive applications, and thereby improves the user experience.
  • In some embodiments, the augmented-reality-based information interaction apparatus 800 configured on the first server further includes an information fusion module, configured to:
  • when the interaction data is target physics engine state information and it is determined that there is an intersection between the first target physics engine state information and the second target physics engine state information, generate, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object.
  • Accordingly, the interaction data sending module 820 is specifically configured to: send the first fused physics engine state information and the second fused physics engine state information to the first client and the second client.
  • In some embodiments, the information fusion module is specifically configured to:
  • generate the first fused physics engine state information and the second fused physics engine state information according to the preset priorities corresponding to the first virtual object and the second virtual object, based on the first target physics engine state information and the second target physics engine state information;
  • or, when the first virtual object and the second virtual object perform interactive operations having an interaction order, generate the first fused physics engine state information and the second fused physics engine state information according to the interaction order, based on the first target physics engine state information and the second target physics engine state information;
  • or, generate the first fused physics engine state information and the second fused physics engine state information based on the values of the same state quantity, or the priorities of different state quantities, in the first target physics engine state information and the second target physics engine state information.
  • The augmented-reality-based information interaction apparatus 800 configured on the first server shown in FIG. 8 can execute the steps in the method embodiment shown in FIG. 6 and realize the corresponding processes and effects in that method embodiment, which will not be repeated here.
  • An embodiment of the present disclosure also provides an electronic device, which may include a processor and a memory, and the memory may be used to store executable instructions.
  • The processor can be used to read the executable instructions from the memory and execute them to implement the augmented-reality-based information interaction method applied to the first client, or the augmented-reality-based information interaction method applied to the first server, in any of the above embodiments.
  • the electronic device may be the first client 11 or the second client 12 shown in FIG. 1 in the case of performing functions such as generating interactive data and generating and displaying interactive rendering results.
  • the electronic device may be the first server 13 shown in FIG. 1 in the case of performing functions such as summarizing and delivering interaction data.
  • Fig. 9 shows a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure. It should be noted that the electronic device 900 shown in FIG. 9 is only an example, and should not limit the functions and scope of use of this embodiment of the present disclosure.
  • The electronic device 900 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 901, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage device 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data necessary for the operation of the electronic device 900.
  • the processing device 901, ROM 902, and RAM 903 are connected to each other through a bus 904.
  • An input/output interface (I/O interface) 905 is also connected to the bus 904 .
  • The following devices can be connected to the I/O interface 905: input devices 906 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; output devices 907 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 908 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 909.
  • the communication means 909 may allow the electronic device 900 to perform wireless or wired communication with other devices to exchange data. While FIG. 9 shows electronic device 900 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • An embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to implement the augmented-reality-based information interaction method applied to the first client, or the augmented-reality-based information interaction method applied to the first server, in any embodiment of the present disclosure.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 909, or from storage means 908, or from ROM 902.
  • When executed by the processing device, the computer program performs the above-described functions defined in the augmented-reality-based information interaction method applied to the first client, or in the augmented-reality-based information interaction method applied to the first server, in any embodiment of the present disclosure.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
  • A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • clients and servers can communicate using any currently known or future developed network protocol, such as HTTP, and can be interconnected with any form or medium of digital data communication (eg, a communication network).
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to execute the augmented-reality-based information interaction method applied to the first client, or the augmented-reality-based information interaction method applied to the first server, described in any embodiment of the present disclosure.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware, and in some circumstances the name of a unit does not constitute a limitation on the unit itself.
  • For example, without limitation, exemplary types of hardware logic components that can be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application discloses an augmented-display-based information interaction method, apparatus, device, and medium. The augmented-reality-based information interaction method applied to a first client includes: in response to an interactive operation of a first virtual object in a virtual reality space, generating first interaction data and sending the first interaction data to a first server; receiving second interaction data, corresponding to a second virtual object, sent by the first server, where the second virtual object shares the virtual reality space with the first virtual object; and, based on the first interaction data and the second interaction data, calling a physics engine to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying an interactive rendering result. This realizes an interaction process based on actual interactive operations between different users in the virtual reality space, and improves the degree of integration between the virtual world and the real world.

Description

Information interaction method, apparatus, device, and medium based on augmented display
This application claims priority to the Chinese patent application No. 202111275803.4, filed with the China National Intellectual Property Administration on October 29, 2021 and entitled "Information interaction method, apparatus, device, and medium based on augmented display", the entire contents of which are incorporated herein by reference.
Technical Field
The present application belongs to the technical field of augmented reality, and in particular relates to an augmented-display-based information interaction method, apparatus, device, and medium.
Background
Augmented Reality (AR) technology is a technology that skillfully fuses virtual information with the real world. It simulates computer-generated virtual information such as text, images, three-dimensional models, audio, and video, and applies the result to the real world, so that the two kinds of information complement each other.
Many interactive applications developed based on AR technology already exist. Through a mobile terminal device, a user can move within the virtual reality space provided by such an application and perform preset types of interaction with predefined virtual objects.
However, none of the current interactive applications can realize interaction between different users in the virtual reality space, which limits the user's interaction process and degrades the user experience.
Summary
In order to solve the above technical problems, or at least partially solve them, the present disclosure provides an augmented-display-based information interaction method, apparatus, device, and medium.
In a first aspect, the present disclosure provides an augmented-display-based information interaction method applied to a first client, the method including:
in response to an interactive operation of a first virtual object in a virtual reality space, generating first interaction data and sending the first interaction data to a first server;
receiving second interaction data, corresponding to a second virtual object, sent by the first server, where the second virtual object shares the virtual reality space with the first virtual object; and
based on the first interaction data and the second interaction data, calling a physics engine to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying an interactive rendering result.
In a second aspect, the present disclosure further provides an augmented-display-based information interaction method applied to a first server, the method including:
receiving first interaction data and second interaction data respectively, where the first interaction data is generated by an interactive operation of a first virtual object in a virtual reality space, the second interaction data is generated by an interactive operation of a second virtual object in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space; and
sending the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client, based on the first interaction data and the second interaction data, each call a physics engine to render the interactive operations and generate and display an interactive rendering result.
In a third aspect, the present disclosure further provides an augmented-reality-based information interaction apparatus configured on a client, the apparatus including:
a first interaction data generating module, configured to generate first interaction data in response to an interactive operation of a first virtual object in a virtual reality space, and send the first interaction data to a first server;
a second interaction data receiving module, configured to receive second interaction data, corresponding to a second virtual object, sent by the first server, where the second virtual object shares the virtual reality space with the first virtual object; and
an interactive rendering result display module, configured to call a physics engine, based on the first interaction data and the second interaction data, to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space, and to generate and display an interactive rendering result.
In a fourth aspect, the present disclosure further provides an augmented-reality-based information interaction apparatus configured on a first server, the apparatus including:
an interaction data receiving module, configured to receive first interaction data and second interaction data respectively, where the first interaction data is generated by an interactive operation of a first virtual object in a virtual reality space, the second interaction data is generated by an interactive operation of a second virtual object in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space; and
an interaction data sending module, configured to send the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client, based on the first interaction data and the second interaction data, each call a physics engine to render the interactive operations and generate and display an interactive rendering result.
In a fifth aspect, the present disclosure provides an electronic device, including:
a processor; and
a memory configured to store executable instructions,
where the processor is configured to read the executable instructions from the memory and execute them to implement the augmented-reality-based information interaction method applied to the first client provided by any embodiment of the present disclosure, or the augmented-reality-based information interaction method applied to the first server provided by any embodiment of the present disclosure.
In a sixth aspect, the present disclosure provides a computer-readable storage medium storing a computer program that, when executed by a processor, causes the processor to implement the augmented-reality-based information interaction method applied to the first client provided by any embodiment of the present disclosure, or the augmented-reality-based information interaction method applied to the first server provided by any embodiment of the present disclosure.
With the augmented-display-based information interaction solution of the embodiments of the present disclosure, on the basis that the first virtual object corresponding to a first user and the second virtual object corresponding to a second user share the same virtual reality space, first interaction data can be generated in response to an interactive operation of the first virtual object in the virtual reality space and sent to the first server, and second interaction data corresponding to the second virtual object can be received from the first server, so that the first interaction data and the second interaction data are exchanged between the client of the first user and the client of the second user. Each client can therefore call a 3D physics engine based on the same first interaction data and second interaction data to render the interactive operations corresponding to the first virtual object and the second virtual object respectively, and generate and display the interactive rendering result. This realizes an interaction process based on actual interactive operations between different users in the virtual reality space, improves the degree of integration between the virtual world and the real world in augmented-reality-based interactive applications, and thereby improves the user experience.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the embodiments of the present invention, they serve to explain the present invention and do not constitute a limitation thereof. In the drawings:
FIG. 1 is an architecture diagram of an augmented-display-based information interaction system provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of an augmented-display-based information interaction method applied to a first client provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a room list page provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of displaying an object-adding control on a virtual reality space page provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of displaying furniture options on a virtual reality space page provided by an embodiment of the present disclosure;
FIG. 6 is a schematic flowchart of an augmented-display-based information interaction method applied to a first server provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an augmented-display-based information interaction apparatus configured on a client provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an augmented-display-based information interaction apparatus configured on a first server provided by an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an augmented-display-based information interaction device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as being limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "including" and variations thereof are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of the functions performed by these apparatuses, modules, or units or their interdependence.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive. Those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Interactive applications implemented with augmented reality technology in the related art mainly superimpose a virtual scene on a real scene to generate a virtual reality space, provide some fixed virtual objects in that space, and configure preset types of interactive actions for the virtual objects. If a user interacts with a virtual object while using such an application, the virtual object can only perform the preset types of interaction. For example, if user A and user B both use the same interactive application, the representation of user B presented in user A's client is merely a virtual object constructed from user B's basic information (such as health points and appearance); it can only perform some preset types of actions and cannot perform the interactive actions that user B actually performs through user B's client. As a result, different users cannot carry out an interaction process based on real interactive actions in the virtual reality space.
In view of the above, embodiments of the present disclosure provide an augmented-reality-based information interaction solution that enables different users to share the same virtual reality space, so that the clients of different users can exchange interaction data with each other, and the physics engine in each client then processes and renders the same interaction data. In this way, the interactive operation of each virtual object presented in the clients of different users is identical to the corresponding user's real interactive operation, achieving interaction between different users and thereby improving the user experience.
The augmented-reality-based information interaction solution provided by the embodiments of the present disclosure can be applied to various interactive applications developed based on augmented reality technology, for example, interactive home games based on virtual rooms, exhibition applications based on virtual exhibition halls, conference applications based on virtual conference halls, escape-room games based on virtual secret rooms, and so on.
FIG. 1 is an architecture diagram of an augmented-reality-based information interaction system provided by an embodiment of the present disclosure.
As shown in FIG. 1, the augmented-reality-based information interaction system 100 includes at least a first client 11, a second client 12, a first server 13, and a second server 14 that are communicatively connected to one another. The first server 13 is the server that performs the back-end data processing of the interactive application; it is at least used to create and manage the virtual reality space and to process and send the interaction data uploaded by each client. The first client 11 is the client corresponding to the first user, the second client 12 is the client corresponding to the second user, and the interactive application runs in each client. The first server 13 may be implemented as an independent server or as a server cluster. The second server 14 is the server that performs address management and sharing of the virtual reality space. When the first server 13 is implemented as a server cluster, the second server 14 is further used to schedule an appropriate server from the cluster for the user.
Under the system architecture of FIG. 1, the overall flow of augmented-reality-based information interaction in an embodiment of the present disclosure is as follows:
S110. The first client 11 sends a request for creating a virtual reality space to the first server 13.
S120. The first server 13 queries, based on the user information of the first client 11, whether a virtual reality space corresponding to the first client 11 exists among the historical virtual reality spaces. If so, the information of the queried historical virtual reality space is sent to the first client 11; if not, a new virtual reality space is created and its information is sent to the first client 11.
S130. The first client 11 sends the information of its virtual reality space to the second server 14, so that the second server 14 sends the information of the virtual reality space to the second client 12.
S140. When the first server 13 is implemented as a server cluster, the first client 11 and the second client 12 each send, based on the information of the virtual reality space, a request to the second server 14 to enter the same virtual reality space. The second server 14 schedules an appropriate server for each client according to the load of each server in the cluster, and sends the scheduled server information to the first client 11 and the second client 12 respectively, so that the first client 11 and the second client 12 enter the same virtual reality space.
When the first server 13 is implemented as an independent server, the first client 11 and the second client 12 each send, based on the information of the virtual reality space, a request to the first server 13 to enter the same virtual reality space. The first server 13 sends its server information to the first client 11 and the second client 12, so that the first client 11 and the second client 12 enter the same virtual reality space.
S150. The first user and the second user each perform interactive operations in the virtual reality space. After detecting the corresponding interactive operations, the first client 11 and the second client 12 generate first interaction data and second interaction data respectively. The first client 11 sends the first interaction data to the first server 13, and the second client 12 sends the second interaction data to the first server 13.
S160. The first server 13 transparently forwards the first interaction data and the second interaction data to the first client 11 and the second client 12.
S170. The first client 11 and the second client 12 each call their 3D physics engine to process the first interaction data and the second interaction data, perform rendering, and generate and display the interactive rendering result.
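As an illustration only, and not part of the disclosed embodiments, the relay behavior of steps S150 to S170 can be sketched in a few lines of Python; the class and method names (`InteractionServer`, `submit`, `broadcast`) are invented for this sketch:

```python
class InteractionServer:
    """Minimal sketch of the first server transparently forwarding
    interaction data to every client sharing one virtual reality space."""

    def __init__(self):
        self.clients = []  # clients sharing the space
        self.inbox = []    # interaction data received this round

    def join(self, client):
        self.clients.append(client)

    def submit(self, interaction_data):
        # S150: a client uploads the interaction data it detected.
        self.inbox.append(interaction_data)

    def broadcast(self):
        # S160: forward all collected data to every client unchanged.
        for client in self.clients:
            client.received.extend(self.inbox)
        self.inbox = []


class Client:
    def __init__(self, name):
        self.name = name
        self.received = []  # data later fed to the 3D physics engine (S170)


server = InteractionServer()
first, second = Client("first"), Client("second")
server.join(first)
server.join(second)

server.submit({"object": "first", "op": "move", "dx": 1.0})
server.submit({"object": "second", "op": "wave"})
server.broadcast()

# Both clients now hold the same two pieces of interaction data.
assert first.received == second.received and len(first.received) == 2
```

Because both clients feed an identical list of interaction data to the same kind of 3D physics engine, their rendered results match.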
It should be noted that the terms involved in the above overall flow and the implementation of the specific operations will be described in the subsequent embodiments.
The augmented-reality-based information interaction method applied to the first client provided by the embodiments of the present disclosure is first described below with reference to FIGS. 2 to 5.
In the embodiments of the present disclosure, the method may be performed by an augmented-reality-based information interaction apparatus configured on the first client. The apparatus may be implemented in software and/or hardware, and may be integrated in an electronic device having position tracking and display functions. The electronic device may include, but is not limited to, mobile terminals such as smartphones, PDAs (personal digital assistants), PADs (tablet computers), and wearable devices.
FIG. 2 shows a schematic flowchart of an augmented-reality-based information interaction method applied to a first client provided by an embodiment of the present disclosure. As shown in FIG. 2, taking the first client as an example, the augmented-reality-based information interaction method applied to the first client may include the following steps:
S210. In response to an interactive operation of a first virtual object in a virtual reality space, generate first interaction data and send the first interaction data to a first server.
Here, a virtual object refers to the virtual character of a user in the virtual reality space, and the first virtual object refers to the virtual object corresponding to the first user. In some embodiments, the first virtual object is constructed based on the character attribute information of the first user, which may include height, gender, hairstyle, clothing, and the like. The first user may input the character attribute information through the electronic device; or capture an image of himself or herself through the camera of the electronic device and obtain the character attribute information by performing target recognition or other processing on the image; or scan his or her body with the radar sensor of the electronic device to generate point cloud data and obtain the character attribute information by processing the point cloud data. The electronic device may then upload the character attribute information to the first server, which uses it to construct a three-dimensional character model and obtain the first virtual object. This strengthens the connection between the user and the virtual reality space, improves the visual effect for the user, and further improves the user experience.
The virtual reality space refers to a virtual space generated based on a real environment (for example, a room), and has the same structure, layout, and so on as the real environment. In some embodiments, the virtual reality space is constructed based on the real space in which the first virtual object is located. For example, the first user may perform a three-dimensional scan of the real space (such as a room) with the radar sensor of the electronic device to generate point cloud data, and upload the point cloud data to the first server, which processes the point cloud data to obtain the virtual reality space. As another example, some virtual reality spaces are preset in the first server, and the first user may select a preset virtual reality space whose structure, layout, and so on are the same as or similar to those of the real space in which the user is located.
Interaction data refers to data generated by an interactive operation, for example, the position change and movement speed produced by a movement operation. The first interaction data refers to the interaction data corresponding to the first virtual object.
Specifically, after the first user logs in to the client on the electronic device (unless otherwise specified, in the method embodiments applied to the first client, the electronic device refers to the electronic device corresponding to the first client), the electronic device may display the information of each virtual reality space. The first user may perform a trigger operation (such as a click, gesture-controlled trigger, voice-controlled trigger, or eye-movement-controlled trigger) on the virtual reality space information corresponding to the virtual reality space the user wants to enter. After detecting the user's trigger operation, the electronic device displays the virtual reality space corresponding to the triggered virtual reality space information.
For example, a room list page 301 is displayed in the electronic device 300 of FIG. 3, and the information of each virtual reality space is displayed on the room list page 301, namely "X1's room" 302, "X2's room" 303, "X3's room" 304, and "My room" 305. When the first user clicks "My room" 305, the electronic device 300 displays the virtual reality space shown in FIG. 4. In FIG. 4, the electronic device 400 displays the virtual reality space corresponding to "My room". In addition, the electronic device 400 may also display a return-to-room-list control 401, and the first user can return to the room list page 301 shown in FIG. 3 by triggering the return-to-room-list control 401.
Based on augmented reality technology, the first user can carry the electronic device and perform some interactive operations. A client developed based on augmented reality technology contains logic for detecting interactive operations such as the user's position, posture, gestures, and touch operations on the screen, so an electronic device on which such a client is installed can detect interactive operations according to that logic. On this basis, when the first user performs an interactive operation, the electronic device can detect the interactive operation of the first virtual object in the virtual reality space, generate the first interaction data according to the detected interactive operation, and then upload the first interaction data to the first server for sharing.
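As a minimal, hypothetical sketch of packaging a detected operation as interaction data to be uploaded (the field names are assumptions, not the patent's data format):

```python
def make_interaction_data(virtual_object_id, operation, **attributes):
    """Package a detected interactive operation as interaction data
    that the client can upload to the first server."""
    return {"object": virtual_object_id, "op": operation, **attributes}

# A detected movement operation of the first virtual object.
data = make_interaction_data("first", "move", dx=0.5, dy=0.0, speed=1.2)
assert data["op"] == "move" and data["speed"] == 1.2
```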
S220. Receive second interaction data, corresponding to a second virtual object, sent by the first server, where the second virtual object shares the virtual reality space with the first virtual object.
Here, the second virtual object refers to the virtual object corresponding to the second user. In some embodiments, the second virtual object is constructed based on the character attribute information of the second user; for its construction, refer to the description of constructing the first virtual object in S210. The second interaction data is the interaction data generated by the interactive operation of the second virtual object in the virtual reality space.
Specifically, after the second user selects the virtual reality space selected by the first user, the first virtual object and the second virtual object exist in the virtual reality space at the same time.
In the same way as the first client, the electronic device corresponding to the second client also detects the interactive operation of the second user, generates second interaction data based on the interactive operation, and uploads the second interaction data to the first server for sharing.
After receiving the first interaction data and the second interaction data, and after determining that there is no intersection (such as duplication or conflict) between them, the server transparently forwards the first interaction data and the second interaction data to the electronic device. The electronic device thus receives both the first interaction data and the second interaction data; that is, the first client obtains the interaction data produced by the interactive operations of both the first virtual object and the second virtual object in the virtual reality space, and therefore has the data basis for displaying the interaction process of each virtual object according to each user's real interactive operation.
It should be understood that the second client likewise receives the first interaction data and the second interaction data transparently forwarded by the first server.
In some embodiments, in order for the first user and the second user to share the virtual reality space, before S220 the electronic device sends the space address of the virtual reality space to the second server, so that the second server sends the space address to the second client corresponding to the second virtual object and, in response to a space sharing operation of the second client, schedules a target server corresponding to the first server for the second client.
Specifically, according to the above description, the second user may select, through his or her electronic device, the same virtual reality space as the first user. Before this operation, the electronic device first sends the space address of its virtual reality space to the second server. When the second server receives and stores the space address, it may forward the space address to the electronic device corresponding to the second client according to the space authorization information sent by the first client (for example, the virtual reality space is visible to everyone, or visible to friends).
Then, after the second user logs in to the second client, his or her electronic device can also display a room list page on which the virtual reality space information corresponding to the first user is displayed. The second user may perform a trigger operation on the virtual reality space information (i.e., a space sharing operation) to request to join the virtual reality space corresponding to the first user. The electronic device corresponding to the second user sends the relevant information of the space sharing operation to the second server. The second server schedules, for the second client, a server with a suitable load corresponding to the first server (i.e., a target server), and sends the server information of the target server to the second client. The second client can then connect to the target server based on the server information to enter the virtual reality space corresponding to the first user. This ensures that the first client and the second client share the same virtual reality space, reduces the latency of data transmission, and improves the efficiency of exchanging the first interaction data and the second interaction data.
S230. Based on the first interaction data and the second interaction data, call a physics engine to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space, and generate and display an interactive rendering result.
Here, the physics engine is used to compute the motion interaction and dynamics between objects and the scene, between objects and virtual objects, and between objects in a two-dimensional or three-dimensional scene; it uses object properties (momentum, torque, or elasticity) to simulate rigid-body behavior. The physics engine in the embodiments of the present disclosure refers to a three-dimensional (i.e., 3D) physics engine, which is used to simulate the rigid-body behavior of virtual objects and virtual items in a three-dimensional scene.
Specifically, the electronic device calls the physics engine to perform rigid-body motion simulation on the first virtual object and its first interaction data and on the second virtual object and its second interaction data, calls the rendering engine to render the processing result of the physics engine, and generates the interactive rendering result. The electronic device then displays the interactive rendering result, which presents the first virtual object and the second virtual object carrying out, in the virtual reality space, the same interaction process as the real interaction.
Through the above augmented-reality-based information interaction method applied to the first client in the embodiments of the present disclosure, on the basis that the first virtual object corresponding to the first user and the second virtual object corresponding to the second user share the same virtual reality space, first interaction data can be generated in response to an interactive operation of the first virtual object in the virtual reality space and sent to the first server, and second interaction data corresponding to the second virtual object can be received from the first server, so that the first interaction data and the second interaction data are exchanged between the client of the first user and the client of the second user. Each client can therefore call the 3D physics engine based on the same first interaction data and second interaction data to render the interactive operations corresponding to the first virtual object and the second virtual object respectively, and generate and display the interactive rendering result. This realizes an interaction process based on actual interactive operations between different users in the virtual reality space, improves the degree of integration between the virtual world and the real world in augmented-reality-based interactive applications, and thereby improves the user experience.
In some embodiments, the above interaction data is target physics engine state information. Physics engine state information refers to the state-related information of rigid-body motion generated by the physics engine based on a virtual object or virtual item and its interactive operation; it contains not only explicit data that can be obtained directly but also implicit data produced by simulating the rigid-body motion of the item or object. For example, for a falling virtual ball, the physics engine state information may include the position and speed of the falling ball, the direction in which it touches the ground, its elasticity, and so on. For a collision of virtual objects, the physics engine state information may include the direction and magnitude of the displacement after the collision, the displacement speed, and so on. Target physics engine state information refers to the physics engine state information corresponding to the virtual object after the user performs an interactive operation. In this way, the target physics engine state information can be uploaded directly to the first server, and the electronic devices corresponding to the first client and the second client can directly receive it for rendering. This avoids the differences between the simulation results of each client that would arise if each client separately called its physics engine to simulate the explicit interaction data, and thereby further improves the consistency of the interactive rendering results in the first client and the second client.
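The distinction between explicit and implicit quantities can be illustrated with a hypothetical data structure; the field names are invented for this sketch and are not the patent's data format:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicsEngineState:
    """Sketch of one piece of physics engine state information:
    explicit quantities plus implicit quantities produced by simulation."""
    object_id: str
    position: tuple   # explicit: where the body is
    velocity: tuple   # explicit: how fast it moves
    implicit: dict = field(default_factory=dict)  # e.g. contact normal, restitution

# State of a falling virtual ball, including simulated contact data.
ball = PhysicsEngineState(
    object_id="ball",
    position=(0.0, 0.0, 1.5),
    velocity=(0.0, 0.0, -3.2),
    implicit={"contact_normal": (0.0, 0.0, 1.0), "restitution": 0.6},
)
assert ball.implicit["restitution"] == 0.6
```

Uploading such a state directly, rather than only the raw operation, lets every client render from identical simulation results.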
In some embodiments, the target physics engine state information includes historical physics engine state information and current physics engine state information. The current physics engine state information is the physics engine state information at the current moment, that is, the physics engine state information produced by the interactive operation occurring at the current moment. The historical physics engine state information refers to the physics engine state information at moments before the current moment, that is, produced by interactive operations occurring before the current moment. The first interaction data uploaded by the electronic device to the first server is then the current physics engine state information and at least one piece of historical physics engine state information. In this way, the first server can determine, from the previous interactive operations of the first virtual object and the second virtual object, the respective states of the two virtual objects before the current interactive operation occurred, and thus determine whether the current interactive operations of the first virtual object and the second virtual object involve duplication, conflict, or the like, and then decide whether to fuse the target physics engine state information uploaded by each. This avoids inconsistent interaction results in the two clients in some special cases; it not only completes the implementation logic for the continuous processing of interactive operations but also further improves the consistency of the interactive rendering results in the first client and the second client.
In some embodiments, according to the above description, the first server transparently forwards the first interaction data and the second interaction data to the first client and the second client when it determines that there is no intersection between them. When there is an intersection between the first interaction data and the second interaction data, the first server needs to process the two pieces of interaction data first, and then deliver the processed interaction data to each client.
Based on the above, after the first interaction data is sent to the first server, the augmented-reality-based information interaction method applied to the first client further includes: receiving first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object, sent by the first server.
Here, fused physics engine state information refers to the result of integrating at least two pieces of physics engine state information (for example, deduplication or conflict handling). In the embodiments of the present disclosure, the fused physics engine state information is obtained by integrating the first target physics engine state information and the second target physics engine state information. The information integration process is described in detail in the subsequent embodiments.
Specifically, consider the case where there is an intersection between the first target physics engine state information and the second target physics engine state information. For example, when the electronic devices corresponding to both clients detect the same interactive operation of the first virtual object, there will be duplicated content in the first target physics engine state information and the second target physics engine state information. As another example, if the first virtual object performs an interactive operation of pulling the second virtual object while the second virtual object performs an interactive operation of leaving the virtual reality space, the two interactive operations cannot form an interaction process, and the first target physics engine state information and the second target physics engine state information contain conflicting content. The first server analyzes the two pieces of target physics engine state information to determine that there is an intersection between them, and then integrates them. For example, for the above pulling-and-leaving case, the first server may process the first target physics engine state information into first fused physics engine state information in which the pulling action occurs but the pulled virtual object does not move forward or backward, while keeping the action of the second virtual object leaving the virtual reality space, that is, taking the second target physics engine state information as the second fused physics engine state information. The first server then delivers the first fused physics engine state information and the second fused physics engine state information to the electronic device and to the electronic device corresponding to the second client, and the corresponding electronic devices receive the two pieces of fused physics engine state information.
Accordingly, S230 is implemented as: calling the physics engine, based on the first fused physics engine state information and the second fused physics engine state information, to render the interactive operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying the interactive rendering result. That is, the electronic device performs rendering using the two pieces of fused physics engine state information, obtaining an interactive rendering result that better fits the interaction scenario. This further improves the interaction consistency between the virtual objects and thereby further improves the user experience.
In some embodiments, when the interactive operation targets an item in the virtual reality space, S210 may be implemented as: in response to an interactive operation of the first virtual object on a virtual item in the virtual reality space, displaying an object property setting interface; and, in response to an input operation on the object property setting interface, obtaining object operation attribute information of the virtual item and generating the first interaction data based on the object operation attribute information.
Specifically, the first user performs an interactive operation on an item in the virtual reality space. After detecting the interactive operation, the electronic device may display an interface for setting item properties (i.e., the object property setting interface). The first user may input the value of each property in the object property setting interface. After detecting the first user's input operation, the electronic device obtains the property values input by the user (i.e., the object operation attribute information) and then generates the first interaction data according to the object operation attribute information.
In one example, the above interactive operation on a virtual item in the virtual reality space is a furniture adding operation on virtual furniture, and the process of generating the first interaction data is: in response to a furniture adding operation of the first virtual object for adding virtual furniture in the virtual reality space, displaying a furniture property setting interface of the virtual furniture; and, in response to an input operation on the furniture property setting interface, obtaining furniture adding attribute information of the virtual furniture and generating the first interaction data based on the furniture adding attribute information.
Specifically, continuing to refer to FIG. 4, the electronic device 400 may also display a furniture adding control 402 in the virtual reality space. After the first user triggers (e.g., clicks) the furniture adding control 402, the furniture options that can be added appear in the interface of the virtual reality space, as shown in FIG. 5. In FIG. 5, the electronic device 500 displays a virtual reality space page 501, and furniture options 502 are displayed on the virtual reality space page 501, including furniture icon controls such as a stool, a piano, a microwave oven, and a coffee machine. The first user may trigger the furniture icon control he or she wants to add, and the electronic device presents the furniture property setting interface. The furniture property setting interface may be an interface that provides property fields and corresponding input boxes, or an interactive three-dimensional object model that provides functions such as drag-and-drop movement and size modification. Through the furniture property setting interface, the first user inputs furniture adding attribute information such as the position, size, style, and color of the furniture to be added. After receiving the furniture adding attribute information, the electronic device generates the corresponding first interaction data. This allows the user to add furniture to the virtual reality space according to the real environment the user is in or the user's preferences, improving the operability of the virtual reality space for the user and thereby increasing the enjoyment.
In another example, the above interactive operation on a virtual item in the virtual reality space is a furniture removal operation on virtual furniture. The process of generating the first interaction data is then: in response to a furniture removal operation of the first virtual object on target virtual furniture in the virtual reality space, generating the first interaction data.
Specifically, after the first user triggers a certain piece of virtual furniture (i.e., the target virtual furniture) displayed by the electronic device on the virtual reality space page, the electronic device may display a furniture removal control around the target virtual furniture. When the first user triggers the furniture removal control, the electronic device detects the furniture removal operation of the first virtual object on the target virtual furniture, then deletes the relevant data of the target virtual furniture from the virtual reality space and generates the first interaction data. This allows the user to remove certain furniture from the virtual reality space according to the real environment the user is in or the user's preferences, which also improves the operability of the virtual reality space for the user and thereby increases the enjoyment.
In yet another example, the above interactive operation on a virtual item in the virtual reality space is a furniture modification operation on virtual furniture, and the process of generating the first interaction data is: in response to a furniture modification operation of the first virtual object on target virtual furniture in the virtual reality space, displaying the furniture property setting interface; and, in response to an input operation on the furniture property setting interface, obtaining furniture modification attribute information of the target virtual furniture and generating the first interaction data based on the furniture modification attribute information.
Specifically, after the first user triggers the target virtual furniture displayed by the electronic device on the virtual reality space page, the electronic device may also display a furniture modification control around the target virtual furniture. When the first user triggers the furniture modification control, the electronic device detects the furniture modification operation of the first virtual object on the target virtual furniture. The electronic device then displays the furniture property setting interface on top of the virtual reality space page. Through the furniture property setting interface, the first user inputs the modified attribute information of certain properties of the target virtual furniture (i.e., the furniture modification attribute information), such as the position, size, style, and color of the furniture. After receiving the furniture modification attribute information, the electronic device generates the corresponding first interaction data. This allows the user to modify the furniture information in the virtual reality space according to the real environment the user is in or the user's preferences, which also improves the operability of the virtual reality space for the user and thereby increases the enjoyment.
It should be noted that if the second client implements this augmented-reality-based information interaction method, then, as a visitor to the virtual reality space, it does not have the permissions for adding, modifying, and deleting furniture described in these embodiments; that is, the augmented-reality-based information interaction method applied to the second client does not have the functions of these embodiments.
In yet another example, the above interactive operation on a virtual item in the virtual reality space is an item adding operation for a non-furniture virtual item, and the process of generating the first interaction data is: in response to an item adding operation of the first virtual object for adding a non-furniture virtual item to the virtual reality space, displaying an item property setting interface of the virtual item; and, in response to an input operation on the item property setting interface, obtaining item adding attribute information of the virtual item and generating the first interaction data based on the item adding attribute information.
Specifically, continuing to refer to FIG. 4, the electronic device 400 may also display, in the virtual reality space, an item adding control 403 for non-furniture virtual items. After the first user triggers (e.g., clicks) the item adding control 403, the electronic device detects the item adding operation corresponding to the first virtual user and then displays the item options 404 that can be added in the interface of the virtual reality space. The item options 404 may include item icon controls such as flowers, a paper ball, and toys (not shown in FIG. 4). The first user may trigger the item icon control he or she wants to add, and the electronic device presents the item property setting interface to prompt the first user to input some attribute information of the virtual item (i.e., the item adding attribute information), for example, a message left for a virtual object in the virtual reality space, and the display time and display duration of the virtual item. After the first user inputs the item adding attribute information through the item property setting interface, the electronic device receives the item adding attribute information and then generates the corresponding first interaction data. This allows users to place gifts and other items in the virtual reality space, possibly with attached messages, further enhancing the interactivity of different users in the same virtual reality space and thereby further increasing the enjoyment.
It should be noted that the item adding operation for a non-furniture virtual item may also be an interactive operation of the second user; that is, the augmented-reality-based information interaction method applied to the second client has the functions of this embodiment.
On the basis of the above embodiment of adding a non-furniture virtual item to the virtual reality space, the augmented-reality-based information interaction method applied to the first client further includes: after the interactive rendering result is displayed, and when the item adding attribute information contains message information, displaying the message information of the virtual item in response to an interactive operation of the first virtual object on the virtual item in the virtual reality space.
Specifically, after a non-furniture virtual item is added to the virtual reality space and rendered for display, the first user can see the virtual item and can perform interactive operations on it such as touching it or picking it up. After detecting such an interactive operation, the electronic device can display the message information corresponding to the virtual item. This enables the interaction of message information and achieves the effect of reminding or prompting other users with messages, further improving the consistency between the interaction process of different users in the virtual reality space and the interaction process of real users, thereby further improving the user experience.
In yet another example, referring to FIG. 4, the electronic device 400 may also display, on the page of the virtual reality space, a text control 405 for leaving messages in text form and/or a voice control 406 for leaving messages or interacting in voice form. The first user may leave a text message by triggering the text control 405, or leave a voice message or carry out voice interaction by triggering the voice control 406. The above text messages, voice messages, and interactive voice can all be generated as first interaction data for data exchange between clients and for rendering and display. This further enriches the ways users can interact in the virtual reality space, thereby further increasing the enjoyment and the user experience.
It should be noted that the above examples only illustrate the generation of first interaction data by some active interactive operations of a virtual object in the virtual reality space. First interaction data can also be generated for passive collision or contact interactions of a virtual object with furniture, items, and so on in the virtual reality space (for example, the virtual object bumping into virtual furniture while moving). That is, whenever any virtual object, virtual furniture, or virtual item in the virtual reality space changes, first interaction data can be generated and the subsequent steps performed for rendering and display, so as to realize the dynamic process of the virtual reality space and its contents.
In some embodiments, the information communication between each client and the first server is implemented based on remote procedure call technology.
Specifically, according to the above description, both the active and passive interactive operations of a virtual object in the virtual reality space need to generate interaction data and be synchronized, and many users may participate in the interaction of the same virtual reality space. The communication between each client and the server is therefore very frequent and carries a large amount of data, and an independent, abstract virtual object needs to be developed for each user and his or her interactive operations. In this case, in order to improve development efficiency and communication efficiency, so that the sharing of interaction data between clients and the display of interactive rendering results can be performed in real time or near real time, the embodiments of the present disclosure adopt a communication framework that is stable, has low latency, and can reasonably perform abstract encapsulation of objects, such as a communication framework related to remote procedure call (RPC).
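As a rough illustration of the RPC-style exchange only (this is not any specific framework's API; all names are invented for the sketch), a client-side call to a named remote procedure might look like:

```python
class RpcChannel:
    """Toy stand-in for an RPC channel: the client calls a named remote
    method, and the registered server-side handler runs."""

    def __init__(self):
        self.handlers = {}

    def register(self, name, fn):
        self.handlers[name] = fn

    def call(self, name, *args):
        # In a real RPC framework, args would be serialized and sent
        # over the network; here the handler is invoked directly.
        return self.handlers[name](*args)


received = []
channel = RpcChannel()
channel.register("sync_interaction", lambda data: received.append(data) or "ok")

# The client uploads interaction data through a remote procedure call.
reply = channel.call("sync_interaction", {"object": "first", "op": "move"})
assert reply == "ok" and received[0]["op"] == "move"
```

The appeal of RPC here is exactly this shape: each client invokes what looks like a local method, while the framework handles serialization, transport, and dispatch.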
An embodiment of the present disclosure further provides an augmented-reality-based information interaction method applied to a first server. The method may be performed by an augmented-reality-based information interaction apparatus configured on the first server. The apparatus may be implemented in software and/or hardware, and may be integrated in an electronic device with large data processing capacity, which may include, but is not limited to, a laptop computer, a desktop computer, a server, and the like.
FIG. 6 shows a schematic flowchart of an augmented-reality-based information interaction method applied to a first server provided by an embodiment of the present disclosure. Descriptions of terms and steps in the embodiments of this method that are the same as or similar to those in the above embodiments will not be repeated. As shown in FIG. 6, the augmented-reality-based information interaction method applied to the first server may include the following steps:
S610. Receive first interaction data and second interaction data respectively, where the first interaction data is generated by an interactive operation of a first virtual object in a virtual reality space, the second interaction data is generated by an interactive operation of a second virtual object in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space.
Specifically, the first server may receive the first interaction data and the second interaction data from the first client and the second client.
S620. Send the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client, based on the first interaction data and the second interaction data, each call a physics engine to render the interactive operations and generate and display an interactive rendering result.
Specifically, when the first server determines that there is no intersection between the first interaction data and the second interaction data, it synchronously and transparently forwards the first interaction data and the second interaction data to the first client and the second client, so that the first client and the second client can perform rigid-body motion simulation and rendering based on the same interaction data.
With the above augmented-display-based information interaction method applied to the first server provided by the embodiments of the present disclosure, the first server can act as a bridge for exchanging interaction data between the first client and the second client: it aggregates the first interaction data and the second interaction data generated by the first client and the second client corresponding to the shared virtual reality space, and delivers the first interaction data and the second interaction data to the first client and the second client respectively, so that the first client and the second client call the 3D physics engine based on the same interaction data to perform interactive rendering and display the same interactive rendering result. This realizes an interaction process based on actual interactive operations between different users in the virtual reality space, improves the degree of integration between the virtual world and the real world in augmented-reality-based interactive applications, and thereby improves the user experience.
In some embodiments, after the first interaction data and the second interaction data are received respectively, the augmented-reality-based information interaction method applied to the first server further includes: when the interaction data is target physics engine state information and it is determined that there is an intersection between the first target physics engine state information and the second target physics engine state information, generating, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object.
Specifically, consider the case where the first server determines that there is an intersection between the first target physics engine state information and the second target physics engine state information. For example, when the electronic devices corresponding to both clients detect the same interactive operation of the first virtual object, there will be duplicated content in the two pieces of target physics engine state information. As another example, if the first virtual object performs an interactive operation of pulling the second virtual object while the second virtual object performs an interactive operation of leaving the virtual reality space, the two interactive operations cannot form an interaction process, and the two pieces of target physics engine state information contain conflicting content. As yet another example, if the first virtual object and the second virtual object both perform interactive operations on the same virtual item (such as a football), there will be overlapping content in the two pieces of target physics engine state information. The first server integrates the first target physics engine state information and the second target physics engine state information according to the specific interactive operations, and generates the first fused physics engine state information and the second fused physics engine state information.
Accordingly, S620 may be implemented as: sending the first fused physics engine state information and the second fused physics engine state information to the first client and the second client. That is, the data synchronously delivered by the first server to the first client and the second client is the first fused physics engine state information and the second fused physics engine state information. This further improves the interaction consistency between the virtual objects and the consistency between the interaction process of the virtual objects and the real interaction process, thereby further improving the user experience.
In one example, generating the fused physics engine state information may be implemented as: according to the preset priorities corresponding to the first virtual object and the second virtual object, generating the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information.
Here, the preset priority is an interaction priority set in advance for each virtual object in the same virtual reality space; the higher the preset priority, the earlier its interactive operation is responded to. The preset priority may be set according to the user's permissions in the virtual reality space, for example, the preset priority of the first user (such as the room owner) is higher than that of the second user (such as a room visitor). The preset priority may also be set according to the order in which users enter the virtual reality space, for example, the virtual object corresponding to a user who entered the virtual reality space earlier has a higher preset priority.
Specifically, the preset priority of each virtual object is set in advance in the first server. When the first server determines that there is an intersection between the first target physics engine state information and the second target physics engine state information, the first server retains the target physics engine state information with the higher preset priority and modifies the other piece of target physics engine state information according to the retained one. The technical solution of this example is applicable to the case where the first virtual object and the second virtual object both perform interactive operations on the same virtual item (such as a football).
For example, when the first virtual object and the second virtual object both perform a kicking operation on the same football and the preset priority of the first virtual object is higher than that of the second virtual object, the first server directly determines the first target physics engine state information as the first fused physics engine state information, and then modifies the second target physics engine state information accordingly based on the first fused physics engine state information to generate the second fused physics engine state information. In this way, the first virtual object can kick the football according to the force and direction of its kick, while the effect of the second virtual object's kicking operation will be far smaller than that of the first virtual object's kicking operation.
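The priority-based fusion in this example can be sketched as follows; the attenuation factor of 0.1, the `kick_force` field, and the function name are arbitrary illustrations of the lower-priority kick having far less effect, not the patent's implementation:

```python
def fuse_by_priority(first_state, second_state, first_priority, second_priority):
    """Keep the higher-priority state unchanged and attenuate the
    conflicting effect in the lower-priority one (here: a kick force)."""
    if first_priority >= second_priority:
        kept, adjusted = dict(first_state), dict(second_state)
    else:
        kept, adjusted = dict(second_state), dict(first_state)
    adjusted["kick_force"] *= 0.1  # lower-priority kick has far less effect
    return kept, adjusted

first = {"object": "first", "kick_force": 10.0}
second = {"object": "second", "kick_force": 8.0}
kept, adjusted = fuse_by_priority(first, second, first_priority=2, second_priority=1)
assert kept["kick_force"] == 10.0 and adjusted["kick_force"] == 0.8
```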
In another example, generating the fused physics engine state information may be implemented as: when the first virtual object and the second virtual object perform interactive operations having an interaction order, generating the first fused physics engine state information and the second fused physics engine state information according to the interaction order, based on the first target physics engine state information and the second target physics engine state information.
Specifically, if the first virtual object and the second virtual object carry out continuous interactive operations with an interaction order in the virtual reality space, the validity of interactive operations triggered by the two virtual objects at the same time can be determined according to the interaction order.
For example, if the first virtual object and the second virtual object are playing a board game in the virtual reality space, the interaction order is determined by the order of moves. If, after the first virtual object has made a move, the first virtual object and the second virtual object then make moves at the same time, the first server can determine that the next move should be made by the second virtual object. The first server can then directly determine the second target physics engine state information as the second fused physics engine state information and set the first target physics engine state information as invalid. In this case, the first server may take the first target physics engine state information as the main information of the first fused physics engine state information while deleting the information about placing the piece, and add to it prompt information indicating that the first virtual object's move is invalid and will not be executed. The first server may also simply ignore the first target physics engine state information and set the first fused physics engine state information to remain unchanged. This ensures that interactive games in the virtual reality space keep the same game rules and interaction effects as real interactive games.
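The turn-order rule in this example can be sketched as a simple validity check; the move format and function name are invented for this sketch:

```python
def fuse_by_turn(moves, expected_player):
    """Accept only the move from the player whose turn it is; moves
    submitted out of turn are marked invalid and not applied."""
    accepted, rejected = [], []
    for move in moves:
        if move["player"] == expected_player:
            accepted.append(move)
        else:
            rejected.append({**move, "valid": False})
    return accepted, rejected

# Both players place a piece at the same time, but it is "second"'s turn.
moves = [{"player": "first", "pos": (3, 4)}, {"player": "second", "pos": (5, 5)}]
accepted, rejected = fuse_by_turn(moves, expected_player="second")
assert accepted == [{"player": "second", "pos": (5, 5)}]
assert rejected[0]["valid"] is False
```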
In yet another example, generating the fused physics engine state information may be implemented as: generating the first fused physics engine state information and the second fused physics engine state information based on the values of identical state quantities, or the priorities of different state quantities, in the first target physics engine state information and the second target physics engine state information.
Specifically, the first server may set priorities in advance for the state quantities in the physics engine state information of interaction operations. Then, when the interaction operations of the first virtual object and the second virtual object trigger different state quantities, the first server can generate the first fused physics engine state information and the second fused physics engine state information according to the priorities of those state quantities.
For example, suppose the first virtual object performs an operation of dragging the second virtual object while the second virtual object performs an operation of leaving the virtual reality space, and the priority of the state quantity of the leaving operation is higher than that of the state quantity of interacting within the virtual reality space. The first server may directly determine the second virtual object's second target physics engine state information as the second fused physics engine state information, ensuring that the second user exits the virtual reality space normally. The first server then modifies the first target physics engine state information under the condition that the second virtual object is absent. For example, the first server may process the first target physics engine state information into first fused physics engine state information that contains the dragging action but none of the forward or backward movement that would follow the drag. This makes the interaction of the virtual objects in the virtual reality space better conform to actual interaction logic, further improving the validity and realism of virtual object interaction.
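A minimal sketch of fusion by state-quantity priority follows. The priority table, the `follow_through` flag, and all names are assumptions invented for the sketch; the disclosure only specifies that the higher-priority operation takes effect while the other loses its follow-through effect.

```python
# Assumed priority table: leaving the space outranks in-space interaction.
STATE_PRIORITY = {"leave_space": 2, "drag": 1}

def fuse_by_state_priority(first_state: dict, second_state: dict) -> tuple:
    """Return (first_fused, second_fused). The operation whose state
    quantity has the higher priority takes effect unchanged; the other
    keeps its animation but loses its follow-through effect (the drag
    happens, but nothing is pulled once the target has left)."""
    if STATE_PRIORITY[second_state["kind"]] > STATE_PRIORITY[first_state["kind"]]:
        return dict(first_state, follow_through=False), second_state
    return first_state, dict(second_state, follow_through=False)

# A drag colliding with a leave operation: the leave wins.
f1, f2 = fuse_by_state_priority({"kind": "drag", "follow_through": True},
                                {"kind": "leave_space"})
print(f1["follow_through"])  # False
```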
In addition, when the interaction operations of the first virtual object and the second virtual object do not trigger a priority comparison between different state quantities, if the two interaction operations share an identical state quantity, the first server may generate the first fused physics engine state information and the second fused physics engine state information according to the relative magnitudes of the values of that state quantity in the two sets of target physics engine state information.
For example, suppose the first virtual object and the second virtual object both perform a kicking operation on the same football, and the kicking force of the first virtual object is greater than that of the second virtual object. Following the actual law of motion, in which the ball's movement mostly matches the stronger kick of the first virtual object while still being influenced by the second virtual object's kick, the first server performs a combined computation on state quantities such as kicking force and kicking direction in the first and second target physics engine state information, generating the first fused physics engine state information and the second fused physics engine state information. This keeps the interaction operations in the virtual reality space consistent with the laws of motion of actual interaction, further improving the validity and realism of virtual object interaction.
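One way to realize the combined computation above is a vector sum of the two kicks, so the stronger kick dominates the resulting motion while the weaker one still deflects it. The vector-sum model and the force/angle representation are illustrative assumptions; the disclosure does not prescribe a specific formula.

```python
import math

def fuse_kicks(first_kick: dict, second_kick: dict) -> dict:
    """Combine two simultaneous kicks on the same ball as a vector sum
    of their force vectors, returning the resulting force magnitude and
    direction (angle in radians)."""
    fx = (first_kick["force"] * math.cos(first_kick["angle"])
          + second_kick["force"] * math.cos(second_kick["angle"]))
    fy = (first_kick["force"] * math.sin(first_kick["angle"])
          + second_kick["force"] * math.sin(second_kick["angle"]))
    return {"force": math.hypot(fx, fy), "angle": math.atan2(fy, fx)}

# A 30-unit kick along +x and a 10-unit kick along +y:
fused = fuse_kicks({"force": 30.0, "angle": 0.0},
                   {"force": 10.0, "angle": math.pi / 2})
print(round(fused["force"], 2))  # 31.62
```

The fused direction stays close to the stronger kick's direction, matching the described behavior.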
FIG. 7 is a schematic structural diagram of an augmented reality-based information interaction apparatus configured on a first client according to an embodiment of the present disclosure. As shown in FIG. 7, the apparatus 700 configured on the first client may include:
a first interaction data generation module 710, configured to generate first interaction data in response to an interaction operation of a first virtual object in a virtual reality space, and send the first interaction data to a first server;
a second interaction data receiving module 720, configured to receive second interaction data corresponding to a second virtual object sent by the first server, where the second virtual object shares the virtual reality space with the first virtual object; and
an interaction rendering result display module 730, configured to invoke a physics engine, based on the first interaction data and the second interaction data, to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and to generate and display an interaction rendering result.
Through the above augmented reality-based information interaction apparatus configured on the first client, on the basis that the first virtual object corresponding to the first user and the second virtual object corresponding to the second user share the same virtual reality space, first interaction data is generated in response to an interaction operation of the first virtual object in the virtual reality space and sent to the first server, and second interaction data corresponding to the second virtual object is received from the first server. The client of the first user and the client of the second user can thus exchange the first interaction data and the second interaction data, so that each client can invoke a 3D physics engine, based on the same first and second interaction data, to render the interaction operations corresponding to the first virtual object and the second virtual object respectively, generating and displaying an interaction rendering result. This enables different users to interact in the virtual reality space on the basis of their actual interaction operations, increases the degree of integration between the virtual world and the real world in augmented reality-based interactive applications, and thereby improves the user experience.
In some embodiments, the interaction data is target physics engine state information.
In some embodiments, the target physics engine state information includes historical physics engine state information and current physics engine state information.
In some embodiments, the apparatus 700 configured on the first client further includes a fused information receiving module, configured to:
after the first interaction data is sent to the first server, receive first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object sent by the first server, where the fused physics engine state information is obtained based on the first target physics engine state information and the second target physics engine state information.
Accordingly, the interaction rendering result display module 730 is specifically configured to:
invoke a physics engine, based on the first fused physics engine state information and the second fused physics engine state information, to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and to generate and display the interaction rendering result.
In some embodiments, the first interaction data generation module 710 is specifically configured to:
display an item property setting interface in response to an interaction operation performed by the first virtual object on a virtual item in the virtual reality space; and
obtain item operation property information of the virtual item in response to an input operation on the item property setting interface, and generate the first interaction data based on the item operation property information.
In some embodiments, the apparatus 700 configured on the first client further includes a space address sending module, configured to:
before the second interaction data corresponding to the second virtual object is received from the first server, send the space address of the virtual reality space to a second server, so that the second server sends the space address to a second client corresponding to the second virtual object and, in response to a space sharing operation of the second client, schedules the target server corresponding to the first server for the second client.
In some embodiments, the virtual reality space is constructed based on the real space in which the first virtual object is located, and the first virtual object and the second virtual object are constructed based on character attribute information of the first user and the second user respectively.
It should be noted that the apparatus 700 configured on the first client shown in FIG. 7 can perform the steps of the method embodiments shown in FIG. 2 to FIG. 5 and achieve the processes and effects of those method embodiments, which are not repeated here.
FIG. 8 is a schematic structural diagram of an augmented reality-based information interaction apparatus configured on a first server according to an embodiment of the present disclosure. As shown in FIG. 8, the apparatus 800 configured on the first server may include:
an interaction data receiving module 810, configured to receive first interaction data and second interaction data respectively, where the first interaction data is generated by a first virtual object performing an interaction operation in a virtual reality space, the second interaction data is generated by a second virtual object performing an interaction operation in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space; and
an interaction data sending module 820, configured to send the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client each invoke a physics engine, based on the first interaction data and the second interaction data, to render the interaction operations, generating and displaying an interaction rendering result.
Through the above augmented reality-based information interaction apparatus configured on the first server, the first server can serve as a bridge for exchanging interaction data between the first client and the second client: it aggregates the first and second interaction data generated by the first and second clients that share the virtual reality space, and delivers both sets of data to the two clients, so that they invoke a 3D physics engine to perform interaction rendering based on the same interaction data and display the same interaction rendering result. This enables different users to interact in the virtual reality space on the basis of their actual interaction operations, increases the degree of integration between the virtual world and the real world in augmented reality-based interactive applications, and thereby improves the user experience.
In some embodiments, the apparatus 800 configured on the first server further includes an information fusion module, configured to:
after the first interaction data and the second interaction data are received respectively, when the interaction data is target physics engine state information and it is determined that an intersection exists between the first target physics engine state information and the second target physics engine state information, generate, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object.
Accordingly, the interaction data sending module 820 is specifically configured to:
send the first fused physics engine state information and the second fused physics engine state information to the first client and the second client.
In some embodiments, the information fusion module is specifically configured to:
generate the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information, according to preset priorities corresponding to the first virtual object and the second virtual object;
or, when the first virtual object and the second virtual object perform interaction operations having an interaction order, generate the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information, according to the interaction order;
or, generate the first fused physics engine state information and the second fused physics engine state information based on the values of identical state quantities, or the priorities of different state quantities, in the first target physics engine state information and the second target physics engine state information.
It should be noted that the apparatus 800 configured on the first server shown in FIG. 8 can perform the steps of the method embodiment shown in FIG. 6 and achieve the processes and effects of that method embodiment, which are not repeated here.
An embodiment of the present disclosure further provides an electronic device, which may include a processor and a memory, the memory being configured to store executable instructions. The processor may be configured to read the executable instructions from the memory and execute them to implement the augmented reality-based information interaction method applied to the first client, or the augmented reality-based information interaction method applied to the first server, in any of the above embodiments.
In some embodiments, when performing functions such as generating interaction data and generating and displaying interaction rendering results, the electronic device may be the first client 11 or the second client 12 shown in FIG. 1. In other embodiments, when performing functions such as aggregating and delivering interaction data, the electronic device may be the first server 13 shown in FIG. 1.
FIG. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. It should be noted that the electronic device 900 shown in FIG. 9 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 9, the electronic device 900 may include a processing apparatus (such as a central processing unit or a graphics processing unit) 901, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 902 or a program loaded from a storage apparatus 908 into a random access memory (RAM) 903. The RAM 903 also stores various programs and data required for the operation of the electronic device 900. The processing apparatus 901, the ROM 902, and the RAM 903 are connected to one another via a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
Generally, the following apparatuses may be connected to the I/O interface 905: an input apparatus 906 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 907 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 908 including, for example, a magnetic tape and a hard disk; and a communication apparatus 909. The communication apparatus 909 may allow the electronic device 900 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 9 shows the electronic device 900 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may be implemented or provided instead.
An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the augmented reality-based information interaction method applied to the first client, or the augmented reality-based information interaction method applied to the first server, in any embodiment of the present disclosure.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 909, or installed from the storage apparatus 908, or installed from the ROM 902. When the computer program is executed by the processing apparatus 901, the above functions defined in the augmented reality-based information interaction method applied to the first client, or in the method applied to the first server, of any embodiment of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination thereof. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any suitable medium, including but not limited to a wire, an optical cable, RF (radio frequency), or any suitable combination thereof.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP, and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the steps of the augmented reality-based information interaction method applied to the first client, or the steps of the augmented reality-based information interaction method applied to the first server, described in any embodiment of the present disclosure.
In the embodiments of the present disclosure, computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on a user's computer, as a stand-alone software package, partly on a user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of devices, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on chip (SOC), complex programmable logic devices (CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include but is not limited to an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The above description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art should understand that the scope of the disclosure involved herein is not limited to technical solutions formed by specific combinations of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (14)

  1. An augmented reality-based information interaction method, applied to a first client, the method comprising:
    in response to an interaction operation of a first virtual object in a virtual reality space, generating first interaction data, and sending the first interaction data to a first server;
    receiving second interaction data corresponding to a second virtual object sent by the first server, wherein the second virtual object shares the virtual reality space with the first virtual object; and
    based on the first interaction data and the second interaction data, invoking a physics engine to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying an interaction rendering result.
  2. The method according to claim 1, wherein the interaction data is target physics engine state information.
  3. The method according to claim 2, wherein the target physics engine state information comprises historical physics engine state information and current physics engine state information.
  4. The method according to claim 2 or 3, wherein after the sending the first interaction data to the first server, the method further comprises:
    receiving first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object sent by the first server, wherein the fused physics engine state information is obtained based on first target physics engine state information and second target physics engine state information; and
    the invoking, based on the first interaction data and the second interaction data, a physics engine to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying an interaction rendering result comprises:
    based on the first fused physics engine state information and the second fused physics engine state information, invoking a physics engine to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and generating and displaying the interaction rendering result.
  5. The method according to claim 1, wherein the generating first interaction data in response to an interaction operation of a first virtual object in a virtual reality space comprises:
    in response to an interaction operation performed by the first virtual object on a virtual item in the virtual reality space, displaying an item property setting interface; and
    in response to an input operation on the item property setting interface, obtaining item operation property information of the virtual item, and generating the first interaction data based on the item operation property information.
  6. The method according to claim 1, wherein before the receiving second interaction data corresponding to a second virtual object sent by the first server, the method further comprises:
    sending a space address of the virtual reality space to a second server, so that the second server sends the space address to a second client corresponding to the second virtual object and, in response to a space sharing operation of the second client, schedules a target server corresponding to the first server for the second client.
  7. The method according to claim 1, wherein the virtual reality space is constructed based on a real space in which the first virtual object is located, and the first virtual object and the second virtual object are constructed based on character attribute information of a first user and a second user respectively.
  8. An augmented reality-based information interaction method, applied to a first server, the method comprising:
    receiving first interaction data and second interaction data respectively, wherein the first interaction data is generated by a first virtual object performing an interaction operation in a virtual reality space, the second interaction data is generated by a second virtual object performing an interaction operation in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space; and
    sending the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client each invoke a physics engine, based on the first interaction data and the second interaction data, to render the interaction operations, generating and displaying an interaction rendering result.
  9. The method according to claim 8, wherein after the receiving first interaction data and second interaction data respectively, the method further comprises:
    when the interaction data is target physics engine state information and it is determined that an intersection exists between first target physics engine state information and second target physics engine state information, generating, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object; and
    the sending the first interaction data and the second interaction data to the first client corresponding to the first virtual object and the second client corresponding to the second virtual object comprises:
    sending the first fused physics engine state information and the second fused physics engine state information to the first client and the second client.
  10. The method according to claim 9, wherein the generating, based on the first target physics engine state information and the second target physics engine state information, first fused physics engine state information corresponding to the first virtual object and second fused physics engine state information corresponding to the second virtual object comprises:
    generating the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information, according to preset priorities corresponding to the first virtual object and the second virtual object;
    or, when the first virtual object and the second virtual object perform interaction operations having an interaction order, generating the first fused physics engine state information and the second fused physics engine state information based on the first target physics engine state information and the second target physics engine state information, according to the interaction order;
    or, generating the first fused physics engine state information and the second fused physics engine state information based on values of identical state quantities, or priorities of different state quantities, in the first target physics engine state information and the second target physics engine state information.
  11. An augmented reality-based information interaction apparatus, configured on a client, the apparatus comprising:
    a first interaction data generation module, configured to generate first interaction data in response to an interaction operation of a first virtual object in a virtual reality space, and send the first interaction data to a first server;
    a second interaction data receiving module, configured to receive second interaction data corresponding to a second virtual object sent by the first server, wherein the second virtual object shares the virtual reality space with the first virtual object; and
    an interaction rendering result display module, configured to invoke a physics engine, based on the first interaction data and the second interaction data, to render the interaction operations of the first virtual object and the second virtual object in the virtual reality space, and to generate and display an interaction rendering result.
  12. An augmented reality-based information interaction apparatus, configured on a first server, the apparatus comprising:
    an interaction data receiving module, configured to receive first interaction data and second interaction data respectively, wherein the first interaction data is generated by a first virtual object performing an interaction operation in a virtual reality space, the second interaction data is generated by a second virtual object performing an interaction operation in the virtual reality space, and the first virtual object and the second virtual object share the virtual reality space; and
    an interaction data sending module, configured to send the first interaction data and the second interaction data to a first client corresponding to the first virtual object and a second client corresponding to the second virtual object, so that the first client and the second client each invoke a physics engine, based on the first interaction data and the second interaction data, to render the interaction operations, generating and displaying an interaction rendering result.
  13. An electronic device, comprising:
    a processor; and
    a memory, configured to store executable instructions;
    wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the augmented reality-based information interaction method applied to the first client according to any one of claims 1 to 7, or to implement the augmented reality-based information interaction method applied to the first server according to any one of claims 8 to 10.
  14. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to implement the augmented reality-based information interaction method applied to the first client according to any one of claims 1 to 7, or to implement the augmented reality-based information interaction method applied to the first server according to any one of claims 8 to 10.
PCT/CN2022/120156 2021-10-29 2022-09-21 Augmented display-based information interaction method and apparatus, device and medium WO2023071630A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111275803.4 2021-10-29
CN202111275803.4A CN116069154A (zh) 2021-10-29 Augmented display-based information interaction method and apparatus, device and medium

Publications (1)

Publication Number Publication Date
WO2023071630A1 2023-05-04

