CN116758201B - Rendering processing method, device and system of three-dimensional scene and computer storage medium - Google Patents


Info

Publication number
CN116758201B
Authority
CN
China
Prior art keywords
terminal
rendering
cloud
dimensional scene
picture
Prior art date
Legal status
Active
Application number
CN202311038837.0A
Other languages
Chinese (zh)
Other versions
CN116758201A (en)
Inventor
杨中雷
张延�
黄丛宇
杨舟
徐森
李�根
宋金德
Current Assignee
Taobao China Software Co Ltd
Original Assignee
Taobao China Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Taobao China Software Co Ltd
Priority to CN202311038837.0A
Publication of CN116758201A
Application granted
Publication of CN116758201B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present application provide a rendering processing method, device, and system for a three-dimensional scene, as well as a computer storage medium. The rendering processing method of the three-dimensional scene comprises the following steps: determining a three-dimensional scene to be rendered by a terminal; establishing a real-time communication link between the terminal and the cloud, so that the terminal can acquire and display a rendered picture of the three-dimensional scene from the cloud through the real-time communication link, the rendered picture being rendered by the cloud; and, while the terminal acquires the rendered picture from the cloud through the real-time communication link, having the terminal acquire in real time from the cloud the rendering resources required by the three-dimensional scene, so that once the terminal has obtained the rendering resources, it renders and displays the three-dimensional scene locally based on those resources. With this technical solution, even when the terminal has not yet prepared the rendering resources required by the three-dimensional scene, the rendered picture of the three-dimensional scene can be quickly acquired and displayed via the cloud, which effectively shortens the scene loading time and allows the user to enter the three-dimensional scene quickly.

Description

Rendering processing method, device and system of three-dimensional scene and computer storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, and a computer storage medium for rendering a three-dimensional scene.
Background
With the rapid development of three-dimensional technology, three-dimensional scenes are becoming larger and more detailed, and the size of the scene package keeps growing. Even after a series of simplification operations such as polygon reduction, level of detail (LOD), and streaming loading are applied to a produced three-dimensional scene with an asset optimization system, the package size of a detailed scene still exceeds 30 MB (a game scene, by comparison, may reach several GB).
When a user terminal loads such a produced three-dimensional scene for the first time, the loading typically takes several seconds or more, and the loading time depends heavily on the user's network environment. Because the first load of a three-dimensional scene is slow, a longer loading time sharply increases the user bounce rate, degrades the user's experience of the three-dimensional scene, and seriously hinders the acquisition of new users.
Disclosure of Invention
The embodiments of the present application provide a rendering processing method, device, system, and computer storage medium for a three-dimensional scene, which can shorten the loading time of the three-dimensional scene, ensure a good user experience when using the three-dimensional scene, and facilitate the acquisition of new users.
In a first aspect, an embodiment of the present application provides a rendering processing method of a three-dimensional scene, including:
determining a three-dimensional scene to be rendered by a terminal;
establishing a real-time communication link between a terminal and a cloud end so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud end through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud end;
in the process that the terminal obtains the rendering picture from the cloud through the real-time communication link, the terminal obtains rendering resources required by the three-dimensional scene from the cloud in real time, so that after the terminal obtains the rendering resources, the terminal renders and displays the three-dimensional scene on the terminal based on the rendering resources.
In a second aspect, an embodiment of the present application provides a rendering processing apparatus for a three-dimensional scene, including:
the first determining module is used for determining a three-dimensional scene to be rendered by the terminal;
The first establishing module is used for establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud;
the terminal is used for acquiring rendering resources required by the three-dimensional scene from the cloud in real time in the process that the terminal acquires the rendering picture from the cloud through the real-time communication link, so that after the terminal acquires the rendering resources, the terminal performs rendering and displaying of the three-dimensional scene on the terminal based on the rendering resources.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; the memory is configured to store one or more computer instructions, where the one or more computer instructions, when executed by the processor, implement the method for rendering a three-dimensional scene shown in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium storing a computer program, where the computer program causes a computer to implement the rendering processing method of the three-dimensional scene shown in the first aspect when the computer program is executed.
In a fifth aspect, embodiments of the present invention provide a computer program product comprising: a computer program which, when executed by a processor of an electronic device, causes the processor to execute the rendering processing method of a three-dimensional scene shown in the first aspect described above.
In a sixth aspect, an embodiment of the present invention provides a rendering processing method for a three-dimensional scene, including:
acquiring a three-dimensional scene to be rendered, which is sent by a terminal;
establishing a real-time communication link with the terminal based on the three-dimensional scene to be rendered;
determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally at a cloud;
and sending the rendering picture to the terminal for display through the real-time communication link.
In a seventh aspect, an embodiment of the present invention provides a rendering processing apparatus for a three-dimensional scene, including:
the second acquisition module is used for acquiring the three-dimensional scene to be rendered, which is sent by the terminal;
the second determining module is used for establishing a real-time communication link with the terminal based on the three-dimensional scene to be rendered;
the second determining module is further used for determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally in the cloud;
And the second processing module is used for sending the rendering picture to the terminal for display through the real-time communication link.
In an eighth aspect, an embodiment of the present application provides an electronic device, including: a memory, a processor; the memory is configured to store one or more computer instructions, where the one or more computer instructions, when executed by the processor, implement the method for rendering a three-dimensional scene shown in the sixth aspect.
In a ninth aspect, an embodiment of the present invention provides a computer storage medium storing a computer program, where the computer program causes a computer to implement the rendering processing method of the three-dimensional scene shown in the sixth aspect.
In a tenth aspect, embodiments of the present invention provide a computer program product comprising: a computer program which, when executed by a processor of an electronic device, causes the processor to execute the rendering processing method of a three-dimensional scene shown in the sixth aspect described above.
In an eleventh aspect, an embodiment of the present invention provides a loading system for a three-dimensional scene, including: a terminal and a cloud;
the terminal is used for determining a three-dimensional scene to be rendered by the terminal, and establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud;
The cloud end is used for determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally in the cloud end, and sending the rendering picture to the terminal through the real-time communication link;
the terminal is further configured to obtain, in real time, rendering resources required by the three-dimensional scene from the cloud end in a process of obtaining the rendering screen from the cloud end through the real-time communication link, so that after the terminal obtains the rendering resources, the terminal performs rendering and displaying of the three-dimensional scene at the terminal based on the rendering resources.
According to the rendering processing method, device, and system for a three-dimensional scene and the computer storage medium provided by the embodiments of the present application, the three-dimensional scene to be rendered by the terminal is determined, a real-time communication link between the terminal and the cloud is established based on that scene, and the rendered picture of the three-dimensional scene can then be acquired from the cloud through the real-time communication link and displayed, so that the user can view the three-dimensional scene even before the terminal has prepared the rendering resources it needs.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show some embodiments of the present application, and a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of a rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 2 is a flow chart of a rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 3 is a first schematic flowchart of a terminal rendering and displaying a three-dimensional scene based on the rendering resources after the terminal acquires the rendering resources, according to an embodiment of the present application;
fig. 4 is a second schematic flowchart of a terminal rendering and displaying a three-dimensional scene based on the rendering resources after the terminal acquires the rendering resources, according to an embodiment of the present application;
fig. 5 is a flowchart of another rendering processing method of a three-dimensional scene according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a rendered screen of a three-dimensional scene sent by a cloud through a streaming channel according to an embodiment of the present application;
fig. 7 is a schematic diagram of controlling a real-time communication link according to an embodiment of the present application;
fig. 8 is a schematic diagram of interaction with a user based on a rendered screen according to an embodiment of the present application;
fig. 9 is a flowchart of another rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 10 is a flowchart of another rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 11 is a schematic diagram of a rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 12 is a flowchart of another rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 13 is a schematic diagram of a rendering processing method of a three-dimensional scene according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a rendering processing device for a three-dimensional scene according to an embodiment of the present application;
fig. 15 is a schematic structural diagram of an electronic device corresponding to the rendering processing apparatus for three-dimensional scene shown in fig. 14;
fig. 16 is a schematic structural diagram of another three-dimensional scene rendering processing apparatus according to an embodiment of the present application;
Fig. 17 is a schematic structural diagram of an electronic device corresponding to the rendering processing apparatus for three-dimensional scene shown in fig. 16;
fig. 18 is a schematic structural diagram of a rendering processing system for a three-dimensional scene according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "plurality" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" used herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent three cases: A exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted, depending on the context, as "when" or "upon" or "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "when it is determined" or "in response to determining" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in a product or system that comprises that element.
In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
Definition of terms:
three-dimensional Scene (3D Scene): the concept in computer graphics is a virtual environment composed of elements such as three-dimensional models, lights, cameras, etc., used to generate the final rendered image. Three-dimensional scenes are typically created by tools such as three-dimensional modeling software or game engines, and are visualized as images or videos after being loaded, processed, rendered, etc. by the 3D engine.
End cloud collaboration: a distributed computing mode combines terminal equipment and cloud computing resources to realize distributed computing and collaborative processing of tasks. In the process of end-cloud cooperation, the terminal equipment is responsible for collecting and processing data, the cloud computing resource is responsible for computing and storing data, and the result is transmitted back to the terminal equipment through network transmission.
3D rendering: refers to the process of converting elements such as objects, materials, light sources, etc. in a three-dimensional scene into a final two-dimensional image. 3D rendering typically involves a number of computing and graphics processing techniques for generating high quality rendered images to present realistic three-dimensional effects.
3D scene loading: the method is characterized in that elements such as a three-dimensional model, textures, materials, lamplight and the like in a three-dimensional scene are read from a disk or a network into a memory, then are processed and initialized, and finally the complete three-dimensional scene is presented. Scene loading is typically the first step in the start of a 3D application, and for large 3D scenes, the loading time may be relatively long.
WebRTC: the Real-Time Communication Web communication technology realizes the functions of audio and video call, data transmission and the like through a Web browser and can realize the Real-time communication function in a Web application program.
Streaming: streaming, which is a way of transmitting data in a network, i.e. dividing the data into continuous data streams of small segments, transmitting the continuous data streams to a user terminal in real time through the network, and allowing the user to download while watching or listening without waiting for the downloading of the whole file to be finished and playing the file. Common streaming formats include audio streaming, video streaming, live streaming, and the like.
Hybrid rendering: hybrid rendering is a rendering technology, which refers to mixing results of simultaneous actions of multiple rendering pipelines (rasterization, ray tracing, shadow calculation, etc.) or rendering methods, so that better rendering quality is obtained on the basis of ensuring rendering efficiency.
In order to facilitate understanding of the specific implementation manner and implementation effect of the technical solution provided in the present embodiment, the following description describes related technologies:
With the rapid development of three-dimensional technology, three-dimensional scenes are becoming larger and more detailed, and the size of the scene package keeps growing. Even if a series of simplification operations such as polygon reduction, level of detail (LOD), and streaming loading are applied to the three-dimensional scene with an asset optimization system, the package size of a detailed scene still exceeds 30 MB (a game scene, by comparison, may reach several GB). When a user terminal loads such a produced three-dimensional scene for the first time, the loading typically takes several seconds or more, and the loading time depends heavily on the user's network environment. Because the first load of the three-dimensional scene by the user terminal is slow (it may take on the order of minutes), a longer loading time sharply increases the user bounce rate, which degrades the user's experience of the three-dimensional scene and seriously hinders the acquisition of new users.
During the loading of a three-dimensional scene, the 3D engine on the terminal side (for example, on a mobile phone, personal computer, or tablet computer), and especially an embedded engine, is severely limited in package size. Scene resources and the resources of the engine logic layer therefore often follow a load-on-use approach: only after the user clicks into the engine application are these resources downloaded to the local device over the network and loaded into memory.
To solve the above technical problems, this embodiment provides a rendering processing method, device, system, and computer storage medium for a three-dimensional scene. Referring to fig. 1, the execution body of the rendering processing method may include a terminal, and the terminal can be connected to the cloud in real time. Specifically, the terminal may be implemented as any computing device with a certain three-dimensional scene rendering processing capability; in some examples, the terminal may be a mobile phone, a smart wearable device, a tablet computer, a personal computer, an application program, and the like.
Further, the basic structure of the terminal may include at least one processor; the number of processors depends on the configuration and type of the terminal. The terminal may also include memory, which may be volatile (such as random access memory (RAM)), nonvolatile (such as read-only memory (ROM) or flash memory), or both. The memory typically stores an operating system (OS), one or more application programs, program data, and the like. In addition to the processing unit and the memory, the terminal includes some basic configuration, such as a network interface chip, an IO bus, a display component, and some peripheral devices. Optionally, the peripheral devices may include, for example, a keyboard, a mouse, a stylus, a printer, and the like. Other peripheral devices are well known in the art and are not described in detail here.
Cloud refers to a device that can provide rendering operations of three-dimensional scenes in a network virtual environment, and generally refers to a device that performs information planning and rendering operations of three-dimensional scenes using a network. In a physical implementation, the cloud may be any device that is capable of providing a computing service, responding to a three-dimensional scene to be rendered existing in a terminal, and performing a rendering processing operation of the three-dimensional scene based on the three-dimensional scene to be rendered, for example: may be an edge device, a remote server, a cluster server, a regular server, a cloud host, a virtual center, etc. The cloud device mainly comprises a processor, a hard disk, a memory, a system bus and the like, and is similar to a general computer architecture.
In the embodiments described above, the terminal is connected to the cloud through a network, and the network connection may be wireless or wired. If the terminal is communicatively connected to the cloud over a mobile network, the network standard of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMAX, 5G, 6G, and the like.
In the embodiments of the present application, in order to enable the rendering processing operation of the three-dimensional scene, the terminal may first determine the three-dimensional scene to be rendered. In some examples, the three-dimensional scene to be rendered is obtained through a loading request: the terminal obtains a loading request for the three-dimensional scene, where the loading request may include a scene identifier of the three-dimensional scene, and then determines the three-dimensional scene to be rendered based on the loading request. A real-time communication link between the terminal and the cloud can then be established based on the three-dimensional scene to be rendered; the real-time communication link may be implemented as Web Real-Time Communication (WebRTC). After the real-time communication link is established between the terminal and the cloud, the terminal can acquire the rendered picture of the three-dimensional scene from the cloud through the real-time communication link and display it, the rendered picture being rendered by the cloud, so that the user can quickly enter the three-dimensional scene.
In addition, in order to enable the terminal itself to render and display the three-dimensional scene, while the terminal is acquiring the rendered picture from the cloud through the real-time communication link, the terminal can acquire (or download) in real time from the cloud the rendering resources required by the three-dimensional scene. The rendering resources acquired from the cloud may include at least one of the following: initialization information of the user and scene resources. The initialization information of the user may include at least one of: the user's identity, the user's type, information related to the user, the user's spawn location, the user's current viewing angle, and so on. The scene resources may include at least one of: the virtual 3D scene, building information in the scene, scene lighting information, non-player character information in the scene, and so on. After the terminal has obtained the rendering resources, that is, after the terminal has acquired from the cloud all the rendering resources required by the three-dimensional scene, the terminal can render and display the three-dimensional scene locally based on those resources; specifically, the terminal renders on the basis of the acquired or downloaded rendering resources, obtains and displays the three-dimensional scene, and thereby effectively guarantees that the rendering processing of the three-dimensional scene proceeds stably.
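For illustration only, the following is a minimal TypeScript sketch of how the rendering resources described above might be modeled on the terminal side; all type and field names are hypothetical assumptions and are not taken from the patent.

```typescript
// Hypothetical shape of the rendering resources the terminal downloads from
// the cloud; field names are illustrative, not part of the patent.
interface UserInitialization {
  userId: string;                               // user identity
  userType: string;                             // user type
  spawnPoint: [number, number, number];         // user's spawn location in the scene
  viewAngle: { yaw: number; pitch: number };    // user's current viewing angle
}

interface SceneResource {
  meshes: ArrayBuffer[];   // geometry of the virtual 3D scene and its buildings
  lighting: ArrayBuffer;   // scene lighting information
  npcData: ArrayBuffer;    // non-player character information in the scene
}

interface RenderingResources {
  userInit: UserInitialization;
  scene: SceneResource;
}
```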
According to the technical solution provided by this embodiment, the three-dimensional scene to be rendered in the terminal is determined, a real-time communication link between the terminal and the cloud is established based on that scene, and the rendered picture of the three-dimensional scene can then be acquired from the cloud through the real-time communication link and displayed, so that the user can view the three-dimensional scene before the terminal has prepared the rendering resources it needs.
Some embodiments of the present invention are described in detail below with reference to the accompanying drawings. In the case where there is no conflict between the embodiments, the following embodiments and features in the embodiments may be combined with each other. In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
Referring to fig. 2, this embodiment provides a rendering processing method for a three-dimensional scene. The execution body of the method may be a rendering processing apparatus for a three-dimensional scene; in some examples, the apparatus is implemented as a terminal, that is, the rendering processing method may be applied to a terminal. The apparatus may be implemented as software or as a combination of software and hardware; when implemented as hardware, it may specifically be any of various electronic devices capable of performing the rendering processing operation of a three-dimensional scene, including but not limited to a tablet computer, a personal computer (PC), a handheld terminal, and so on. When the rendering processing apparatus is implemented as software, it may be installed in the electronic devices exemplified above. Based on the above rendering processing apparatus, the rendering processing method of this embodiment may include the following steps:
step S201: and determining the three-dimensional scene to be rendered by the terminal.
Step S202: establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is rendered by the cloud.
Step S203: in the process that the terminal acquires the rendering picture from the cloud through the real-time communication link, the terminal acquires the rendering resource required by the three-dimensional scene from the cloud in real time, so that after the terminal acquires the rendering resource, the terminal performs rendering and displaying of the three-dimensional scene on the basis of the rendering resource.
The specific implementation manner and implementation effect of each step are described in detail below:
step S201: and determining the three-dimensional scene to be rendered by the terminal.
The three-dimensional scene may be an aggregate formed by combining computer graphics related resource elements based on a three-dimensional visualization program, a digital twin technology and the like, and specifically may be a virtual environment formed by elements such as a three-dimensional model, lamplight, a camera and the like, for example: in the application scene of the game, the three-dimensional scene can be a scene of a three-dimensional electronic game; in the application scene of the three-dimensional image processing, the three-dimensional scene may be a scene designed for a three-dimensional image. It should be noted that the three-dimensional scene can be applied to the above application scene, and those skilled in the art can adjust the three-dimensional scene according to specific application requirements, which is not described herein.
Specifically, the terminal is configured with an application component related to the three-dimensional scene, for example three-dimensional game software, three-dimensional scene design software, virtual-clothing three-dimensional decoration software, city-planning three-dimensional software, or three-dimensional building design software. When a user wants to view or start a certain three-dimensional scene, the three-dimensional scene to be rendered by the terminal can be determined. In some examples, the three-dimensional scene to be rendered is obtained through a loading request of the three-dimensional scene: the terminal first obtains a loading request for the three-dimensional scene and then determines, based on that request, the three-dimensional scene to be rendered. The loading request may include identification information of the three-dimensional scene, identification information of the user, the source IP address of the terminal, the source port, the destination IP address, the destination port, the transport layer protocol, and so on.
The loading request of the three-dimensional scene can be obtained through interactive operation input by a user in a preset interface, and at this time, the obtaining the loading request of the three-dimensional scene can include: displaying a preset interface of the three-dimensional scene; acquiring a starting operation input by a user in a preset interface; the loading request of the three-dimensional scene is acquired based on the starting operation, so that the accuracy and reliability of acquiring the loading request are effectively ensured.
In other examples, obtaining a load request for a three-dimensional scene may include: acquiring a voice starting instruction input by a user aiming at a three-dimensional scene in a terminal; the loading request of the three-dimensional scene is acquired based on the voice starting instruction, and the accuracy and the reliability of acquiring the loading request are ensured.
In still other examples, obtaining a load request for a three-dimensional scene may include: acquiring a scene starting event pre-configured by a user (for example, pre-configuring a timing starting event for a three-dimensional scene); and generating and acquiring the loading request of the three-dimensional scene based on the scene starting event, and ensuring the accuracy and reliability of acquiring the loading request.
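As a purely illustrative aid, the loading request described above (whether produced by a start operation on the preset interface, a voice start instruction, or a pre-configured scene start event) might be represented as follows; the interface and field names are assumptions for this sketch, not the patent's data format.

```typescript
// Hypothetical structure of a scene loading request; it carries the scene and
// user identifiers plus the connection tuple mentioned above.
interface SceneLoadRequest {
  sceneId: string;        // identification information of the three-dimensional scene
  userId: string;         // identification information of the user
  srcIp: string;          // source IP address of the terminal
  srcPort: number;        // source port
  dstIp: string;          // destination IP address
  dstPort: number;        // destination port
  protocol: "TCP" | "UDP";                            // transport layer protocol
  trigger: "ui-start" | "voice" | "scheduled-event";  // how the request was produced
}
```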
Step S202: establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud.
After determining the three-dimensional scene to be rendered by the terminal, the terminal can establish a real-time communication link between the terminal and the cloud based on the three-dimensional scene to be rendered in order to stably realize the rendering processing operation of the three-dimensional scene, so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, and the rendering picture is obtained by rendering the cloud.
Because the terminal side has not yet prepared the rendering resources required by the three-dimensional scene, the terminal cannot perform the picture rendering operation itself and the user cannot quickly enter and view the three-dimensional scene through the terminal. Therefore, to reduce as much as possible the time the user waits to enter the three-dimensional scene, the terminal not only downloads the rendering resources required by the three-dimensional scene based on the loading request but also establishes a real-time communication link with the cloud. Specifically, establishing the real-time communication link between the terminal and the cloud may include: generating a connection establishment request directed at the cloud, where the connection establishment request may include five-tuple, four-tuple, six-tuple, or seven-tuple information for establishing the real-time communication link; and then sending the connection establishment request to the cloud, so that the cloud can establish the real-time communication link with the terminal based on the connection establishment request. The real-time communication link may be implemented as a Web Real-Time Communication (WebRTC) link.
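A minimal browser-side sketch of establishing such a WebRTC link is shown below. The signaling transport (sendToCloud, onCloudAnswer) and the STUN server URL are assumptions that the application would have to provide; ICE candidate exchange is omitted for brevity.

```typescript
// Sketch: the terminal opens a WebRTC peer connection to the cloud, expecting
// the rendered picture as an incoming video track and using a data channel
// for control messages such as the connection-establishment request.
async function connectToCloud(
  sendToCloud: (msg: unknown) => void,                                  // hypothetical signaling sender
  onCloudAnswer: (cb: (answer: RTCSessionDescriptionInit) => void) => void // hypothetical answer callback
): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.example.org" }], // placeholder STUN server
  });

  // The cloud pushes the rendered picture; the terminal only receives video.
  pc.addTransceiver("video", { direction: "recvonly" });

  // Control channel for start/stop and other messages.
  const control = pc.createDataChannel("control");
  control.onopen = () => control.send(JSON.stringify({ type: "start-scene" }));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToCloud({ type: "offer", sdp: pc.localDescription });

  onCloudAnswer(async (answer) => {
    await pc.setRemoteDescription(answer);
  });

  return pc;
}
```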
After the real-time communication link is established, the cloud can send the rendered picture it has produced to the terminal through the real-time communication link, so that the terminal can quickly receive and display the rendered picture of the three-dimensional scene. Specifically, the cloud can perform the frame-image processing operation based on locally stored rendering resources to obtain the rendered picture corresponding to the three-dimensional scene: the cloud can start the cloud-side engine based on the locally stored rendering resources, and after the cloud-side engine is started, the rendered picture of the three-dimensional scene can be processed in turn by frame grabbing, compression encoding, and other operations, and the processed rendered picture is sent to the terminal through the real-time communication link, so that the terminal receives the rendered picture corresponding to the three-dimensional scene over that link.
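The cloud-side loop just described (grab a frame, compression-encode it, push it over the real-time link) could look roughly like the sketch below. grabFrame and encodeFrame are hypothetical placeholders for the engine's frame capture and a video encoder, and a browser-style data channel is used purely for illustration; a production system might instead send the picture as an encoded video track.

```typescript
// Rough sketch of the cloud-side streaming loop under the assumptions above.
async function streamRenderedFrames(
  grabFrame: () => Promise<Uint8Array>,          // frame grabbing from the cloud-side engine
  encodeFrame: (raw: Uint8Array) => Uint8Array,  // compression encoding step
  channel: RTCDataChannel,                        // real-time communication link to the terminal
  running: () => boolean
): Promise<void> {
  while (running()) {
    const raw = await grabFrame();     // capture the current rendered picture
    const packet = encodeFrame(raw);   // compress/encode it
    channel.send(packet);              // send the processed frame to the terminal
  }
}
```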
After the cloud sends the rendered picture corresponding to the three-dimensional scene, the terminal can receive it through the real-time communication link and display it using the terminal's display component, so that the user can quickly view and enter the three-dimensional scene, which ensures a good user experience and facilitates the acquisition of new users.
Step S203: in the process that the terminal acquires the rendering picture from the cloud through the real-time communication link, the terminal acquires the rendering resource required by the three-dimensional scene from the cloud in real time, so that after the terminal acquires the rendering resource, the terminal performs rendering and displaying of the three-dimensional scene on the basis of the rendering resource.
In order to ensure the stability and reliability of rendering the three-dimensional scene, in the process that the terminal acquires the rendering picture from the cloud end through the real-time communication link, the terminal can acquire the rendering resource required by the three-dimensional scene from the cloud end in real time, namely, the terminal can acquire or download the rendering resource required by the three-dimensional scene from the cloud end when receiving and acquiring the rendering picture from the cloud end through the real-time communication link, so that after the terminal acquires the rendering resource, the terminal can render and display the three-dimensional scene on the basis of the rendering resource.
For rendering resources required for a three-dimensional scene, it may specifically include at least one of: the user initialization information having an association with the user and the scene resource having an association with the three-dimensional scene may include at least one of: user identity, user type, user's birth location, user's current perspective, etc.; the scene resource may include at least one of: virtual 3D scenes, building information in scenes, scene lighting information, fixed character information in scenes, and the like; so that the terminal can implement rendering processing operation of the three-dimensional scene based on the downloaded rendering resources. It should be noted that, in different application scenarios, the rendering resources required for the three-dimensional scenario are different, i.e. the rendering resources required for the three-dimensional scenario may change continuously with the change of the application scenario.
Specifically, in the process that the terminal obtains the rendering picture from the cloud end through the real-time communication link, the terminal obtains the rendering resource required by the three-dimensional scene from the cloud end in real time may include: determining rendering resources required by a three-dimensional scene to be rendered in the cloud; the terminal acquires or downloads the rendering resources required by the three-dimensional scene from the cloud in real time, so that the accuracy and reliability of acquiring the rendering resources are effectively ensured.
It should be noted that, for the cloud, in order to reduce as much as possible the time the user waits to enter the three-dimensional scene, the rendering resources required by the three-dimensional scene are stored locally in the cloud. In some examples, when the user uses or enters the three-dimensional scene for the first time, both the terminal and the cloud can download the rendering resources required by the three-dimensional scene. Specifically, while the terminal acquires the rendering resources from the cloud in real time (that is, while the terminal downloads them from the cloud in real time), whether the acquisition (download) of the rendering resources has been completed can be identified in real time. In some examples, identifying whether the rendering resources have been acquired may include: obtaining the target resource size of the rendering resources; detecting in real time the size of the resources already downloaded; when the downloaded resource size equals the target resource size, determining that the rendering resources have been acquired; and when the downloaded resource size is smaller than the target resource size, determining that the rendering resources have not yet been acquired.
In still other examples, whether the rendering resources have been acquired can be identified not only by comparing the downloaded resource size with the target resource size, but also by detecting state information of the terminal used for acquiring the rendering resources. In that case, identifying whether the rendering resources have been acquired may include: obtaining state information in the terminal that identifies the acquisition progress of the rendering resources; when the state information is a first preset state, determining that the rendering resources have been acquired; and when the state information is a second preset state, determining that the rendering resources have not yet been acquired, thereby effectively identifying whether the rendering resources have been acquired.
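The two completion checks above might be combined as in the following sketch; the DownloadState type and its fields are illustrative assumptions rather than the patent's data model.

```typescript
// Sketch of the completion checks: size comparison and a terminal-side status flag.
interface DownloadState {
  targetBytes: number;                   // target resource size of the rendering resources
  downloadedBytes: number;               // resource size downloaded so far
  status: "downloading" | "complete";    // state information identifying acquisition progress
}

function renderingResourcesReady(state: DownloadState): boolean {
  // First check: the downloaded size equals the target size.
  if (state.downloadedBytes === state.targetBytes) return true;
  // Second check: the terminal's status flag is in the "complete" preset state.
  return state.status === "complete";
}
```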
After the cloud downloads the rendering resources required by the three-dimensional scene, those rendering resources can be stored locally in the cloud, so that when the user uses or enters the three-dimensional scene again, the cloud can quickly load the locally stored rendering resources, which effectively improves the loading efficiency of the three-dimensional scene.
It can be understood that when the terminal has not yet acquired the rendering resources, that is, while the terminal is still downloading the rendering resources required by the three-dimensional scene, the terminal continues the download while the rendered picture of the three-dimensional scene is acquired and displayed by means of the cloud, which is not described in detail here.
According to the rendering processing method for a three-dimensional scene provided by this embodiment, the three-dimensional scene to be rendered by the terminal is determined, a real-time communication link between the terminal and the cloud is established based on that scene, and the rendered picture of the three-dimensional scene can then be acquired from the cloud through the real-time communication link and displayed.
On the basis of the above embodiment, referring to fig. 3, the method in this embodiment provides a technical solution for rendering and displaying a three-dimensional scene based on a scene state acquired from a cloud, and at this time, after the terminal in this embodiment acquires the rendering resource, the rendering and displaying, by the terminal, the three-dimensional scene at the terminal based on the rendering resource may include:
step S301: and after the terminal acquires the rendering resources, acquiring the scene state of the currently rendered three-dimensional scene from the cloud.
After the terminal has obtained the rendering resources (that is, all the rendering resources required by the three-dimensional scene have been downloaded), the terminal is ready to perform the picture rendering operation of the three-dimensional scene and can in theory render the scene normally; however, the picture currently displayed on the terminal is still obtained from the cloud. Therefore, in order to ensure the continuity of the displayed picture and the user's visual experience, the scene state of the currently rendered three-dimensional scene can be obtained from the cloud; specifically, the scene state of the currently rendered three-dimensional scene can be sent to the terminal through the real-time communication link. The scene state may include the user state in the currently rendered three-dimensional scene (the user's pose information, the user's identification state, the user's attribute information, and so on) and scene information (building information, scene lighting information, non-player characters in the scene, and so on).
Step S302: and rendering and displaying the three-dimensional scene at the terminal based on the scene state and the rendering resources.
When the cloud end continuously sends the rendering pictures corresponding to the three-dimensional scene to the terminal through the real-time communication link, the terminal can continuously receive and display the received rendering pictures, and in order to enable the terminal to stably display the three-dimensional scene, after the terminal acquires the scene state and the rendering resources, the terminal can render and display the three-dimensional scene based on the scene state and the rendering resources. In some examples, the rendering and exhibiting operation of the three-dimensional scene may be implemented by a pre-trained machine learning model or a neural network model, and at this time, the rendering and exhibiting of the three-dimensional scene at the terminal based on the scene state and the rendering resource may include: the method comprises the steps of obtaining a pre-trained machine learning model or neural network model, sending scene states and rendering resources to the machine learning model or the neural network model, obtaining a rendered three-dimensional scene output by the machine learning model or the neural network model, and displaying the rendered three-dimensional scene.
In other examples, the rendering and displaying operation of the three-dimensional scene may be implemented not only by a pre-trained machine learning model or a neural network model, but also by directly analyzing the scene state and the rendering resources to implement rendering and displaying of the three-dimensional scene, where the rendering and displaying of the three-dimensional scene by the terminal based on the scene state and the rendering resources may include: the terminal performs rendering operation of the three-dimensional scene on the terminal based on the rendering resource to obtain a rendered three-dimensional scene, and the rendered three-dimensional scene is continuously displayed by taking the scene state as a reference, so that continuous and stable display of the three-dimensional scene is ensured.
For example, the terminal may obtain rendered pictures of the three-dimensional scene from the cloud through the real-time communication link, and the rendered pictures change over time. At time t1, after the terminal has obtained or downloaded from the cloud all the rendering resources required by the three-dimensional scene, the scene state of the current three-dimensional scene obtained by the terminal from the cloud is, say, the state in which the user is at point a in the three-dimensional scene. Because the terminal now has all the rendering resources needed for the three-dimensional scene, it can stably render and display the scene, so the picture displayed by the terminal is switched to the picture rendered by the terminal itself. To keep the displayed picture continuous and stable, the terminal continues to display the three-dimensional scene based on the scene state of the current three-dimensional scene obtained from the cloud; that is, after the terminal performs its own rendering of the three-dimensional scene, it continues rendering and displaying the scene starting from the state at point a.
It should be noted that when the terminal renders and displays the three-dimensional scene based on the scene state and the rendering resources, the obtained scene state corresponds to a historical moment relative to the terminal's rendering and display operation. For example, at time t1 the cloud obtains the scene state of the currently rendered three-dimensional scene and sends it to the terminal; when the terminal receives it, the terminal may render and display the three-dimensional scene at time t2 based on the scene state and the rendering resources, thereby synchronizing the scene state of the cloud-rendered three-dimensional scene with that of the terminal-rendered three-dimensional scene. Because time t2 is later than time t1, the rendering and display on the terminal side lag somewhat behind the scene state captured by the cloud, so the scene state of the three-dimensional scene rendered by the terminal may be slightly delayed relative to the cloud. In general the user does not perceive this delay, but in order to shorten as much as possible the synchronization delay between the cloud and the terminal for the rendered three-dimensional scene and to ensure the quality and effect of the information synchronization, the terminal may render and display the three-dimensional scene not only based on the scene state and the rendering resources but also in combination with a scene picture frame. In that case, the terminal rendering and displaying the three-dimensional scene based on the scene state and the rendering resources may include: obtaining a scene picture frame of the currently rendered three-dimensional scene from the cloud; and the terminal rendering and displaying the three-dimensional scene based on the scene state, the scene picture frame, and the rendering resources.
Specifically, after the terminal obtains the rendering resources, not only the scene state of the currently rendered three-dimensional scene but also the scene picture frame of the currently rendered three-dimensional scene can be obtained from the cloud. After the scene frame is acquired, the rendering and displaying of the three-dimensional scene can be performed based on the scene state, the scene frame and the rendering resource, and in some examples, the rendering and displaying operation of the three-dimensional scene can be implemented through a pre-trained machine learning model or a neural network model, and the description in the above embodiments may be referred to for details, which are not repeated herein.
In other examples, the rendering and displaying operation of the three-dimensional scene may be implemented not only by a pre-trained machine learning model or a neural network model, but also by directly rendering and displaying the three-dimensional scene based on the scene state, the scene picture frame and the rendering resource, where the rendering and displaying of the three-dimensional scene by the terminal based on the scene state, the scene picture frame and the rendering resource may include: the terminal performs rendering operation of the three-dimensional scene on the terminal based on the rendering resource to obtain a rendered three-dimensional scene, and the rendered three-dimensional scene is continuously displayed by taking the scene state and the scene picture frame as the reference, so that continuous stable display of the three-dimensional scene is ensured.
It should be noted that, because the scene picture frame obtained from the cloud lags somewhat behind the terminal's own rendering and display of the three-dimensional scene, in order to reduce or shorten the gap between the scene picture frame obtained by the terminal and the real-time picture of the three-dimensional scene at the cloud, the terminal can, while performing the rendering operation based on the rendering resources, first obtain the three-dimensional scene, then estimate the cloud's real-time picture frame based on the scene picture frame, and then continue displaying the rendered three-dimensional scene with the scene state and the real-time picture frame as reference, thereby effectively ensuring continuous and stable display of the three-dimensional scene.
For example, suppose the scene picture frame of the three-dimensional scene rendered at the cloud corresponds to time t1, and the terminal performs the rendering operation of the three-dimensional scene at time t2 based on the scene state, the scene picture frame, and the rendering resources. Because time t2 is later than time t1, in order to narrow the gap between the three-dimensional scene rendered by the terminal and the three-dimensional scene rendered by the cloud, the time difference corresponding to the scene picture frame can be determined as t2 - t1. The terminal can then determine the frame operations recorded within this time difference and the corresponding analysis result (whether frame chasing is needed, and how many frames need to be chased), and finally render and display the three-dimensional scene according to the scene state, the real-time picture frame, and the analysis result of the frame operations, thereby effectively ensuring continuous and stable display of the three-dimensional scene.
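A minimal sketch of this catch-up step is shown below; frameIntervalMs and advanceOneFrame are illustrative assumptions standing in for the engine's frame pacing and per-frame update.

```typescript
// Sketch: estimate how far the cloud's real-time picture is ahead of the last
// scene picture frame the terminal received (t2 - t1) and fast-forward the
// local simulation by that many frames.
function catchUpToCloud(
  cloudFrameTimestampMs: number,  // t1: time of the scene picture frame from the cloud
  localNowMs: number,             // t2: time at which the terminal starts local rendering
  frameIntervalMs: number,        // duration of one rendered frame, e.g. 1000 / 60
  advanceOneFrame: () => void     // advances the local three-dimensional scene by one frame
): number {
  const deltaMs = localNowMs - cloudFrameTimestampMs;                   // time difference = t2 - t1
  const framesToChase = Math.max(0, Math.floor(deltaMs / frameIntervalMs));
  for (let i = 0; i < framesToChase; i++) {
    advanceOneFrame();  // replay the frame operations recorded within the time difference
  }
  return framesToChase; // 0 means no frame chasing was needed
}
```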
In still other examples, the visual area of the terminal is limited when it renders and displays the three-dimensional scene. For example, in a game scene or a virtual character dress-up scene, the avatar of the user is located at a single point on the map of the three-dimensional scene; most of the information the user currently needs to view, such as the surrounding environment information and user poses, lies within the viewing-angle area of the terminal, while information at other positions on the map has little influence. Therefore, to improve the quality and efficiency of rendering the three-dimensional scene, when the terminal acquires the scene state, or the scene state and the scene picture frame, of the currently rendered three-dimensional scene from the cloud, it does not have to acquire all scene information and the complete scene picture frame; it may acquire only the key part of the three-dimensional scene. In this case, acquiring the scene state of the currently rendered three-dimensional scene from the cloud may include: acquiring the current visual area of the three-dimensional scene displayed in the terminal; acquiring the scene state of the currently rendered three-dimensional scene from the cloud based on the current visual area, and rendering and displaying the three-dimensional scene on the terminal based on the scene state and the rendering resources; or, acquiring the scene state and the scene picture frame of the currently rendered three-dimensional scene from the cloud based on the current visual area, and rendering and displaying the three-dimensional scene on the terminal based on the scene state, the scene picture frame, and the rendering resources, so that the quality and efficiency of rendering and displaying the three-dimensional scene can be further improved.
In this embodiment, once the terminal has obtained the rendering resources, that is, once the terminal is able to render and display the three-dimensional scene itself, it no longer needs to obtain the rendered picture of the three-dimensional scene from the cloud. At this point the displayed three-dimensional scene can be switched directly from being rendered by the cloud to being rendered by the terminal: specifically, the scene state of the currently rendered three-dimensional scene is obtained from the cloud, and the three-dimensional scene is then rendered and displayed on the terminal based on the scene state and the rendering resources, thereby effectively ensuring the stability and reliability of displaying the three-dimensional scene.
On the basis of the above embodiment, referring to fig. 4, after the terminal obtains the rendering resources, it may not only display the three-dimensional scene rendered by the terminal, but also gradually switch the displayed three-dimensional scene from being rendered by the cloud to being rendered by the terminal, that is, implement a technical scheme of collaborative rendering of the three-dimensional scene by the terminal and the cloud. In this case, after the terminal obtains the rendering resources, rendering and displaying the three-dimensional scene on the terminal based on the rendering resources may include:
Step S401: after the terminal obtains the rendering resources, obtaining a first dynamic weight of a first rendering picture of the three-dimensional scene rendered by the cloud and a second dynamic weight of a second rendering picture of the three-dimensional scene rendered by the terminal, wherein the sum of the first dynamic weight and the second dynamic weight is 1, the first dynamic weight is reduced along with the change of time, and the second dynamic weight is increased along with the change of time.
Step S402: a rendered picture of the three-dimensional scene is determined based on the first dynamic weight, the second dynamic weight, and the first rendered picture and the second rendered picture.
Step S403: and displaying the rendering picture.
The first dynamic weight and the second dynamic weight may change dynamically over time. While the terminal is obtaining the rendering resources required by the three-dimensional scene from the cloud, the displayed picture is the first rendered picture rendered entirely by the cloud; therefore, the initial value of the first dynamic weight may be 1 and the initial value of the second dynamic weight may be 0. The first dynamic weight then gradually decreases over time while the second dynamic weight gradually increases over time; the user can flexibly configure the variation amplitude of the two weights, which is not described in detail herein.
In order to enable collaborative rendering operation between the cloud and the terminal, after the terminal acquires the rendering resource, a first dynamic weight of a first rendering picture of the three-dimensional scene rendered by the cloud and a second dynamic weight of a second rendering picture of the three-dimensional scene rendered by the terminal may be acquired. The first dynamic weight, the second dynamic weight, the first rendered frame and the second rendered frame may then be analyzed to determine a rendered frame of the three-dimensional scene, and in some examples, the rendered frame may be obtained by a pre-trained neural network model, where determining the rendered frame of the three-dimensional scene based on the first dynamic weight, the second dynamic weight, and the first rendered frame and the second rendered frame may include: the method comprises the steps of obtaining a pre-trained neural network model, inputting a first dynamic weight, a second dynamic weight, a first rendering picture and a second rendering picture into the neural network model, and obtaining a rendering picture of a three-dimensional scene output by the neural network model.
In still other examples, the rendered picture of the three-dimensional scene may be obtained not only by the neural network model, but also by analyzing the first dynamic weight, the second dynamic weight, the first rendered picture (rendered on the cloud side) and the second rendered picture (rendered on the terminal side) with a preset algorithm. In this case, determining the rendered picture of the three-dimensional scene based on the first dynamic weight, the second dynamic weight, the first rendered picture and the second rendered picture may include: obtaining a first partial rendered picture based on the first dynamic weight and the first rendered picture; obtaining a second partial rendered picture based on the second dynamic weight and the second rendered picture; and mixing the first partial rendered picture and the second partial rendered picture to obtain the rendered picture of the three-dimensional scene.
Specifically, after the first dynamic weight and the cloud-side rendered picture are acquired, they are analyzed, and in some examples the product of the first dynamic weight and the cloud-side rendered picture can be taken as the first partial rendered picture, thereby effectively ensuring the accuracy and reliability of determining the first partial rendered picture. Similarly, after the second dynamic weight and the terminal-side rendered picture are acquired, they may be analyzed, and in some examples the product of the second dynamic weight and the terminal-side rendered picture may be taken as the second partial rendered picture, thereby effectively ensuring the accuracy and reliability of determining the second partial rendered picture.
The first partial rendered picture identifies the portion of the picture rendered by the cloud, and the second partial rendered picture identifies the portion rendered by the terminal. Because each is only part of the final picture, after the first and second partial rendered pictures are obtained they can be mixed to produce the rendered picture of the three-dimensional scene; the resulting picture is thus obtained through the collaborative rendering of the cloud and the terminal, and it can then be displayed so that the user can stably and quickly view or enter the three-dimensional scene.
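As an illustration of the weighted mixing described above, the following is a minimal sketch that blends a cloud-rendered frame and a terminal-rendered frame per pixel, assuming both frames are RGBA buffers of the same size; the weight schedule (a linear ramp over a configurable duration) is an assumption for illustration, not a requirement of the method.

```typescript
// Blend a cloud-rendered frame and a terminal-rendered frame.
// Both frames are assumed to be RGBA byte buffers of identical dimensions.
function blendFrames(
  cloudFrame: Uint8ClampedArray,    // first rendered picture (cloud)
  terminalFrame: Uint8ClampedArray, // second rendered picture (terminal)
  cloudWeight: number,              // first dynamic weight, in [0, 1]
): Uint8ClampedArray {
  const terminalWeight = 1 - cloudWeight; // the two weights always sum to 1
  const out = new Uint8ClampedArray(cloudFrame.length);
  for (let i = 0; i < cloudFrame.length; i++) {
    out[i] = cloudWeight * cloudFrame[i] + terminalWeight * terminalFrame[i];
  }
  return out;
}

// Illustrative weight schedule: ramp the cloud weight from 1 down to 0
// over `rampMs` milliseconds after the terminal finishes downloading.
function cloudWeightAt(elapsedMs: number, rampMs = 2000): number {
  return Math.max(0, Math.min(1, 1 - elapsedMs / rampMs));
}
```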
In this embodiment, after the terminal obtains the rendering resources, it obtains the first dynamic weight of the first rendered picture of the three-dimensional scene rendered by the cloud and the second dynamic weight of the second rendered picture rendered by the terminal, determines the rendered picture of the three-dimensional scene based on the two weights and the two rendered pictures, and then displays it. In this way, while the terminal is obtaining the rendering resources required by the three-dimensional scene from the cloud, the three-dimensional scene can be obtained from the cloud and displayed, and after the terminal has obtained all the required rendering resources, the displayed three-dimensional scene is gradually switched from cloud rendering to terminal rendering, thereby effectively ensuring the continuity and fluency of displaying the three-dimensional scene.
On the basis of the above embodiment, referring to fig. 5, since the terminal has different viewport modes, and the different viewport modes are applicable to different application scenarios, the method in this embodiment may further include a technical scheme for adjusting the viewport mode of the terminal, and specifically, after establishing a real-time communication link between the terminal and the cloud, the method in this embodiment may further include:
Step S501: and acquiring the view port mode of the terminal.
Step S502: when the view port mode is a non-streaming mode, the view port mode of the terminal is adjusted to be a streaming mode, and the streaming mode is used for receiving and displaying pictures rendered by the cloud; the non-streaming mode is used for receiving and displaying a picture rendered by the terminal or a picture rendered by the cooperation of the terminal and the cloud.
Wherein, the terminal is configured with different viewport modes, and different viewport modes are applicable to different application scenarios. In some examples, the viewport modes may include a streaming mode, used for receiving and displaying pictures rendered by the cloud, and a non-streaming mode, used for receiving and displaying pictures rendered by the terminal or rendered collaboratively by the terminal and the cloud. After the real-time communication link between the terminal and the cloud is established, in order to ensure stable rendering and display of the three-dimensional scene, it is not enough for the cloud or the terminal to obtain a rendered picture; the viewport mode of the terminal must also be considered. At this point the viewport mode of the terminal can be acquired, and in some examples the viewport mode can be determined by a preset mode identifier, for example: when the preset mode identifier is a first preset identifier '1', the viewport mode is determined to be the non-streaming mode, and when it is a second preset identifier '0', the viewport mode is determined to be the streaming mode, thereby effectively ensuring that the viewport mode of the terminal can be acquired.
When the viewport mode is the non-streaming mode, the terminal cannot correctly display the rendered picture of the three-dimensional scene obtained from the cloud, so the viewport mode of the terminal is adjusted to the streaming mode. Specifically, after the real-time communication link between the cloud and the terminal is established, the cloud can send the terminal a prompt message indicating that the link has been established successfully. The viewport mode of the terminal may default to the non-streaming mode; after the terminal receives the prompt message, it can adjust its viewport mode to the streaming mode based on the prompt message, so that the picture rendered by the cloud can be displayed in the streaming mode, thereby effectively implementing the adjustment of the viewport mode of the terminal.
Correspondingly, in this embodiment the viewport mode may be adjusted not only from the non-streaming mode to the streaming mode based on the application scenario, but also from the streaming mode back to the non-streaming mode. Specifically, after the viewport mode of the terminal has been adjusted to the streaming mode, the method in this embodiment may further include: after the terminal acquires the rendering resources, adjusting the viewport mode of the terminal from the streaming mode to the non-streaming mode. Once the terminal has acquired the rendering resources it can render and display the three-dimensional scene itself, so the viewport mode can be switched from streaming to non-streaming and the three-dimensional scene rendered by the terminal can be displayed in the non-streaming mode, thereby ensuring the stability and reliability of displaying the three-dimensional scene.
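The following is a minimal sketch of the viewport-mode bookkeeping described above, assuming the mode is tracked by the preset identifiers '1' (non-streaming) and '0' (streaming) mentioned earlier; the class and method names are illustrative.

```typescript
// Viewport working modes, keyed by the preset mode identifiers.
enum ViewportMode {
  NonStreaming = "1", // displays frames rendered by the terminal (or collaboratively)
  Streaming = "0",    // displays frames pushed by the cloud
}

class Viewport {
  private mode: ViewportMode = ViewportMode.NonStreaming; // default mode

  // Called when the terminal receives the "link established" prompt message.
  onLinkEstablished(): void {
    this.mode = ViewportMode.Streaming;
  }

  // Called once the terminal has finished downloading the rendering resources.
  onRenderingResourcesReady(): void {
    this.mode = ViewportMode.NonStreaming;
  }

  currentMode(): ViewportMode {
    return this.mode;
  }
}
```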
In some examples, referring to fig. 6, in order to ensure stable and reliable transmission of the rendered picture, the real-time communication link established between the terminal and the cloud may include a streaming channel and a data channel, where the streaming channel is used to transmit the picture frames rendered by the cloud, and the data channel is used to transmit interaction operations, interaction results, data processing operations, and data processing results. Specifically, since the real-time communication link includes the streaming channel, in order to ensure that the rendered picture is displayed accurately and reliably, the terminal in this embodiment acquiring and displaying the rendered picture of the three-dimensional scene from the cloud through the real-time communication link may include: the terminal receives the rendered picture of the three-dimensional scene sent by the cloud through the streaming channel; after receiving the rendered picture, the terminal can display it based on its streaming mode, thereby effectively ensuring that the rendered picture is displayed.
In this embodiment, by acquiring the viewport mode of the terminal and adjusting it to the streaming mode when it is in the non-streaming mode, the rendered picture of the three-dimensional scene sent by the cloud can then be displayed based on the streaming mode of the terminal, thereby ensuring the stability and reliability of displaying the rendered picture.
On the basis of the above embodiment, referring to fig. 7, after the terminal establishes a real-time communication link with the cloud and after the terminal obtains the rendering resources (that is, the terminal has downloaded the rendering resources required by the three-dimensional scene), the user may flexibly control the real-time communication link. In this case, the method in this embodiment may further include:
step S701: a link control request corresponding to the real-time communication link is obtained.
Step S702: the real-time communication link is controlled to remain in an off state or to remain in an on state based on the link control request.
When the rendering resource is downloaded, the user can flexibly control the real-time communication link according to the requirement, and at this time, the terminal can obtain a link control request corresponding to the real-time communication link, in some examples, the link control request can be automatically generated through a state that the rendering resource is downloaded, and at this time, obtaining the link control request corresponding to the real-time communication link can include: and automatically generating a link control request corresponding to the real-time communication link in response to the downloaded state of the rendering resource. In other examples, obtaining a link control request corresponding to a real-time communication link may include: displaying a link control operation corresponding to the real-time communication link; and generating a link control request corresponding to the real-time communication link based on the link control operation, thereby effectively ensuring the accuracy and reliability of acquiring the link control request.
Because the link control request includes the identification information of the real-time communication link, after the link control request is acquired the real-time communication link can be controlled based on it. In some examples, the real-time communication link can be controlled to remain in a closed state, i.e., the communication connection between the terminal and the cloud is disconnected based on the link control request; alternatively, the real-time communication link may be controlled to remain in an open state based on the link control request.
In this embodiment, by acquiring the link control request corresponding to the real-time communication link, and then controlling the real-time communication link to maintain the closed state or to maintain the open state based on the link control request, flexible control operation on the real-time communication link based on the requirements is effectively achieved, and the flexibility and reliability of the method are further improved.
On the basis of any one of the above embodiments, referring to fig. 8, the real-time communication link includes a data channel, after the terminal obtains and displays the rendered image of the three-dimensional scene from the cloud through the real-time communication link, the user may perform an interactive operation with the three-dimensional scene according to the requirement, and at this time, the method in this embodiment may further include:
Step S801: and acquiring an information query request input by a user aiming at a preset object in the rendering picture.
Step S802: and sending the information inquiry request to the cloud end through the data channel, so that the cloud end analyzes and processes the information inquiry request to obtain an inquiry result corresponding to the information inquiry request.
Step S803: and receiving the query result sent by the cloud through the data channel.
After receiving and displaying the rendered picture of the three-dimensional scene sent by the cloud through the real-time communication link, the user may input an information query request for the rendered picture as required. For example, when the three-dimensional scene is a game scene or a dress-up scene of a virtual character, the user can interact, through a mouse or keyboard, with the virtual user avatar, non-player characters, and buildings in the three-dimensional scene, so that an information query request input by the user for a preset object in the rendered picture can be obtained; or, when the three-dimensional scene is a design scene of a three-dimensional building, the user can interact, through a mouse or keyboard, with controls in the three-dimensional scene that implement preset functions, so that the information query request input by the user for the rendered picture can likewise be obtained.
Because the displayed rendered picture is sent to the terminal by the cloud, after the information query request input by the user for the rendered picture is obtained, the data channel included in the real-time communication link can be determined and the information query request sent to the cloud through it. The cloud thus obtains the information query request input by the user (a click operation, sliding operation, double-click operation, parameter input operation, information viewing operation, and so on), and can then analyze and process it to obtain the query result corresponding to the information query request.
As for the query result, it may be representable by the rendered picture of the three-dimensional scene rendered by the cloud. For example, when the information query request input by the user is a request to adjust the user's pose, the query result can be the adjusted pose, which can be shown directly in the rendered picture. In that case, to implement the interaction stably, the query result can be combined with the rendered picture of the three-dimensional scene rendered by the cloud, and the combined rendered picture is then sent to the terminal through the streaming channel, so that the terminal learns the interaction result through the rendered picture. It should be noted that in this case the cloud does not need to transmit the query result to the terminal separately through the data channel; it combines the query result with the rendered picture and sends the combined picture to the terminal through the streaming channel.
When the query result cannot be represented by the rendered picture of the three-dimensional scene rendered by the cloud, it must be sent to the terminal separately through the data channel. For example, when the information query request input by the user is a red-envelope amount query request, the query result can be the amount contained in the opened red envelope, which cannot be shown through the rendered picture. Specifically, to implement the interaction in the three-dimensional scene accurately, after the query result corresponding to the information query request is obtained, it can be returned to the terminal through the data channel, so that the terminal receives the query result sent by the cloud through the data channel and can display it, enabling the user to obtain the interaction result quickly and intuitively and improving the user's interactive experience.
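A minimal sketch of the query flow over a WebRTC data channel follows, assuming the real-time communication link is implemented with WebRTC as in the later application embodiment; the message shape (a JSON object with `type` and `target` fields) is an assumption for illustration.

```typescript
// Send an information query for a preset object over the data channel and
// wait for the cloud to return the query result on the same channel.
function queryObjectInfo(
  dataChannel: RTCDataChannel,
  objectId: string,
): Promise<unknown> {
  return new Promise((resolve) => {
    const onMessage = (event: MessageEvent) => {
      const msg = JSON.parse(event.data);
      if (msg.type === "queryResult" && msg.target === objectId) {
        dataChannel.removeEventListener("message", onMessage);
        resolve(msg.payload); // e.g. the amount inside an opened red envelope
      }
    };
    dataChannel.addEventListener("message", onMessage);
    dataChannel.send(JSON.stringify({ type: "infoQuery", target: objectId }));
  });
}
```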
On the basis of any one of the above embodiments, referring to fig. 9, in this embodiment, not only the rendering processing operation of the three-dimensional scene can be achieved by the cooperation of the terminal and the cloud, but also the data processing operation can be achieved by the cooperation of the terminal and the cloud, and at this time, after the terminal obtains and displays the rendering picture of the three-dimensional scene from the cloud through the real-time communication link, the method in this embodiment may further include:
Step S901: and acquiring a processing request input for a preset object in the three-dimensional scene.
Step S902: the computing resources required to process the request are determined.
Step S903: and when the computing resource is greater than or equal to a preset threshold, sending a processing request to the cloud end so that the cloud end can process based on the data processing request to obtain a processing result.
Step S904: and receiving the processing result sent by the cloud through a real-time communication link.
While the terminal performs rendering processing of the three-dimensional scene, it may acquire a processing request input for a preset object in the three-dimensional scene. It can be understood that different scenes correspond to different processing requests, and in some examples the processing request may include at least one of the following: a data adjustment request (for example, in a game scene, adjusting the user avatar, user name, clothing, mounts, or clothing colors in the three-dimensional scene; in a model design scene, adjusting a model in the three-dimensional scene), a data viewing request (for example, in a game scene, viewing detailed information of a user or the relevant attributes of a user; in a model design scene, viewing partial data of a historically designed three-dimensional scene), a data deletion request (for example, in a game scene, deleting equipment owned by a virtual user in the three-dimensional scene or deleting interaction information; in a model design scene, deleting certain design data in the three-dimensional scene), a data call request (for example, calling and viewing a sub-page in the three-dimensional scene, or calling a scene function in the three-dimensional scene), a rendering request for the three-dimensional scene (for example, a dynamic global illumination rendering request or a special-effect particle rendering request), a simulation request (for example, a soft-body simulation request), and so on.
From the above, it is known that different types of processing requests can implement different data processing operations, and the computing resources required for the different data processing operations are different, so, after the processing request input for the preset object in the three-dimensional scene is acquired, in order to enable stable data processing operations, the computing resources required for the processing request can be determined, where the computing resources required for the processing request can be determined by a preset algorithm or a neural network model trained in advance.
Because some processing requests require few computing resources and others require many, in order to avoid the terminal occupying too many computing resources and thereby affecting the loading quality and efficiency of the three-dimensional scene, and in order to improve the utilization of computing resources in the cloud, the required computing resources can be compared with a preset threshold once they are determined. When the computing resources are smaller than the preset threshold, the processing request requires few computing resources, so it can be analyzed and processed directly by the terminal to obtain the corresponding processing result.
When the computing resources are greater than or equal to the preset threshold, the processing request requires more computing resources. In order to ensure the loading quality and effect of the three-dimensional scene on the terminal, the processing request can then be analyzed and processed by the cloud: specifically, the processing request is sent to the cloud, the cloud processes it to obtain a processing result, and the processing result is sent back to the terminal, so that the terminal receives the processing result through the real-time communication link and can display it, allowing the user to view the result quickly.
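The following is a minimal sketch of the threshold-based routing decision described above; the resource estimator and the threshold value are illustrative assumptions, and the cloud call is represented by sending the request over the data channel.

```typescript
interface ProcessingRequest {
  kind: string;     // e.g. "globalIllumination", "dataView" (illustrative)
  payload: unknown;
}

// Hypothetical estimator: returns an abstract cost for the request.
declare function estimateComputeCost(req: ProcessingRequest): number;
// Hypothetical local handler running on the terminal.
declare function processLocally(req: ProcessingRequest): unknown;

const COMPUTE_THRESHOLD = 100; // illustrative preset threshold

// Route a processing request to the terminal or the cloud based on its cost.
function routeProcessingRequest(
  req: ProcessingRequest,
  dataChannel: RTCDataChannel,
): void {
  if (estimateComputeCost(req) >= COMPUTE_THRESHOLD) {
    // Heavy request: let the cloud process it and return the result.
    dataChannel.send(JSON.stringify({ type: "processingRequest", req }));
  } else {
    // Light request: process directly on the terminal.
    processLocally(req);
  }
}
```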
In this embodiment, by acquiring a processing request input for a preset object in the three-dimensional scene and determining the computing resources it requires, the processing request is analyzed and processed by the cloud when many computing resources are required, with the processing result received through the real-time communication link; when few computing resources are required, the terminal analyzes and processes the request itself. This extends the flexibility and reliability of the processing operation, ensures its stability on the terminal, and in addition improves the utilization of cloud resources, further improving the practicability of the method.
On the basis of any one of the above embodiments, referring to fig. 10, in this embodiment, not only the rendering processing operation of the three-dimensional scene can be realized by the cooperation of the terminal and the cloud, but also the deployment operation of the component can be realized by the cooperation of the terminal and the cloud, where in this case, the method in this embodiment may further include:
step S1001: acquiring space resources required to be occupied by a preset component in a three-dimensional scene;
step S1002: and when the space resource is greater than or equal to a preset threshold value, deploying the preset component on the cloud end.
While the terminal performs rendering processing of the three-dimensional scene, it may deploy certain preset components used to implement that rendering processing. In different application scenarios, the space resources that a preset component needs to occupy in the three-dimensional scene differ; therefore, when a preset component exists in the three-dimensional scene, the space resources it needs to occupy can be acquired, where these space resources can be determined through a preset algorithm or a pre-trained neural network model.
Because some preset components need to occupy few space resources and others need to occupy many, in order to avoid the terminal being occupied by too many space resources and thereby affecting the loading quality and efficiency of the three-dimensional scene, and in order to improve the utilization of space resources in the cloud, the space resources required by the preset component can be compared with a preset threshold once they are acquired. When the space resources are smaller than the preset threshold, the preset component needs to occupy few space resources, so it can be deployed directly on the terminal, and the preset function of the three-dimensional scene can be implemented through the preset component in the terminal.
When the space resources are greater than or equal to the preset threshold, the preset component needs to occupy many space resources. In order to ensure the loading quality and effect of the three-dimensional scene on the terminal, the preset component can then be deployed in the cloud; the user can call the cloud-deployed component through the terminal, the component in the cloud responds to the call request, and the response result is returned to the terminal. This improves the utilization of space resources in the cloud and implements the relevant functions of the three-dimensional scene in cooperation with the cloud.
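As a companion to the routing sketch above, here is a minimal sketch of the deployment decision for a preset component; the space-cost field, the threshold, and the deployment targets are illustrative assumptions.

```typescript
type DeploymentTarget = "terminal" | "cloud";

interface PresetComponent {
  name: string;                // e.g. "physicsSolver" (illustrative)
  estimatedSpaceBytes: number; // space resources the component would occupy
}

const SPACE_THRESHOLD_BYTES = 64 * 1024 * 1024; // illustrative preset threshold

// Decide where a preset component should be deployed.
function chooseDeploymentTarget(component: PresetComponent): DeploymentTarget {
  return component.estimatedSpaceBytes >= SPACE_THRESHOLD_BYTES
    ? "cloud"      // large component: deploy in the cloud and call it remotely
    : "terminal";  // small component: deploy locally on the terminal
}
```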
In this embodiment, the space resources that a preset component needs to occupy in the three-dimensional scene are acquired; when the space resources are greater than or equal to the preset threshold the preset component is deployed in the cloud, and when they are smaller than the preset threshold it is deployed on the terminal. This extends the flexibility and reliability of component deployment, ensures the stability and reliability of the terminal's data processing operations, and in addition improves the resource utilization of the cloud, further improving the practicability of the method.
Based on any one of the above embodiments, the method in this embodiment may pre-start a preset number of cloud instances for the three-dimensional scene to further improve its loading quality and efficiency. Specifically, before the real-time communication link between the terminal and the cloud is established, the method may include:
step S1101: historical operation data of the three-dimensional scene is obtained.
Step S1102: and determining the quantity information of cloud instances corresponding to the three-dimensional scene based on the historical operation data.
Step S1103: the method comprises the steps of sending the quantity information of the cloud terminal instances to the cloud terminal to pre-start the cloud terminal instances meeting the quantity information before a real-time communication link between the terminal and the cloud terminal is established.
The number of users of the three-dimensional scene may be in the hundreds, thousands, or more. When many users load the three-dimensional scene at the same time, in order to improve its loading efficiency and ensure a good user experience, a preset number of cloud instances may be started in advance. Taking a cloud server as an example of a cloud instance, and referring to fig. 11: when a user starts or uses the software corresponding to the three-dimensional scene, the frequency with which the scene is started or used often follows a certain pattern, and different users generally have different usage or startup habits for different three-dimensional scenes. Therefore, the usage frequency or startup frequency of the three-dimensional scene can be determined by collecting statistics on users' usage or startup information for the scene within a preset time period.
On this basis, in order to ensure that the preset number is reasonable, the historical operation data corresponding to the three-dimensional scene may be acquired. The historical operation data may refer to the operation data of users using or loading the three-dimensional scene within a preset historical period (for example, the preceding 15 days or 30 days). The historical operation data can be stored in a preset database or a preset area, and the historical operation data corresponding to the three-dimensional scene can be obtained by accessing that preset database or area.
After the historical operation data is obtained, the historical operation data may be analyzed, so that the number information of cloud servers corresponding to the three-dimensional scene may be determined, in some examples, the number information of cloud servers may be obtained by using a preset algorithm or a pre-trained machine learning model, and at this time, determining the number information of cloud servers corresponding to the three-dimensional scene based on the historical operation data may include: and acquiring a pre-trained machine learning model, inputting historical operation data into the machine learning model, and acquiring the quantity information of cloud servers corresponding to the three-dimensional scene output by the machine learning model.
After the quantity information of the cloud servers is acquired, it can be sent to the cloud, so that the cloud can pre-start cloud instances satisfying the quantity information before the real-time communication link between the terminal and the cloud is established. For example, before the link is established, the historical operation data corresponding to the three-dimensional scene may be obtained and analyzed, and the number of cloud servers corresponding to the three-dimensional scene may be determined to be 500. After the terminal acquires this quantity information, it sends the value 500 to the cloud so that 500 cloud servers are pre-started before the real-time communication link is established; since the rendering resources required by the three-dimensional scene are stored locally in the cloud servers, the cloud thus has 500 pre-started cloud servers waiting for users to start and load the three-dimensional scene, which can effectively improve the loading quality and efficiency of the scene.
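A minimal sketch of deriving the pre-start count from historical operation data follows; it replaces the machine-learning model mentioned above with a simple peak-concurrency heuristic, purely as an illustrative assumption.

```typescript
interface UsageRecord {
  startEpochMs: number; // when a user started loading the scene
  durationMs: number;   // how long the session lasted
}

// Estimate how many cloud instances to pre-start by taking the peak number
// of concurrent sessions observed in the historical window, plus headroom.
function estimateInstanceCount(history: UsageRecord[], headroom = 1.2): number {
  const events: Array<[number, number]> = [];
  for (const r of history) {
    events.push([r.startEpochMs, +1]);                // session begins
    events.push([r.startEpochMs + r.durationMs, -1]); // session ends
  }
  // Sort by time; on ties, count session ends before session starts.
  events.sort((a, b) => a[0] - b[0] || a[1] - b[1]);
  let current = 0;
  let peak = 0;
  for (const [, delta] of events) {
    current += delta;
    peak = Math.max(peak, current);
  }
  return Math.ceil(peak * headroom);
}
```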
In the embodiment, the number information of the cloud instances corresponding to the three-dimensional scene is determined based on the historical operation data by acquiring the historical operation data corresponding to the three-dimensional scene, and the number information of the cloud instances is sent to the cloud, so that the cloud instances meeting the number information can be started in advance before a real-time communication link between the terminal and the cloud is established, and when a user starts and loads the three-dimensional scene, the rendering processing operation of the three-dimensional scene can be cooperatively realized by utilizing the cloud instances started in advance, and the practicability of the method is further improved.
Referring to fig. 12, this embodiment provides a rendering processing method for a three-dimensional scene whose execution body may be a rendering processing apparatus for a three-dimensional scene, and the apparatus may be implemented as a cloud; that is, the rendering processing method may be applied to the cloud. It should be noted that, in order to reduce as much as possible the time a user waits before entering the three-dimensional scene, the cloud stores locally the rendering resources required by the scene. Specifically, when a user uses or enters the three-dimensional scene for the first time, the terminal and the cloud may both download the rendering resources required by the scene; after the cloud has downloaded them, it may store them in its local area, so that when the user uses or enters the three-dimensional scene again, the cloud can quickly load the required rendering resources, thereby effectively improving the loading efficiency of the scene. On the basis of the rendering processing apparatus described above, the rendering processing method for a three-dimensional scene in this embodiment may include the following steps:
Step S1201: and acquiring the three-dimensional scene to be rendered, which is sent by the terminal.
When a user has a cooperative requirement of the cloud end and the terminal aiming at the three-dimensional scene, the cloud end can acquire the three-dimensional scene to be rendered sent by the terminal, so that the cloud end can realize rendering processing operation aiming at the three-dimensional scene.
Specifically, when the terminal sends the three-dimensional scene to be rendered to the cloud, the terminal may download rendering resources required for the three-dimensional scene from the cloud based on the three-dimensional scene to be rendered, where the rendering resources may include at least one of the following: the initialization information of the user and the scene resource, the initialization information of the user may include at least one of the following: user identity, user type, user related information, user's birth location, user's current perspective, etc.; the scene resource may include at least one of: virtual 3D scenes, building information in the scenes, scene lighting information, non-player characters in the scenes, and the like, so that the terminal can implement rendering processing operations of the three-dimensional scenes based on the downloaded rendering resources.
Step S1202: and establishing a real-time communication link with the terminal based on the three-dimensional scene to be rendered.
After the three-dimensional scene to be rendered is acquired, a real-time communication link can be established with the terminal based on the three-dimensional scene to be rendered, and the real-time communication link can be established with the terminal through quintuple information, six tuple information or seven tuple information sent by the terminal.
Step S1203: based on rendering resources required for the three-dimensional scene stored locally at the cloud, a rendering picture of the three-dimensional scene is determined.
In order to realize the rendering processing operation of the three-dimensional scene, when or after the cloud end and the terminal establish a real-time communication link, the rendering resources required by the three-dimensional scene stored locally in the cloud end can be determined, and the rendering resources are the same as the rendering resources required by the three-dimensional scene downloaded by the terminal and can be used for realizing the rendering processing operation of the three-dimensional scene.
After the rendering resources are acquired, the rendering picture corresponding to the three-dimensional scene can be determined based on the rendering resources, specifically, the rendering resources required by the three-dimensional scene can be loaded into the memory, and after the engine of the cloud is started, the rendering operation of the picture can be directly performed, so that the rendering picture is obtained.
Step S1204: and sending the rendered picture to the terminal for display through a real-time communication link.
After the rendered picture is obtained, the rendered picture can be sent to the terminal for display through a real-time communication link, and the terminal can be in different states in the process of sending the rendered picture to the terminal for display by the cloud; in some examples, sending the rendered screen to the terminal for display over the real-time communication link may include: in the process that the terminal acquires the rendering resources required by the three-dimensional scene from the cloud, the rendering pictures are sent to the terminal for display through the real-time communication link, specifically, the cloud sends the rendering pictures to the terminal through the real-time communication link, after the terminal acquires the rendering pictures, the rendering pictures can be displayed, and the terminal can be in the process of downloading the rendering resources at the moment, namely, the terminal can synchronously perform the display operation of the rendering pictures and the downloading operation of the rendering resources.
In still other examples, sending the rendered picture to the terminal for display over the real-time communication link may include: sending the rendered picture to the terminal for display over the real-time communication link within a preset time period after the terminal has obtained the rendering resources required by the three-dimensional scene from the cloud. The preset time period can be any value from 200 ms to 500 ms, and a person skilled in the art can adjust it according to the specific application scenario or requirements. After the cloud obtains the rendered picture, it can send it to the terminal through the real-time communication link, and the terminal can display it once received. It should be noted that in this case the terminal has already downloaded the rendering resources required by the three-dimensional scene, that is, the terminal downloads the rendering resources first and then displays the rendered picture.
It should be noted that, the cloud end in this embodiment may not only execute the method steps in the above embodiment, but also execute the method steps corresponding to the cloud end in fig. 1-11, and detailed descriptions thereof will be omitted herein.
According to the rendering processing method for a three-dimensional scene provided in this embodiment, the loading request for the three-dimensional scene sent by the terminal is obtained, a real-time communication link is established with the terminal based on that request, the rendering resources required by the three-dimensional scene and stored in the cloud are determined, the rendered picture corresponding to the three-dimensional scene is determined based on those rendering resources, and the rendered picture is sent to the terminal for display through the real-time communication link. This effectively implements the collaborative rendering of the three-dimensional scene by the cloud and the terminal, shortens the loading time of the scene, ensures a good user experience, facilitates the acquisition of new users, and further improves the practicability of the method.
In a specific application, referring to fig. 13, this application embodiment combines the principle of end-cloud coordination to provide an end-cloud collaborative loading and rendering method for three-dimensional scenes. The execution bodies of the method may include the terminal side and the cloud side, and the method may include a cloud hot-start stage, a cloud rendering and push-streaming stage, an end-cloud state synchronization and switching stage, a hybrid rendering stage, and other stages. The method can greatly reduce the time for a user to first load a three-dimensional scene while ensuring and improving the user's experience when using the end-side engine. Specifically, the method may include:
The first stage: the cloud hot start stage comprises the following steps:
step 10: and responding to a starting request of a user for the three-dimensional scene, and controlling the starting of an end-side engine of the terminal side.
Wherein the end-side engine may include at least one of the following: any rendering engine supporting cross-platform rendering, such as the 3D engines AceNNR, Unreal Engine, Unity3D, OpenGL-based engines, and the like.
Step 11: the terminal side sends a connection establishment request to the cloud side through the communication layer so as to realize that the terminal side engine and the cloud side engine establish a remote real-time communication channel under the action of the communication layer; the terminal side engine can start to download rendering resources required by the three-dimensional scene while the terminal side initiates the connection establishment request.
The connection establishment request may include five-tuple information corresponding to the terminal side, identity identification of the terminal side, a virtual 3D scene, a person in the scene, a current view angle of the person, and other information, so that the cloud side engine may establish a real-time communication channel based on the information in the connection establishment request, and start corresponding rendering resources based on association information corresponding to the three-dimensional scene included in the connection establishment request.
In addition, the communication layer is responsible for enabling the terminal side and the cloud side to establish a connection with the engine instance. The rendering resources required by the three-dimensional scene may include resources necessary for rendering it, such as 3D scene packages and 3D characters, which are not yet available in the end-side engine at this moment. Furthermore, the established real-time communication channel can be implemented with Web Real-Time Communication (WebRTC), and multiple data channels (Data Channels) and stream channels (Stream Channels) can be established between the end-side engine and the cloud-side engine; the data channels are used for acquiring and transmitting user operations and cloud-side engine call results, while the stream channels are used for transmitting streaming video and rendered pictures.
In still other examples, when the real-time communication channel is established between the terminal side and the cloud side, the real-time communication channel may be established through a network protocol such as a full duplex communication protocol (WebSocket) based on TCP, a real-time communication network library (for example, a real-time streaming protocol (Real Time Streaming Protocol, RTSP for short)), and the like, where a signaling server may be established first, and a connection establishment request may be forwarded to the cloud side through the signaling server, so that the real-time communication channel is established between the terminal side and the cloud side.
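As a concrete illustration of the WebRTC variant described above, the following is a minimal end-side sketch that creates a peer connection with one data channel for operations and call results and prepares to receive the cloud's video stream; the signaling exchange through the signaling server is abbreviated to two hypothetical helpers, `sendToSignalingServer` and `waitForAnswer`.

```typescript
// Hypothetical signaling helpers; in practice these would talk to the
// signaling server mentioned above (e.g. over WebSocket).
declare function sendToSignalingServer(offer: RTCSessionDescriptionInit): void;
declare function waitForAnswer(): Promise<RTCSessionDescriptionInit>;

async function connectToCloudEngine(videoElement: HTMLVideoElement) {
  const pc = new RTCPeerConnection();

  // Data channel: carries user operations and cloud-side engine call results.
  const dataChannel = pc.createDataChannel("engine-data");
  dataChannel.onmessage = (e) => console.log("cloud result:", e.data);

  // Stream channel: the cloud pushes its rendered frames as a video track.
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.ontrack = (event) => {
    videoElement.srcObject = event.streams[0]; // viewport in streaming mode
  };

  // Standard offer/answer exchange through the signaling server.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer(offer);
  await pc.setRemoteDescription(await waitForAnswer());

  return { pc, dataChannel };
}
```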
Step 12: the cloud side may be deployed with a plurality of cloud side engine instances (corresponding to the cloud sides in the above embodiment), after the cloud side obtains the connection request, the cloud side engine instance corresponding to the end side engine may be determined based on the connection request, and the cloud side engine instance may establish a real-time communication channel with the end side engine based on the connection request.
The type and number information of cloud-side engine instances can be dynamically allocated based on the user's usage needs or scenarios.
Step 13: the cloud side locally stores rendering resources corresponding to the three-dimensional scene in advance, after the cloud side engine and the end side engine establish a real-time communication channel, the rendering resources of the local cache can be quickly started and loaded in a hot start mode, and meanwhile, a prompt message of completion of connection establishment is returned to the terminal side.
In still other examples, the hot start of the cloud-side engine may rely on engine instances that are already running: before receiving a start request for the three-dimensional scene, the cloud side prepares a preset number of engine instances (the preset number may be determined by scene demand or by instance demand derived from historical statistics) and lets them wait in the resource-loading stage, and memory may be shared between engine instances. In this way, the resources required by the three-dimensional scene are prepared in advance on the cloud side, either for all possible scene resources or through shared memory, so that the cloud-side engine can quickly load the required resources into memory, further improving the quality and efficiency with which the cloud-side engine starts and loads the rendering resources.
Step 14: and the terminal side receives a prompt message for indicating that the connection is completed, and switches the video port of the terminal side to a streaming mode based on the prompt message so as to start to prepare to receive and display the video stream pushed by the cloud side.
The working modes of the view port at the terminal side can include: the streaming mode and the non-streaming mode can flexibly adjust the working mode of the viewport according to requirements, and the streaming mode can display streaming data sent by a third party; the non-streaming mode can display data on the terminal side.
And a second stage: the cloud push-streaming stage, comprising the following steps:
step 20: and after the cloud side engine is started, directly starting to render the picture, and transmitting the rendered picture to the terminal side through a real-time communication channel.
After the cloud-side engine is started, it can perform the frame rendering operation, then capture, compress, and encode the rendered frames, and send each rendered frame in turn to the terminal side as streaming data through the stream channel of the communication layer. Note that H.264 encoding or other hardware encoding may be used for the cloud-side encoding of the rendered pictures.
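The following is a minimal sketch of the capture/encode/push loop, using a WebCodecs-style VideoEncoder purely for illustration and assuming a hypothetical engine hook that exposes each rendered frame; in an actual cloud deployment a native H.264 hardware encoder would play the same role, and the encoded chunks would be forwarded over the stream channel.

```typescript
// Forward an encoded frame over the stream channel (hypothetical helper).
declare function pushToStreamChannel(chunk: EncodedVideoChunk): void;
// Hypothetical engine hook exposing the most recent rendered frame.
declare function captureRenderedFrame(): VideoFrame;

const encoder = new VideoEncoder({
  output: (chunk) => pushToStreamChannel(chunk), // push each encoded frame
  error: (e) => console.error("encode error:", e),
});

encoder.configure({
  codec: "avc1.42E01E", // H.264 baseline profile
  width: 1280,
  height: 720,
  bitrate: 4_000_000,
  framerate: 60,
});

// Called once per rendered frame by the cloud-side engine loop.
function onFrameRendered(): void {
  const frame = captureRenderedFrame();
  encoder.encode(frame);
  frame.close(); // release the frame after handing it to the encoder
}
```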
Step 21: the terminal side obtains a rendering picture sent by the cloud side through a real-time communication channel, and displays the rendering picture by utilizing a view port in a streaming mode.
Specifically, each time a terminal receives a new rendered picture transmitted by the cloud side, the new rendered picture can be decoded, and then the decoded rendered picture is displayed by using a view port in a streaming mode, so that the rendered picture sent by the cloud side engine can be displayed on a screen of the terminal side.
In still other examples, the terminal side can display the rendering picture sent by the cloud side, and can also simultaneously implement interaction with the user, and specifically can include the following steps:
Step 31: the terminal side may acquire an execution operation input by the user for the displayed rendering screen.
Step 32: and the terminal side sends the execution operation to the cloud side through the real-time communication channel.
Specifically, on the terminal side, the viewport not only displays the rendered picture but also continuously captures the operations the user inputs for it, such as keyboard input and click or touch operations. The captured user operations and inputs can then be sent to the cloud side through the data channel of the communication layer, and in some examples they can be serialized according to certain rules and transmitted as binary code.
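A minimal sketch of serializing a user input event into a binary message and sending it over the data channel follows; the event layout (a one-byte type tag followed by two 32-bit coordinates) is an illustrative assumption, not a format defined by the method.

```typescript
// Illustrative binary layout: [type:uint8][x:float32][y:float32]
enum InputType {
  Click = 1,
  KeyDown = 2,
}

function serializeClick(x: number, y: number): ArrayBuffer {
  const buf = new ArrayBuffer(9);
  const view = new DataView(buf);
  view.setUint8(0, InputType.Click);
  view.setFloat32(1, x);
  view.setFloat32(5, y);
  return buf;
}

// Capture click operations on the viewport and forward them to the cloud side.
function forwardClicks(viewport: HTMLElement, dataChannel: RTCDataChannel): void {
  viewport.addEventListener("click", (e: MouseEvent) => {
    dataChannel.send(serializeClick(e.offsetX, e.offsetY));
  });
}
```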
Step 33: the cloud side receives the execution operation sent by the terminal side through the real-time communication channel, analyzes and processes the execution operation, obtains a response result corresponding to the execution operation, and sends the response result to the terminal side.
After the cloud side receives the executed operation, it can be parsed (deserialized) and responded to, yielding a response result that may include at least one of the following: a result of controlling a virtual character in the three-dimensional scene, a call-function result, and so on. For call-function results, different scenes may correspond to different call functions; for example, the call functions may include a page call function for invoking a sub-page, a function for adjusting a building in the three-dimensional scene, a function for controlling a virtual user in the three-dimensional scene, and so on. That is, a call function may be any function capable of controlling a character in the three-dimensional scene or a scene object in the scene environment, and it should be noted that different call functions may correspond to different call-function results.
After the response result is obtained, the response result can be sent to the terminal side, specifically, after the cloud side completes each response, the corresponding response result (such as a function return value, a callback function result and the like) can be sent back to the terminal side through a data channel of the communication layer, so that the terminal side can be ensured to always know the state of the cloud side engine or the user can obtain the required engine state, and the interaction effect between the user of the terminal side and the cloud side engine can be realized.
Step 34: the terminal receives the response result corresponding to the execution operation and displays it, thereby completing the user interaction.
It should be noted that the above steps are executed continuously until the engine and the scene resources on the terminal side have been fully downloaded and loaded, after which the next stage, the synchronization stage, can be entered.
The third stage: a synchronization stage, comprising the following steps:
Step 40: detecting whether the end-side engine of the terminal has finished downloading the rendering resources required by the three-dimensional scene; when the end-side engine has finished downloading the rendering resources required by the three-dimensional scene, generating prompt information identifying that the rendering resources have been downloaded, and sending the prompt information to the cloud side.
When the end-side engine has downloaded the rendering resources required by the three-dimensional scene, the rendering resources can be loaded into memory, and once loading is complete, the picture rendering operation can be performed. In addition, after the prompt information is generated, it can be sent to the cloud side through the message channel of the real-time communication channel, so as to inform the cloud-side engine of the current state of the end-side engine.
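As a sketch only (the EndSideEngine interface and the message shape are assumptions), the end-side notification on the message channel could be:

```typescript
// Hypothetical end-side engine interface.
interface EndSideEngine {
  downloadResources(): Promise<void>; // download the rendering resources of the scene
  loadIntoMemory(): void;             // load the downloaded resources into memory
}

// After download and loading complete, send the prompt information through the
// message channel of the real-time communication channel.
async function notifyResourcesReady(
  engine: EndSideEngine,
  messageChannel: RTCDataChannel,
): Promise<void> {
  await engine.downloadResources();
  engine.loadIntoMemory();
  messageChannel.send(JSON.stringify({ type: 'rendering-resources-downloaded' })); // prompt information
}
```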
Step 41: the cloud side receives the prompt information identifying that the rendering resources have been downloaded, obtains the cloud-side current state of the three-dimensional scene in the cloud based on the prompt information, and sends the cloud-side current state to the terminal side to implement the information synchronization operation.
After the cloud-side engine receives the prompt information, the cloud-side current state of the three-dimensional scene can be obtained, where the cloud-side current state may include at least one of the following: the virtual characters in the three-dimensional scene, the current view angle, the physical properties of the scene, the memory snapshot corresponding to the three-dimensional scene, and other information. The cloud-side current state can then be sent to the terminal side through the data channel.
Step 42: the terminal side obtains the cloud-side current state sent by the cloud side, and determines the end-side state of the three-dimensional scene in the terminal based on the cloud-side current state, where the cloud-side current state is the same as the end-side state.
After the terminal side obtains the cloud-side current state, it can update the scene rendering state of the end-side engine with this state so that it is consistent with the current rendering state of the cloud-side engine, that is, the state synchronization operation is completed, as sketched below.
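The state synchronization step might look like the following sketch, where the CloudSideState shape and the applyState method are assumptions made only for illustration:

```typescript
// Hypothetical shape of the cloud-side current state carried over the data channel.
interface CloudSideState {
  avatars: unknown[];            // virtual characters in the three-dimensional scene
  viewAngle: [number, number];   // current view angle (e.g. yaw / pitch)
  physics: unknown;              // physical properties of the scene
  memorySnapshot?: ArrayBuffer;  // optional memory snapshot corresponding to the scene
}

interface EndSideRenderer { applyState(state: CloudSideState): void }

// Update the end-side engine so its scene rendering state matches the cloud side.
function synchronizeState(renderer: EndSideRenderer, payload: string): void {
  const cloudState: CloudSideState = JSON.parse(payload);
  renderer.applyState(cloudState);  // end-side state now equals the cloud-side current state
}
```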
Step 43: the terminal side can display the corresponding rendered picture of the three-dimensional scene based on the end-side state, so as to switch the terminal-side display from the picture rendered by the cloud side to the picture rendered by the terminal side.
After the state synchronization operation is completed, the working mode of the viewport on the terminal side can be switched, that is, from the streaming mode to the non-streaming mode, so that the display changes from the video stream pushed by the cloud-side engine to the real-time picture rendered by the end-side engine, thereby achieving a smooth transition from cloud-side rendering to end-side rendering.
In still other examples, when performing the information synchronization operation, not only the end-side state and the cloud-side state of the three-dimensional scene but also the picture frames of the three-dimensional scene may be synchronized. In this case, the method in this embodiment may include:
Step 41': the cloud side receives the prompt information identifying that the rendering resources have been downloaded, obtains the cloud-side current state and the cloud-side current frame of the three-dimensional scene in the cloud based on the prompt information, and sends them to the terminal side to implement the information synchronization operation.
Step 42': the terminal side obtains the cloud-side current state and the cloud-side current frame sent by the cloud side, determines the end-side state of the three-dimensional scene in the terminal based on the cloud-side current state, and determines the end-side current frame of the three-dimensional scene to be rendered in the terminal based on the cloud-side current frame, where the cloud-side current state is identical to the end-side state and the cloud-side current frame is identical to the end-side current frame.
Step 43': the terminal side can display the corresponding rendered picture of the three-dimensional scene based on the end-side state and the end-side current frame, so as to switch the terminal-side display from the picture rendered by the cloud side to the picture rendered by the terminal side.
In other examples, in order to make the information synchronization switch smoother, after the end-side engine completes the state synchronization operation, the picture is not switched directly from cloud-side push streaming to end-side real-time rendering; instead, a frame synchronization operation can be performed within a short period thereafter (for example, 200 ms to 500 ms), that is, the end-side current frame is determined by the cloud-side current frame. Frame synchronization means that the user operations and state of each frame act on both the end-side engine and the cloud-side engine, and the pictures on the cloud side and the terminal side are rendered simultaneously. During the frame synchronization operation, the end-side viewport can be switched gradually from the cloud-side rendered picture to the end-side rendered picture. Specifically, the gradual switch can be realized by blending the cloud-side rendered frame and the end-side rendered frame: the blending weight of the end-side rendered frame is gradually increased from 0 while the weight of the cloud-side rendered frame is decreased, so as to achieve a gradual switching effect, as sketched below.
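The gradual cross-fade between the cloud-side frame and the end-side frame can be illustrated by the following sketch; the linear weight ramp and the 200-500 ms window come from the example above, while the frame representation (RGBA byte arrays) is an assumption of this sketch:

```typescript
// Per-pixel blend during the frame-synchronization window. `t` runs from 0 to 1 over
// the window (e.g. 200-500 ms): the end-side weight w rises from 0 while the
// cloud-side weight (1 - w) falls, and the two weights always sum to 1.
function blendFrames(
  cloudFrame: Uint8ClampedArray,
  endFrame: Uint8ClampedArray,
  t: number,
): Uint8ClampedArray {
  const w = Math.min(Math.max(t, 0), 1);                 // end-side dynamic weight
  const out = new Uint8ClampedArray(endFrame.length);
  for (let i = 0; i < endFrame.length; i++) {
    out[i] = (1 - w) * cloudFrame[i] + w * endFrame[i];  // weighted mix of both frames
  }
  return out;
}
```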
The fourth stage: a stabilization stage, comprising the following steps:
Step 51: acquiring a channel control request corresponding to the real-time communication channel;
Step 52: controlling the real-time communication channel to remain closed or remain open based on the channel control request.
After the switch from cloud-side rendering push streaming to end-side real-time rendering is completed, the technical solution in this embodiment enters the stabilization stage, in which the user can flexibly control the real-time communication channel according to requirements. Specifically, when a keep-open request corresponding to the real-time communication channel is acquired, the real-time communication channel can be kept open based on the keep-open request; when a close request corresponding to the real-time communication channel is acquired, the real-time communication channel can be kept closed based on the close request.
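A trivial sketch of the channel control in the stabilization stage is shown below; the use of an RTCPeerConnection as the underlying real-time communication link is only an assumption:

```typescript
// Keep the real-time communication channel open or close it based on the control request.
type ChannelControlRequest = { action: 'keep-open' } | { action: 'close' };

function controlChannel(link: RTCPeerConnection, request: ChannelControlRequest): void {
  if (request.action === 'close') {
    link.close();  // stop cloud push streaming / auxiliary rendering
  }
  // 'keep-open': nothing to do, the link stays available for cloud-assisted rendering
}
```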
It should be noted that the real-time communication channel can be controlled not only according to user requirements but also flexibly based on the cloud-side computing power and the end-side rendering effect. Specifically, the method in this embodiment may include:
step 61: the terminal side acquires a data processing request corresponding to the three-dimensional scene and determines computing resources required by the data processing request.
Step 62: and when the computing resource is greater than or equal to a preset threshold value, sending a data processing request to the cloud side.
Step 63: the cloud side acquires a data processing request sent by the terminal side, processes the data processing request, acquires a data processing response, and sends the data processing response to the terminal side through a real-time communication channel.
Step 64: and the terminal side receives the data processing response sent by the cloud side through the real-time communication channel.
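Steps 61 to 64 amount to a simple offloading decision; the sketch below illustrates it with a hypothetical cost estimate and threshold, which are not defined by the embodiment itself:

```typescript
// Hypothetical request shape; the cost model is an assumption of this sketch.
interface DataProcessingRequest { name: string; estimatedCost: number }

// Route heavy requests (e.g. dynamic illumination probes) to the cloud side,
// and handle cheap ones in the end-side engine.
function routeRequest(
  request: DataProcessingRequest,
  threshold: number,
  dataChannel: RTCDataChannel,
  handleLocally: (r: DataProcessingRequest) => void,
): void {
  if (request.estimatedCost >= threshold) {
    dataChannel.send(JSON.stringify(request)); // offload to the cloud side
  } else {
    handleLocally(request);                    // keep on the terminal side
  }
}
```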
In addition, the method in the embodiment may further include: the terminal side obtains space resources occupied by a preset component in the three-dimensional scene, and when the space resources are larger than or equal to a preset threshold value, the preset component is deployed on the cloud side.
The user can keep the real-time communication channel open or close it according to computing-power or resource-occupation requirements. If the cloud side still has spare computing power, or the rendering requirements are high and cloud-side auxiliary rendering is needed, the real-time communication channel can remain open; when the cloud side uses its computing power to compute the required data (such as dynamic illumination probes), the data can be transmitted to the end side through the communication layer, and the engine on the terminal side can combine these data to perform a hybrid rendering operation with better effect. In general, after information synchronization between the cloud side and the terminal side, end-cloud collaborative rendering is performed with the end-side engine as the primary engine and the cloud-side engine as the auxiliary, so that it is possible either to switch to a cloud-assisted rendering mode that uses the cloud-side computing power or to turn off the cloud-side rendering engine.
When the cloud side is used to assist in rendering the three-dimensional scene, the user can customize the collaborative operation of the cloud-side engine and the end-side engine according to the cloud-side computing power. For example, when the three-dimensional scene involves events such as global illumination, physics-system updates and particle-effect rendering, the cloud-side engine can render these events and transmit the results to the terminal side through the communication layer, and the end-side engine can combine these data to achieve better hybrid rendering.
According to the technical solution provided by this embodiment of the application, fast loading of 3D scenes and end-cloud collaborative hybrid rendering are realized based on the processing principle of the 3D engine and a real-time communication architecture, which provides a capability foundation of fast loading and hybrid rendering for end-side 3D engines (especially engines embedded in an APP). Specifically, after entering the end-side engine, the user first sees the rendered picture pushed by the hot-started cloud-side 3D engine and can interact with it; this is far faster than waiting for the end-side engine resources to be downloaded before a picture is presented, and a second-level startup effect can be achieved, which greatly improves user experience and retention. In addition, after the end-side engine finishes downloading and loading the resources, the computation of the rendered picture can be handed over from the cloud-side engine to the end-side engine, realizing the switch from the cloud-side engine to the end-side engine; the cloud-side engine can then release its computing power, which greatly improves the utilization of cloud-side computing power and reduces resource costs. Moreover, the user can flexibly arrange the collaboration of the cloud-side engine and the end-side engine according to requirements, so that richer hybrid rendering effects can be achieved for the rendered picture, which further improves the practicability of the solution and facilitates its popularization and application in the market.
Referring to fig. 14, the present embodiment provides a three-dimensional scene rendering apparatus for executing the three-dimensional scene rendering method shown in fig. 2, and specifically, the three-dimensional scene rendering apparatus may include:
a first determining module 11, configured to determine a three-dimensional scene to be rendered by the terminal;
the first establishing module 12 is configured to establish a real-time communication link between the terminal and the cloud, so that the terminal obtains and displays a rendered picture of the three-dimensional scene from the cloud through the real-time communication link, where the rendered picture is obtained by rendering the cloud;
the first processing module 13 is configured to, in a process of acquiring the rendered image from the cloud end through the real-time communication link, acquire, in real time, rendering resources required by the three-dimensional scene from the cloud end by the terminal, so that after the terminal acquires the rendering resources, render and display the three-dimensional scene at the terminal based on the rendering resources.
In some examples, when the terminal performs rendering and display of the three-dimensional scene based on the rendering resources after they have been obtained, the first processing module 13 is configured to: after the terminal acquires the rendering resources, acquire the scene state of the currently rendered three-dimensional scene from the cloud; and render and display the three-dimensional scene at the terminal based on the scene state and the rendering resources.
In some examples, when the first processing module 13 performs rendering and presentation of the three-dimensional scene at the terminal based on the scene status and the rendering resources, the first processing module 13 is configured to perform: acquiring a scene picture frame of a currently rendered three-dimensional scene from a cloud; and rendering and displaying the three-dimensional scene at the terminal based on the scene state, the scene picture frame and the rendering resource.
In some examples, after the first processing module 13 obtains the rendering resource, when the rendering resource is used for rendering and displaying the three-dimensional scene at the terminal, the first processing module 13 is configured to perform: after the rendering resources are obtained, a first dynamic weight of a first rendering picture of the three-dimensional scene rendered by the cloud and a second dynamic weight of a second rendering picture of the three-dimensional scene rendered by the terminal are obtained, wherein the sum of the first dynamic weight and the second dynamic weight is 1, the first dynamic weight is reduced along with the change of time, and the second dynamic weight is increased along with the change of time; determining a rendered picture of the three-dimensional scene based on the first dynamic weight, the second dynamic weight, and the first rendered picture and the second rendered picture; and displaying the rendered picture.
In some examples, after the real-time communication link between the terminal and the cloud is established, the first processing module 13 in this embodiment is configured to perform the following steps: acquiring a view port mode of a terminal; when the view port mode is a non-streaming mode, the view port mode of the terminal is adjusted to be a streaming mode, and the streaming mode is used for receiving and displaying pictures rendered by the cloud; the non-streaming mode is used for receiving and displaying a picture rendered by the terminal or a picture rendered by the cooperation of the terminal and the cloud.
In some examples, after adjusting the viewport mode of the terminal to the streaming mode, the first processing module 13 in this embodiment is configured to perform: and after the terminal acquires the rendering resources, the view port mode of the terminal is adjusted from the streaming mode to the non-streaming mode.
In some examples, the real-time communication link includes a streaming channel; when the first processing module 13 acquires and displays the rendered picture of the three-dimensional scene from the cloud through the real-time communication link, it is configured to: receive the rendered picture of the three-dimensional scene sent by the cloud through the streaming channel; and display the rendered picture based on the streaming mode.
In some examples, the real-time communication link includes a data channel, and after the terminal obtains and displays the rendered screen of the three-dimensional scene from the cloud through the real-time communication link, the first processing module 13 in this embodiment is configured to perform: acquiring an information query request input by a user aiming at a preset object in a rendering picture; the information inquiry request is sent to the cloud through the data channel, so that the cloud analyzes and processes the information inquiry request to obtain an inquiry result corresponding to the information inquiry request; and receiving the query result sent by the cloud through the data channel.
In some examples, after the rendered screen of the three-dimensional scene is acquired and displayed from the cloud through the real-time communication link, the first processing module 13 in this embodiment is configured to: acquiring a processing request input aiming at a preset object in the three-dimensional scene; determining computing resources required for processing the request; when the computing resource is greater than or equal to a preset threshold, sending a processing request to the cloud end so that the cloud end can process based on the processing request to obtain a processing result; and receiving the processing result sent by the cloud through a real-time communication link.
In some examples, the first processing module 13 in this embodiment is configured to perform the following steps before the real-time communication link between the terminal and the cloud is established: acquiring historical operation data of a three-dimensional scene; determining the quantity information of cloud instances corresponding to the three-dimensional scene based on the historical operation data; the method comprises the steps that the quantity information of cloud instances is sent to the cloud, and the cloud instances meeting the quantity information are started in advance before a real-time communication link between a terminal and the cloud is established.
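A possible way to derive the instance count from the historical operation data is sketched below; the peak-concurrency heuristic and the users-per-instance figure are assumptions, not taken from the embodiment:

```typescript
// Hypothetical record of historical operation data for the three-dimensional scene.
interface HistoryRecord { timestamp: number; concurrentUsers: number }

// Pre-start enough cloud instances to cover the historical peak concurrency.
function estimateInstanceCount(history: HistoryRecord[], usersPerInstance = 8): number {
  const peak = history.reduce((max, r) => Math.max(max, r.concurrentUsers), 0);
  return Math.max(1, Math.ceil(peak / usersPerInstance));
}

// The resulting count would be sent to the cloud so the instances can be warm-started
// before the real-time communication link is established.
```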
The apparatus shown in fig. 14 may perform the method of the embodiment shown in fig. 1-11 and 13, and reference is made to the relevant description of the embodiment shown in fig. 1-11 and 13 for parts of this embodiment not described in detail. The implementation process and technical effects of this technical solution are described in the embodiments shown in fig. 1 to 11 and 13, and are not described herein.
In one possible design, the structure of the rendering processing apparatus for three-dimensional scene shown in fig. 14 may be implemented as an electronic device, which may be a terminal, a tablet computer, a personal computer, or other devices. As shown in fig. 15, the electronic device may include: a first processor 21 and a first memory 22. The first memory 22 is used for storing a program for executing the three-dimensional scene rendering processing method provided in the embodiments shown in fig. 1 to 11 and 13 by the corresponding electronic device, and the first processor 21 is configured to execute the program stored in the first memory 22.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the first processor 21, are capable of performing the steps of: determining a three-dimensional scene to be rendered by a terminal; establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud; in the process that the terminal acquires the rendering picture from the cloud through the real-time communication link, the terminal acquires the rendering resource required by the three-dimensional scene from the cloud in real time, so that after the terminal acquires the rendering resource, the terminal performs rendering and displaying of the three-dimensional scene on the basis of the rendering resource.
Further, the first processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1-11 and 13.
The electronic device may further include a first communication interface 23 in a structure for the electronic device to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium configured to store computer software instructions for an electronic device, where the computer storage medium includes a program for executing the rendering processing method of the three-dimensional scene in the method embodiments shown in fig. 1 to 11 and 13.
Furthermore, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by the processor of an electronic device, causes the processor to execute the rendering processing method of the three-dimensional scene in the method embodiments shown in fig. 1 to 11 and 13.
Referring to fig. 16, the present embodiment provides another three-dimensional scene rendering processing apparatus, in some examples, the three-dimensional scene rendering processing apparatus may be implemented as a cloud, where the three-dimensional scene rendering processing apparatus is configured to execute the three-dimensional scene rendering processing method shown in fig. 11, and specifically, the three-dimensional scene rendering processing apparatus may include:
A second obtaining module 31, configured to obtain a three-dimensional scene to be rendered sent by the terminal;
a second determining module 32, configured to establish a real-time communication link with the terminal based on the three-dimensional scene to be rendered;
a second determining module 32, configured to determine a rendering frame of the three-dimensional scene based on rendering resources required for the three-dimensional scene stored locally in the cloud;
the second processing module 33 is configured to send the rendered frame to the terminal for display through a real-time communication link.
In some examples, when the second processing module 33 sends the rendered screen to the terminal for display via the real-time communication link, the second processing module 33 is configured to perform: in the process that the terminal acquires rendering resources required by the three-dimensional scene from the cloud, a rendering picture is sent to the terminal for display through a real-time communication link; or in a preset time period after the terminal obtains the rendering resources required by the three-dimensional scene from the cloud, sending the rendering picture to the terminal for display through a real-time communication link.
The apparatus of fig. 16 may perform the method of the embodiment of fig. 12-13, and reference is made to the relevant description of the embodiment of fig. 12-13 for parts of this embodiment not described in detail. The implementation process and technical effects of this technical solution are described in the embodiments shown in fig. 12 to 13, and are not described herein.
In one possible design, the structure of the rendering processing apparatus for three-dimensional scene shown in fig. 16 may be implemented as an electronic device, and the electronic device may be implemented as various devices on the cloud side, such as a cloud server, an edge server, and a remote server. As shown in fig. 17, the electronic device may include: a second processor 41 and a second memory 42. The second memory 42 is used for storing a program for executing the rendering processing method of the three-dimensional scene provided in the embodiment shown in fig. 12 to 13 described above by the corresponding electronic device, and the second processor 41 is configured to execute the program stored in the second memory 42.
The program comprises one or more computer instructions, wherein the one or more computer instructions, when executed by the second processor 41, are capable of performing the steps of: acquiring a three-dimensional scene to be rendered, which is sent by a terminal; establishing a real-time communication link with the terminal based on the three-dimensional scene to be rendered; determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally at a cloud; and sending the rendering picture to the terminal for display through the real-time communication link.
Further, the second processor 41 is further configured to perform all or part of the steps in the embodiments shown in fig. 12-13.
The electronic device may further include a second communication interface 43 in the structure of the electronic device, for communicating with other devices or a communication network.
An embodiment of the present invention provides a computer storage medium storing computer software instructions for an electronic device, where the computer storage medium includes a program for executing the method for rendering a three-dimensional scene in the method embodiment shown in fig. 12 to 13.
Furthermore, an embodiment of the present application provides a computer program product comprising a computer program which, when executed by a processor of an electronic device, causes the processor to perform the rendering processing method of the three-dimensional scene in the method embodiments shown in fig. 12-13.
Referring to fig. 18, the present embodiment provides a loading system for a three-dimensional scene, where the loading system may include: a terminal 51 and a cloud 52;
the terminal 51 is configured to determine a three-dimensional scene to be rendered by the terminal 51, and establish a real-time communication link between the terminal 51 and the cloud 52, so that the terminal obtains and displays a rendering picture of the three-dimensional scene from the cloud 52 through the real-time communication link, where the rendering picture is rendered by the cloud 52;
the cloud 52 is configured to determine a rendering screen of the three-dimensional scene based on rendering resources required for the three-dimensional scene stored locally in the cloud, and send the rendering screen to the terminal 51 through a real-time communication link;
The terminal 51 is further configured to obtain, in real time, rendering resources required for the three-dimensional scene from the cloud 52 in a process of obtaining the rendering screen from the cloud 52 through the real-time communication link, so that after the terminal 51 obtains the rendering resources, the terminal 51 performs rendering and displaying of the three-dimensional scene at the terminal 51 based on the rendering resources.
The system shown in fig. 18 may perform the method of the embodiment shown in fig. 1-13, and reference is made to the relevant description of the embodiment shown in fig. 1-13 for parts of this embodiment that are not described in detail. The implementation process and the technical effect of this technical solution are described in the embodiments shown in fig. 1 to 13, and are not described herein.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or by a combination of hardware and software. Based on such understanding, the above technical solutions, in essence or in the parts contributing to the prior art, may be embodied in the form of a computer program product, which may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them; although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or replace some or all of the technical features with equivalents, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (12)

1. A rendering processing method of a three-dimensional scene, comprising:
determining a three-dimensional scene to be rendered by a terminal;
establishing a real-time communication link between a terminal and a cloud end so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud end through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud end;
in the process that the terminal acquires the rendering picture from the cloud through the real-time communication link, the terminal downloads rendering resources required by the three-dimensional scene from the cloud in real time;
after the terminal has downloaded rendering resources required by the three-dimensional scene, acquiring a cloud side current state of the three-dimensional scene from the cloud, wherein the cloud side current state comprises at least one of the following: virtual characters in a three-dimensional scene, a current view angle, physical properties of the scene and a memory snapshot corresponding to the three-dimensional scene;
updating the terminal side state of the terminal by utilizing the cloud side current state, so that the updated terminal side state is the same as the cloud side current state;
and the terminal performs rendering and displaying of the three-dimensional scene on the terminal based on the updated terminal side state, the scene picture frame and the rendering resource, so as to switch the display of the terminal from the picture rendered by the cloud side to the picture rendered by the terminal.
2. The method of claim 1, wherein switching the terminal display from the cloud-side rendered view to the terminal rendered view comprises:
after the terminal acquires the rendering resources, acquiring a first dynamic weight of a first rendering picture of the three-dimensional scene rendered by the cloud and a second dynamic weight of a second rendering picture of the three-dimensional scene rendered by the terminal, wherein the sum of the first dynamic weight and the second dynamic weight is 1, the first dynamic weight is reduced along with the change of time, and the second dynamic weight is increased along with the change of time;
determining a rendered picture of a three-dimensional scene based on the first dynamic weight, the second dynamic weight, and the first rendered picture and the second rendered picture;
and displaying the rendering picture.
3. The method of claim 1, wherein after establishing the real-time communication link of the terminal with the cloud, the method further comprises:
acquiring a view port mode of the terminal;
when the view port mode is a non-streaming mode, the view port mode of the terminal is adjusted to be a streaming mode, and the streaming mode is used for receiving and displaying pictures rendered by a cloud; the non-streaming mode is used for receiving and displaying a picture rendered by the terminal or a picture rendered by the cooperation of the terminal and the cloud.
4. The method of claim 3, wherein after adjusting the viewport mode of the terminal to the streaming mode, the method further comprises:
and after the terminal acquires the rendering resources, adjusting the view port mode of the terminal from a streaming mode to a non-streaming mode.
5. The method of claim 3, wherein the real-time communication link comprises a streaming channel; the terminal obtains and displays the rendering picture of the three-dimensional scene from the cloud through the real-time communication link, and the rendering picture comprises the following steps:
the terminal receives a rendering picture of the three-dimensional scene sent by the cloud through the streaming channel;
and displaying the rendering picture based on the streaming mode.
6. The method of claim 3, wherein the real-time communication link comprises a data channel, and wherein after the terminal obtains and presents the rendered view of the three-dimensional scene from the cloud via the real-time communication link, the method further comprises:
acquiring an information query request input by a user aiming at a preset object in the rendering picture;
the information inquiry request is sent to the cloud end through the data channel, so that the cloud end analyzes and processes the information inquiry request to obtain an inquiry result corresponding to the information inquiry request;
And receiving the query result sent by the cloud through the data channel.
7. The method according to any one of claims 1-5, wherein after the terminal obtains and presents the rendered picture of the three-dimensional scene from the cloud via the real-time communication link, the method further comprises:
acquiring a processing request input aiming at a preset object in the three-dimensional scene;
determining computing resources required for the processing request;
when the computing resource is greater than or equal to a preset threshold, the processing request is sent to a cloud end, so that the cloud end processes based on the processing request, and a processing result is obtained;
and receiving the processing result sent by the cloud through the real-time communication link.
8. The method according to any of claims 1-5, wherein prior to establishing the real-time communication link of the terminal with the cloud, the method further comprises:
acquiring historical operation data of the three-dimensional scene;
determining the quantity information of cloud instances corresponding to the three-dimensional scene based on the historical operation data;
the quantity information of the cloud instances is sent to the cloud, and the cloud instances meeting the quantity information are started in advance before a real-time communication link between the terminal and the cloud is established.
9. A rendering processing method of a three-dimensional scene, comprising:
acquiring a three-dimensional scene to be rendered, which is sent by a terminal;
establishing a real-time communication link with the terminal based on the three-dimensional scene to be rendered;
determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally at a cloud;
transmitting the rendered picture to the terminal for display through the real-time communication link;
the method further comprises the steps of:
after the terminal has downloaded the rendering resources required by the three-dimensional scene, receiving prompt information sent by the terminal and used for identifying that the rendering resources are downloaded;
acquiring a cloud side current state of a three-dimensional scene in the cloud based on the prompt information, wherein the cloud side current state comprises at least one of the following: virtual characters in a three-dimensional scene, a current view angle, physical properties of the scene and a memory snapshot corresponding to the three-dimensional scene;
and sending the current state of the cloud side to the terminal so as to realize switching of the display of the terminal from the picture rendered by the cloud side to the picture rendered by the terminal.
10. A rendering processing system for a three-dimensional scene, comprising: a terminal and a cloud;
The terminal is used for determining a three-dimensional scene to be rendered by the terminal, and establishing a real-time communication link between the terminal and the cloud so that the terminal can acquire and display a rendering picture of the three-dimensional scene from the cloud through the real-time communication link, wherein the rendering picture is obtained by rendering the cloud;
the cloud end is used for determining a rendering picture of the three-dimensional scene based on rendering resources required by the three-dimensional scene stored locally in the cloud end, and sending the rendering picture to the terminal through the real-time communication link;
the terminal is further configured to, in a process of acquiring the rendering screen from the cloud end through the real-time communication link, download rendering resources required by the three-dimensional scene from the cloud end in real time, and acquire a cloud side current state of the three-dimensional scene from the cloud end after the terminal has downloaded the rendering resources required by the three-dimensional scene, where the cloud side current state includes at least one of: virtual characters in a three-dimensional scene, a current view angle, physical properties of the scene and a memory snapshot corresponding to the three-dimensional scene; updating the terminal side state of the terminal by utilizing the cloud side current state, so that the updated terminal side state is the same as the cloud side current state; and the terminal performs rendering and displaying of the three-dimensional scene on the terminal based on the updated terminal side state, the scene picture frame and the rendering resource, so as to switch the display of the terminal from the picture rendered by the cloud side to the picture rendered by the terminal.
11. An electronic device, comprising: a memory, a processor; wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the rendering processing method of a three-dimensional scene as claimed in any one of claims 1 to 9.
12. A computer storage medium storing a computer program which, when executed by a computer, implements the rendering processing method of a three-dimensional scene according to any one of claims 1 to 9.
CN202311038837.0A 2023-08-16 2023-08-16 Rendering processing method, device and system of three-dimensional scene and computer storage medium Active CN116758201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311038837.0A CN116758201B (en) 2023-08-16 2023-08-16 Rendering processing method, device and system of three-dimensional scene and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311038837.0A CN116758201B (en) 2023-08-16 2023-08-16 Rendering processing method, device and system of three-dimensional scene and computer storage medium

Publications (2)

Publication Number Publication Date
CN116758201A CN116758201A (en) 2023-09-15
CN116758201B true CN116758201B (en) 2024-01-12

Family

ID=87959476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311038837.0A Active CN116758201B (en) 2023-08-16 2023-08-16 Rendering processing method, device and system of three-dimensional scene and computer storage medium

Country Status (1)

Country Link
CN (1) CN116758201B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456113B (en) * 2023-12-26 2024-04-23 山东山大华天软件有限公司 Cloud offline rendering interactive application implementation method and system

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105263050A (en) * 2015-11-04 2016-01-20 山东大学 Mobile terminal real-time rendering system and method based on cloud platform
CN106713889A (en) * 2015-11-13 2017-05-24 中国电信股份有限公司 3D frame rendering method and system and mobile terminal
CN110415324A (en) * 2019-07-23 2019-11-05 南阳市润德数码科技有限公司 A method of it is rendered based on cloud
CN110415325A (en) * 2019-07-25 2019-11-05 杭州经纬信息技术股份有限公司 Cloud renders three-dimensional the Visual Implementation method and system
CN112770295A (en) * 2021-01-12 2021-05-07 北京知优科技有限公司 Mobile VR development method, system and medium based on GPU cloud server and 5G/WIFI6 network transmission technology
CN114470745A (en) * 2021-12-27 2022-05-13 炫彩互动网络科技有限公司 Cloud game implementation method, device and system based on SRT
CN114501062A (en) * 2022-01-27 2022-05-13 腾讯科技(深圳)有限公司 Video rendering coordination method, device, equipment and storage medium
CN114708371A (en) * 2022-04-12 2022-07-05 联通(广东)产业互联网有限公司 Three-dimensional scene model rendering and displaying method, device and system and electronic equipment
CN114972594A (en) * 2022-04-25 2022-08-30 北京百度网讯科技有限公司 Data processing method, device, equipment and medium for meta universe
CN115454637A (en) * 2022-09-16 2022-12-09 北京字跳网络技术有限公司 Image rendering method, device, equipment and medium
CN115445194A (en) * 2022-10-10 2022-12-09 网易(杭州)网络有限公司 Rendering method, device and equipment of game and storage medium
CN115661011A (en) * 2022-09-28 2023-01-31 北京有竹居网络技术有限公司 Rendering method, device, equipment and storage medium
CN116260824A (en) * 2021-12-10 2023-06-13 腾讯科技(深圳)有限公司 Service data transmission method, system, storage medium and related equipment
CN116319790A (en) * 2023-03-13 2023-06-23 北京新唐思创教育科技有限公司 Rendering method, device, equipment and storage medium of full-true scene

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562433B (en) * 2020-12-30 2021-09-07 华中师范大学 Working method of 5G strong interaction remote delivery teaching system based on holographic terminal

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105263050A (en) * 2015-11-04 2016-01-20 山东大学 Mobile terminal real-time rendering system and method based on cloud platform
CN106713889A (en) * 2015-11-13 2017-05-24 中国电信股份有限公司 3D frame rendering method and system and mobile terminal
CN110415324A (en) * 2019-07-23 2019-11-05 南阳市润德数码科技有限公司 A method of it is rendered based on cloud
CN110415325A (en) * 2019-07-25 2019-11-05 杭州经纬信息技术股份有限公司 Cloud renders three-dimensional the Visual Implementation method and system
CN112770295A (en) * 2021-01-12 2021-05-07 北京知优科技有限公司 Mobile VR development method, system and medium based on GPU cloud server and 5G/WIFI6 network transmission technology
CN116260824A (en) * 2021-12-10 2023-06-13 腾讯科技(深圳)有限公司 Service data transmission method, system, storage medium and related equipment
CN114470745A (en) * 2021-12-27 2022-05-13 炫彩互动网络科技有限公司 Cloud game implementation method, device and system based on SRT
CN114501062A (en) * 2022-01-27 2022-05-13 腾讯科技(深圳)有限公司 Video rendering coordination method, device, equipment and storage medium
CN114708371A (en) * 2022-04-12 2022-07-05 联通(广东)产业互联网有限公司 Three-dimensional scene model rendering and displaying method, device and system and electronic equipment
CN114972594A (en) * 2022-04-25 2022-08-30 北京百度网讯科技有限公司 Data processing method, device, equipment and medium for meta universe
CN115454637A (en) * 2022-09-16 2022-12-09 北京字跳网络技术有限公司 Image rendering method, device, equipment and medium
CN115661011A (en) * 2022-09-28 2023-01-31 北京有竹居网络技术有限公司 Rendering method, device, equipment and storage medium
CN115445194A (en) * 2022-10-10 2022-12-09 网易(杭州)网络有限公司 Rendering method, device and equipment of game and storage medium
CN116319790A (en) * 2023-03-13 2023-06-23 北京新唐思创教育科技有限公司 Rendering method, device, equipment and storage medium of full-true scene

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Towards Efficient Edge Cloud Augmentation for Virtual Reality MMOGs; Zhang Wuyang et al; IEEE Symposium on Edge Computing (SEC); full text *
Research and implementation of cloud rendering based on the VRay engine; Wu Jiawei; China Master's Theses Full-text Database (Electronic Journal), Engineering Science and Technology II; Vol. 2022, No. 1; full text *
Research and implementation of key technologies for a cloud-based 3DTV mobile terminal; Fu Hang; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; Vol. 2013, No. 5; full text *

Also Published As

Publication number Publication date
CN116758201A (en) 2023-09-15

Similar Documents

Publication Publication Date Title
US10306180B2 (en) Predictive virtual reality content streaming techniques
US11195332B2 (en) Information interaction method based on virtual space scene, computer equipment and computer-readable storage medium
AU2019233201B2 (en) Resource configuration method and apparatus, terminal, and storage medium
US10560755B2 (en) Methods and systems for concurrently transmitting object data by way of parallel network interfaces
US9454282B2 (en) Sending application input commands over a network
CN116758201B (en) Rendering processing method, device and system of three-dimensional scene and computer storage medium
JP6861287B2 (en) Effect sharing methods and systems for video
US11792245B2 (en) Network resource oriented data communication
EP4282499A1 (en) Data processing method and apparatus, and device and readable storage medium
CN111142967B (en) Augmented reality display method and device, electronic equipment and storage medium
KR102441514B1 (en) Hybrid streaming
CN114570020A (en) Data processing method and system
CN110971974A (en) Configuration parameter creating method, device, terminal and storage medium
CN114697703A (en) Video data generation method and device, electronic equipment and storage medium
CN111249723B (en) Method, device, electronic equipment and storage medium for display control in game
CN114071170B (en) Network live broadcast interaction method and device
CN116843802A (en) Virtual image processing method and related product
CN114513512A (en) Interface rendering method and device
CN111367598B (en) Method and device for processing action instruction, electronic equipment and computer readable storage medium
KR20220146801A (en) Method, computer device, and computer program for providing high-definition image of region of interest using single stream
CN112770185B (en) Method and device for processing Sprite map, electronic equipment and storage medium
CN118262022A (en) Scene generation method, device and storage medium in metaspace
CN118154746A (en) Hierarchical rendering method, device and storage medium in metaspace
Lu A Real Time Draggable Frame Capture System with Mobile Device
CN115814406A (en) Image processing method and device for virtual scene and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant