CN115661011A - Rendering method, device, equipment and storage medium - Google Patents


Info

Publication number: CN115661011A
Application number: CN202211193470.5A
Authority: CN (China)
Prior art keywords: rendering, image frame, cloud, terminal, component
Legal status: Pending
Other languages: Chinese (zh)
Inventors: Wang Shuai (王帅), Wang Jian (王剑)
Current and original assignee: Beijing Youzhuju Network Technology Co Ltd
Application filed by Beijing Youzhuju Network Technology Co Ltd; priority to CN202211193470.5A; publication of CN115661011A

Abstract

According to embodiments of the disclosure, a rendering method, apparatus, device, and storage medium are provided. In the method, a terminal device renders a first portion of content to be presented to generate a first image frame. At least a first component in the first portion can be switched to be rendered in the cloud. The terminal device receives a second image frame from a remote device. The second image frame is generated by rendering a second portion of the content in the cloud. At least a second component in the second portion can be switched to be rendered locally. The terminal device generates a composite image frame for presentation based on the first image frame and the second image frame. In this way, terminal computing power and cloud computing power can be exploited simultaneously, improving the image quality presented to the user.

Description

Rendering method, device, equipment and storage medium
Technical Field
Example embodiments of the present disclosure generally relate to the field of computers, and in particular, to rendering methods, apparatuses, devices, and computer-readable storage media.
Background
With improvements in end-user network access quality, the increased computing power that cloud computing service providers deploy at the center and edge, and the growing maturity and stability of network system architectures, real-time cloud rendering technologies that combine real-time communication technologies with high-quality graphics rendering algorithms have emerged, such as cloud gaming and cloud Extended Reality (XR), including Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR). A key feature of real-time cloud rendering applications is that the user only needs to configure the necessary input and output devices locally, without expensive computing hardware such as a high-performance Graphics Processing Unit (GPU), while high-quality audio and video rendering is performed in real time in response to user interaction. Real-time cloud rendering applications place certain requirements on network transmission and delay. In practice, however, network transmission and network delay conditions are often uncontrollable.
Disclosure of Invention
In a first aspect of the present disclosure, a rendering method is provided. The method includes rendering a first portion of content to be presented to generate a first image frame, at least a first component in the first portion being switchable to be rendered in a cloud; receiving a second image frame from the remote device, the second image frame generated by rendering a second portion of the content at the cloud, at least a second component in the second portion being switchable to be rendered locally; and generating a composite image frame for presentation based on the first image frame and the second image frame.
In a second aspect of the disclosure, a rendering method is provided. The method comprises determining a first portion and a second portion of content to be presented, the first portion being capable of being rendered at the terminal device and comprising at least a first component capable of being switched to rendering at the cloud, the second portion being capable of being rendered at the cloud and comprising at least a second component capable of being switched to rendering at the terminal device; generating a second image frame by causing a second portion of the content to be rendered at the cloud; and transmitting the second image frame to the terminal device.
In a third aspect of the present disclosure, a rendering apparatus is provided. The apparatus includes a first rendering module configured to render a first portion of content to be presented to generate a first image frame, at least a first component of the first portion capable of being switched to rendering in a cloud; a receiving module configured to receive a second image frame from the remote device, the second image frame generated by rendering a second portion of the content at the cloud, at least a second component in the second portion being switchable to be rendered locally; and a compositing module configured to generate a composite image frame for presentation based on the first image frame and the second image frame.
In a fourth aspect of the present disclosure, a rendering apparatus is provided. The apparatus includes a grouping module configured to determine a first portion and a second portion of content to be presented, the first portion being renderable at the terminal device and including at least a first component being switchable to render at a cloud, the second portion being renderable at the cloud and including at least a second component being switchable to render at the terminal device; a second rendering module configured to generate a second image frame by causing a second portion of the content to be rendered at the cloud; and a transmitting module configured to transmit the second image frame to the terminal device.
In a fifth aspect of the present disclosure, an electronic device is provided. The apparatus comprises at least one processing unit; and at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the apparatus to perform the method of the first or second aspect.
In a sixth aspect of the disclosure, a computer-readable storage medium is provided. The medium has stored thereon a computer program which, when executed by a processor, implements the method of the first or second aspect.
It should be understood that the statements made in this section are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates an example rendering process according to some embodiments of the present disclosure;
FIG. 3 illustrates an example rendering method according to some embodiments of the present disclosure;
FIG. 4 illustrates an example rendering method according to some other embodiments of the present disclosure;
FIG. 5 illustrates an example architecture for performing a rendering process in accordance with some embodiments of the present disclosure;
FIGS. 6A-6F illustrate example image frames generated by terminal and cloud collaborative rendering in accordance with some embodiments of the present disclosure;
FIG. 7 illustrates a block diagram of a rendering apparatus, in accordance with some embodiments of the present disclosure;
FIG. 8 illustrates a block diagram of a rendering apparatus according to some other embodiments of the present disclosure; and
FIG. 9 illustrates a block diagram of a device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure have been illustrated in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding thereof. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the term "include" and its derivatives should be interpreted as open-ended, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". Other explicit and implicit definitions may also be included below.
It will be appreciated that the data referred to in this disclosure, including but not limited to the data itself, the procurement or use of the data, should comply with the requirements of the relevant laws and regulations.
It is understood that before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed of the type, the use range, the use scene, etc. of the personal information related to the present disclosure and obtain the authorization of the user through an appropriate manner according to the relevant laws and regulations.
For example, in response to receiving an active request from the user, prompt information is sent to the user to explicitly indicate that the requested operation will require obtaining and using the user's personal information, so that the user can autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that performs the operations of the disclosed technical solution.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user by way of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control allowing the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
Real-time cloud rendering technology combines real-time communication technology with high-quality graphics rendering algorithms to provide real-time, high-quality audio and video rendering. Web-based real-time cloud rendering, a lightweight real-time audio and video solution, is also evolving. This scheme places low requirements on terminal software: only a Web browser is needed, with no additional client software to install, and real-time rendering is achieved by means of the cloud's powerful real-time audio and video computing.
Meanwhile, as terminal devices are upgraded, terminal computing power keeps improving. Many terminal devices, such as mobile phones, tablet computers, all-in-one VR headsets, and gaming laptops, already have independent GPUs and high-performance chips, and can support software graphics libraries and graphics hardware architecture interfaces, so the performance requirements of many terminal games can be met.
Terminal 3A games (i.e., high-cost, high-quality, high-volume standalone games) typically place high demands on the Central Processing Unit (CPU) and/or GPU, runtime memory, and so on, in addition to requiring substantial terminal computing power. A lower hardware configuration may result in reduced frame rates, stuttering, or even failure to meet the minimum operating requirements. In terms of storage, a large space is often required to install such applications; large games or applications may require Gigabytes (GB) of space. The upgrade process also tends to be time-consuming, and relatively frequent application updates consume even more time, which degrades the user experience. In addition, the power consumed by the rendering overhead of large games is also a significant problem.
With the maturing of Web browser script engines and the upgrading of graphics standards and graphics library capabilities on mobile terminals, such as Web-based implementations of the Open Graphics Library (OpenGL), it has become possible for Web browsers to use GPU computing power. The real-time rendering capability of Web terminal applications has been greatly unlocked. Thanks to this, many Web three-dimensional (3D) programming frameworks have emerged. Their main characteristic is that 3D content is rendered using the GPU of the terminal device, while games need not be fully downloaded and installed and always stay up to date.
Web-based terminal games, although requiring no installation, still require downloading the necessary scenes and materials. If a scene is complex, the download required when switching game scenes still causes a certain waiting time for the user. On the other hand, since terminal computing power is still limited compared to the cloud, scenes with high polygon counts and complex lighting, material, and physics simulation computations cannot be supported.
In addition, cloud games are often implemented through adaptation and porting. For example, a terminal 3A game may be moved to the cloud and streamed to the terminal. This migration converts the original performance requirements on the terminal device into requirements on network transmission and delay. In practice, network transmission and network delay conditions are often uncontrollable.
For example, cloud real-time rendering places certain requirements on network bandwidth, and a low bit rate may cause image quality loss. For a 1080p video stream, a rate of 2 Mbps can only ensure a clear picture at a frame rate of about 10 fps; a rate of about 6 Mbps is required for 60 fps; a rate of 30 Mbps is required for 144 fps; and a rate of 50 Mbps or more is required for an excellent picture experience. If the network bandwidth cannot meet the required bandwidth, then in order to keep providing real-time streaming to the user, the cloud generally reduces the bit rate, lowers the video frame rate, or even lowers the picture resolution, so the picture the user sees becomes less clear and more blurred, harming the user's visual experience.
Cloud real-time rendering also has low-latency requirements, and high latency degrades the user experience. For example, delay is inevitable from the user's input control, to the cloud rendering in response, to the streaming media returning to the user side. When the delay exceeds 100 milliseconds, the user can clearly perceive it; for some VR applications in particular, the user may even become dizzy, which harms the experience.
Embodiments of the present disclosure provide a rendering scheme that completes real-time rendering tasks by combining terminal computing power with cloud rendering. According to this scheme, a portion of the content to be presented (referred to as the "first portion") is rendered into an image frame (referred to as the "first image frame") locally at the terminal device. At least one component (referred to as the "first component") in the first portion of the content can be switched to be rendered in the cloud. The terminal device also receives an additional image frame (referred to as the "second image frame") from a remote device. The second image frame is generated by rendering another portion of the content (referred to as the "second portion") in the cloud. At least one component (referred to as the "second component") in the second portion of the content can be switched to be rendered locally. Based on the first image frame and the second image frame, the terminal device generates a composite image frame for presentation.
In this way, a content component grouping architecture is introduced: the content to be rendered is divided into a terminal rendering group rendered at the terminal device and a cloud rendering group rendered in the cloud. The rendering of the terminal rendering group is completed with terminal computing power, the rendering of the cloud rendering group is completed with cloud computing power, and the composite presentation is finally completed at the user terminal (for example, in a Web browser).
Therefore, terminal computing power and cloud computing power can be exploited simultaneously, improving image quality, reducing sensitivity to network bandwidth, mitigating the effects of network delay, and alleviating the picture quality problems caused by low bit rates. Moreover, the application can be used immediately, saving download time and terminal storage space and improving the user experience. In addition, components can be dynamically switched between the terminal rendering group and the cloud rendering group, realizing controllable rendering of content at the component level.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented.
Environment 100 includes a plurality of terminal devices 105-1, 105-2, …, 105-N (individually or collectively referred to as terminal devices 105), each having an application 110-1, 110-2, …, 110-N (individually or collectively referred to as applications 110) installed thereon, where N is any suitable positive integer greater than 2. For ease of discussion, the application 110 is also referred to as the terminal application 110. By way of example, the terminal application 110 may be implemented as a Web application, such as a browser application, that retrieves, presents, and delivers Web information resources. The terminal device 105 may present content to the user by running the terminal application 110. For ease of discussion, some embodiments will be discussed with a browser application as an example of the terminal application 110.
Terminal device 105 may be any type of device, including both virtual and physical devices. By way of example, terminal devices 105 may include, but are not limited to, mobile, fixed, or portable devices, including cell phones, desktop computers, laptop computers, notebook computers, netbook computers, tablet computers, media computers, multimedia tablets, Personal Communication System (PCS) devices, personal navigation devices, Personal Digital Assistants (PDAs), audio/video players, digital cameras/camcorders, positioning devices, television receivers, radio broadcast receivers, electronic book devices, all-in-one Virtual Reality (VR) headsets, game consoles, gaming laptops, or any combination of the foregoing, including accessories and peripherals for these devices, or any combination thereof. In some embodiments, terminal device 105 can also support any type of interface to the user (such as "wearable" circuitry, etc.).
Also included in environment 100 are a plurality of remote devices 115-1, 115-2, …, 115-M (individually or collectively referred to as remote devices 115), each having installed thereon a remote instance 120-1, 120-2, …, 120-M of the terminal application (individually or collectively referred to as remote instances 120), where M is any suitable positive integer greater than 2. The remote instance 120 may process requests from the terminal application 110.
Remote device 115 may be any type of device including both virtual and physical devices. By way of example, the remote device 115 may include, but is not limited to, a mainframe, an edge computing node, a rack server, a router computer, a server computer, a personal computer, a mainframe computer, a laptop computer, a tablet computer, a desktop computer, and so forth. In some embodiments, the remote device 115 may be implemented as a virtual machine, a container, or a bare metal server.
In some embodiments, the remote device 115 may be located at the server side, for example implemented by a server at the server side. In these embodiments, the remote instance 120 installed on the remote device 115 is also referred to as a server-side instance. In some other embodiments, one terminal device (e.g., terminal device 105-1) may act as a remote device for other terminal devices (e.g., terminal devices 105-2, …, 105-N).
As shown in FIG. 1, terminal devices 105-1, 105-2, …, 105-N may communicate with remote devices 115-1, 115-2, …, 115-M over network 125. The network 125 may include any type of wired and/or wireless network, such as a wired and/or wireless wide area network (e.g., a cellular network), a wired and/or wireless local area network, and so forth. Communications in environment 100 may employ any suitable wired or wireless communication means, following any suitable communication protocol, and the scope of the present disclosure is not limited in this respect. In some embodiments, a Real-Time Communication (RTC) channel may be established between terminal device 105 and remote device 115.
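For illustration only, a minimal sketch of how such an RTC channel could be established with the standard browser WebRTC API follows; the `signaling` object is a hypothetical placeholder for whatever signaling transport (e.g., a WebSocket) the application uses, and the STUN server address is an assumption.

```javascript
// Sketch: terminal-side peer connection with a data channel for
// control messages (e.g., rendering group switch indications).
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.org" }], // assumed STUN server
});
const control = pc.createDataChannel("control");
control.onopen = () => console.log("control channel open");

// Standard offer/answer and ICE exchange over the signaling transport.
pc.onicecandidate = (e) => {
  if (e.candidate) signaling.send({ candidate: e.candidate });
};
async function connect() {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ offer });
}
signaling.onmessage = async ({ answer, candidate }) => {
  if (answer) await pc.setRemoteDescription(answer);
  if (candidate) await pc.addIceCandidate(candidate);
};
connect();
```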
It should be understood that any suitable number of terminal devices 105 and remote devices 115 may be deployed in environment 100. One terminal device may communicate with one or more remote devices, and one remote device may communicate with one or more terminal devices. It should also be understood that FIG. 1 shows one remote instance 120 of one terminal application installed on one remote device 115 by way of example only and not limitation. In some embodiments, multiple remote instances 120-1, 120-2, …, 120-M may be installed on one remote device 115 by being deployed in a virtual machine, container, or bare metal server environment.
In some embodiments, the user may access the server side using a terminal application 110 installed on the terminal device 105. The requests of the terminal applications 110 of the terminal devices 105 may be uniformly scheduled for network and resources by the computing platform 140, and corresponding remote instances 120 created accordingly. Computing platform 140 may include, but is not limited to, a cloud computing platform, an edge computing platform, and the like. Computing platform 140 may include one or more computing devices (e.g., rack servers, router computers, server computers, personal computers, mainframe computers, laptop computers, tablet computers, desktop computers, etc.), data stores (e.g., hard disks, memory, databases), networks, software components, and/or hardware components, among others. Remote device 115 may be implemented as a component of computing platform 140, or may be a standalone device, or may be part of another platform or system.
In some embodiments, the remote instances 120-1, 120-2, …, 120-M on the remote devices 115-1, 115-2, …, 115-M may invoke cloud application runtimes 130-1, 130-2, …, 130-K (individually or collectively referred to as cloud application runtimes 130) to execute cloud applications 135-1, 135-2, …, 135-K (individually or collectively referred to as cloud applications 135), where K is any suitable positive integer greater than 2. The cloud application 135 may be implemented as a standalone application running in a virtual machine, container, or bare metal server environment. The cloud application 135 may consume hardware computing power provided by that environment, such as a GPU. The hardware configuration of the environment may be decided by the scheduling of the computing platform 140. It should be understood that a remote instance of a terminal application (e.g., remote instance 120-1) may invoke execution of one or more cloud applications 135-1, 135-2, …, 135-K, and a cloud application (e.g., cloud application 135-1) may be invoked by one or more remote instances 120-1, 120-2, …, 120-M.
The cloud application 135 may render, in the cloud, content to be presented to the user by the terminal device 105. The cloud application 135 may be implemented in any suitable form, for example, in the form of a browser script or in the form of a rendering engine.
It should be understood that the cloud application runtime 130 and the cloud application 135 are shown in FIG. 1 as being disposed separately from the remote instance 120 for purposes of example only and are not intended to suggest any limitation on the scope of the present disclosure. In some embodiments, the cloud application runtime 130 and the cloud application 135 may be disposed on the remote device 115 by being deployed in a virtual machine, a container, or a bare metal server environment.
It should also be understood that the structural and functional details of environment 100 are described for illustrative purposes only and do not imply any limitation on the scope of the disclosure. In some implementations, environment 100 may include the same, fewer, more, or different systems, devices, components, and/or elements arranged or configured in the same or different ways as shown in FIG. 1.
In the environment 100, the terminal device 105 may cooperate with the remote device 115 to jointly complete the rendering and presentation of the content to be presented. For example, in an embodiment in which the terminal application 110 is a browser application and the computing platform 140 is implemented by a cloud rendering platform, Web application requests from the browser applications of the terminal devices 105 may be uniformly scheduled by the cloud rendering platform, and corresponding remote instances of the Web application created accordingly. The remote instance of the Web application invokes the cloud application runtime 130 to execute the cloud application 135. For example, a browser may be invoked to execute a Web page script, or a real-time rendering engine application may be invoked to complete cloud-side picture rendering. The rendered picture may be encoded as a video stream by the remote instance of the Web application and transmitted to the terminal device 105 for composite presentation.
FIG. 2 illustrates an example rendering process 200 according to some embodiments of the present disclosure. In this example, the terminal application 110 may be implemented by a browser application 205, and its remote instance 120 may be implemented by a remote instance 210 of the browser application. Remote device 115 is located at a Web application server.
As shown in FIG. 2, in process 200, a user accesses the remote device 115, which is a Web application server, using the browser application 205 installed on the terminal device 105. As an example, the access request may be sent using the Hypertext Transfer Protocol (HTTP).
Multiple components (also referred to as constituent parts) of the content to be presented may be divided into a terminal rendering group and a cloud rendering group. As shown in FIG. 2, component 230 is included in the terminal rendering group. The terminal device 105 may load and execute a Web page script to render the terminal rendering group locally, resulting in the first image frame 215. Local rendering at the terminal device 105 reduces dependence on network transmission and delay quality, and improves the image quality presented to the user and the user's interactive experience.
At the remote device 115, the cloud application 135 may be launched and executed to perform cloud rendering. The second image frame 220 is obtained by rendering the cloud rendering group. The remote device 115 may establish an RTC data channel with the terminal device 105, encode the second image frame 220 into a real-time audio and video stream (the audio being optional), and transmit it to the terminal device 105 over the RTC channel. By having the cloud render part of the scene content, the computational cost at the terminal device 105 can be reduced, the loading of files such as assets and maps at the terminal device 105 can be reduced, and the user's average waiting time can be shortened.
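As a hedged sketch of this step, assuming a browser-script cloud application that draws the cloud rendering group into a canvas (real deployments may capture a rendering engine's output instead), the capture and streaming might look like this, with `pc` being the peer connection from the earlier sketch and `cloudCanvas` an assumed element:

```javascript
// Cloud side: capture the render canvas as a media stream and send it
// over the peer connection; the browser handles encoding.
const fps = 60; // assumed capture frame rate
const stream = cloudCanvas.captureStream(fps);
for (const track of stream.getVideoTracks()) {
  pc.addTrack(track, stream);
}

// Terminal side: attach the received stream to a <video> element;
// the browser handles decoding.
pc.ontrack = (event) => {
  document.getElementById("cloud-stream").srcObject = event.streams[0];
};
```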
After receiving the real-time audio and video stream through the browser application 205, the terminal device 105 decodes it to obtain the second image frame 220. The terminal device 105 may then composite the first image frame 215 and the second image frame 220 via the page script, obtain the composite image frame 225 generated by real-time rendering, and present it to the user.
As the user operates, some components of the content to be presented can be switched to terminal or cloud rendering according to actual needs. In this way, the components of the scene content are dynamically adjusted between terminal and cloud computing power, which can comprehensively improve the user experience.
In this example, component 230 may be switched from the terminal rendering group to the cloud rendering group according to control data input by the user. Another image frame (referred to as the third image frame) 235 is obtained at the terminal device 105 by locally rendering the terminal rendering group. At the remote device 115, component 230 is added to the cloud rendering group. Rendering in the cloud yields a further image frame (referred to as the fourth image frame) 240. The remote device 115 may encode the fourth image frame 240 into a real-time audio and video stream (the audio being optional) for transmission to the terminal device 105.
The terminal device 105 may receive the real-time audio and video stream through the browser application 205 and decode the fourth image frame 240. The third image frame 235 and the fourth image frame 240 may be composited by the terminal script to obtain an updated composite image frame 245, which is presented to the user.
FIG. 3 illustrates an example rendering method 300 according to some embodiments of the present disclosure. The method 300 may be implemented at the terminal device 105. For ease of discussion, the method 300 will be described in conjunction with FIGS. 1 and 2.
At block 305, the terminal device 105 renders a first portion of the content to be presented to generate a first image frame (e.g., first image frame 215 in FIG. 2). The content to be presented may include multimedia content such as video, images, collections of images, and so forth. At least a first component (e.g., component 230 in FIG. 2) in the first portion of the content to be presented can be switched to be rendered in the cloud. In some embodiments, some or all of the components in the first portion may be switched to cloud rendering.
In some embodiments, the terminal device 105 may receive, from the remote device 115 (e.g., acting as a server), the first portion of the content to be presented that is to be rendered locally. For example, after the user requests presentation of content, such as audio and video, using the terminal application 110, the terminal device 105 can establish a connection (e.g., an RTC connection) with the remote device 115 and establish a corresponding channel (e.g., an RTC data channel). Through this channel, the terminal device 105 may download the terminal rendering group content, including, for example, scripts and materials, from the remote device 115.
In some embodiments, the content to be presented may be divided into three portions: one portion rendered at the terminal device 105, one portion rendered at the server side (e.g., in the cloud), and a third portion that can be cooperatively rendered by the terminal device 105 and the cloud. Within the cooperatively rendered portion, which components are rendered at the terminal and which in the cloud may be classified and dynamically adjusted based on the computing power of the terminal device 105 and the cloud. The application implementation, user preference settings, real-time network quality, changes in terminal and cloud computing power, and the like may also be considered. In this way, the high computing power and install-free immediacy brought by cloud rendering can be enjoyed while fully utilizing the smooth experience afforded by terminal computing power. The division of the content to be presented may be performed at the remote device 115 or at the server side. Accordingly, the terminal device 105 may download from the remote device 115 the part of the cooperatively rendered portion that is to be rendered by the terminal device 105.
Terminal device 105 may use any suitable rendering technique currently known or developed in the future, and the scope of the present disclosure is not limited in this respect. In embodiments where the terminal application 110 is implemented by a browser application, the rendering technique used by the terminal device 105 may rely on a browser-supported rendering framework, such as the Three.js rendering framework with the WebGL rendering engine.
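As a minimal illustrative sketch (not the disclosed embodiment itself), a terminal-side render loop of the terminal rendering group using Three.js and WebGL might look as follows; the scene content is a placeholder, and the transparent background anticipates compositing over the cloud stream:

```javascript
import * as THREE from "three";

// Sketch: locally render a placeholder terminal rendering group.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75, window.innerWidth / window.innerHeight, 0.1, 1000
);
camera.position.z = 5;

// alpha: true keeps the canvas transparent so it can be composited
// as the foreground over the cloud-rendered background.
const renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```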
At block 310, terminal device 105 receives a second image frame (e.g., second image frame 220 in FIG. 2) from remote device 115. The second image frame is generated by rendering a second portion of the content in the cloud. At least a second component in the second portion can be switched to be rendered locally. The cloud rendering may employ any suitable rendering technique, and the scope of the present disclosure is not limited in this respect. The cloud-side operation is described in further detail below in conjunction with FIG. 4.
At block 315, the terminal device 105 generates a composite image frame (e.g., composite image frame 225 in FIG. 2) for presentation based on the first image frame and the second image frame. The terminal device 105 may perform the compositing of the image frames in any suitable manner. For example, the terminal device 105 may superimpose the first image frame and the second image frame to obtain the final image frame. In some embodiments, the terminal device 105 may, by means of the rendering framework, use the second image frame rendered by the cloud as the background and the first image frame rendered by the terminal device 105 as the foreground, so that the image frames generated by the terminal rendering group and the cloud rendering group do not improperly overlap.
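One possible realization of this background/foreground composition, sketched under the assumption that the cloud stream plays in a video element and the terminal renders into a transparent WebGL canvas (the element names are illustrative):

```javascript
// Sketch: blit the cloud frame (background) and the local frame
// (foreground) onto a single output canvas every display refresh.
const video = document.getElementById("cloud-stream");    // decoded cloud stream
const glCanvas = document.getElementById("local-render"); // transparent local render
const out = document.getElementById("output");
const ctx = out.getContext("2d");

function compose() {
  ctx.drawImage(video, 0, 0, out.width, out.height);    // cloud background
  ctx.drawImage(glCanvas, 0, 0, out.width, out.height); // local foreground
  requestAnimationFrame(compose);
}
requestAnimationFrame(compose);
```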
In some embodiments, occlusion elimination may be performed during compositing if the first image frame and the second image frame overlap. For example, if the second image frame should occlude the first image frame, the outline of the occluding graphics may be found locally at the terminal device 105 by calculating ray casting angles from the camera viewpoint to obtain a mask layer. Parts of the first image frame may then be eliminated using the mask layer, achieving the occlusion effect in the final composite image frame without actual rendering.
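A sketch of how such a mask outline might be found locally with a Three.js raycaster; it assumes lightweight proxy meshes for the cloud-rendered occluder are available at the terminal (`occluderProxies`, `camera`, and the sampling density are illustrative assumptions):

```javascript
import * as THREE from "three";

// Sketch: sample the view with rays from the camera; samples that hit
// an occluder proxy are collected as the mask region.
const raycaster = new THREE.Raycaster();
const maskSamples = []; // normalized device coordinates covered by the occluder

for (let y = 0; y <= 1; y += 0.02) {
  for (let x = 0; x <= 1; x += 0.02) {
    const ndc = new THREE.Vector2(x * 2 - 1, y * 2 - 1);
    raycaster.setFromCamera(ndc, camera); // `camera` is the shared view camera
    if (raycaster.intersectObjects(occluderProxies, true).length > 0) {
      maskSamples.push(ndc);
    }
  }
}
// The sample set outlines the occluder; it can back a stencil or 2D
// clip path that removes the occluded region from the first image
// frame during composition, without actually rendering the occluder.
```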
In some embodiments, the first portion of the content to be rendered locally at the terminal device 105 may be updated. For example, a first component (e.g., component 230 in FIG. 2) in the first portion may be switched to be rendered in the cloud. In this case, the first component is removed from the first portion serving as the terminal rendering group and added to the second portion of the content to be presented serving as the cloud rendering group. As another example, a second component in the second portion rendered in the cloud may be switched to be rendered locally at the terminal device 105. In this example, the second component is removed from the second portion and added to the first portion.
The terminal device 105 may render the updated first portion to generate a third image frame (e.g., third image frame 235 in FIG. 2). Terminal device 105 may also receive a fourth image frame (e.g., fourth image frame 240 in FIG. 2) from remote device 115. The fourth image frame is generated by rendering, in the cloud, the second portion updated in association with the first portion. Based on the third image frame and the fourth image frame, terminal device 105 may generate an updated composite image frame (e.g., composite image frame 245 in FIG. 2).
In some embodiments, the updating of the first portion of the locally rendered content to be presented may take into account any suitable factors, such as an application execution environment, such as a hardware environment, a software environment, a network environment, business factors, and so forth. The update may also take into account user preferences. For example, a determination may be made as to which component or components to switch from local rendering to cloud rendering, or vice versa, based on control information input by a user.
In some embodiments, the server side may determine how to switch the terminal rendering group and the cloud rendering group. After the server side determines to switch the first component in the first portion of the content to be presented to cloud rendering, the terminal device 105 may receive an indication from the remote device 115 to update the components contained in the first portion, and then perform the corresponding update. The way in which the terminal device 105 communicates with the remote device 115 may be predefined; for example, the indication may be transmitted via the RTC data channel.
In some embodiments, the updating of the first portion may be performed by the terminal device 105. In this case, the terminal device 105 may send an indication to the remote device 115 to update the components contained in the second portion of the cloud rendering in association with the first portion.
The indication may be implemented in any suitable manner. In some embodiments, the terminal device 105 may receive the identification of the components in the updated first portion as the update indication. After receiving the identification, the terminal device 105 may determine how to update the locally rendered first portion. As an example, a corresponding (unique) identification may be established for each component that can be cooperatively rendered by the terminal device 105 and the cloud, with the mapping synchronized between the terminal and the cloud. If the server side determines to switch components in the terminal rendering group or the cloud rendering group, it may interact with the terminal device 105 so as to synchronize the updated terminal rendering group and cloud rendering group.
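As a purely hypothetical sketch of such synchronization (neither the message format nor the helper names come from the disclosure), the terminal could keep the mapping in a registry and apply switch indications received over the RTC data channel:

```javascript
// Sketch: registry of switchable components keyed by identifiers that
// the terminal and the cloud map synchronously. `control` is the RTC
// data channel from the earlier sketch; the message format is assumed.
const renderGroups = {
  terminal: new Set(["cube-1", "cube-2", "cube-3"]), // assumed component IDs
  cloud: new Set(["background"]),
};

control.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type !== "updateGroups") return;
  // The indication carries the identifiers of the updated first
  // portion; every other switchable component belongs to the cloud.
  const terminalIds = new Set(msg.terminalComponents);
  const all = [...renderGroups.terminal, ...renderGroups.cloud];
  renderGroups.terminal = new Set(all.filter((id) => terminalIds.has(id)));
  renderGroups.cloud = new Set(all.filter((id) => !terminalIds.has(id)));
  rerenderTerminalGroup(); // assumed helper that re-renders locally
};
```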
FIG. 4 illustrates a rendering method 400 according to some embodiments of the disclosure. The method 400 may be implemented at the remote device 115. For ease of discussion, the method 400 will be described in conjunction with FIGS. 1 and 2.
At block 405, the remote device 115 determines a first portion and a second portion of content to be presented. The first portion can be rendered at the terminal device 105 and the second portion can be rendered at the cloud. The first portion includes at least a first component that can be switched to rendering at the cloud, and the second portion includes at least a second component that can be switched to rendering at the terminal device 105.
The first portion and the second portion may together constitute the rendering-switchable portion of the content to be presented. The first portion and the second portion may be determined by the remote device 115 and the terminal device 105 in cooperation. In addition to the rendering-switchable portion, the content to be presented may also include portions or components that are rendered only at the terminal device 105 and/or only at the server side (e.g., remote device 115).
At block 410, the remote device 115 generates a second image frame by causing the second portion of the content to be rendered in the cloud. Any suitable cloud rendering technique may be employed. For example, for the cloud rendering group, the remote device 115 may invoke the cloud application 135 to render. The cloud application 135 may be implemented in a variety of forms. For example, like the terminal application 110, the cloud application 135 may be implemented as a browser-based application, such as a browser script. Alternatively or additionally, the cloud application 135 may be implemented based on a real-time rendering engine. Rendering may be accomplished by a rendering engine of the cloud application runtime 130. For example, for a cloud application 135 in the form of a browser script, rendering may be accomplished by, for example, the Three.js rendering framework and the WebGL rendering engine. For a cloud application 135 in the form of a real-time rendering engine, such as Unity and/or UnrealEngine, rendering may be performed by the corresponding real-time rendering engine.
In some embodiments, the remote device 115 may utilize the remote instance 120 of the terminal application for resource loading prior to rendering. For example, the scripts and data that cloud rendering requires are downloaded, and the scene scripts, model files, material files, media files, and the like related to the cloud rendering group are loaded.
At block 415, the remote device 115 transmits the second image frame to the terminal device 105. For example, audio and video of the cloud application 135 may be captured and streamed, and the real-time audio and video stream sent to the terminal device 105 over the RTC data channel established with the terminal device 105.
In some embodiments, the remote device 115 may send the first portion of the content to the terminal device 105. For example, the first portion of the content, which requires rendering by the terminal device 105, may be provided to the terminal device 105 for loading according to the rendering grouping information declared by the application.
In some embodiments, the components in the second portion may be updated. For example, the terminal and cloud rendering groups can be dynamically adjusted according to the application implementation, user preference settings, real-time network quality, changes in computing power, and the like, thereby flexibly adapting to the running environment of the real-time rendering application, such as the hardware environment, software environment, network environment, and business factors. Remote device 115 may render the updated second portion to generate a fourth image frame (e.g., fourth image frame 240 in FIG. 2), and then transmit the fourth image frame to terminal device 105.
In some embodiments, the update of the second portion may be initiated by the remote device 115 or the server side. In these embodiments, the remote device 115 may send an indication to the terminal device 105 to update the components contained in the first portion in association with the second portion. As an example, the indication may include the identification of the components in the updated first portion. In this way, the remote device 115 may synchronize the rendering group changes at both ends with the terminal device 105.
In some embodiments, the update of the second portion may be initiated by the terminal device 105. In these embodiments, remote device 115 may receive an indication from terminal device 105 to update the components contained in the second portion. Based on the indication, the remote device 115 may determine that one or more components (e.g., at least the first component) in the first portion are switched to rendering in the cloud. Alternatively or additionally, one or more components (e.g., at least a second component) in the second portion are switched to render locally.
In some embodiments, the update indication of the second portion may be similar to the update indication of the first portion, also implemented by the identification of the components in the updated first portion. Based on the identification, the remote device 115 may determine that the portion excluding the updated first portion needs to be rendered in the cloud.
It should be understood that the features and operations of the remote device 115 or the server and corresponding effects discussed above with reference to fig. 1-3 in the method 300 are equally applicable to the method 400 and will not be described again here.
In some embodiments, where the terminal application 110 is implemented as a Web application and the service provided by the server side is cloud rendering, a Software Development Kit (SDK) or plug-ins may be provided at the server side for developing the server-side application, the cloud application, and the terminal scripts of the Web application. The SDK can complete the grouping of the 3D scene content used in the terminal and server-side applications, the establishment and maintenance of RTC connections, rendering composition, and end-to-end cooperative interaction. The SDK can improve the access capability of the cloud rendering platform and extend the real-time rendering capability of Web applications.
The 3D scene components that need to be rendered by the Web application can be grouped into the terminal rendering group and the cloud rendering group by calling the SDK interface. Unique identifiers can be established for the scene content components added to the terminal rendering group and the cloud rendering group, and mapped synchronously between the terminal and the cloud. The mapping allows scene content components to interact cooperatively between the two ends and to maintain correspondence when groups are switched. Scene content components that only need to be rendered at the terminal can be rendered by the front-end terminal rendering framework and need not be added to the terminal rendering group through the SDK interface.
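Since the disclosure does not publish the SDK's actual interface, the following sketch of grouping scene components through it is purely hypothetical; every name in it is an assumption:

```javascript
// Hypothetical SDK usage sketch; none of these names are from the
// actual SDK described in the disclosure.
const sdk = new CollaborativeRenderingSDK({ app: "demo" });

// Switchable components get unique identifiers that are mapped
// synchronously between the terminal and the cloud.
sdk.addToRenderGroup("terminal", cube1, { id: "cube-1" });
sdk.addToRenderGroup("terminal", cube2, { id: "cube-2" });
sdk.addToRenderGroup("cloud", background, { id: "background" });

// Components rendered only at the terminal bypass the SDK and are
// drawn directly by the front-end rendering framework.
scene.add(terminalOnlyOverlay);

// A component can later be switched between groups; the SDK
// synchronizes the change with the other end.
sdk.toggleRenderGroup("cube-1");
```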
An example application scenario of a rendering scheme according to an embodiment of the present disclosure is discussed below in conjunction with FIG. 5 and FIGS. 6A-6F.
FIG. 5 illustrates an example architecture 500 for performing a rendering process in accordance with some embodiments of the present disclosure.
In this example, computing platform 140 may be implemented as a cloud rendering platform, and remote device 115 is located at the server side and may be implemented as a component of the cloud rendering platform. The terminal application 110 may be implemented as a Web-based browser application, and the terminal and the server side perform operations such as content grouping, RTC connection, rendering composition, and end-to-end cooperative interaction through the SDK interfaces. Thus, the terminal device 105 may only need a browser, with no downloading, installing, or version updating of a terminal application, and no associated network transmission and waiting, which saves terminal storage while improving security and portability.
In the architecture 500, a new application request may start when a user requests the cloud rendering platform using a terminal browser 505. After receiving the request from the user's browser 505, the cloud rendering platform performs network and resource scheduling to create a Web server application instance 510 that completes the subsequent operations.
The Web server application instance 510 may be implemented as a standalone application that is scheduled by the cloud rendering platform and deployed in a virtual machine, container, or bare metal server environment. After the Web server application instance 510 is created, the request of the terminal browser 505 can be processed by it.
The Web server can provide the terminal with related page data resources 512 for the terminal browser 505 to initially load, including some (e.g., necessary) page scripts and static resources. The terminal rendering group script may be provided to the terminal device 105 for loading according to the scene content grouping information declared by the Web application.
For the cloud rendering group, the cloud application 135 may be invoked for rendering. The cloud application 135 may be implemented in a variety of forms. For example, it may be implemented as a browser-based application, similar to the front end, or as a real-time rendering engine application based on Unity or UnrealEngine. The cloud application 135 may use the SDK interface 514 to implement cooperative interaction with the terminal browser 505 and the Web server application instance 510, as well as the loading and rendering of the grouped scene content.
The implementation of the SDK interface 514 may vary with the cloud application 135. For example, for a cloud application 135 implemented as a browser application, the SDK interface 514 may be implemented by scripts. For a Unity and/or UnrealEngine based cloud application 135, the SDK interface 514 may be implemented via a corresponding plug-in.
After the cloud application 135 is ready, the Web server may initialize an RTC connection through the RTC module 516, start audio and video capture of the cloud application 135, and wait for the RTC module 518 of the terminal device 105 to access.
On the user side, after the user requests the service from the Web server application instance 510 using the terminal browser 505, the page script is downloaded and run in the terminal browser 505 environment. It can receive the user's input control, locally render the terminal rendering group scene content, simultaneously acquire the real-time rendered audio and video stream from the cloud, complete the composition of the final real-time rendered picture, and present it to the user.
After the page script is loaded, the SDK interface 520 may be instantiated and the terminal modules initialized. The collaboration module 522 may notify the Web server's collaboration module 524 to launch the application instance 510 and wait for the Web server to be ready. After the Web server side is ready, the RTC module 518 can establish a connection, access the RTC service, wait for the real-time audio and video stream, and establish a data channel. Any suitable RTC technology, currently known or developed in the future, may be employed, and the scope of the present disclosure is not limited in this respect.
The loading module 524 may download the terminal rendering group content, including scripts and materials. The rendering module 526 may be enabled to perform local rendering of the terminal group scene content. The rendering implementation may depend on a browser-supported Web 3D framework (e.g., the Three.js rendering framework) and rendering engine (e.g., the WebGL rendering engine).
The RTC module 518 sends the user's input control to the server side and obtains the server side's real-time audio and video for further processing by the composition module 528. The composition module 528 overlays the rendering result of the terminal rendering group onto the real-time rendered picture (e.g., image frames) of the cloud rendering group and presents the final picture.
The composition of the image frames may be achieved in any suitable way. For example, the picture generated by the cloud rendering group may be used as the background, with the terminal rendering group rendered as the foreground by the rendering framework. In some embodiments, to prevent the terminal rendering group and the cloud rendering group from producing an improper overlap, the terminal rendering group is primarily used as the foreground and the cloud rendering group primarily as the background. To resolve an overlap, a mask layer can be obtained by performing an occlusion calculation on the cloud rendering group locally at the terminal (for example, finding the graphic outline by calculating ray casting angles from the camera viewpoint). In this case, actual rendering need not be performed; the terminal rendering group's result may be partially eliminated using the mask layer, so that the occlusion effect is achieved when the final rendered picture is composited.
During application running, the collaboration module 522 of the terminal device 105 and the collaboration module 530 of the cloud application 135 synchronize rendering group changes at both ends and perform dynamic adjustment. The terminal and the cloud may adopt any appropriate communication means; for example, both may communicate via the RTC data channel.
The Web application server launches the cloud application 135 to perform real-time rendering of the cloud rendering group. The collaboration module 530 communicates with the collaboration module 524 of the Web server, is notified of updates to the components in the cloud rendering group, obtains the terminal's grouped scene synchronization, and notifies the loading module 532 to load content. The loading module 532 is responsible for loading the resources of the application instance at the server side, for example the scripts and data to be downloaded for cloud rendering, and the scene scripts, model files, material files, media files, and the like related to the cloud rendering group. Since most resource files are local to the server, this loading process can essentially ignore network transmission time.
The rendering module 534 may be responsible for rendering the loaded cloud group, and rendering is completed by a rendering engine in the runtime of the cloud application. While the rendering module 534 outputs the rendering result, the RTC module 516 may perform audio and video capture and streaming on the cloud application 135, and send the real-time audio and video stream to the RTC peer. The RTC module 518 of the terminal device 105 will receive the real-time audio video stream pictures.
During application running, the collaboration module 530 of the cloud application 135 and the collaboration module 522 of the terminal may synchronize the rendering group changes at both ends and perform dynamic adjustment. The collaboration modules at the two ends may employ any suitable communication means, for example the RTC data channel between the terminal device 105 and the remote device 115.
Therefore, with this rendering approach at the scene content component level, scene components can be dynamically adjusted across the rendering groups according to the application implementation, user preference settings, real-time network quality, changes in computing power, and the like, flexibly adapting to the running environment of the real-time rendering application, such as the hardware environment, software environment, network environment, and business factors. This benefits the diversity, flexibility, and adaptability of application innovation, and helps improve the cloud rendering platform's ability to host real-time rendering applications.
An example process for collaborative rendering by a terminal and a cloud based on the architecture 500 shown in fig. 5 will be discussed below in conjunction with fig. 6A-6F. In this example, the content presented to the user includes video content.
In the cloud-rendered screen 600 shown in fig. 6A, the changing background 601 of the video content may be composed of many randomly spinning cubes, which may be rendered by the cloud.
In the terminal rendering screen 602 shown in fig. 6B, the foreground of the video content includes three randomly spinning cubes 604, 606, and 608 (labeled cube #1, cube #2, and cube #3, respectively). Four flat text prompt boxes 610, 612, 614, and 616 rendered by the terminal indicate whether the current background (e.g., background 601 in fig. 6A) and the three cubes 604, 606, and 608 are rendered by the terminal or by the cloud.
The upper right corner of the terminal rendering screen 602 includes a control menu 618. The user may switch the cubes 604, 606, and 608 between the terminal rendering group and the cloud rendering group by clicking items in the control menu 618, which may be composed, for example, of Hypertext Markup Language (HTML) Document Object Model (DOM) elements; the corresponding flat text prompt boxes 610, 612, 614, and/or 616 are updated accordingly.
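For illustration, such a DOM-built menu might be assembled as follows; the ids, labels, and callback name are assumptions:

```typescript
// Build a simple control menu from DOM elements; each item switches one
// cube between the terminal and cloud rendering groups via a callback.
function buildControlMenu(onToggle: (cubeId: string) => void): HTMLElement {
  const menu = document.createElement('ul');
  menu.id = 'control-menu';
  for (const cubeId of ['cube1', 'cube2', 'cube3']) {
    const item = document.createElement('li');
    item.textContent = `${cubeId} - switch rendering mode`;
    item.addEventListener('click', () => onToggle(cubeId));
    menu.appendChild(item);
  }
  return menu;
}
```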
The following pseudo code fragment illustrates an example process of Web application development. The pseudo code segment may be applied to the Web-based terminal application 110 and the cloud application 135, and may be executed separately at each end.
[The pseudo code appears in the original publication only as embedded figures (BDA0003869899480000201, BDA0003869899480000211, BDA0003869899480000221); its text is not reproduced in this extract.]
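Since the pseudo code itself is unavailable here, the following TypeScript sketch speculatively reconstructs the behavior the surrounding text describes: each end renders its own group on the onLoadReady event, and toggleRenderGroup() moves a component between groups. Only the names toggleRenderGroup and onLoadReady come from the text; every other name is an assumption.

```typescript
// Speculative reconstruction of the omitted pseudo code; sdk stands in for
// the SDK interface 520 and is a hypothetical stub, not the patent's API.
declare const sdk: {
  on(event: 'onLoadReady', handler: () => void): void;
  renderGroup(group: 'terminal' | 'cloud'): void; // render one classified group
  toggleRenderGroup(componentId: string): void;   // move a component between groups
};

// The same script runs at both ends; each end renders only its own group
// once initialization completes (a browser global marks the terminal side).
const LOCAL_GROUP: 'terminal' | 'cloud' =
  typeof window !== 'undefined' ? 'terminal' : 'cloud';

sdk.on('onLoadReady', () => sdk.renderGroup(LOCAL_GROUP));
```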
As an example, the terminal browser 505 may initiate a request, obtain an initial page resource including the above pseudo code, and wait for the Web server to complete initialization. The Web server receives the request, starts the cloud application 135, initializes the RTC connection, and notifies the terminal browser 505 and the cloud application 135 of the onLoadReady event.
On receiving the onLoadReady event, the terminal browser 505 may render the terminal rendering group, obtaining the terminal rendering screen shown in fig. 6B. The cloud application 135 likewise receives the onLoadReady event and may render the cloud rendering group, obtaining the cloud rendering frame shown in fig. 6A.
The RTC module 516 of the server performs audio and video capture and transmits the cloud-rendered image to the terminal device 105. The RTC module 518 of the terminal receives the video stream and presents it in a VIDEO DOM component, on which the control menu 618 and the terminal rendering can be superimposed, so that their composition requires no additional operations. Thereby, the final rendered screen 620 shown in fig. 6C can be obtained.
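This superimposition can be expressed with plain CSS stacking, as in the following sketch (element ids are assumptions): the video element sits at the bottom and the terminal rendering canvas on top, so no explicit composition pass is needed.

```typescript
// Layer the terminal rendering canvas over the cloud video element.
const video = document.getElementById('cloud-video') as HTMLVideoElement;
const overlay = document.getElementById('terminal-canvas') as HTMLCanvasElement;

Object.assign(video.style,   { position: 'absolute', inset: '0', zIndex: '0' });
Object.assign(overlay.style, { position: 'absolute', inset: '0', zIndex: '1' });
```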
When the first item 622 of the control menu, "cube #1 - switch rendering mode", is clicked, the terminal script calls the toggleRenderGroup() function to determine and switch the rendering group in which the cube 604 is located, using the SDK interface 520. The SDK interface 520 notifies the peer collaboration modules 522, 524, and 530 to update, and triggers an onLoadReady event. As a result, the terminal rendered screen 624 shown in fig. 6D is obtained, in which "cube #1 - cloud rendering" is displayed in the text prompt box 610. Further, the cloud rendered frame 626 shown in fig. 6E is obtained, which includes the cube 604 in addition to the background 601. The cloud rendered screen 626 is transmitted to the terminal device 105, resulting in the final rendered screen 628 shown in fig. 6F.
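As a sketch of that click flow, reusing the hypothetical sdk stub above (the element ids and label text are likewise assumptions):

```typescript
// Clicking the first menu item switches cube #1's rendering group and
// updates its text prompt box; the ids are illustrative only.
document.getElementById('menu-item-cube1')?.addEventListener('click', () => {
  sdk.toggleRenderGroup('cube1'); // SDK notifies the peer collaboration modules
  const prompt = document.getElementById('prompt-cube1');
  if (prompt) prompt.textContent = 'cube #1 - cloud rendering';
});
```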
It should be appreciated that the modules and/or components illustrated in fig. 5 may be implemented in a variety of ways including software, hardware, firmware, or any combination thereof. It should also be understood that the modules and/or components included in the architecture for enabling collaborative rendering of a terminal and a cloud are shown for purposes of example only and are not intended to imply any limitations. The architecture may include more or fewer modules and/or components.
Fig. 7 shows a schematic block diagram of a rendering apparatus 700 according to some embodiments of the present disclosure. The apparatus 700 may be embodied as or included in the terminal device 105.
As shown in fig. 7, the apparatus 700 includes a first rendering module 710, a first receiving module 720, and a compositing module 730. The first rendering module 710 is configured to render a first portion of content to be presented to generate a first image frame. At least a first component in the first portion can be switched to render in the cloud. The first receiving module 720 is configured to receive a second image frame from the remote device 115. The second image frame is generated by rendering a second portion of the content at the cloud, at least a second component in the second portion being switchable to be rendered locally. The compositing module 730 is configured to generate, based on the first image frame and the second image frame, a composite image frame for presentation.
In some embodiments, the first receiving module 720 may also be configured to receive a first portion of content from the remote device 115.
In some embodiments, the apparatus 700 may further include a first update module configured to update the components contained in the first portion. The first rendering module 710 may be configured to render the updated first portion to generate a third image frame. The first receiving module 720 may be further configured to receive a fourth image frame from the remote device 115, the fourth image frame generated by rendering the second portion updated in association with the first portion at the cloud. The compositing module 730 may be further configured to generate an updated composite image frame based on the third image frame and the fourth image frame.
In some embodiments, the first receiving module 720 may be further configured to receive an indication from the remote device 115 to update the components contained in the first portion. In these embodiments, the first update module may be configured to perform the update in response to receiving the indication.
In some embodiments, the first receiving module 720 may be configured to receive an identification of components in the updated first portion.
In some embodiments, apparatus 700 may further include a first sending module configured to send an indication to remote device 115 to update the components contained in the second portion in association with the first portion.
In some embodiments, the indication indicates at least one of: at least a first component is switched to render in the cloud; at least the second component is switched to render locally.
Fig. 8 shows a schematic block diagram of a rendering apparatus 800 according to some embodiments of the present disclosure. The apparatus 800 may be embodied as or included in the remote device 115.
As shown in fig. 8, the apparatus 800 includes a grouping module 810, a second rendering module 820, and a second transmitting module 830. The grouping module 810 is configured to determine a first portion and a second portion of content to be presented. The first portion is capable of being rendered at the terminal device and includes at least a first component that is capable of being switched to rendering at a cloud. The second portion can be rendered at the cloud and includes at least a second component that can be switched to rendering at the terminal device 105. The second rendering module 820 is configured to generate a second image frame by causing a second portion of the content to be rendered at the cloud. The second transmitting module 830 is configured to transmit the second image frame to the terminal device 105.
In some embodiments, the second transmitting module 830 may be further configured to transmit the first portion of the content to the terminal device 105.
In some embodiments, the apparatus 800 may further include a second update module configured to update the components contained in the second portion. The second rendering module 820 may also be configured to render the updated second portion to generate a fourth image frame. The second transmitting module 830 may be further configured to transmit the fourth image frame to the terminal device 105.
In some embodiments, the second transmitting module 830 may be further configured to transmit an indication to the terminal device 105 to update the components contained in the first portion in association with the second portion.
In some embodiments, the second sending module 830 may be further configured to send the identification of the components in the updated first portion.
In some embodiments, the apparatus 800 may further include a second receiving module configured to receive an indication from the terminal device 105 to update the components contained in the second portion.
In some embodiments, the indication indicates at least one of: at least a first component is switched to render in the cloud; at least the second component is switched to render locally.
It should be understood that the features and effects discussed above with respect to process 200 and methods 300 and 400 with reference to fig. 1-5 and 6A-6F are equally applicable to apparatuses 700 and 800 and will not be described again here. Additionally, the modules included in the apparatuses 700 and 800 may be implemented using various means including software, hardware, firmware, or any combination thereof. In some embodiments, one or more modules may be implemented using software and/or firmware, such as machine-executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the modules in the apparatuses 700 and 800 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
FIG. 9 shows a block diagram of an electronic device 900 in which one or more embodiments of the present disclosure may be implemented. It should be understood that the electronic device 900 illustrated in FIG. 9 is merely exemplary and should not constitute any limitation on the functionality and scope of the embodiments described herein. The electronic device 900 shown in fig. 9 may be used to implement the terminal device 105 or the remote device 115 of fig. 1.
As shown in fig. 9, the electronic device 900 is in the form of a general purpose computing device. Components of electronic device 900 may include, but are not limited to, one or more processors or processing units 910, memory 920, storage 930, one or more communication units 940, one or more input devices 950, and one or more output devices 960. The processing unit 910 may be a real or virtual processor and can perform various processes according to programs stored in the memory 920. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capabilities of electronic device 900.
Electronic device 900 typically includes a number of computer storage media. Such media may be any available media that is accessible by the electronic device 900, including but not limited to volatile and non-volatile media, and removable and non-removable media. The memory 920 may be volatile memory (e.g., registers, cache, Random Access Memory (RAM)), non-volatile memory (e.g., Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory), or some combination thereof. The storage 930 may be a removable or non-removable medium and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium that can be used to store information and/or data and that can be accessed within the electronic device 900.
The electronic device 900 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in FIG. 9, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. Memory 920 may include a computer program product 925 having one or more program modules configured to perform the various methods or acts of the various embodiments of the disclosure.
The communication unit 940 enables communication with other computing devices over a communication medium. Additionally, the functionality of the components of the electronic device 900 may be implemented in a single computing cluster or multiple computing machines, which are capable of communicating over a communications connection. Thus, the electronic device 900 may operate in a networked environment using logical connections to one or more other servers, network Personal Computers (PCs), or another network node.
The input device 950 may be one or more input devices such as a mouse, keyboard, or trackball. The output device 960 may be one or more output devices such as a display, speakers, or printer. Via the communication unit 940, the electronic device 900 may also communicate as needed with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the electronic device 900, or with any device (e.g., a network card or a modem) that enables the electronic device 900 to communicate with one or more other computing devices. Such communication may be performed via input/output (I/O) interfaces (not shown).
According to an exemplary implementation of the present disclosure, a computer-readable storage medium having stored thereon computer-executable instructions is provided, wherein the computer-executable instructions are executed by a processor to implement the above-described method. According to an exemplary implementation of the present disclosure, there is also provided a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions, which are executed by a processor to implement the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices and computer program products implemented in accordance with the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer-readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure, and the above description is illustrative, not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen in order to best explain the principles of various implementations, the practical application, or improvements to the technology in the marketplace, or to enable others of ordinary skill in the art to understand various implementations disclosed herein.

Claims (18)

1. A method of rendering, comprising:
rendering a first portion of content to be presented to generate a first image frame, at least a first component in the first portion being switchable to render in a cloud;
receiving a second image frame from a remote device, the second image frame generated by rendering a second portion of the content at the cloud, at least a second component in the second portion being switchable to be rendered locally; and
generating a composite image frame for presentation based on the first image frame and the second image frame.
2. The method of claim 1, further comprising:
receiving the first portion of the content from the remote device.
3. The method of claim 1, further comprising:
updating the components contained in the first portion;
rendering the updated first portion to generate a third image frame;
receiving a fourth image frame from the remote device, the fourth image frame generated by rendering a second portion at the cloud that is updated in association with the first portion; and
generating an updated composite image frame based on the third image frame and the fourth image frame.
4. The method of claim 3, wherein updating the components contained in the first portion comprises:
receiving, from the remote device, an indication to update the component contained in the first portion; and
in response to receiving the indication, performing the update.
5. The method of claim 4, wherein receiving the indication comprises:
receiving an identification of the components in the updated first portion.
6. The method of claim 3, further comprising:
sending an indication to the remote device to update components contained in the second portion in association with the first portion.
7. The method of any of claims 4 to 6, wherein the indication is indicative of at least one of:
at least the first component is switched to render at the cloud;
at least the second component is switched to render locally.
8. A rendering method, comprising:
determining a first portion and a second portion of content to be presented, the first portion being renderable at a terminal device and including at least a first component being switchable to render at a cloud, the second portion being renderable at the cloud and including at least a second component being switchable to render at the terminal device;
generating a second image frame by causing the second portion of the content to be rendered at the cloud; and
sending the second image frame to the terminal device.
9. The method of claim 8, further comprising:
transmitting the first portion of the content to the terminal device.
10. The method of claim 8, further comprising:
updating the components contained in the second portion;
rendering the updated second portion to generate a fourth image frame; and
sending the fourth image frame to the terminal device.
11. The method of claim 10, further comprising:
sending an indication to the terminal device to update the components contained in the first portion in association with the second portion.
12. The method of claim 11, wherein sending the indication comprises:
sending an identification of the components in the updated first portion.
13. The method of claim 10, further comprising:
receiving an indication from the terminal device to update the components contained in the second portion.
14. The method of any of claims 11 to 13, wherein the indication indicates at least one of:
at least the first component is switched to render at the cloud;
at least the second component is switched to render locally.
15. A rendering apparatus, comprising:
a first rendering module configured to render a first portion of content to be presented to generate a first image frame, at least a first component in the first portion being switchable to render in a cloud;
a first receiving module configured to receive a second image frame from a remote device, the second image frame generated by rendering a second portion of the content at the cloud, at least a second component in the second portion being switchable to be rendered locally; and
a composition module configured to generate a composite image frame for presentation based on the first image frame and the second image frame.
16. A rendering apparatus, comprising:
a grouping module configured to determine a first portion and a second portion of content to be presented, the first portion being renderable at a terminal device and including at least a first component being switchable to render at a cloud, the second portion being renderable at the cloud and including at least a second component being switchable to render at the terminal device;
a second rendering module configured to generate a second image frame by causing the second portion of the content to be rendered at the cloud; and
a second transmitting module configured to transmit the second image frame to the terminal device.
17. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions when executed by the at least one processing unit causing the apparatus to perform the method of any of claims 1-7 and 8-14.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7 and 8 to 14.
CN202211193470.5A 2022-09-28 2022-09-28 Rendering method, device, equipment and storage medium Pending CN115661011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211193470.5A CN115661011A (en) 2022-09-28 2022-09-28 Rendering method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115661011A true CN115661011A (en) 2023-01-31

Family

ID=84985445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211193470.5A Pending CN115661011A (en) 2022-09-28 2022-09-28 Rendering method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115661011A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116758201A (en) * 2023-08-16 2023-09-15 淘宝(中国)软件有限公司 Rendering processing method, device and system of three-dimensional scene and computer storage medium
CN116758201B (en) * 2023-08-16 2024-01-12 淘宝(中国)软件有限公司 Rendering processing method, device and system of three-dimensional scene and computer storage medium

Similar Documents

Publication Publication Date Title
US10097596B2 (en) Multiple stream content presentation
CN111882626B (en) Image processing method, device, server and medium
US9455931B2 (en) Load balancing between processors
US7830388B1 (en) Methods and apparatus of sharing graphics data of multiple instances of interactive application
WO2022100522A1 (en) Video encoding method, video decoding method, apparatus, electronic device, storage medium, and computer program product
CN110415325B (en) Cloud rendering three-dimensional visualization realization method and system
JP5481606B1 (en) Image generation system and image generation program
US10554713B2 (en) Low latency application streaming using temporal frame transformation
WO2022048097A1 (en) Single-frame picture real-time rendering method based on multiple graphics cards
US20220193540A1 (en) Method and system for a cloud native 3d scene game
WO2013140334A2 (en) Method and system for streaming video
US20140344469A1 (en) Method of in-application encoding for decreased latency application streaming
US20230236687A1 (en) Systems and methods for control of a virtual world
CN112843676A (en) Data processing method, device, terminal, server and storage medium
US10237563B2 (en) System and method for controlling video encoding using content information
CN115661011A (en) Rendering method, device, equipment and storage medium
CN113082693A (en) Rendering method, cloud game rendering method, server and computing equipment
MXPA02005310A (en) Data processing system and method, computer program, and recorded medium.
CN115220906A (en) Cloud execution of audio/video synthesis applications
US11212562B1 (en) Targeted video streaming post-production effects
Vats et al. Semantic-aware view prediction for 360-degree videos at the 5g edge
US9384276B1 (en) Reducing latency for remotely executed applications
EP3069264B1 (en) Multiple stream content presentation
US20210154576A1 (en) Vector graphics-based live streaming of video games
WO2018178748A1 (en) Terminal-to-mobile-device system, where a terminal is controlled through a mobile device, and terminal remote control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination