CN113516774A - Rendering quality adjusting method and related equipment - Google Patents

Rendering quality adjusting method and related equipment

Info

Publication number
CN113516774A
CN113516774A
Authority
CN
China
Prior art keywords
rendering
task
platform
cloud
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010449900.XA
Other languages
Chinese (zh)
Inventor
尹青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to PCT/CN2021/083467 (published as WO2021190651A1)
Publication of CN113516774A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/90 Determination of colour characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Abstract

The application provides a rendering quality adjustment method: a cloud rendering platform acquires a rendering task; the cloud rendering platform receives a rendering resource request instruction sent by user equipment; the cloud rendering platform allocates computing resources to the rendering task according to the rendering resource request instruction and performs rendering, thereby obtaining a first rendered image; the cloud rendering platform receives a rendering resource adjustment instruction sent by the user equipment; the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjustment instruction; and the cloud rendering platform renders the rendering task using the adjusted computing resources allocated to it, thereby obtaining a second rendered image. The scheme can adjust rendering quality in real time on demand, meeting different user requirements.

Description

Rendering quality adjusting method and related equipment
Technical Field
The present application relates to image rendering, and in particular, to a rendering quality adjustment method and related apparatus.
Background
Rendering refers to the process of generating an image from a model in software, where the model is a description of a three-dimensional object in a strictly defined language or data structure, including geometry, viewpoint, texture, and lighting information. The image is a digital image or bitmap image. The term "rendering" is analogous to an artist's "rendering of a scene"; it is also used to describe "the process of computing effects in a video editing file to generate the final video output". Rendering may be divided into pre-rendering (pre-rendering/offline rendering) and real-time rendering (real-time rendering/online rendering). Pre-rendering is generally live-action simulation with a predetermined script, used for movies, advertisements, etc.; real-time rendering is typically real-world simulation without a predefined script, used for flight training, 3D games, and interactive architectural presentations.
Real-time rendering is very demanding on computation speed. Taking a 3D game as an example, at a refresh rate of 30 frames per second each frame lasts about 33 milliseconds, so the rendering time per frame must not exceed 33 milliseconds. To meet this speed requirement, real-time rendering generally uses rasterization. However, once the rendering pipeline is started, the whole processing flow must run continuously; to improve the image quality, the original rendering pipeline must be interrupted and a new one restarted, so the rendering quality is difficult to adjust.
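The frame-budget arithmetic above can be checked with a short calculation (a generic sketch, not taken from the patent):

```python
def frame_budget_ms(fps: float) -> float:
    """Maximum time available to render one frame, in milliseconds."""
    return 1000.0 / fps

# At 30 frames per second each frame must finish in about 33 ms.
print(round(frame_budget_ms(30)))      # → 33
# Raising the refresh rate tightens the budget: 60 fps leaves about 16.7 ms.
print(round(frame_budget_ms(60), 1))   # → 16.7
```

This is why real-time rendering is constrained to fast methods such as rasterization: the whole pipeline must fit inside this fixed per-frame budget.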
Disclosure of Invention
In order to solve the above problems, the present application provides a rendering quality adjustment method and related devices, which can adjust rendering quality in real time as required.
In a first aspect, a rendering quality adjustment method is provided, including:
the cloud rendering platform acquires a rendering task;
the cloud rendering platform receives a rendering resource request instruction sent by user equipment;
the cloud rendering platform allocates computing resources for the rendering task according to the rendering resource request instruction to perform rendering, so that a first rendering image is obtained;
the cloud rendering platform receives a rendering resource adjusting instruction sent by the user equipment;
the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjusting instruction;
and the cloud rendering platform renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image.
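The six steps of the first aspect can be sketched as a minimal control flow. This is an illustrative reading only: the class, method names, and abstract "resource units" below are invented, not defined by the patent.

```python
class CloudRenderingPlatform:
    """Hypothetical sketch of the first-aspect method; resources are abstract units."""

    def __init__(self):
        self.allocated = {}  # task id -> computing resource units

    def acquire_task(self, task_id: str):
        # Step 1: the platform acquires a rendering task.
        self.allocated[task_id] = 0

    def handle_request(self, task_id: str, requested_units: int) -> str:
        # Steps 2-3: allocate resources per the rendering resource request
        # instruction, then render to obtain the first rendered image.
        self.allocated[task_id] = requested_units
        return self.render(task_id)

    def handle_adjustment(self, task_id: str, delta_units: int) -> str:
        # Steps 4-6: adjust the allocation per the rendering resource adjustment
        # instruction (positive = raise quality, negative = lower quality),
        # then render again to obtain the second rendered image.
        self.allocated[task_id] = max(1, self.allocated[task_id] + delta_units)
        return self.render(task_id)

    def render(self, task_id: str) -> str:
        return f"image rendered with {self.allocated[task_id]} units"

platform = CloudRenderingPlatform()
platform.acquire_task("t1")
first = platform.handle_request("t1", 5)      # first rendered image
second = platform.handle_adjustment("t1", 3)  # second rendered image, more resources
print(first, "/", second)
```

Note how the adjustment step reuses the running task's allocation rather than restarting the pipeline, which is the point of contrast with rasterization in the background section.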
In some possible designs, in a case that the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated for the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or in the case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
In some possible designs, after the cloud rendering platform allocates computing resources for the rendering task to render according to the rendering resource request instruction, thereby obtaining a first rendered image, the method further includes:
calculating first pricing information according to computing resources allocated for the rendering task, wherein the first pricing information is a cost for rendering the first rendered image;
after the cloud rendering platform renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image, the method further includes:
and calculating second pricing information according to the adjusted computing resources allocated for the rendering task, wherein the second pricing information is the cost for rendering the second rendering image.
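A plausible reading of the two pricing steps is metered billing on whatever resources were allocated at the time of each render. The rate structure and unit prices below are invented for illustration; the patent does not specify a pricing formula.

```python
def pricing_info(cpu_units: int, mem_gb: int, hours: float,
                 cpu_rate: float = 0.5, mem_rate: float = 0.05) -> float:
    """Cost of rendering with a given allocation (hypothetical hourly rates)."""
    return (cpu_units * cpu_rate + mem_gb * mem_rate) * hours

# First pricing information: cost of the allocation used for the first image.
first_pricing = pricing_info(cpu_units=5, mem_gb=10, hours=2)
# Second pricing information: cost after the allocation was increased.
second_pricing = pricing_info(cpu_units=10, mem_gb=20, hours=2)
print(first_pricing, second_pricing)  # 6.0 12.0
```

The key property is that the two figures are computed from the pre- and post-adjustment allocations respectively, so a user who lowers rendering quality also lowers cost.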
In some possible designs, the rendering task is rendered using ray tracing rendering, and the rendering resource request instruction includes one or more of a rendering index or a resource parameter; the rendering index includes one or more of the number of samples per pixel (Spp), the number of ray bounces, the number of triangle faces in the object model, the number of vertices, the image noise, and the frame rate, and the resource parameter includes one or more of the number of processors, the clock speed (main frequency) of the processors, the memory size, and the network bandwidth.
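A rendering resource request instruction carrying these fields might look like the following. The field names and values are hypothetical; the patent does not fix a wire format.

```python
# Hypothetical structure of a rendering resource request instruction.
rendering_resource_request = {
    "rendering_index": {
        "spp": 64,               # samples per pixel (Spp)
        "bounces": 4,            # number of ray bounces
        "triangles": 2_000_000,  # triangle faces in the object model
        "vertices": 1_000_000,
        "noise": 0.05,           # target image noise level
        "frame_rate": 30,
    },
    "resource_parameters": {
        "processors": 8,
        "clock_ghz": 2.6,        # processor main frequency
        "memory_gb": 32,
        "bandwidth_mbps": 100,
    },
}
print(sorted(rendering_resource_request))
```

Either sub-dictionary could be sent alone, matching the "one or more of" language of the design.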
In some possible designs, the rendering task is sent by the user device or sent by a management device.
In a second aspect, a cloud rendering platform is provided, comprising: the system comprises an acquisition module, a receiving module and a rendering engine;
the acquisition module is used for acquiring a rendering task;
the receiving module is used for receiving a rendering resource request instruction sent by user equipment;
the rendering engine is used for allocating computing resources for the rendering task according to the rendering resource request instruction so as to perform rendering, and therefore a first rendering image is obtained;
the receiving module is used for receiving a rendering resource adjusting instruction sent by the user equipment;
the rendering engine is used for adjusting the computing resources distributed to the rendering task according to the rendering resource adjusting instruction;
and the rendering engine is used for rendering the rendering task by using the adjusted computing resources allocated to the rendering task so as to obtain a second rendering image.
In some possible designs, in a case that the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated for the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or in the case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
In some possible designs, the cloud rendering platform further comprises a pricing module for calculating first pricing information according to computing resources allocated for the rendering tasks, wherein the first pricing information is a cost for rendering the first rendered image;
the pricing module is further configured to calculate second pricing information according to the adjusted computing resources allocated to the rendering task, wherein the second pricing information is a cost for rendering the second rendered image.
In some possible designs, the rendering task is rendered using ray tracing rendering, and the rendering resource request instruction includes one or more of a rendering index, a resource parameter, and a display parameter; the rendering index includes one or more of the number of samples per pixel (Spp), the number of ray bounces, the number of triangle faces in the object model, the number of vertices, and the picture noise; the resource parameter includes one or more of the number of processors, the clock speed (main frequency) of the processors, the memory size, and the network bandwidth; and the display parameter includes the frame rate.
In some possible designs, the rendering task is sent by the user device or sent by a management device.
In a third aspect, there is provided a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of the first aspect.
In a fourth aspect, there is provided a computer program product which, when read and executed by a computer, performs the method of any one of the first aspect.
In a fifth aspect, there is provided a rendering platform comprising at least one rendering node, each rendering node comprising a processor and a memory, the processor executing a program in the memory to perform the method of any of the first aspects.
According to the scheme, when the user equipment sends the rendering resource adjusting instruction, the cloud rendering platform adjusts the computing resources in real time according to the rendering resource adjusting instruction to perform computing, so that the rendering quality is dynamically adjusted, and different requirements of users are met.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
FIGS. 1A-1B are schematic diagrams of some embodiments of ray tracing rendering in a simple scene to which the present application relates;
FIG. 2 is a schematic diagram of ray tracing rendering in a complex scene according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the sawtooth effect produced when the number of SPPs involved in the present application is low;
FIG. 4 is a schematic diagram of overcoming the aliasing effect when the number of SPPs involved in the present application is high;
FIG. 5A is a schematic diagram of the absence of formation of a caustic spot when the number of SPPs involved in the present application is low;
FIG. 5B is a schematic illustration of the formation of a caustic spot when the number of SPPs involved in the present application is high;
FIG. 6 is a schematic structural diagram of a cloud rendering system according to an embodiment of the present disclosure;
fig. 7A to 7B are schematic diagrams illustrating that an intelligent terminal improves and reduces rendering quality of a rendered image in real time through a key;
FIGS. 8A to 8B are schematic diagrams illustrating the real-time improvement and reduction of rendering quality by a computer through a key;
fig. 9A to 9B are schematic diagrams of increasing and decreasing rendering quality of a VR device in real time through a key;
FIGS. 10A-10B are schematic diagrams illustrating the mapping relationship between the pressing times of the increase key and the decrease key and the adjustment index, respectively;
FIGS. 11A-11D are diagrams of some embodiments of adjusting computing resources for ray tracing rendering during rendering index adjustment;
fig. 12 is a flowchart of an interaction of a rendering quality adjustment method according to an embodiment of the present application;
fig. 13 is a flowchart of an interaction of a rendering quality adjustment method according to another embodiment of the present application;
FIG. 14 is a flowchart of an interaction of a rendering quality adjustment method according to yet another embodiment of the present application;
FIG. 15 is a schematic structural diagram of a cloud rendering platform according to the present disclosure;
FIG. 16 is a block diagram of an intelligent terminal according to one implementation of the present disclosure;
FIG. 17 is a block diagram of a computer according to one embodiment of the present disclosure;
fig. 18 is a block diagram of a cloud rendering platform according to an implementation manner of the present application.
Detailed Description
Ray tracing rendering (ray tracing render) is a rendering method that generates a rendered image by tracing, along each ray emitted from the viewpoint of a camera (or the human eye) toward each pixel of the rendered image, the path of light incident on the model. The core idea of ray tracing rendering is to trace rays backwards, starting from the viewpoint of the camera (or human eye). Since only the rays that finally enter the camera (or the human eye) are useful, tracing rays backwards effectively reduces the amount of data to process. Rays in ray tracing rendering mainly encounter two cases, reflection and refraction, which are described below with reference to specific embodiments.
In the reflective scene shown in FIG. 1A, the model has only one light source 130 and one opaque sphere 140. A ray is projected from the viewpoint E of the camera 110 to a pixel point O1 in the rendered image 120, continues to a point P1 on the opaque sphere 140, and is then reflected to the light source L; at this point, P1 determines the color of pixel point O1. Another ray is emitted from the viewpoint E of the camera 110 and projected to another pixel point O2 in the rendered image 120, continues to a point P2 on the opaque sphere 140, and is then reflected toward the light source L; however, the opaque sphere 140 obstructs the path between point P2 and the light source L, so point P2 lies in the shadow of the opaque sphere 140 and pixel point O2 is black.
In the refractive scenario shown in FIG. 1B, the model has only one light source 230 and one transparent sphere 240. A ray is emitted from the viewpoint E of the camera 210 and projected to a pixel point O3 in the rendered image 220, continues to a point P3 on the transparent sphere 240, and is then refracted to the light source L; at this point, P3 determines the color of pixel point O3.
It is understood that the reflective scene in fig. 1A and the refractive scene in fig. 1B are the simplest scenes, and only one opaque sphere exists in the scene in fig. 1A and only one transparent sphere exists in the scene in fig. 1B, but in practical applications, the scene is far more complex than that in fig. 1A and fig. 1B, for example, multiple opaque objects and multiple transparent objects may exist in the scene at the same time, and therefore, the ray may be reflected and refracted multiple times, thereby causing the ray tracing to become very complex.
In the complex scene shown in FIG. 2, the model includes a light source 330, two transparent spheres 340 and 350, and an opaque object 360. A ray is projected from the viewpoint E of the camera 310 to a pixel point O4 in the rendered image 320 and continues to a point P1 on the transparent sphere 340. From P1 a shadow test line S1 is drawn to the light source L; since no object obstructs it, a local illumination model can be used to compute the light source's intensity contribution at P1 in the direction of the line of sight E, which is taken as the local intensity at that point. At the same time, the reflected ray R1 and the refracted ray T1 are traced at this point, since they also contribute to the light intensity at P1. In the direction of the reflected ray R1, the ray intersects no further objects, so the intensity in that direction is set to zero and tracing in that direction ends. Tracing then continues along the refracted ray T1 to compute its intensity contribution. The refracted ray T1 propagates inside the transparent sphere 340, emerges, and intersects the transparent sphere 350 at point P2. Since this point is inside the transparent sphere 350, its local intensity can be assumed to be zero; at the same time, the reflected ray R2 and the refracted ray T2 are generated. In the direction of the reflected ray R2, tracing could continue recursively to compute its intensity, but it is not continued here.
Continuing to trace the refracted ray T2: T2 intersects the opaque object 360 at point P3. Since the shadow test line S3 from P3 to the light source L is not blocked by any object, the local light intensity there is computed; because the opaque object 360 is non-transparent, only the light intensity in the direction of the reflected ray R3 needs to be traced further, and combining it with the local intensity gives the intensity at P3. Tracing the reflected ray R3 is similar to the process above, and the algorithm proceeds recursively. This process repeats until the ray satisfies the tracing termination condition. In this way, the light intensity at pixel point O4, i.e. its corresponding color value, is obtained.
It is understood that the complex scene shown in fig. 2 is only one of the complex scenes, and in other complex scenes, the number of light sources, the number of transparent objects, the number of opaque objects, the position of the camera, the position of the light source, the position of the transparent object, and the position of the opaque object in the model may vary, and are not limited specifically herein.
The rendering quality of ray tracing rendering may depend on the following rendering metrics, resource parameters, and display parameters.
Rendering metrics may include samples per pixel (Spp), number of ray bounces (Bounce), number of triangle faces in the object model, number of vertices, and picture noise, among others. Spp and Bounce are described in detail below as examples.
Spp may be defined as the number of rays sampled per pixel. The Spp count affects the image quality of the rendered image because, if Spp is 1 (i.e. only one ray passes through each pixel), even a slight shift of the ray can change the pixel's color drastically. (The examples below assume each ray is eventually reflected to the light source; for simplicity, this is not repeated.) Taking FIG. 3 as an example, if the ray passes through pixel A, it is projected onto the red opaque object 1; the color value of pixel A is then determined by the projection point on opaque object 1, i.e. pixel A is red. If the ray passes through pixel B, it is projected onto the green opaque object 2, and the color value of pixel B is determined by the projection point on opaque object 2, i.e. pixel B is green. Although pixel A and pixel B are adjacent pixels, their colors are far apart, producing a sawtooth (aliasing) effect. To solve this problem, taking FIG. 4 as an example, if Spp is n (i.e. n rays are emitted from the viewpoint toward the same pixel point of the rendered image), the n rays are projected through the pixel onto n projection points on opaque object 1 or opaque object 2; n color values can then be determined from the n projection points and averaged to obtain the pixel's final color value. The closer the pixel's final color value is to the picture's reference value (its mathematical expectation), the lower the sampling noise.
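The per-pixel averaging described above is the core of Monte Carlo anti-aliasing. A minimal sketch follows; the two-object toy scene (red object left of the pixel's midpoint, green object right of it) is invented for illustration.

```python
import random

def trace(sub_x: float) -> tuple:
    """Toy scene: rays left of 0.5 hit a red object, the rest hit a green one."""
    return (1.0, 0.0, 0.0) if sub_x < 0.5 else (0.0, 1.0, 0.0)

def pixel_color(spp: int, rng: random.Random) -> tuple:
    """Average spp jittered samples inside one pixel (Spp = samples per pixel)."""
    samples = [trace(rng.random()) for _ in range(spp)]
    n = len(samples)
    return tuple(sum(c[i] for c in samples) / n for i in range(3))

rng = random.Random(0)
print(pixel_color(1, rng))    # one sample: fully red or fully green (aliasing)
print(pixel_color(256, rng))  # many samples: blends toward (0.5, 0.5, 0.0)
```

With Spp = 1 the pixel snaps to whichever object its single jittered ray happens to hit; with many samples the average converges toward the mathematical expectation, which is exactly the noise-reduction effect described above.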
Therefore, the greater the Spp count, the better the anti-aliasing effect of the rendered image, the lower its noise, and naturally the better its quality. In addition, the Spp count affects lighting effects in the picture, such as the caustic effect (Caustics) of a transparent body (glass spheres, water waves) under illumination. When the number of samples is small, as shown in FIG. 5A, a first ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O1 of the rendered image 420, exits at a point P1 of the transparent object 440, and is refracted to a light spot H1; a second ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O2 of the rendered image 420, exits at a point P2 of the transparent object 440, and is refracted to a light spot H2; a third ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O3 of the rendered image 420, exits at a point P3 of the transparent object 440, and is refracted to a light spot H3. Light spots H1, H2 and H3 remain isolated spots and cannot gather into a caustic spot. When the number of samples is large, as shown in FIG. 5B, a first ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O1 of the rendered image 420, exits at a point P1 of the transparent object 440, and is refracted to a light spot H1; a second ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O2 of the rendered image 420, exits at a point P2 of the transparent object 440, and is refracted to a light spot H2; a third ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O3 of the rendered image 420, exits at a point P3 of the transparent object 440, and is refracted to a light spot H3; a fourth ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O4 of the rendered image 420, exits at a point P4 of the transparent object 440, and is refracted to a light spot H4; a fifth ray is emitted from the viewpoint E of the camera 410, passes through a pixel point O5 of the rendered image 420, exits at a point P5 of the transparent object 440, and is refracted to a light spot H5. Here, light spots H1 and H4 constitute a caustic spot G1 (estimate), and light spots H3 and H5 constitute a caustic spot G2 (estimate). Therefore, if the Spp value is low, the caustic visual effect cannot be generated effectively, and another rendering method (non-ray-tracing) is often needed to add this visual effect.
The number of ray bounces is the maximum total number of reflections and refractions traced before ray tracing terminates. The bounce count affects the image quality of the rendered image because a ray is reflected and refracted many times in a complex scene; in theory, the number of reflections and refractions could be infinite, but an actual algorithm cannot trace a ray indefinitely, so some termination conditions must be given. In practice, the following termination conditions may be used: the ray has been attenuated by many reflections and refractions and now contributes little intensity to the viewpoint; or the number of bounces, i.e. the tracing depth, exceeds a certain value. Therefore, the more bounces allowed, the more effective rays can be traced, the better and more realistic the refraction effects between transparent objects, and the better the image quality.
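The two termination conditions above (low contribution and depth limit) map directly onto the recursion of a ray tracer. The sketch below is schematic: the one-surface toy scene, the `Hit` type, and all attenuation coefficients are invented for illustration, not taken from the patent.

```python
from dataclasses import dataclass

MAX_BOUNCES = 5          # depth-limit termination condition ("tracing depth")
MIN_CONTRIBUTION = 0.01  # attenuation termination condition

@dataclass
class Hit:
    """Toy surface: emitted local intensity plus per-branch attenuation."""
    local: float
    reflectivity: float
    transparency: float

def intersect_scene(_ray):
    # One-surface toy scene: every ray hits the same surface (illustrative only).
    return Hit(local=0.2, reflectivity=0.5, transparency=0.3)

def trace(ray, depth=0, throughput=1.0):
    """Recursive ray tracing with both termination conditions from the text."""
    if depth >= MAX_BOUNCES or throughput < MIN_CONTRIBUTION:
        return 0.0  # ray now contributes (almost) nothing to the viewpoint
    hit = intersect_scene(ray)
    # Each bounce attenuates the ray; follow both reflection and refraction.
    reflected = trace(ray, depth + 1, throughput * hit.reflectivity)
    refracted = trace(ray, depth + 1, throughput * hit.transparency)
    return throughput * hit.local + reflected + refracted

print(round(trace(ray=None), 4))  # total intensity reaching the pixel
```

Raising `MAX_BOUNCES` lets more attenuated rays contribute, which is exactly why a higher bounce count improves refraction realism at the cost of more computation.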
The resource parameters may include: one or more of the number of processors, the main frequency of the processors, the memory size and the network bandwidth. When the main frequency of the processor is higher, the rendering efficiency is higher, and the frame rate of the image is higher; when the memory is larger, the rendering efficiency is higher, and the frame rate of the image is higher; the greater the network bandwidth, the more efficient the transmission of rendered images, and the higher the frame rate at which rendered images are displayed on the user device.
The display parameters may include the frame rate, etc.; the higher the display parameters, the better the user experience.
A rendering task, such as an application or a movie, includes rendering images over multiple frames. The rendering quality of a rendering task includes at least one of the following two aspects: the quality of the rendered image per frame, the efficiency (i.e., the frame rate) at which the image is rendered.
From the above description, it can be seen that the rendering quality of the rendering task can be adjusted by adjusting the rendering index and the resource parameter.
The higher the rendering index (for example, a larger Spp count or more ray bounces), the higher the quality of each rendered frame and the better the rendering quality. Generally, a higher rendering index means a greater computational load for ray tracing rendering and, accordingly, more resources (computing resources, storage resources, network resources, etc.) are required.
The higher the display parameters, the higher the rendering quality, and the more resources (computing, storage, and network resources, etc.) are required.
The higher the resource parameter is, the higher the efficiency of rendering the image is, the higher the image frame rate is, and the better the rendering quality is.
Therefore, resources required for ray tracing rendering can be provided on demand through the cloud rendering platform, so that rendering quality suitable for user requirements can be provided on demand.
Referring to fig. 6, fig. 6 is a schematic structural diagram of a cloud rendering system provided in the present application. In a specific embodiment of the present application, a cloud rendering system may be used to implement ray tracing rendering, including: user device 510, network device 520, and cloud rendering platform 530.
The user device 510 may be a device that needs to display rendered images in real time, for example, a Virtual Reality device (VR) for flight training, a computer for 3D games, a smart phone for interactive architectural demonstration, and the like, and is not limited herein.
The network device 520 is used to transmit data between the user device 510 and the cloud rendering platform 530 over a communication network of any communication mechanism/communication standard. The communication network may be a wide area network, a local area network, a point-to-point connection, or any combination thereof.
Cloud rendering platform 530 includes a plurality of rendering nodes, each rendering node including, from bottom to top, rendering hardware, a virtual machine, an operating system, a rendering engine, and a rendering application. The rendering hardware includes computing resources, storage resources, and network resources. The computing resources may adopt a heterogeneous computing architecture, for example, a Central Processing Unit (CPU) + Graphics Processing Unit (GPU) architecture, a CPU + AI chip architecture, a CPU + GPU + AI chip architecture, and the like, which is not limited herein. The storage resources may include memory, and the like.
Here, the computing resource may be divided into a plurality of computing unit resources, the storage resource may be divided into a plurality of storage unit resources, and the network resource may be divided into a plurality of network unit resources. The cloud rendering platform can therefore freely combine unit resources according to a user's resource requirements, so that resources are provided on demand. For example, if the computing resources are divided into 5u computing unit resources and the storage resources into 10G storage unit resources, the combinations of computing and storage resources may be 5u+10G, 5u+20G, 5u+30G, ..., 10u+10G, 10u+20G, 10u+30G, ....
The virtualization service is a service that builds the resources of multiple physical hosts into a uniform resource pool through virtualization technology and flexibly isolates mutually independent resources according to users' needs to run the users' applications. Commonly, the virtualization service may include a Virtual Machine (VM) service, a Bare Metal Server (BMS) service, and a Container service.
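The free combination of unit resources described above is essentially a Cartesian product over the unit sizes. A sketch, using the 5u/10G unit sizes from the example in the text:

```python
from itertools import product

compute_units = [5, 10, 15]   # multiples of the 5u computing unit resource
storage_units = [10, 20, 30]  # multiples of the 10G storage unit resource

# Every valid allocation is one (compute, storage) pair from the product.
combinations = [f"{c}u+{s}G" for c, s in product(compute_units, storage_units)]
print(combinations)
# ['5u+10G', '5u+20G', '5u+30G', '10u+10G', '10u+20G', ..., '15u+30G']
```

Adding the network unit resources mentioned in the text would simply add a third axis to the product.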
The VM service provides a pool of virtual machines (VMs), virtualized on a plurality of physical hosts, for users to use on demand. The BMS service virtualizes a BMS resource pool on a plurality of physical hosts and provides BMSs for users to use on demand. The container service creates a container resource pool on a plurality of physical hosts and provides containers for users to use on demand. A VM is a simulated virtual computer, that is, a logical computer. A BMS is an elastically scalable high-performance computing service whose computing performance is no different from that of a traditional physical machine and which has the characteristic of secure physical isolation. A container is a kernel virtualization technology that provides lightweight virtualization to isolate user space, processes, and resources. It should be understood that the VM service, BMS service, and container service are only specific examples of virtualization services; in practical applications, the virtualization service may also be another lightweight or heavyweight virtualization service, which is not specifically limited herein. The rendering engine may be a ray tracing renderer for implementing ray tracing rendering algorithms; commonly used ray tracing renderers include Unity, V-Ray, Unreal, RenderMan, and so on. A rendering application invokes the rendering engine to complete the rendering of images; common rendering applications include game applications, VR applications, movie special effects, animations, and the like.
Fig. 7A to 7B, fig. 8A to 8B, and fig. 9A to 9B below take the smart terminal, the computer, and the VR device, respectively, as example user devices, and describe in detail how a user adjusts the rendering quality of the rendered image in real time by pressing keys.
Fig. 7A to 7B take a smart terminal as the example user equipment and describe how a user adjusts the rendering index and thereby the rendering quality of the rendered image. The smart terminal is displaying the rendered image in real time. The side of the smart terminal provides two keys, an increase key and a decrease key, and the user can raise or lower the rendering quality by pressing them. Specifically, as shown in fig. 7A, when the user wishes to improve the rendering quality, the increase key on the side of the smart terminal may be pressed; the more times the user presses the increase key, the more the rendering quality is raised. As shown in fig. 7B, when the user wishes to reduce the rendering quality, the decrease key on the side of the smart terminal may be pressed; the more times the user presses the decrease key, the more the rendering quality is lowered. In the above embodiment, the increase key and the decrease key are two different physical keys, but in practical applications the two may be integrated on the same physical key, or may be two different virtual keys, or the keys may even be replaced by a progress bar or another control, which is not specifically limited herein.
Fig. 8A to 8B take a computer as the example user equipment and describe how a user adjusts the rendering index and thereby the rendering quality. The computer is displaying the rendered image in real time. The user can set the left mouse button as the increase key and the right mouse button as the decrease key, and then raise or lower the rendering quality by pressing them. Specifically, as shown in fig. 8A, when the user wishes to improve the rendering quality, the left mouse button may be pressed; the more times the user presses the left mouse button, the more the rendering quality is raised. As shown in fig. 8B, when the user wishes to reduce the rendering quality, the right mouse button may be pressed; the more times the user presses the right mouse button, the more the rendering quality is lowered. In the above embodiment, the rendering quality is adjusted through the left and right mouse buttons; in other embodiments, it may also be adjusted through the up and down keys of a keyboard, through the mouse scroll wheel, and the like, which is not specifically limited herein.
Fig. 9A to 9B take a VR device as the example user equipment and describe how a user adjusts the rendering index and thereby the rendering quality. The side of the VR device provides two keys, an increase key and a decrease key, and the user can raise or lower the rendering quality by pressing them. Specifically, as shown in fig. 9A, when the user wishes to improve the rendering quality, the increase key on the side of the VR device may be pressed; the more times the user presses the increase key, the more the rendering quality is raised. As shown in fig. 9B, when the user wishes to reduce the rendering quality, the decrease key on the side of the VR device may be pressed; the more times the user presses the decrease key, the more the rendering quality is lowered. In the above embodiment, the increase key and the decrease key are two different physical keys, but in practical applications they may be integrated on the same physical key, which is not specifically limited herein.
It should be understood that the application scenarios of the smart terminal, the computer, and the VR device are only specific examples, and in other embodiments, other application scenarios may also be adopted, and are not limited herein.
The method for adjusting the rendering quality in real time by pressing keys on the user equipment mainly includes the following steps: (I) the user presses a key on the user equipment, whereby an adjustment index and/or an adjustment parameter is obtained; (II) the user equipment sends the adjustment index and/or the adjustment parameter to the cloud rendering platform through the network device; (III) the cloud rendering platform adjusts the computing resources according to the adjustment index and/or the adjustment parameter and performs ray tracing rendering, thereby obtaining a rendered image; (IV) the cloud rendering platform sends the rendered image to the user equipment through the network device, and the rendered image is presented to the user.
In step (I), the user presses a key on the user equipment, whereby the adjustment index and/or the adjustment parameter is obtained. A mapping relationship between the number of presses of the increase key and increases in the adjustment index (SPP, number of light bounces) and/or the adjustment parameter (clock frequency, memory), as well as a mapping relationship between the number of presses of the decrease key and decreases in the adjustment index and/or the adjustment parameter, may be preset in the user equipment.
As shown in (a) of fig. 10A, when the user presses the increase key once, the SPP is increased to 4; twice, to 8; three times, to 12; four times, to 16. As shown in (b) of fig. 10A, when the user presses the increase key once, the number of light bounces is increased to 1; twice, to 2; three times, to 3; four times, to 4. As shown in (c) of fig. 10A, when the user presses the increase key once, the clock frequency of the processor is increased to 8u; twice, to 16u; three times, to 24u; four times, to 32u. As shown in (d) of fig. 10A, when the user presses the increase key once, the memory is increased to 10 megabytes (M); twice, to 20M; three times, to 30M; four times, to 40M.
As shown in (a) of fig. 10B, when the user presses the decrease key once, the SPP is reduced to 16; twice, to 12; three times, to 8; four times, to 4. As shown in (b) of fig. 10B, when the user presses the decrease key once, the number of light bounces is reduced to 4; twice, to 3; three times, to 2; four times, to 1. As shown in (c) of fig. 10B, when the user presses the decrease key once, the clock frequency of the processor is reduced to 32u; twice, to 24u; three times, to 16u; four times, to 8u. As shown in (d) of fig. 10B, when the user presses the decrease key once, the memory is reduced to 40M; twice, to 30M; three times, to 20M; four times, to 10M.
In the above examples, the numbers of key presses and the corresponding values to which the SPP, the number of light bounces, the clock frequency, and the memory are increased or decreased are merely specific examples; in practical applications, they may take other values, which are not specifically limited herein. In addition, in the above examples each press of a key adjusts both the SPP and the number of light bounces; in practical applications, the first press of the increase key may increase only the SPP, the second press may increase only the number of light bounces, and so on in alternation.
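The preset mappings of fig. 10A and 10B behave like simple lookup tables from press count to target value. A minimal sketch, using the example values from the figures; the table and function names are hypothetical:

```python
# Press-count -> target-value tables, taken from the example in fig. 10A/10B.
INCREASE_SPP = {1: 4, 2: 8, 3: 12, 4: 16}
INCREASE_BOUNCES = {1: 1, 2: 2, 3: 3, 4: 4}
DECREASE_SPP = {1: 16, 2: 12, 3: 8, 4: 4}

def adjustment_for(presses: int, table: dict) -> int:
    """Return the target index for a given press count, clamping the count
    to the range the table covers (an assumed policy for extra presses)."""
    clamped = max(1, min(presses, max(table)))
    return table[clamped]

print(adjustment_for(3, INCREASE_SPP))  # 12
print(adjustment_for(9, DECREASE_SPP))  # clamped to 4 presses -> 4
```

The user equipment would look up the pressed key's table and send the resulting adjustment index to the cloud rendering platform in step (II).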
In step (III), the cloud rendering platform adjusts the computing resources according to the adjustment index and performs ray tracing rendering, thereby obtaining a rendered image. The case in which the adjustment index is an increase index, requiring more computing resources for ray tracing rendering, and the case in which it is a decrease index, requiring fewer computing resources, are described in detail below.
When the adjustment index is an increase index, computing resources need to be added for ray tracing rendering. In the case where the number of light bounces is unchanged and the SPP is increased, as shown in fig. 11A, assume that before the adjustment the first rendering index is SPP = 1 (one ray is emitted through each pixel); the cloud rendering platform performs ray tracing on ray 1 passing through pixel A by means of thread 1, where the computing resource used by thread 1 is the first computing resource. When the user adjusts the first rendering index (SPP = 1) to the second rendering index (SPP = 2), that is, the adjustment index is SPP plus 1, the cloud rendering platform keeps tracing ray 1 passing through pixel A with thread 1 and newly creates thread 2 to trace the newly added ray 2 passing through pixel A, where the computing resource used by thread 2 is the newly added computing resource, and the second computing resource equals the sum of the first computing resource and the newly added computing resource. In the case where the SPP is unchanged and the number of light bounces is increased, as shown in fig. 11B, assume that the first rendering index is one light bounce; the cloud rendering platform performs ray tracing on the first bounce of ray 1 passing through pixel A by means of thread 1, where the computing resource used by thread 1 is the first computing resource.
When the user adjusts the first rendering index (one light bounce) to the second rendering index (two light bounces), that is, the adjustment index is the number of light bounces plus 1, the cloud rendering platform keeps tracing the first bounce of ray 1 passing through pixel A with thread 1 and newly creates thread 2 to trace the second bounce of ray 1 passing through pixel A, where the computing resource used by thread 2 is the newly added computing resource, and the second computing resource equals the sum of the first computing resource and the newly added computing resource.
When the adjustment index is a decrease index, computing resources need to be reduced for ray tracing rendering. In the case where the number of light bounces is unchanged and the SPP is reduced, as shown in fig. 11C, assume that before the adjustment the first rendering index is SPP = 2 (two rays are emitted through each pixel); the cloud rendering platform traces ray 1 passing through pixel A with thread 1 and traces ray 2 passing through pixel A with thread 2, where the computing resources used by thread 1 and thread 2 together constitute the first computing resource. When the user adjusts the first rendering index (SPP = 2) to the second rendering index (SPP = 1), that is, the adjustment index is SPP minus 1, the cloud rendering platform keeps tracing ray 1 passing through pixel A with thread 1 and deletes thread 2, which was tracing ray 2 passing through pixel A; the computing resource used by thread 2 is the reduced computing resource, and the second computing resource equals the difference between the first computing resource and the reduced computing resource. In the case where the SPP is unchanged and the number of light bounces is reduced, as shown in fig. 11D, assume that the first rendering index is two light bounces; the cloud rendering platform traces the first bounce of ray 1 passing through pixel A with thread 1 and traces the second bounce of ray 1 with thread 2, where the computing resources used by thread 1 and thread 2 together constitute the first computing resource.
When the user adjusts the first rendering index (two light bounces) to the second rendering index (one light bounce), that is, the adjustment index is the number of light bounces minus 1, the cloud rendering platform keeps tracing the first bounce of ray 1 passing through pixel A with thread 1 and deletes thread 2, which was tracing the second bounce of ray 1 passing through pixel A; the computing resource used by thread 2 is the reduced computing resource, and the second computing resource equals the difference between the first computing resource and the reduced computing resource.
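The scaling described for fig. 11A to 11D, where each (ray, bounce) unit of work gets its own thread, can be sketched as follows. This is a minimal sketch under that assumption; `trace_ray` is a stub standing in for the platform's actual renderer:

```python
from concurrent.futures import ThreadPoolExecutor

def trace_ray(pixel, ray_id: int, bounce: int):
    """Stub: trace one bounce of one ray through one pixel."""
    return (pixel, ray_id, bounce)

def render_pixel(pixel, spp: int, bounces: int):
    """One task per (ray, bounce) pair. Raising the SPP or the bounce count
    adds tasks (new threads, i.e. added computing resources); lowering them
    removes tasks (deleted threads, i.e. reduced computing resources)."""
    tasks = [(r, b) for r in range(spp) for b in range(bounces)]
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(lambda t: trace_ray(pixel, *t), tasks))

# SPP=1, bounces=1 -> 1 task (thread 1 only);
# SPP=2, bounces=1 -> 2 tasks (thread 2 is added, as in fig. 11A).
print(len(render_pixel("A", 1, 1)))  # 1
print(len(render_pixel("A", 2, 1)))  # 2
```

In practice a platform would reuse a fixed-size pool rather than create one thread per unit of work; the one-thread-per-ray mapping here simply mirrors the figure's description.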
In the above examples, the values of the first rendering index, the second rendering index, and the adjustment index are only specific examples and may take other values in practical applications, which are not specifically limited herein. In addition, although the above examples are all triggered by the user through a key, in practical applications a user who is a professional with a deep technical background may also trigger the adjustment by directly calling an application programming interface (API), which is not specifically limited herein.
A process of implementing adjustment of rendering quality by a user through a cloud rendering system will be described in detail below, referring to fig. 12, where fig. 12 is a flowchart of a rendering quality adjustment method according to an embodiment of the present disclosure. The rendering quality adjustment method in this embodiment may include the steps of:
S101: the cloud rendering platform receives the rendering task sent by the management device. Correspondingly, the management device sends the rendering task to the cloud rendering platform.
In a specific embodiment of the present application, the management device may be a third-party device. For example, the user device may be a device on which a user plays a 3D game application, and the management device may be a device of the game developer who provides that 3D game application.
S102: the user equipment acquires a rendering resource request instruction input by a user.
In a specific embodiment of the present application, the rendering resource request instruction includes one or more of a rendering index, a resource parameter, and a display parameter.
In a specific embodiment of the present application, the rendering index includes one or more of the number of samples per pixel (SPP), the number of light bounces, the number of triangle facets used for object modeling, the number of vertices, and the image noise. SPP may be defined as the number of rays sampled per pixel. The number of light bounces is the sum of the maximum numbers of reflections and refractions traced before the ray tracing is terminated. The higher the rendering index, for example, the larger the SPP or the greater the number of light bounces, the larger the amount of computation required by ray tracing rendering; accordingly, more resources (computing resources, storage resources, network resources, and the like) are required, and the better the rendering quality.
In a specific embodiment of the present application, the resource parameter includes one or more of the number of processors, the clock frequency of the processors, the memory size, and the network bandwidth. The higher the resource parameters, the more resources are allocated to the rendering task, the higher the frame rate of the rendered image, and the better the user experience.
In a specific embodiment of the present application, the display parameters may include the frame rate and the like; the higher the frame rate, the more resources (computing resources, storage resources, network resources, and the like) are required.
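The rendering resource request instruction of S102 bundles the three groups of fields described above. A hypothetical sketch of its structure follows; the class and field names are assumptions for illustration, not the platform's actual message format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RenderingIndex:
    spp: Optional[int] = None              # samples per pixel
    light_bounces: Optional[int] = None    # max reflections + refractions
    triangle_facets: Optional[int] = None  # facets used for object modeling
    vertices: Optional[int] = None
    image_noise: Optional[float] = None

@dataclass
class ResourceParams:
    processors: Optional[int] = None
    clock_frequency_u: Optional[int] = None
    memory_mb: Optional[int] = None
    bandwidth_mbps: Optional[int] = None

@dataclass
class RenderResourceRequest:
    """One or more of: rendering index, resource parameter, display parameter."""
    rendering_index: RenderingIndex = field(default_factory=RenderingIndex)
    resource_params: ResourceParams = field(default_factory=ResourceParams)
    frame_rate: Optional[int] = None       # display parameter

req = RenderResourceRequest(rendering_index=RenderingIndex(spp=4, light_bounces=2))
print(req.rendering_index.spp)  # 4
```

All fields are optional, matching the "one or more of" wording; the cloud rendering platform would fall back to defaults for any field left unset.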
S103: and the user equipment sends the rendering resource request instruction to the cloud rendering platform through the network equipment. Correspondingly, the cloud rendering platform receives the rendering resource request instruction sent by the user equipment through the network equipment.
S104: and the cloud rendering platform allocates computing resources to perform real-time rendering on the rendering task according to the rendering resource request instruction, so that a first rendering image is obtained.
S105: and the cloud rendering platform sends the first rendering image to user equipment through network equipment. Accordingly, the user equipment receives a first rendering image sent by the cloud rendering platform through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates first pricing information according to allocation of computing resources for the rendering task, where the first pricing information is a cost for rendering the first rendered image.
S106: the user equipment acquires a rendering resource adjusting instruction input by a user.
In an embodiment of the present application, the rendering resource adjustment instruction is used to improve the rendering quality or to reduce the rendering quality.
S107: and the user equipment sends the rendering resource adjusting instruction to the cloud rendering platform through the network equipment.
S108: and the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjusting instruction, and renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image.
In a specific embodiment of the present application, when the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or in the case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
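The two branches of S108 can be sketched as a simple dispatch on the adjustment instruction. The instruction strings, step size, and lower bound are assumptions for illustration:

```python
def adjust_compute_units(current_units: int, instruction: str, step: int = 1) -> int:
    """S108 sketch: raise or lower the computing units allocated to a
    rendering task according to the rendering resource adjustment instruction.
    Instruction names and the one-unit step are hypothetical."""
    if instruction == "improve_quality":
        return current_units + step        # increase allocated resources
    if instruction == "reduce_quality":
        return max(1, current_units - step)  # decrease, but keep at least one unit
    raise ValueError(f"unknown instruction: {instruction}")

print(adjust_compute_units(4, "improve_quality"))  # 5
print(adjust_compute_units(4, "reduce_quality"))   # 3
```

After the adjustment, the platform re-renders the task with the new allocation to produce the second rendered image.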
S109: and the cloud rendering platform sends the second rendering image to the user equipment through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates second pricing information according to the adjusted computing resources allocated to the rendering task, where the second pricing information is a cost for rendering the second rendering image.
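The first and second pricing information are both derived from the resources allocated to the rendering task. A minimal sketch, assuming a linear price per unit resource per second; the rates are invented for illustration and are not the platform's real tariff:

```python
def pricing(compute_u: int, memory_mb: int, seconds: float,
            rate_compute: float = 0.02, rate_memory: float = 0.001) -> float:
    """Cost grows linearly with allocated resources and rendering time.
    The per-unit rates here are purely illustrative."""
    return round((compute_u * rate_compute + memory_mb * rate_memory) * seconds, 4)

first = pricing(compute_u=8, memory_mb=10, seconds=60)    # before adjustment
second = pricing(compute_u=16, memory_mb=20, seconds=60)  # after "improve quality"
print(first, second)  # 10.2 20.4
```

As expected, the second rendered image, produced with more computing resources, carries the higher cost; a "reduce quality" adjustment would lower it symmetrically.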
In the above example, the provider of the application may first submit a rendering task on the cloud rendering platform, and each user device may then adjust the rendering quality according to its own needs. For example, the provider of a 3D game application may first submit a rendering task to render a game scene of the 3D game; some game players who are willing to pay to improve the rendering quality of the game scene may send rendering resource adjustment instructions requesting the cloud rendering platform to provide more computing resources, while other players with lower requirements on the image quality of the game scene may send rendering resource adjustment instructions requesting the cloud rendering platform to reduce the provided computing resources, thereby reducing cost.
For simplicity, details of the rendering quality adjustment method are not repeated here; refer to fig. 1A to 1B, fig. 2 to 4, fig. 5A to 5B, fig. 6, fig. 7A to 7B, fig. 8A to 8B, fig. 9A to 9B, fig. 10A to 10B, fig. 11A to 11D, and the related descriptions.
A process of implementing adjustment of rendering quality by a user through a cloud rendering system will be described in detail below, referring to fig. 13, where fig. 13 is a flowchart of a rendering quality adjustment method according to another embodiment of the present disclosure. The rendering quality adjustment method in this embodiment may include the steps of:
S201: the cloud rendering platform receives the rendering task sent by the user equipment. Correspondingly, the user equipment sends the rendering task to the cloud rendering platform.
In a specific embodiment of the present application, the rendering task may be a task of rendering multiple frames of images provided by the same application program, or a task of rendering multiple frames of images provided by multiple application programs, which is not specifically limited herein. For example, the user may play game A the whole time, in which case the rendering task is the one initiated by the application of game A; or the user may play game A first and then game B, in which case the rendering tasks are those initiated by the applications of game A and game B.
S202: the user equipment acquires a rendering resource request instruction input by a user.
In a specific embodiment of the present application, the rendering resource request instruction includes one or more of a rendering index, a resource parameter, and a display parameter.
In a specific embodiment of the present application, the rendering index includes one or more of the number of samples per pixel (SPP), the number of light bounces, the number of triangle facets used for object modeling, the number of vertices, and the image noise. SPP may be defined as the number of rays sampled per pixel. The number of light bounces is the sum of the maximum numbers of reflections and refractions traced before the ray tracing is terminated. The higher the rendering index, for example, the larger the SPP or the greater the number of light bounces, the larger the amount of computation required by ray tracing rendering; accordingly, more resources (computing resources, storage resources, network resources, and the like) are required, and the better the rendering quality.
In a specific embodiment of the present application, the resource parameter includes one or more of the number of processors, the clock frequency of the processors, the memory size, and the network bandwidth. The higher the resource parameters, the more resources are allocated to the rendering task, the higher the frame rate of the rendered image, and the better the user experience.
In a specific embodiment of the present application, the display parameters may include the frame rate and the like; the higher the frame rate, the more resources (computing resources, storage resources, network resources, and the like) are required.
S203: and the user equipment sends the rendering resource request instruction to the cloud rendering platform through the network equipment. Correspondingly, the cloud rendering platform receives the rendering resource request instruction sent by the user equipment through the network equipment.
S204: and the cloud rendering platform allocates computing resources to perform real-time rendering on the rendering task according to the rendering resource request instruction, so that a first rendering image is obtained.
S205: and the cloud rendering platform sends the first rendering image to user equipment through network equipment. Accordingly, the user equipment receives a first rendering image sent by the cloud rendering platform through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates first pricing information according to allocation of computing resources for the rendering task, where the first pricing information is a cost for rendering the first rendered image.
S206: the user equipment acquires a rendering resource adjusting instruction input by a user.
In an embodiment of the present application, the rendering resource adjustment instruction is used to improve the rendering quality or to reduce the rendering quality.
S207: and the user equipment sends the rendering resource adjusting instruction to the cloud rendering platform through the network equipment.
S208: and the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjusting instruction, and renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image.
In a specific embodiment of the present application, when the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or in the case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
S209: and the cloud rendering platform sends the second rendering image to the user equipment through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates second pricing information according to the adjusted computing resources allocated to the rendering task, where the second pricing information is a cost for rendering the second rendering image.
For simplicity, details of the rendering quality adjustment method are not repeated here; refer to fig. 1A to 1B, fig. 2 to 4, fig. 5A to 5B, fig. 6, fig. 7A to 7B, fig. 8A to 8B, fig. 9A to 9B, fig. 10A to 10B, fig. 11A to 11D, and the related descriptions.
A process of implementing adjustment of rendering quality by a user through a cloud rendering system will be described in detail below, and referring to fig. 14, fig. 14 is a flowchart of a rendering quality adjustment method according to another embodiment of the present disclosure. The rendering quality adjustment method in this embodiment may include the steps of:
s301: the cloud rendering platform receives a rendering task sent by the first user equipment. Accordingly, the cloud rendering platform receives the rendering task sent by the first user equipment.
In a specific embodiment of the present application, the rendering task may be a task of rendering multiple frames of images provided by the same application program, or a task of rendering multiple frames of images provided by multiple application programs, which is not specifically limited herein. For example, the user may play game A the whole time, in which case the rendering task is the one initiated by the application of game A; or the user may play game A first and then game B, in which case the rendering tasks are those initiated by the applications of game A and game B.
S302: the first user equipment acquires a rendering resource request instruction input by a user.
In a specific embodiment of the present application, the rendering resource request instruction includes one or more of a rendering index, a resource parameter, and a display parameter.
In a specific embodiment of the present application, the rendering index includes one or more of the number of samples per pixel (Spp), the number of ray bounces, the number of triangular faces used for object modeling, the number of vertices, and the image noise. Spp is defined as the number of rays sampled for each pixel. The number of ray bounces is the maximum total number of reflections and refractions a ray may undergo before ray tracing is terminated. The higher the rendering index, for example, the larger the Spp or the greater the number of ray bounces, the larger the amount of computation required by ray-tracing rendering; accordingly, more resources (computing resources, storage resources, network resources, and the like) are required, and the better the rendering quality is.
In a specific embodiment of the present application, the resource parameter includes one or more of the number of processors, the clock frequency of the processors, the memory size, and the network bandwidth. The higher the resource parameter, the more resources are allocated to the rendering task, the higher the frame rate of the rendered image, and the better the user experience.
In a specific embodiment of the present application, the display parameter may include a frame rate and the like. The higher the frame rate, the more resources (computing resources, storage resources, network resources, and the like) are required.
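The monotonic relationships described above (more Spp or more ray bounces means more computation, and therefore more allocated resources) can be illustrated with a toy cost model. The following sketch is illustrative only: the cost formula, the function names, and the per-GPU capacity constant are assumptions for illustration and are not specified by the present application.

```python
import math

def relative_cost(spp, max_bounces, triangles, frame_rate):
    """Toy estimate of ray-tracing workload: cost grows linearly with
    samples per pixel, bounce depth, and frame rate, and roughly
    logarithmically with scene size (acceleration-structure traversal)."""
    return spp * max_bounces * math.log2(max(triangles, 2)) * frame_rate

def allocate_gpus(spp, max_bounces, triangles, frame_rate, per_gpu_capacity=1e5):
    """Map the estimated workload to a GPU count (rounded up)."""
    cost = relative_cost(spp, max_bounces, triangles, frame_rate)
    return max(1, math.ceil(cost / per_gpu_capacity))

# Doubling Spp doubles the estimated workload, so the platform would
# allocate roughly twice the computing resources for the task.
low = relative_cost(spp=4, max_bounces=4, triangles=1_000_000, frame_rate=30)
high = relative_cost(spp=8, max_bounces=4, triangles=1_000_000, frame_rate=30)
assert high == 2 * low
```

Any real cloud rendering platform would use profiled measurements rather than this closed-form guess, but the direction of the relationship is the point: raising a rendering index raises the resource allocation.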
S303: the first user equipment sends the rendering resource request instruction to the cloud rendering platform through the network equipment. Accordingly, the cloud rendering platform receives the rendering resource request instruction sent by the first user equipment through the network equipment.
S304: the cloud rendering platform allocates computing resources according to the rendering resource request instruction to render the rendering task in real time, thereby obtaining a first rendering image.
S305: the cloud rendering platform sends the first rendering image to the second user equipment through the network equipment. Accordingly, the second user equipment receives the first rendering image sent by the cloud rendering platform through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates first pricing information according to the computing resources allocated to the rendering task, where the first pricing information is the cost for rendering the first rendering image.
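The pricing step can be sketched as a simple function of the allocated computing resources and the rendering time. This is a hypothetical illustration: the application only states that pricing information is calculated from the resources allocated to the rendering task; the rate table and data layout below are invented.

```python
from dataclasses import dataclass

@dataclass
class Allocation:
    gpus: int
    cpu_cores: int
    memory_gb: int

# Invented hourly rates per resource unit, for illustration only.
RATES = {"gpu": 0.50, "cpu_core": 0.02, "memory_gb": 0.005}

def pricing_info(alloc: Allocation, render_seconds: float) -> float:
    """Cost of rendering one image: hourly rate of the current
    allocation, prorated by the time spent rendering."""
    hourly = (alloc.gpus * RATES["gpu"]
              + alloc.cpu_cores * RATES["cpu_core"]
              + alloc.memory_gb * RATES["memory_gb"])
    return round(hourly * render_seconds / 3600, 6)

first = pricing_info(Allocation(gpus=1, cpu_cores=8, memory_gb=16), 60.0)
# After the platform increases the allocation for higher quality, the
# second pricing information is recomputed from the new allocation.
second = pricing_info(Allocation(gpus=2, cpu_cores=16, memory_gb=32), 60.0)
assert second > first
```

The same function naturally yields the second pricing information of S309 once it is called with the adjusted allocation.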
S306: the first user equipment acquires a rendering resource adjusting instruction input by a user.
In an embodiment of the present application, the rendering resource adjustment instruction is used to improve the rendering quality or to reduce the rendering quality.
S307: the first user equipment sends the rendering resource adjusting instruction to the cloud rendering platform through the network equipment.
S308: the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjustment instruction, and renders the rendering task by using the adjusted computing resources, thereby obtaining a second rendering image.
In a specific embodiment of the present application, when the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the adjusting, by the cloud rendering platform, the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: increasing the computing resources allocated for the rendering task; or, when the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the adjusting includes: reducing the computing resources allocated for the rendering task.
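The branch in S308 can be condensed into a small sketch. The enum values, step size, and floor below are hypothetical; the application only specifies that resources are increased when the instruction asks for higher quality and reduced when it asks for lower quality.

```python
from enum import Enum

class Adjustment(Enum):
    IMPROVE_QUALITY = "improve"
    REDUCE_QUALITY = "reduce"

def adjust_allocation(gpus: int, instruction: Adjustment,
                      step: int = 1, floor: int = 1) -> int:
    """Return the new GPU count for the rendering task."""
    if instruction is Adjustment.IMPROVE_QUALITY:
        return gpus + step            # allocate more computing resources
    return max(floor, gpus - step)    # release resources, keep a minimum

assert adjust_allocation(2, Adjustment.IMPROVE_QUALITY) == 3
assert adjust_allocation(1, Adjustment.REDUCE_QUALITY) == 1  # floor holds
```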
S309: the cloud rendering platform sends the second rendering image to the second user equipment through the network equipment.
In a specific embodiment of the present application, the cloud rendering platform calculates second pricing information according to the adjusted computing resources allocated to the rendering task, where the second pricing information is a cost for rendering the second rendering image.
For simplicity, the rendering quality adjustment method is not described in detail herein; refer to fig. 1A to 1B, fig. 2 to 4, fig. 5A to 5B, fig. 6, fig. 7A to 7B, fig. 8A to 8B, fig. 9A to 9B, fig. 10A to 10B, fig. 11A to 11B, and the related descriptions.
Referring to fig. 15, fig. 15 is a cloud rendering platform according to an embodiment of the present application, including: an acquisition module 610, a receiving module 620, and a rendering engine 630.
The acquisition module 610 is configured to acquire a rendering task;
the receiving module 620 is configured to receive a rendering resource request instruction sent by a user equipment;
the rendering engine 630 is configured to allocate computing resources to the rendering task for rendering according to the rendering resource request instruction, so as to obtain a first rendering image;
the receiving module 620 is configured to receive a rendering resource adjustment instruction sent by the user equipment;
the rendering engine 630 is configured to adjust the computing resources allocated for the rendering task according to the rendering resource adjustment instruction;
the rendering engine 630 is configured to render the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image.
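The cooperation among the three modules of fig. 15 can be sketched as follows. Class names, method names, and the resource bookkeeping are invented for illustration; only the division into an acquisition module, a receiving module, and a rendering engine comes from this embodiment.

```python
class RenderingEngine:
    """Stands in for rendering engine 630: allocates, adjusts, renders."""
    def __init__(self):
        self.allocated_units = 0

    def allocate_and_render(self, task, request):
        self.allocated_units = request.get("units", 1)
        return f"first image of {task} with {self.allocated_units} units"

    def adjust_and_render(self, task, adjustment):
        self.allocated_units += 1 if adjustment == "improve" else -1
        self.allocated_units = max(1, self.allocated_units)
        return f"second image of {task} with {self.allocated_units} units"

class CloudRenderingPlatform:
    def __init__(self):
        self.engine = RenderingEngine()
        self.task = None

    def acquire_task(self, task):            # acquisition module 610
        self.task = task

    def receive_request(self, request):      # receiving module 620
        return self.engine.allocate_and_render(self.task, request)

    def receive_adjustment(self, adjustment):
        return self.engine.adjust_and_render(self.task, adjustment)

platform = CloudRenderingPlatform()
platform.acquire_task("game A frame stream")
first = platform.receive_request({"units": 2})
second = platform.receive_adjustment("improve")
assert "3 units" in second
```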
For simplicity, the cloud rendering platform is not described in detail herein; refer to fig. 1A to 1B, fig. 2 to 4, fig. 5A to 5B, fig. 6, fig. 7A to 7B, fig. 8A to 8B, fig. 9A to 9B, fig. 10A to 10B, fig. 11A to 11B, and the related descriptions.
The cloud rendering system provided by the application comprises user equipment, network equipment and a cloud rendering platform. The user equipment can communicate with the cloud rendering platform through the network equipment. The user device may be a VR device, a computer, a smartphone, and so on. The cloud rendering platform includes one or more cloud rendering nodes.
Taking the user equipment being an intelligent terminal as an example, fig. 16 is a structural block diagram of an intelligent terminal in an implementation. As shown in fig. 16, the smart terminal may include: a baseband chip 710, a memory 715 including one or more computer-readable storage media, a Radio Frequency (RF) module 716, and a peripheral system 717. These components may communicate over one or more communication buses 714.
The peripheral system 717 is mainly used to implement an interactive function between the smart terminal and a user/external environment, and mainly includes an input/output device of the smart terminal. In particular implementations, the peripheral system 717 may include: a touch screen controller 718, a key controller 719, an audio controller 720, and a sensor management module 721. Wherein each controller may be coupled to a respective peripheral device such as a touch screen 723, buttons 724, audio circuitry 725, and sensors 726. In some embodiments, the gesture sensors in sensors 726 may be used to receive gesture control operations of user input. The pressure sensor of the sensors 726 may be disposed below the touch screen 723, and may be used to collect touch pressure applied to the touch screen 723 when a user inputs a touch operation through the touch screen 723. Peripheral system 717 may also include other I/O peripherals, as desired.
The baseband chip 710 may integrate: one or more processors 711, a clock module 712, and a power management module 713. The clock module 712 integrated in the baseband chip 710 is mainly used to generate the clocks required for data transmission and timing control for the processor 711. The power management module 713 integrated in the baseband chip 710 is mainly used to provide stable and high-precision voltages for the processor 711, the RF module 716, and the peripheral system.
The Radio Frequency (RF) module 716 is used for receiving and transmitting RF signals, and mainly integrates a receiver and a transmitter of the smart terminal. The Radio Frequency (RF) module 716 communicates with a communication network and other communication devices through radio frequency signals. In particular implementations, the Radio Frequency (RF) module 716 may include, but is not limited to: an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chip, a SIM card, a storage medium, and the like. In addition, the rf module 716 may further include a short-range wireless communication module such as WIFI, bluetooth, etc. In some embodiments, the Radio Frequency (RF) module 716 may be implemented on a separate chip.
The memory 715 may include a Random Access Memory (RAM) or a flash Memory (flash Memory), and may also be a Read-only Memory (ROM), a Hard Disk Drive (HDD), or a Solid-state Drive (SSD). The memory 715 may store an operating system, communication programs, user interface programs, browsers, and rendering applications. The rendering applications include game applications and other rendering applications.
Taking the user device as an example of a computer, fig. 17 is a block diagram of a computer in an implementation manner. As shown in fig. 17, the computer may include: a host 810, an output device 820, and an input device 830.
The host 810 may integrate: one or more processors, a clock module, and a power management module. The clock module integrated in the host 810 is mainly used to generate the clocks required for data transmission and timing control for the processor. The power management module integrated in the host 810 is mainly used to provide stable, high-precision voltages to the processor, the output device 820, and the input device 830. The host 810 also incorporates a memory for storing various software programs and/or sets of instructions. In a specific implementation, the memory may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory may store an operating system, such as an embedded operating system like ANDROID, IOS, WINDOWS, or LINUX, as well as a communication program that may be used to communicate with one or more input devices or output devices. The memory may also store a user interface program, which can vividly display the content of the browser through a graphical operation interface and receive the control operations of the user on the browser through input controls such as menus, dialog boxes, and keys. The memory may further store browsers and rendering applications; the rendering applications include game applications and other rendering applications.
The output device 820 mainly includes a Display, which may include a Cathode Ray Tube (CRT) Display, a Plasma Display Panel (PDP), a Liquid Crystal Display (LCD), and the like. Taking the display as an LCD as an example, the liquid crystal display includes a liquid crystal panel and a backlight module, wherein the liquid crystal panel includes a polarizing film, a glass substrate, a black matrix, a color filter, a protective film, a common electrode, an alignment layer, a liquid crystal layer (liquid crystal, spacer, sealant), a capacitor, a display electrode, a prism layer, and a light scattering layer. The backlight module includes: an illumination light source, a reflection plate, a light guide plate, a diffusion sheet, a brightness enhancement film (prism sheet), a frame, and the like.
The input device 830 may include a keyboard and a mouse. The keyboard and the mouse are the most common and most important input devices. English letters, numbers, punctuation marks, and the like can be input into the computer through the keyboard, so as to send commands, input data, and so on, and the mouse allows quick positioning in horizontal and vertical coordinates, thereby simplifying operations. The keyboard may include a mechanical keyboard (Mechanical), a plastic film keyboard (Membrane), a conductive rubber keyboard, a contactless electrostatic capacitance keyboard (Capacitive), and the like, and the mouse may include a rolling-ball mouse, an optical mouse, a wireless mouse, and the like.
FIG. 18 is a block diagram of a structure of a cloud rendering platform of an implementation. The cloud rendering platform may include one or more cloud rendering nodes. The cloud rendering node includes: a processing system 910, a first memory 920, a smart card 930, and a bus 940.
The processing system 910 may be heterogeneous in structure, that is, it includes one or more general-purpose processors, which may be any type of device capable of processing electronic instructions, including a Central Processing Unit (CPU), a microprocessor, a microcontroller, a main processor, a controller, an Application-Specific Integrated Circuit (ASIC), and the like, as well as one or more special-purpose processors, for example, GPUs or AI chips. The general-purpose processor executes various types of digitally stored instructions, such as software or firmware programs stored in the first memory 920. In a specific embodiment, the general-purpose processor may be an x86 processor or the like. The general-purpose processor sends commands to the first memory 920 through a physical interface to accomplish storage-related tasks; for example, the commands the general-purpose processor may provide include read commands, write commands, copy commands, erase commands, and so on. The commands may specify operations related to particular pages and blocks of the first memory 920. The special-purpose processors are used to perform the complex operations of image rendering, and the like.
The first memory 920 may include a Random Access Memory (RAM) or a flash Memory (flash Memory), and may also be a Read-only Memory (ROM), a Hard Disk Drive (HDD), or a Solid-state Drive (SSD). The first memory 920 stores the program code that implements the rendering engine and the rendering applications.
The smart network card 930 is also called a network interface controller, a network interface card, or a Local Area Network (LAN) adapter. Each smart network card 930 has a unique MAC address, which is burned into a read-only memory chip by the manufacturer of the smart network card 930 during production. The smart network card 930 includes a processor 931, a second memory 932, and a transceiver 933. The processor 931 is similar to a general-purpose processor; however, the performance requirements on the processor 931 may be lower than those on a general-purpose processor. In a specific embodiment, the processor 931 may be an ARM processor or the like. The second memory 932 may also be a flash memory, an HDD, or an SSD, and the storage capacity of the second memory 932 may be smaller than that of the first memory 920. The transceiver 933 may be configured to receive and transmit packets and upload the received packets to the processor 931 for processing. The smart network card 930 may further include a plurality of ports, and the ports may be any one or more of the three types of interfaces: a thick cable interface, a thin cable interface, and a twisted pair interface.
For simplicity, the cloud rendering system is not described in detail herein; refer to fig. 1A to 1B, fig. 2 to 4, fig. 5A to 5B, fig. 6, fig. 7A to 7B, fig. 8A to 8B, fig. 9A to 9B, fig. 10A to 10B, fig. 11A to 11B, and the related descriptions. Also, the user device may perform the steps performed by the user device in the rendering quality adjustment methods illustrated in fig. 12 and fig. 13, and the cloud rendering platform may perform the steps performed by the cloud rendering platform in the rendering quality adjustment methods illustrated in fig. 12 and fig. 13. In addition, the acquisition module 610 and the receiving module 620 in fig. 15 may be implemented by the smart network card 930 in this embodiment, and the rendering engine 630 in fig. 15 may be implemented by the processing system 910 in this embodiment executing the program code in the first memory 920.
According to the above solution, when the user equipment sends a rendering resource adjustment instruction, the cloud rendering platform adjusts the computing resources in real time according to the rendering resource adjustment instruction, so that the rendering quality is dynamically adjusted and different requirements of users are met.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.

Claims (10)

1. A rendering quality adjustment method, comprising:
the cloud rendering platform acquires a rendering task;
the cloud rendering platform receives a rendering resource request instruction sent by user equipment;
the cloud rendering platform allocates computing resources for the rendering task according to the rendering resource request instruction to perform rendering, so that a first rendering image is obtained;
the cloud rendering platform receives a rendering resource adjusting instruction sent by the user equipment;
the cloud rendering platform adjusts the computing resources allocated to the rendering task according to the rendering resource adjusting instruction;
and the cloud rendering platform renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image.
2. The method of claim 1,
when the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or
In a case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
3. The method according to claim 1 or 2,
after the cloud rendering platform allocates computing resources for the rendering task according to the rendering resource request instruction to perform rendering, thereby obtaining a first rendering image, the method further includes:
calculating first pricing information according to computing resources allocated for the rendering task, wherein the first pricing information is a cost for rendering the first rendered image;
after the cloud rendering platform renders the rendering task by using the adjusted computing resources allocated to the rendering task, so as to obtain a second rendering image, the method further includes:
and calculating second pricing information according to the adjusted computing resources allocated for the rendering task, wherein the second pricing information is the cost for rendering the second rendering image.
4. The method of any one of claims 1 to 3, wherein the rendering task is rendered using ray tracing rendering, and the rendering resource request instruction includes one or more of a rendering index, a resource parameter, and a display parameter; the rendering index comprises one or more of sampling number Spp per pixel, light rebound times, object modeling triangular surface number, vertex number and picture noise, the resource parameter comprises one or more of the number of processors, dominant frequency of the processors, memory size and network bandwidth, and the display parameter comprises frame rate.
5. A cloud rendering platform, comprising: the system comprises an acquisition module, a receiving module and a rendering engine;
the acquisition module is used for acquiring a rendering task;
the receiving module is used for receiving a rendering resource request instruction sent by user equipment;
the rendering engine is used for allocating computing resources for the rendering task according to the rendering resource request instruction so as to perform rendering, and therefore a first rendering image is obtained;
the receiving module is used for receiving a rendering resource adjusting instruction sent by the user equipment;
the rendering engine is used for adjusting the computing resources distributed to the rendering task according to the rendering resource adjusting instruction;
and the rendering engine is used for rendering the rendering task by using the adjusted computing resources allocated to the rendering task so as to obtain a second rendering image.
6. The platform of claim 5,
when the rendering resource adjustment instruction is used to improve the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: increasing the computational resources allocated for the rendering task; or
In a case that the rendering resource adjustment instruction is used to reduce the rendering quality of the rendering task, the cloud rendering platform adjusting the computing resources allocated to the rendering task according to the rendering resource adjustment instruction includes: reducing the computational resources allocated for the rendering task.
7. The platform of claim 5 or 6, wherein the cloud rendering platform further comprises a pricing module,
the pricing module is used for calculating first pricing information according to computing resources allocated for the rendering task, wherein the first pricing information is the cost for rendering the first rendering image;
the pricing module is further configured to calculate second pricing information according to the adjusted computing resources allocated to the rendering task, wherein the second pricing information is a cost for rendering the second rendered image.
8. The platform of any one of claims 5 to 7, wherein the rendering tasks are rendered using ray tracing rendering, and the rendering resource request instructions include one or more of rendering metrics, resource parameters, and display parameters; the rendering index comprises one or more of sampling number Spp per pixel, light rebound times, object modeling triangular surface number, vertex number and picture noise, the resource parameter comprises one or more of the number of processors, dominant frequency of the processors, memory size and network bandwidth, and the display parameter comprises frame rate.
9. A rendering platform comprising at least one rendering node, each rendering node comprising a processor and a memory, the processor executing a program in the memory to perform the method of any of claims 1 to 4.
10. A computer-readable storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1 to 4.
CN202010449900.XA 2020-03-27 2020-05-25 Rendering quality adjusting method and related equipment Pending CN113516774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/083467 WO2021190651A1 (en) 2020-03-27 2021-03-27 Rendering quality adjustment method and related device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN2020102315162 2020-03-27
CN202010231516 2020-03-27
CN202010351121 2020-04-28
CN2020103511216 2020-04-28

Publications (1)

Publication Number Publication Date
CN113516774A (en) 2021-10-19

Family

ID=78060329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010449900.XA Pending CN113516774A (en) 2020-03-27 2020-05-25 Rendering quality adjusting method and related equipment

Country Status (1)

Country Link
CN (1) CN113516774A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160041A1 (en) * 2022-02-25 2023-08-31 腾讯科技(深圳)有限公司 Image rendering method and apparatus, computer device, computer-readable storage medium and computer program product
WO2024016679A1 (en) * 2022-07-22 2024-01-25 华为云计算技术有限公司 Image rendering processing method and related device


Similar Documents

Publication Publication Date Title
WO2021190651A1 (en) Rendering quality adjustment method and related device
CN110838162B (en) Vegetation rendering method and device, storage medium and electronic equipment
CN113838184A (en) Rendering method, device and system
CN113516774A (en) Rendering quality adjusting method and related equipment
CN111491208B (en) Video processing method and device, electronic equipment and computer readable medium
CN103632337A (en) Real-time order-independent transparent rendering
WO2022022729A1 (en) Rendering control method, device and system
CN115359226B (en) Texture compression-based VR display method for Hongmong system, electronic device and medium
CN110930497A (en) Global illumination intersection acceleration method and device and computer storage medium
CN109960887B (en) LOD-based model making method and device, storage medium and electronic equipment
CN103927223A (en) Serialized Access To Graphics Resources
CN111275803A (en) 3D model rendering method, device, equipment and storage medium
US20230186554A1 (en) Rendering method, device, and system
CN112565883A (en) Video rendering processing system and computer equipment for virtual reality scene
CN112367295B (en) Plug-in display method and device, storage medium and electronic equipment
CN112473138B (en) Game display control method and device, readable storage medium and electronic equipment
CN112967369A (en) Light ray display method and device
CN112348965A (en) Imaging method, imaging device, electronic equipment and readable storage medium
CN114820331A (en) Image noise reduction method and device
CN108335362A (en) Light control method, device in virtual scene and VR equipment
CN112988294B (en) Method and device for optimizing virtual pointer of RH850 liquid crystal instrument
US11741657B2 (en) Image processing method, electronic device, and storage medium
WO2024082901A1 (en) Data processing method and apparatus for cloud game, and electronic device, computer-readable storage medium and computer program product
EP4113262A1 (en) Method for interaction between display device and terminal device, and storage medium and electronic device
CN114581588A (en) Rendering method, device and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220211

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Applicant after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination