CN112634122A - Cloud rendering method and system, computer equipment and readable storage medium - Google Patents

Cloud rendering method and system, computer equipment and readable storage medium

Info

Publication number
CN112634122A
CN112634122A (application CN202011388725.4A)
Authority
CN
China
Prior art keywords
execution
rendering
scheduling
data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011388725.4A
Other languages
Chinese (zh)
Inventor
李萌迪
谭述安
李承泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tiya Digital Technology Co ltd
Original Assignee
Shenzhen Tiya Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tiya Digital Technology Co ltd filed Critical Shenzhen Tiya Digital Technology Co ltd
Priority to CN202011388725.4A
Publication of CN112634122A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention is applicable to the field of computers and provides a cloud rendering method, a cloud rendering system, computer equipment and a readable storage medium. The cloud rendering system comprises a user end, a scheduling end and execution ends. The user end initiates a rendering request to the scheduling end, the rendering request comprising data to be rendered and a connection address of the user end. The scheduling end receives the rendering request and distributes the data to be rendered to one or more target execution ends, each target execution end being one of the execution ends. The target execution end renders the data to be rendered and sends the rendering result to the user end according to the connection address of the user end. With this scheme, data to be rendered on a user terminal with limited rendering capability is transferred to execution ends with greater rendering capability and speed, so that resources are allocated reasonably, a large amount of the user terminal's system resources is freed, the user terminal can output video/images of high rendering quality without high-end hardware support, and the cost of the user terminal is reduced.

Description

Cloud rendering method and system, computer equipment and readable storage medium
Technical Field
The invention belongs to the field of computers, and particularly relates to a cloud rendering method, a cloud rendering system, computer equipment and a readable storage medium.
Background
With the development of science and technology, image and video playback devices have become part of every aspect of people's work and life, and the demands placed on image rendering keep growing.
More complex image rendering requires more sophisticated, higher-end hardware, and such hardware is usually expensive and impractical for ordinary users. The devices of ordinary users typically have limited rendering capability and struggle to support complex image and video rendering; even when they can support it, the rendering consumes a great deal of the user terminal's power and system resources.
Disclosure of Invention
The embodiment of the invention aims to provide a cloud rendering method, and aims to solve the problem of low rendering capability of a user terminal.
The embodiment of the invention is implemented as a cloud rendering system, which comprises: a user end, a scheduling end and an execution end, wherein the user end, the scheduling end and the execution end can perform data interaction;
the user side is used for initiating a rendering request to the scheduling side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
the scheduling end is used for receiving the rendering request and distributing the data to be rendered to one or more target execution ends, wherein the target execution end is any one of the execution ends;
the target execution end is used for rendering the data to be rendered and sending a rendering result to the user end according to the connection address of the user end.
Optionally, the scheduling end is provided with an execution end information database, where the execution end information database at least includes a unique identifier of the execution end, a system resource idle rate, and update time;
the scheduling end updates the system resource idle rate in the database by monitoring the system resource idle rate of the execution end; and when the rendering request is received, the execution ends are sorted by system resource idle rate and the one or more highest-ranked execution ends are selected as the target execution ends.
Optionally, the scheduling end is further configured to calculate transmission efficiency between the execution end and the user end;
and the scheduling end at least determines the target execution end according to the system resource idle rate of the execution end and the transmission efficiency between the execution end and the user end.
Optionally, the execution-side information database further includes processing efficiency of the execution side on each type of data to be rendered;
the scheduling end determines the target execution end at least according to the system resource idle rate of the execution end, the transmission efficiency between the execution end and the user end, and the processing efficiency of the execution end on each type of data to be rendered.
Optionally, when an exception occurs at the target execution end, a new target execution end is selected by the scheduling end to process the rendering request.
Optionally, the scheduling end is further configured to count the accumulated workload of each execution end and to calculate reward points for the execution end according to the accumulated workload.
Optionally, the scheduling end is further configured to decompose the data to be rendered, and distribute the decomposed data to the target execution end.
The embodiment of the invention also provides a cloud rendering method, which is applied to a scheduling end and comprises the following steps:
receiving a rendering request sent by a user side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
distributing the data to be rendered to one or more target execution ends, wherein each target execution end is any one of the execution ends, so that the data to be rendered is rendered by the target execution end and a rendering result is sent to the user end.
The embodiment of the present invention further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the cloud rendering method.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the processor is enabled to execute the steps of the cloud rendering method.
The embodiment of the application provides a cloud rendering system in which the scheduling end centrally receives users' rendering requests, distributes them to one or more target execution ends according to set rules, and returns the processing results produced by those target execution ends. In this way, data to be rendered on a user terminal with limited rendering capability can be transferred to execution ends with greater rendering capability and speed, so that resources are allocated reasonably, a large amount of the user terminal's system resources is freed, the user terminal can output video/images of high rendering quality without high-end hardware support, and the cost of the user terminal is reduced.
Drawings
Fig. 1 is an application environment diagram of a cloud rendering method according to an embodiment of the present invention;
fig. 2 is a block diagram of a cloud rendering system according to an embodiment of the present invention.
Fig. 3 is a flowchart of a cloud rendering method according to an embodiment of the present invention;
FIG. 4 is a block diagram showing an internal configuration of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, a first xx script may be referred to as a second xx script, and similarly, a second xx script may be referred to as a first xx script, without departing from the scope of the present application.
Fig. 1 is an application environment diagram of a cloud rendering method according to an embodiment of the present invention, where the application environment includes a user terminal 110, a WEB server 120, and an execution node 130.
The WEB server 120 and the execution node 130 may each be one or more; each may be an independent physical server or terminal, a server cluster composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud hosting, cloud databases, cloud storage and CDN.
The user terminal 110 may be one or more, and may be, but is not limited to, a smart phone, a tablet computer, a laptop computer, a desktop computer, a smart speaker, a smart watch, and the like. The user terminal 110, the WEB server 120 and the execution node 130 may be connected to one another through a network (the dotted lines in fig. 1 represent network communication relationships); the present invention is not limited in this respect.
As shown in fig. 2, in an embodiment, a cloud rendering system is provided, which may specifically include:
a user end 10, a scheduling end 20 and a plurality of execution ends 30 capable of performing data interaction;
the user end 10 is configured to initiate a rendering request to the scheduling end 20, where the rendering request at least includes data to be rendered and a connection address of the user end;
the scheduling end 20 is configured to receive the rendering request and allocate it to one or more target execution ends 31, where each target execution end 31 is any one of the execution ends 30;
after receiving the rendering request, the target execution end 31 renders the data to be rendered and sends the rendering result to the user end 10 according to the connection address of the user end 10.
In an embodiment, referring to fig. 1, the user end 10 is deployed on the user terminal 110, the scheduling end 20 is deployed on the WEB server 120, and the execution end 30 is deployed on the execution node 130; the cloud rendering system is thus formed by the user end 10, the scheduling end 20 and the execution end 30.
In one embodiment, the user end 10 may be an application program, a function plug-in or a browser on the user terminal 110, and is not specifically limited.
In one embodiment, the connection address of the user terminal may be a network address, such as an IP (Internet Protocol) address.
In one embodiment, the user may access the scheduling end 20 directly through the user end 10, or may access the scheduling end 20 through a user account at the user end 10, which is not specifically limited.
In an embodiment, when the user end 10 runs a video, image or animation scene through a media player, program or plug-in (e.g. a game program, a web game client, a video media player), and the picture rendering involved is too heavy or too complex for the device hardware of the user terminal 110 to support, a rendering request may be sent to the scheduling end 20. The rendering request at least comprises the data to be rendered and the connection address of the user end, and may optionally also include a unique identifier of the user terminal, user account information, the encoding type of the data to be rendered, the file size of the data to be rendered, a rendering accuracy requirement, a feedback rate requirement, and the like.
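To make the shape of such a request concrete, the following is a minimal Python sketch of the fields listed above; all of the attribute names (render_data, client_address and so on) are illustrative assumptions and not names taken from the embodiment.
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RenderRequest:
    # Mandatory fields named in the embodiment.
    render_data: bytes                    # data to be rendered
    client_address: str                   # connection address of the user end, e.g. an IP address
    # Optional fields the embodiment mentions.
    client_id: Optional[str] = None       # unique identifier of the user terminal
    account: Optional[str] = None         # user account information
    encoding: Optional[str] = None        # encoding type of the data to be rendered
    size_bytes: Optional[int] = None      # file size of the data to be rendered
    precision: Optional[str] = None       # rendering accuracy requirement
    max_latency_ms: Optional[int] = None  # feedback rate requirement
```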
In one embodiment, there are one or more execution ends 30, whose role is to execute the rendering tasks allocated by the scheduling end 20. They are generally professional image and video rendering devices, configured with a high-performance central processing unit (CPU), graphics processing unit (GPU), memory and the like, and can carry out high-end, complex rendering with high throughput and speed. Transferring the data to be rendered from the user end to the execution ends thus achieves an optimal configuration of resources.
In one embodiment, the scheduling end 20 is provided with an execution end information database, where the execution end information database at least includes a unique identifier of an execution end, a system resource idle rate, and an update time; optionally, the executing end information database further includes: rendering capability evaluation values of the execution side for various types of data (for assigning corresponding tasks according to the capability), a historical failure rate of the execution side (which may be used as a reference in the execution side elimination mechanism), and the like.
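A record of this execution end information database might look like the following Python sketch; the field names, types and value ranges are assumptions made purely for illustration.
```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ExecutorRecord:
    executor_id: str    # unique identifier of the execution end
    idle_rate: float    # system resource idle rate, assumed normalised to [0.0, 1.0]
    updated_at: float   # update time, e.g. a Unix timestamp
    # Optional fields the embodiment mentions.
    capability: Dict[str, float] = field(default_factory=dict)  # rendering capability per data type
    failure_rate: float = 0.0                                   # historical failure rate
```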
The scheduling end updates the system resource idle rates in this database by monitoring the execution ends; when a rendering request is received, it sorts the execution ends by system resource idle rate and selects the one or more highest-ranked execution ends as the target execution ends. The system resource idle rate characterizes the current workload of an execution end and the availability of its processing resources: in general, the larger the idle rate, the lighter the current workload and the more processing resources are available, so more rendering tasks can be carried; conversely, a low idle rate means the execution end is not suited to taking on further heavy rendering tasks. When a rendering request is received, in order to render the data efficiently, the scheduling end therefore ranks the current execution ends by idle rate and picks the top one or the top few to execute the rendering task. It can be understood that the scheduling end may also decompose the data to be rendered and distribute the parts to the target execution ends, i.e. split the data sent by one user end into multiple tasks that are processed in parallel by multiple target execution ends, thereby improving rendering efficiency; a sketch of this selection and decomposition follows.
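The sketch below (continuing the ExecutorRecord module above) shows one way the selection and decomposition could work; ranking purely by idle rate and splitting the raw bytes into equal ranges are simplifying assumptions, since a real system would more likely split along frames, tiles or scene sub-graphs.
```python
from typing import List

def select_targets(executors: List[ExecutorRecord], k: int = 1) -> List[ExecutorRecord]:
    """Rank execution ends by system resource idle rate and keep the top k as targets."""
    ranked = sorted(executors, key=lambda e: e.idle_rate, reverse=True)
    return ranked[:k]

def split_render_data(render_data: bytes, n_parts: int) -> List[bytes]:
    """Decompose the data to be rendered into roughly equal sub-tasks, one per target."""
    n_parts = max(1, min(n_parts, len(render_data)))
    base, extra = divmod(len(render_data), n_parts)
    parts, start = [], 0
    for i in range(n_parts):
        end = start + base + (1 if i < extra else 0)
        parts.append(render_data[start:end])
        start = end
    return parts
```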
In one embodiment, the scheduling end is further configured to calculate transmission efficiency between the execution end and the user end; and the scheduling end at least determines the target execution end according to the system resource idle rate of the execution end and the transmission efficiency between the execution end and the user end.
In this embodiment, the transmission efficiency between an execution end and the user end mainly refers to the speed at which data can be sent between them, and this index is also taken into account when the scheduling end allocates rendering tasks. The scheduling end weighs the system resource idle rate of each execution end together with its transmission efficiency to the user end in order to determine the currently optimal target execution end; specifically, a weighting formula may be used to compute a composite score, and the priority of each execution end for the rendering task is then determined by ranking these scores. The weight of each index can be fixed through actual testing, i.e. obtained after a limited number of trials and comparisons.
In one embodiment, the execution end information database further includes the processing efficiency of each execution end for each type of data to be rendered; the scheduling end then determines the target execution end at least according to the system resource idle rate of the execution end, the transmission efficiency between the execution end and the user end, and the processing efficiency of the execution end for each type of data to be rendered. It can be understood that different execution ends may process different types of rendering data with different efficiency: some execution ends may have hardware that supports only a specific data type, while others can render several types but perform better on some types and worse on others. Therefore, when allocating rendering tasks, the scheduling end should additionally consider each execution end's processing efficiency for the type of data at hand; this efficiency can be treated as one more evaluation index, given its own weight, and incorporated into the weighting formula of the previous embodiment, as sketched below.
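One plausible reading of this weighting is the linear combination sketched below, again reusing the ExecutorRecord sketch; the particular weights, the 0-to-1 normalisation of the transmission efficiency and the function names are all assumptions, since the embodiments leave the formula and its weights to be fixed by testing.
```python
from typing import Dict, List

def composite_score(executor: ExecutorRecord,
                    transfer_efficiency: float,  # user end <-> execution end throughput, normalised to [0, 1]
                    data_type: str,
                    w_idle: float = 0.5,
                    w_transfer: float = 0.3,
                    w_type: float = 0.2) -> float:
    """Weighted sum of the three indices discussed in the embodiments (placeholder weights)."""
    type_efficiency = executor.capability.get(data_type, 0.0)
    return w_idle * executor.idle_rate + w_transfer * transfer_efficiency + w_type * type_efficiency

def select_targets_weighted(executors: List[ExecutorRecord],
                            transfer: Dict[str, float],  # executor_id -> measured transmission efficiency
                            data_type: str,
                            k: int = 1) -> List[ExecutorRecord]:
    """Rank execution ends by composite score and keep the top k as targets."""
    ranked = sorted(executors,
                    key=lambda e: composite_score(e, transfer.get(e.executor_id, 0.0), data_type),
                    reverse=True)
    return ranked[:k]
```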
In one embodiment, when an exception occurs at a target execution end, a new target execution end is selected by the scheduling end to process the rendering request. Specifically, so that an exception or failure at an execution end does not prevent the rendering task from completing, the scheduling end 20 monitors the working state of each target execution end in real time or periodically; when a target execution end fails to report its task progress at the agreed interval, or actively reports exception information, the scheduling end reassigns that execution end's task, as sketched below.
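A minimal sketch of such monitoring and reassignment follows, continuing the module above; the heartbeat timeout, the task bookkeeping structure and the rule of falling back to the idlest remaining execution end are assumptions, since the embodiment only requires that an abnormal target be replaced.
```python
import time
from typing import Dict, List

HEARTBEAT_TIMEOUT_S = 30  # assumed interval; the embodiment only says progress must be fed back "regularly"

def check_and_reassign(tasks: Dict[str, dict], executors: List[ExecutorRecord]) -> None:
    """Reassign any task whose target execution end missed its heartbeat or reported an exception.

    Each task is assumed to be a dict with keys "executor_id", "last_heartbeat" and "error".
    """
    now = time.time()
    for task in tasks.values():
        timed_out = now - task["last_heartbeat"] > HEARTBEAT_TIMEOUT_S
        if timed_out or task.get("error"):
            candidates = [e for e in executors if e.executor_id != task["executor_id"]]
            if candidates:
                new_target = max(candidates, key=lambda e: e.idle_rate)  # fall back to the idlest peer
                task["executor_id"] = new_target.executor_id
                task["last_heartbeat"] = now
                task["error"] = None
```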
In one embodiment, the scheduling end is further configured to count the accumulated workload of each execution end and to calculate reward points for it according to that workload. Specifically, an execution end can be rewarded for the tasks it has executed: the scheduling end counts each execution end's accumulated workload, calculates reward points and grants the corresponding rewards, which helps keep the execution ends motivated.
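The embodiment does not fix how points are derived from workload; the short sketch below assumes a simple proportional rule purely for illustration.
```python
def reward_points(accumulated_workload: float, points_per_unit: float = 1.0) -> int:
    """Convert an execution end's accumulated workload into reward points (proportional rule assumed)."""
    return int(accumulated_workload * points_per_unit)
```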
In the foregoing embodiment of the present application, a cloud rendering system is provided in which the scheduling end centrally receives users' rendering requests, distributes them to one or more target execution ends according to set rules, and returns the processing results produced by those target execution ends. In this way, data to be rendered on a user terminal with limited rendering capability can be transferred to execution ends with greater rendering capability and speed. This not only allocates resources reasonably and frees a large amount of the user terminal's system resources, but also lets the user obtain high-quality rendered pictures through a lightweight client program, plug-in or web page, for example smoothly watching high-definition videos and live streams; that is, the user terminal can output video/images of high rendering quality without high-end hardware support, and the cost of the user terminal is reduced.
As shown in fig. 3, in an embodiment, a cloud rendering method is provided and applied to the scheduling end 20, and the method includes:
step S202, receiving a rendering request sent by a user side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
step S204, distributing the data to be rendered to one or more target execution ends, where the target execution end is any one of the execution ends, so as to render the data to be rendered through the target execution end and send a rendering result to the user end.
In one embodiment, the user end 10 may be an application program, a function plug-in or a browser on the user terminal 110, and is not specifically limited.
In one embodiment, the connection address of the user end may be a network address, such as an IP address.
In one embodiment, the user may access the scheduling end 20 directly through the user end 10, or may access the scheduling end 20 through a user account at the user end 10, which is not specifically limited.
In an embodiment, when the user end 10 runs a video, image or animation scene through a media player, program or plug-in (e.g. a game program, a web game client, a video media player), and the picture rendering involved is too heavy or too complex for the device hardware of the user terminal 110 to support, a rendering request may be sent to the scheduling end 20. The rendering request at least comprises the data to be rendered and the connection address of the user end, and may optionally also include a unique identifier of the user terminal, user account information, the encoding type of the data to be rendered, the file size of the data to be rendered, a rendering accuracy requirement, a feedback rate requirement, and the like.
In one embodiment, there are one or more execution ends 30, whose role is to execute the rendering tasks allocated by the scheduling end 20. They are generally professional image and video rendering devices, configured with a high-performance CPU, GPU, memory and the like, and can carry out high-end, complex rendering with high throughput and speed. Transferring the data to be rendered from the user end to the execution ends thus achieves an optimal configuration of resources.
In one embodiment, the scheduling end 20 is provided with an execution end information database, where the execution end information database at least includes a unique identifier of an execution end, a system resource idle rate, and an update time; optionally, the executing end information database further includes: rendering capability evaluation values of the execution side for various types of data (for assigning corresponding tasks according to the capability), a historical failure rate of the execution side (which may be used as a reference in the execution side elimination mechanism), and the like.
The scheduling end updates the system resource idle rates in this database by monitoring the execution ends; when a rendering request is received, it sorts the execution ends by system resource idle rate and selects the one or more highest-ranked execution ends as the target execution ends. The system resource idle rate characterizes the current workload of an execution end and the availability of its processing resources: in general, the larger the idle rate, the lighter the current workload and the more processing resources are available, so more rendering tasks can be carried; conversely, a low idle rate means the execution end is not suited to taking on further heavy rendering tasks. When a rendering request is received, in order to render the data efficiently, the scheduling end therefore ranks the current execution ends by idle rate and picks the top one or the top few to execute the rendering task. It can be understood that the scheduling end may also decompose the data to be rendered and distribute the parts to the target execution ends, i.e. split the data sent by one user end into multiple tasks that are processed in parallel by multiple target execution ends, thereby improving rendering efficiency.
In one embodiment, the scheduling end is further configured to calculate transmission efficiency between the execution end and the user end; and the scheduling end at least determines the target execution end according to the system resource idle rate of the execution end and the transmission efficiency between the execution end and the user end.
In this embodiment, the transmission efficiency between an execution end and the user end mainly refers to the speed at which data can be sent between them, and this index is also taken into account when the scheduling end allocates rendering tasks. The scheduling end weighs the system resource idle rate of each execution end together with its transmission efficiency to the user end in order to determine the currently optimal target execution end; specifically, a weighting formula may be used to compute a composite score, and the priority of each execution end for the rendering task is then determined by ranking these scores. The weight of each index can be fixed through actual testing, i.e. obtained after a limited number of trials and comparisons.
In one embodiment, the execution end information database further includes the processing efficiency of each execution end for each type of data to be rendered; the scheduling end then determines the target execution end at least according to the system resource idle rate of the execution end, the transmission efficiency between the execution end and the user end, and the processing efficiency of the execution end for each type of data to be rendered. It can be understood that different execution ends may process different types of rendering data with different efficiency: some execution ends may have hardware that supports only a specific data type, while others can render several types but perform better on some types and worse on others. Therefore, when allocating rendering tasks, the scheduling end should additionally consider each execution end's processing efficiency for the type of data at hand; this efficiency can be treated as one more evaluation index, given its own weight, and incorporated into the weighting formula of the previous embodiment.
In one embodiment, when an exception occurs at a target execution end, a new target execution end is selected by the scheduling end to process the rendering request. Specifically, so that an exception or failure at an execution end does not prevent the rendering task from completing, the scheduling end 20 monitors the working state of each target execution end in real time or periodically; when a target execution end fails to report its task progress at the agreed interval, or actively reports exception information, the scheduling end reassigns that execution end's task.
In one embodiment, the scheduling end is further configured to count the accumulated workload of each execution end and to calculate reward points for it according to that workload. Specifically, an execution end can be rewarded for the tasks it has executed: the scheduling end counts each execution end's accumulated workload, calculates reward points and grants the corresponding rewards, which helps keep the execution ends motivated.
In the embodiments of the present application, a cloud rendering method is provided in which the scheduling end centrally receives users' rendering requests, distributes them to one or more target execution ends according to set rules, and returns the processing results produced by those target execution ends. In this way, data to be rendered on a user terminal with limited rendering capability can be transferred to execution ends with greater rendering capability and speed. This not only allocates resources reasonably and frees a large amount of the user terminal's system resources, but also lets the user obtain high-quality rendered pictures through a lightweight client program, plug-in or web page, for example smoothly watching high-definition videos and live streams; that is, the user terminal can output video/images of high rendering quality without high-end hardware support, and the cost of the user terminal is reduced.
FIG. 4 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be the user terminal 110, the WEB server 120 or the execution node 130 in fig. 1. As shown in fig. 4, the computer device includes a processor, a memory, a network interface, an input device and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the cloud rendering method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the cloud rendering method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, keys, a trackball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is proposed, the computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
receiving a rendering request sent by a user side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
distributing the data to be rendered to one or more target execution ends, wherein each target execution end is any one of the execution ends, so that the data to be rendered is rendered by the target execution end and a rendering result is sent to the user end.
In one embodiment, a computer readable storage medium is provided, having a computer program stored thereon, which, when executed by a processor, causes the processor to perform the steps of:
receiving a rendering request sent by a user side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
distributing the data to be rendered to one or more target execution ends, wherein each target execution end is any one of the execution ends, so that the data to be rendered is rendered by the target execution end and a rendering result is sent to the user end.
It should be understood that, although the steps in the flowcharts of the embodiments of the present invention are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the various embodiments may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and which are not necessarily executed sequentially but may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and which, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A cloud rendering system, comprising: the system comprises a user end, a scheduling end and an execution end, wherein the user end, the scheduling end and the execution end can perform data interaction;
the user side is used for initiating a rendering request to the scheduling side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
the scheduling end is used for receiving the rendering request and distributing the data to be rendered to one or more target execution ends, wherein the target execution end is any one of the execution ends;
the target execution end is used for rendering the data to be rendered and sending a rendering result to the user end according to the connection address of the user end.
2. The cloud rendering system of claim 1, wherein the scheduling end is provided with an execution end information database, and the execution end information database at least comprises a unique identifier of the execution end, a system resource idle rate, and update time;
the scheduling end updates the system resource idle rate in the database by monitoring the system resource idle rate of the execution end; and when the rendering request is received, the execution ends are sorted by system resource idle rate and the one or more highest-ranked execution ends are selected as the target execution ends.
3. The cloud rendering system of claim 2, wherein the scheduling end is further configured to calculate a transmission efficiency between the execution end and the user end;
and the scheduling end at least determines the target execution end according to the system resource idle rate of the execution end and the transmission efficiency between the execution end and the user end.
4. The cloud rendering system of claim 3, wherein the execution-side information database further includes processing efficiency of the execution side for each type of data to be rendered;
the scheduling end determines the target execution end at least according to the system resource idle rate of the execution end, the transmission efficiency between the execution end and the user end, and the processing efficiency of the execution end on each type of data to be rendered.
5. The cloud rendering system of claim 4, wherein when an exception occurs at the target execution end, a new target execution end is selected by the scheduling end to process the rendering request.
6. The cloud rendering system of claim 4, wherein the scheduling end is further configured to count an accumulated workload of the execution end, and calculate a reward point for the execution end according to the accumulated workload.
7. The cloud rendering system of claim 1, wherein the scheduling end is further configured to decompose the data to be rendered and distribute the decomposed data to the target execution end.
8. A cloud rendering method is applied to a scheduling end and is characterized by comprising the following steps:
receiving a rendering request sent by a user side, wherein the rendering request at least comprises data to be rendered and a connection address of the user side;
distributing the data to be rendered to one or more target execution ends, wherein each target execution end is any one of the execution ends, so that the data to be rendered is rendered by the target execution end and a rendering result is sent to the user end.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the cloud rendering method of claim 8.
10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, causes the processor to carry out the steps of the cloud rendering method of claim 8.
CN202011388725.4A 2020-12-01 2020-12-01 Cloud rendering method and system, computer equipment and readable storage medium Withdrawn CN112634122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011388725.4A CN112634122A (en) 2020-12-01 2020-12-01 Cloud rendering method and system, computer equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011388725.4A CN112634122A (en) 2020-12-01 2020-12-01 Cloud rendering method and system, computer equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN112634122A true CN112634122A (en) 2021-04-09

Family

ID=75307310

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011388725.4A Withdrawn CN112634122A (en) 2020-12-01 2020-12-01 Cloud rendering method and system, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112634122A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113342340A (en) * 2021-05-31 2021-09-03 北京达佳互联信息技术有限公司 Component rendering method and device
WO2024027288A1 (en) * 2022-08-03 2024-02-08 腾讯科技(深圳)有限公司 Resource rendering method and apparatus, and device, computer-readable storage medium and computer program product
CN115858177A (en) * 2023-02-08 2023-03-28 成都数联云算科技有限公司 Rendering machine resource allocation method, device, equipment and medium
CN115858177B (en) * 2023-02-08 2023-10-24 成都数联云算科技有限公司 Method, device, equipment and medium for distributing resources of rendering machine
CN116560844A (en) * 2023-05-18 2023-08-08 苏州高新区测绘事务所有限公司 Multi-node resource allocation method and device for cloud rendering

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210409)