CN114205359A - Video rendering coordination method, device and equipment - Google Patents

Video rendering coordination method, device and equipment

Info

Publication number
CN114205359A
CN114205359A (application CN202210101688.7A)
Authority
CN
China
Prior art keywords
rendering
video
image rendering
image
tasks
Prior art date
Legal status
Pending
Application number
CN202210101688.7A
Other languages
Chinese (zh)
Inventor
曹洪彬
陈思佳
黄永铖
曹健
杨小祥
张佳
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202210101688.7A
Publication of CN114205359A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks

Abstract

The application provides a video rendering coordination method, device and equipment. The method includes: sending a video rendering capability request to a terminal device; receiving a video rendering capability response from the terminal device, where the video rendering capability response includes the video rendering capability of the terminal device; and determining an optimal video rendering cooperative configuration according to the video rendering capability of the terminal device. The optimal video rendering cooperative configuration is a configuration in which a plurality of image rendering tasks are allocated to the terminal device or to a cloud server, or a configuration in which the plurality of image rendering tasks are cooperatively allocated between the terminal device and the cloud server. In this way, under limited cloud server computing resources, idle computing resources of the terminal device can be fully utilized, and a high-quality cloud game image quality experience can be provided for users.

Description

Video rendering coordination method, device and equipment
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to a video rendering coordination method, device and equipment.
Background
With the development of cloud rendering technology, cloud games have become an increasingly popular and important game form. In a cloud game, logic such as game running and rendering is placed on a cloud server; the game picture is encoded and compressed by video coding technology, the encoded video stream is transmitted to a terminal device over a network, and the terminal device then decodes and plays the video stream.
In this game form, traditional logic such as game running and game rendering that would otherwise need to be completed by the terminal device is migrated to the cloud server, and the requirements on the terminal device are reduced to video decoding and video playing capabilities, so idle computing resources of the terminal device are not fully utilized. Meanwhile, to reduce the influence of video coding distortion on game image quality, the cloud server needs to combine the visual characteristics of human eyes and perform certain content analysis and video preprocessing on game images before video encoding, which further increases the computing resource overhead of the cloud server. Therefore, under limited cloud server computing resources, a better cloud game image quality experience cannot be provided for the user.
Disclosure of Invention
The application provides a video rendering coordination method, device and equipment, so that idle computing resources of terminal equipment can be fully utilized under limited cloud server computing resources, and high-quality cloud game image quality experience can be provided for users.
In a first aspect, a video rendering coordination method is provided, including: sending a video rendering capability request to the terminal equipment; receiving a video rendering capability response of the terminal equipment, wherein the video rendering capability response comprises: video rendering capability of the terminal device; determining an optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner.
In a second aspect, a video rendering coordination method is provided, including: acquiring the video rendering capability of the terminal equipment from an operating system of the terminal equipment; sending the video rendering capability of the terminal equipment to a cloud server so that the cloud server can determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner; and receiving the optimal video rendering cooperative configuration and sending the optimal video rendering cooperative configuration to the terminal equipment.
In a third aspect, a video rendering coordination method is provided, including: sending the video rendering capability of the terminal equipment to the client so that the cloud server corresponding to the client determines the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner; and acquiring the optimal video rendering cooperative configuration from the client, and rendering the target image frame according to the optimal video rendering cooperative configuration.
In a fourth aspect, a video rendering coordination apparatus is provided, including: the device comprises a sending module, a receiving module and a determining module, wherein the sending module is used for sending a video rendering capability request to the terminal equipment; the receiving module is used for receiving a video rendering capability response of the terminal equipment, and the video rendering capability response comprises: video rendering capability of the terminal device; the determining module is used for determining the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner.
In a fifth aspect, a video rendering coordination apparatus is provided, including: a communication module to: acquiring the video rendering capability of the terminal equipment from an operating system of the terminal equipment; sending the video rendering capability of the terminal equipment to a cloud server so that the cloud server can determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner.
In a sixth aspect, a video rendering coordination apparatus is provided, including: the communication module is used for sending the video rendering capability of the terminal equipment to the client so that the cloud server corresponding to the client can determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner; the communication module is further used for obtaining the optimal video rendering cooperative configuration from the client, and the processing module is used for performing rendering operation on the target image frame according to the optimal video rendering cooperative configuration.
In a seventh aspect, an electronic device is provided, including: a processor and a memory, the memory being configured to store a computer program, the processor being configured to invoke and execute the computer program stored in the memory to perform a method as in the first aspect, the third aspect or implementations thereof.
In an eighth aspect, a client is provided for performing the method as in the second aspect or its implementations.
In a ninth aspect, a computer readable storage medium is provided for storing a computer program, the computer program causing a computer to perform the method as in the first aspect, the second aspect, the third aspect or implementations thereof.
In a tenth aspect, there is provided a computer program product comprising computer program instructions to cause a computer to perform the method as in the first, second, third or respective implementation form thereof.
In an eleventh aspect, a computer program is provided, which causes a computer to perform the method as in the first, second, third or respective implementation form thereof.
According to the technical scheme, the cloud server can determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal device, for example, the cloud server can distribute a plurality of image rendering tasks to the cloud server or the terminal device, or distribute the plurality of image rendering tasks to the cloud server and the terminal device cooperatively. Therefore, under the limited cloud server computing resources, idle computing resources of the terminal equipment can be fully utilized, and high-quality cloud game image quality experience can be provided for users.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 provides a flow chart of an image processing process;
FIG. 2 provides a flow chart of another image processing procedure;
fig. 3 is a schematic diagram of a cloud game scenario provided in an embodiment of the present application;
fig. 4 is a flowchart of a video rendering coordination method according to an embodiment of the present application;
fig. 5 is a flowchart of another video rendering coordination method according to an embodiment of the present application;
fig. 6 is a schematic diagram of a correspondence relationship between an image rendering task and an image rendering algorithm provided in the embodiment of the present application;
FIG. 7 is a flowchart of an image processing process provided in an embodiment of the present application;
FIG. 8 is a flow chart of another image processing process provided by an embodiment of the present application;
FIG. 9 is a flowchart of another image processing process provided in an embodiment of the present application;
fig. 10 is a flowchart of a video rendering coordination method according to an embodiment of the present application;
fig. 11 is a schematic diagram of a video rendering coordination apparatus according to an embodiment of the present application;
fig. 12 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Before the technical scheme of the application is introduced, the following description is provided for the relevant knowledge of the application:
Image rendering task: a task of modifying pixel values of a certain video frame, either over the full image or in a specific area, for example an image sharpening processing task, an image denoising processing task, or an image blurring processing task. An image rendering task may implement a specific image enhancement effect or an image blurring effect; for example, the image sharpening processing task and the image denoising processing task can implement a specific image enhancement effect, and the image blurring processing task can implement an image blurring effect.
Note that in the present application, "rendering" is also referred to as "processing", for example: the image rendering task may be referred to as an image processing task.
The technical problems to be solved and the inventive concept of the present application will be explained as follows:
Currently, some video or image processing procedures in cloud-based scenes may be performed as follows. As shown in fig. 1, the cloud server generates a video, performs video image acquisition, processes the acquired video image (for example, sharpening, blurring, or denoising), and encodes the processed video image to obtain a code stream of the video image; the cloud server may then send the code stream to the terminal device, and the terminal device decodes the code stream and finally displays the video image according to the decoding result. Alternatively, as shown in fig. 2, the cloud server generates a video, performs video image acquisition, and encodes the acquired video image to obtain a code stream of the video image; the cloud server may then send the code stream to the terminal device, and the terminal device decodes the code stream, processes the decoded video image (for example, sharpening, blurring, or denoising), and finally displays the processed video image. As can be seen, the sharpening, blurring, denoising and similar processing is performed entirely by either the cloud server or the terminal device.
However, under limited cloud server computing resources, the current image processing method cannot provide users with better cloud game image quality experience.
In order to solve the technical problem, the cloud server may determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal device, for example, the cloud server may allocate a plurality of image rendering tasks to the cloud server or the terminal device, or allocate the plurality of image rendering tasks to the cloud server and the terminal device cooperatively. Therefore, under the limited cloud server computing resources, idle computing resources of the terminal equipment can be fully utilized, and high-quality cloud game image quality experience can be provided for users.
It should be understood that the technical solution of the present application can be applied to Real-time Communication (RTC) scenarios, but is not limited thereto. Typical RTC applications include video conferencing, video calls, remote office, telemedicine, interactive live streaming, cloud games, and the like.
Cloud gaming, also known as gaming on demand, is an online gaming technology based on cloud computing. Cloud game technology enables thin-client devices with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud game scene, the game does not run on the player's game terminal but on a cloud server, and the cloud server renders the game scene into video and audio streams that are transmitted to the player's game terminal over the network. The player's game terminal does not need strong graphics computation and data processing capabilities; it only needs basic streaming media playback capability and the capability to acquire player input instructions and send them to the cloud server.
Exemplarily, fig. 3 is a schematic diagram of a cloud game scene provided in an embodiment of the present application. As shown in fig. 3, a cloud server 310 may communicate with a player game terminal 320. The cloud server 310 may run a game, collect game video images, and encode the collected video images to obtain a code stream of the video images; the cloud server may then send the code stream to the player game terminal, and the player game terminal decodes the code stream and finally displays the video images according to the decoding result.
Alternatively, the cloud server 310 and the player game terminal 320 may communicate with each other through Long Term Evolution (LTE), New Radio (NR) technology, Wireless Fidelity (Wi-Fi) technology, and the like, but is not limited thereto.
In a cloud game scenario, a cloud server refers to a server running a game in the cloud, and has functions of video enhancement (pre-coding), video coding, and the like, but is not limited thereto.
A terminal device is a device that has rich human-computer interaction modes, has the capability of accessing the Internet, is usually equipped with various operating systems, and has relatively strong processing capability. The terminal device may be a smartphone, a living-room television, a tablet computer, a vehicle-mounted terminal, or a player game terminal such as a handheld game console, but is not limited thereto.
The technical scheme of the application is explained in detail as follows:
Fig. 4 is a flowchart of a video rendering coordination method according to an embodiment of the present application. The method may be executed by a cloud server and a terminal device; for example, in a cloud game scene, the cloud server may be the cloud server 310 in fig. 3, and the terminal device may be the player game terminal 320 in fig. 3. The execution subject of the video rendering coordination method is not limited in the present application. As shown in fig. 4, the method includes:
S410: The cloud server sends a video rendering capability request to the terminal device.
S420: The cloud server receives a video rendering capability response from the terminal device, where the video rendering capability response includes the video rendering capability of the terminal device.
S430: The cloud server determines the optimal video rendering cooperative configuration according to the video rendering capability of the terminal device.
The optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner.
Alternatively, as shown in fig. 5, the cloud server may send the video rendering capability request to the terminal device through a client installed on the terminal device, and the terminal device may likewise return the video rendering capability response to the cloud server through the client. In a cloud game scenario, the client may be a cloud game client.
Optionally, the client may obtain the video rendering capability of the terminal device from the operating system of the terminal device and send it to the cloud server, so that the cloud server determines the optimal video rendering cooperative configuration according to the video rendering capability of the terminal device. The client may then receive the optimal video rendering cooperative configuration and send it to the terminal device, and the terminal device may perform rendering operations on the plurality of image rendering tasks according to the optimal video rendering cooperative configuration.
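As an informal illustration of this client-mediated negotiation, the following Python-style sketch shows how a client might relay the capability exchange. The handles and function names used here (os_api, cloud_conn, terminal, query_render_capability, apply_render_config) are hypothetical and are not defined by this application; the sketch only mirrors the message flow described above.

import json

def negotiate_render_cooperation(os_api, cloud_conn, terminal):
    # 1. Receive the video rendering capability request from the cloud server
    #    (e.g. the "render_ability" request shown below).
    request = json.loads(cloud_conn.recv())

    # 2. Query the operating system of the terminal device for its rendering capability.
    capability = os_api.query_render_capability(
        resolution=request["render_ability"]["resolution"],
        framerate=request["render_ability"]["framerate"],
        types=request["render_ability"]["type"],
    )

    # 3. Return the video rendering capability response to the cloud server.
    cloud_conn.send(json.dumps({"render_ability": capability}))

    # 4. Receive the optimal video rendering cooperative configuration decided by the
    #    cloud server and forward it to the terminal device.
    optimal_config = json.loads(cloud_conn.recv())
    terminal.apply_render_config(optimal_config)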
Optionally, the video rendering capability request is used for requesting to acquire the video rendering capability of the terminal device.
Optionally, the video rendering capability request includes at least one of, but is not limited to: protocol version number, video resolution, video frame rate, type of rendering algorithm queried.
Optionally, the protocol version number refers to the lowest protocol version supported by the cloud server, and the protocol may be a rendering protocol.
Alternatively, the video resolution, i.e. the video size, may be the resolution of the video source to be rendered, e.g. 1080p.
Alternatively, the video frame rate may be the frame rate of the video source to be rendered, such as 60fps.
Optionally, the type of rendering algorithm of the query may be at least two of the following, but is not limited thereto: a sharpening processing algorithm, a noise reduction processing algorithm, a fuzzy processing algorithm, a High Dynamic Range Imaging (HDR) enhancement capability algorithm, and the like.
Alternatively, the different video resolutions may be defined by enumeration, as shown in Table 1:

TABLE 1
Video resolution    Enumeration definition
360p                0x1
576p                0x2
720p                0x4
1080p               0x8
2k                  0x10
4k                  0x20
Alternatively, the different video frame rates may be defined by enumeration, as shown in Table 2:

TABLE 2
[Table 2 appears as images BDA0003492651150000071 and BDA0003492651150000081 in the original publication.]
Alternatively, the different rendering algorithms may be defined by enumeration, as shown in Table 3:

TABLE 3
Rendering algorithm type                Enumeration definition
Not defined                             0
Sharpening processing algorithm         1
HDR enhancement capability algorithm    2
Alternatively, the video rendering capability request may be a video rendering capability request for a plurality of image rendering tasks for the target image frame.
Illustratively, the code implementation of the video rendering capability request may be as follows:
{
    "render_ability":{
        "version":"1.0",
        "resolution":"8",
        "framerate":"8",
        "type":"1,2"
    }
}
For the explanation of the data structure in the code, reference is made to Table 4 below, and details are not repeated in this application.
Optionally, when the technical solution of the present application is applied to an RTC scene, the target image frame may be an image frame acquired or generated in real time.
Optionally, the target image frame may be a current image frame to be rendered in a video source to be rendered, for example: in a cloud game scene, the target image frame may be a game video image frame currently to be rendered. Alternatively, the target image frame may be a local image area to be rendered currently in a video source to be rendered, for example: in a cloud game scene, the target image frame may be an image area where a game character is located in a certain game video image frame.
The data structure of the video rendering capability of the terminal device may be as shown in Table 4:

TABLE 4
[Table 4 appears as images BDA0003492651150000082, BDA0003492651150000091 and BDA0003492651150000101 in the original publication.]
Optionally, the video rendering capability response may include, but is not limited to, at least one of: the identifier of whether the rendering algorithm type to be queried by the cloud server is successfully queried, the protocol version number supported by the terminal equipment, the video rendering capability of the terminal equipment and the like.
Optionally, if the query of the rendering algorithm type to be queried by the cloud server succeeds, the identifier indicating whether the query succeeded may be represented by 0; if the query fails, the identifier may be represented by an error code, such as 001.
Optionally, the protocol version number refers to the lowest protocol version supported by the terminal device, and the protocol may be a rendering protocol.
Optionally, the video rendering capability of the terminal device includes, but is not limited to, at least one of: the type of rendering algorithm supported by the terminal device and the performance of the rendering algorithm.
Optionally, the performance of the rendering algorithm includes, but is not limited to, at least one of: the video size the algorithm can handle, the frame rate it can handle, and its processing latency.
Illustratively, the code implementation of the video rendering capability response may be as follows:
{
    "render_ability":{
        "state":"0",
        "version":"1.0",
        "renders":"2"
    },
    "render1":{
        "type":"1",
        "performances":"1",
        "performance1":"8,8,10"
    },
    "render2":{
        "type":"2",
        "performances":"1",
        "performance1":"8,8,5"
    }
}
Illustratively, the code implementation of the video rendering capability response (only supporting partial rendering capability) may be as follows:
{
    "render_ability":{
        "state":"0",
        "version":"1.0",
        "renders":"1"
    },
    "render1":{
        "type":"2",
        "performances":"1",
        "performance1":"8,8,5"
    }
}
Illustratively, the code implementation of the video rendering capability response (rendering capability not supported) may be as follows:
{
    "render_ability":{
        "state":"0",
        "version":"1.0",
        "renders":"0"
    }
}
Illustratively, the code implementation of the video rendering capability response (protocol request failure) may be as follows:
{
    "render_ability":{
        "state":"-1",
        "version":"0.9"
    }
}
It should be understood that the explanation of each data structure in these codes can refer to Table 4, and details are not repeated herein.
Optionally, the plurality of image rendering tasks correspond to a plurality of image rendering algorithms; the image rendering tasks and the image rendering algorithms have a one-to-one correspondence relationship.
Illustratively, as shown in fig. 6, rendering task cooperation is oriented to a specific image rendering task. Such a task may be divided into different independent subtasks, each of which corresponds to a different image rendering algorithm; as shown in fig. 6, image rendering task A is formed by cascading three independent subtasks. Rendering task cooperation enables, according to the video rendering capability of the terminal device, a part of the image rendering task to be completed on the cloud server and the other part to be completed on the terminal device. The part completed by the cloud server is performed before video encoding (video preprocessing), and the part completed by the terminal device is performed after video decoding (video postprocessing).
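To make the placement of the cooperating subtasks concrete, the following Python-style sketch shows the cloud-side subtasks running before video encoding and the terminal-side subtasks running after video decoding. The objects and method names (subtask.apply, encoder.encode, decoder.decode, display.show) are hypothetical placeholders used only for illustration.

def cloud_side_pipeline(frame, cloud_subtasks, encoder):
    # Subtasks allocated to the cloud server are completed before video encoding
    # (video preprocessing), e.g. subtasks a and b of image rendering task A.
    for subtask in cloud_subtasks:
        frame = subtask.apply(frame)
    return encoder.encode(frame)          # code stream sent to the terminal device

def terminal_side_pipeline(code_stream, terminal_subtasks, decoder, display):
    # Subtasks allocated to the terminal device are completed after video decoding
    # (video postprocessing), e.g. subtask c of image rendering task A.
    frame = decoder.decode(code_stream)
    for subtask in terminal_subtasks:
        frame = subtask.apply(frame)
    display.show(frame)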
Optionally, the plurality of image rendering algorithms includes at least two of the following, but is not limited thereto: a sharpening processing algorithm, a noise reduction processing algorithm and a fuzzy processing algorithm.
Exemplarily, it is assumed that the background image of the target image frame needs to be blurred, and the foreground image needs to be denoised and sharpened, in which case, the target image frame corresponds to three image rendering tasks, which are a blur processing task, a denoising processing task and a sharpening processing task, respectively, and the image rendering algorithms corresponding to the three tasks are a blur processing algorithm, a denoising processing algorithm and a sharpening processing algorithm, respectively.
Alternatively, the video rendering capability of the terminal device can be divided into the following three cases:
the first condition is as follows: the terminal equipment has full video rendering capability aiming at a plurality of image rendering tasks.
Case two: the terminal equipment has local video rendering capacity aiming at a plurality of image rendering tasks.
Case three: the terminal device does not have video rendering capability.
Wherein, different video rendering capabilities of the terminal device may be defined by enumeration, as shown in Table 5:

TABLE 5
Rendering capability                Enumeration definition
Not defined                         0
No video rendering capability       1
Local video rendering capability    2
Full video rendering capability     3
Optionally, for any image rendering task, whether the video rendering capability of the terminal device meets the requirement of the task refers to: the terminal device has basic capability of processing the task, and can complete the rendering of the task under the condition of the frame rate of a video source to which a plurality of image rendering tasks belong.
Optionally, that the terminal device has the basic capability of processing the task means that the terminal device has the software capability and hardware capability to execute the task.
For example, assuming that image sharpening is currently required for a certain image frame in the video source, the basic capability of the terminal device to perform this task refers to: the terminal device can execute the image sharpening task through software and hardware.
Optionally, that the terminal device can complete the rendering of the task at the frame rate of the video source to which the plurality of image rendering tasks belong means that the processing speed of the image rendering algorithm adopted by the terminal device is consistent with the resolution and frame rate of the video source. Alternatively, it means that the processing speed of the image rendering algorithm adopted by the terminal device is consistent with the resolution and frame rate of the video source and, in addition, the single-frame processing delay of the image rendering algorithm is less than a preset delay.
Optionally, the preset time delay may be set by an application layer of the terminal device, and may be 2ms, 3ms, 5ms, and the like.
Optionally, for the plurality of image rendering tasks, the corresponding preset time delays may be the same or different, which is not limited in this application.
Exemplarily, assuming that image sharpening currently needs to be performed on a 1080p@60fps video source, whether the video rendering capability of the terminal device meets the requirement of the task refers to: the terminal device can execute the image sharpening task through software and hardware, the processing speed of the image sharpening algorithm adopted by the terminal device can be kept consistent with the 1080p resolution and 60fps frame rate of the video source, and the single-frame processing delay of the image sharpening algorithm is less than the preset delay.
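The requirement check described above can be summarized in a short Python sketch. The helper names used here (has_sw_hw_support, algorithm_performance, max_resolution, throughput_fps, single_frame_delay_ms) are hypothetical placeholders for the capability information reported by the terminal device, not fields defined by this application.

def meets_task_requirement(device, task, source_resolution, source_fps, preset_delay_ms):
    # Condition 1: the terminal device has the basic (software and hardware)
    # capability to execute this image rendering task.
    if not device.has_sw_hw_support(task.algorithm_type):
        return False

    # Condition 2: the processing speed of the image rendering algorithm is
    # consistent with the resolution and frame rate of the video source.
    perf = device.algorithm_performance(task.algorithm_type)
    fast_enough = (perf.max_resolution >= source_resolution and
                   perf.throughput_fps >= source_fps)

    # Optional stricter form: the single-frame processing delay must also be
    # below the preset delay set by the application layer (e.g. 2 ms, 3 ms or 5 ms).
    low_latency = perf.single_frame_delay_ms < preset_delay_ms

    return fast_enough and low_latency

For the 1080p@60fps sharpening example above, source_resolution and source_fps would be 1080 and 60 respectively.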
Based on the video rendering capability division condition of the terminal device, the distribution condition of the multiple image rendering tasks may be as follows:
optionally, if the terminal device does not have a video rendering capability, the plurality of image rendering tasks are allocated to the cloud server, as shown in fig. 7, the cloud server generates a video, performs video image acquisition, performs video image enhancement on all image rendering tasks of the acquired video image, encodes the processed video image to obtain a code stream of the video image, and further, the cloud server may send the code stream to the terminal device, and the terminal device decodes the code stream and finally performs display of the video image according to a decoding result. If the video rendering capability of the terminal device meets the requirements of part of the tasks in the plurality of image rendering tasks, part of the tasks are allocated to the terminal device, and the other tasks except part of the tasks in the plurality of image rendering tasks are allocated to the cloud server, as shown in fig. 8, the cloud server generates a video, performs video image acquisition, performs video image enhancement on the subtasks a and b, encodes the processed video image to obtain a code stream of the video image, further, the cloud server can send the code stream to the terminal device, the terminal device decodes the code stream, performs video image enhancement on the subtask c, and finally performs display of the video image. If the terminal device has complete video rendering capability for the multiple image rendering tasks, the multiple image rendering tasks are allocated to the terminal device, as shown in fig. 9, the cloud server generates a video, performs video image acquisition, encodes the acquired video image to obtain a code stream of the video image, further, the cloud server can send the code stream to the terminal device, the terminal device decodes the code stream, performs video image enhancement processing and the like on all image rendering tasks of the decoded video image, and finally displays the processed video image.
Exemplarily, assume that for a 1080p@60fps video source, image blurring currently needs to be performed on the background image of a certain image frame, and image denoising and sharpening need to be performed on the foreground image of the image frame; that is, there are three image rendering tasks for the image frame: an image blurring processing task, an image denoising processing task, and an image sharpening processing task. If the video rendering capability of the terminal device can meet the requirements of all three tasks, that is, the terminal device has the basic capability of executing the three tasks, the execution speeds of the three algorithms can be kept consistent with the 1080p resolution and 60fps frame rate of the video source, and the single-frame processing delay of the image rendering algorithms corresponding to the three tasks is less than the preset delay, then the cloud server may allocate all three tasks to the terminal device so that the terminal device executes them, which reduces the load of the cloud server. If the video rendering capability of the terminal device can meet the requirements of only some of the image blurring, image denoising, and image sharpening processing tasks, the cloud server may allocate the tasks that the terminal device can meet to the terminal device and allocate the remaining tasks to itself, so that the terminal device and the cloud server cooperate to execute the three tasks (as sketched below). If the video rendering capability of the terminal device cannot meet the requirement of any of the image blurring, image denoising, and image sharpening processing tasks, the cloud server may allocate all three tasks to itself and execute them, thereby reducing the load of the terminal device.
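A minimal sketch of how the per-task checks map to the three cooperative configurations is given below, reusing the meets_task_requirement sketch above; the configuration labels are illustrative only and not part of the protocol defined in this application.

def decide_cooperative_configuration(device, tasks, resolution, fps, preset_delay_ms):
    # Check each image rendering task against the terminal device's capability.
    terminal_tasks = [t for t in tasks
                      if meets_task_requirement(device, t, resolution, fps, preset_delay_ms)]
    cloud_tasks = [t for t in tasks if t not in terminal_tasks]

    if not cloud_tasks:
        config = "all-terminal"      # full video rendering capability
    elif not terminal_tasks:
        config = "all-cloud"         # no video rendering capability
    else:
        config = "cooperative"       # local capability: split between both sides
    return config, terminal_tasks, cloud_tasks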
It should be understood that the foregoing describes the video rendering capability of the terminal device as being divided into three cases, and actually, the video rendering capability of the terminal device can also be divided into the following two cases:
the first condition is as follows: the terminal equipment has full video rendering capability aiming at a plurality of image rendering tasks.
Case two: the terminal device does not have full video rendering capability for multiple image rendering tasks.
In other words, the plurality of image rendering tasks are regarded as a whole.
It should be understood that, as to whether the video rendering capability of the terminal device meets the requirement of a certain task, the above explanation can be referred to, and the detailed description of the present application is omitted.
Based on the video rendering capability division condition of the terminal device, the distribution condition of the multiple image rendering tasks may be as follows: if the video rendering capability of the terminal equipment meets the requirements of the plurality of image rendering tasks, distributing the plurality of image rendering tasks to the terminal equipment; and if the video rendering capability of the terminal equipment does not meet the requirements of the plurality of image rendering tasks, distributing the plurality of image rendering tasks to the cloud server.
Exemplarily, assume again that for a 1080p@60fps video source, image blurring currently needs to be performed on the background image of a certain image frame, and image denoising and sharpening need to be performed on the foreground image of the image frame; that is, there are three image rendering tasks for the image frame: an image blurring processing task, an image denoising processing task, and an image sharpening processing task. If the video rendering capability of the terminal device can meet the requirements of all three tasks, that is, the terminal device has the basic capability of executing the three tasks, the execution speeds of the three algorithms can be kept consistent with the 1080p resolution and 60fps frame rate of the video source, and the single-frame processing delay of the corresponding image rendering algorithms is less than the preset delay, then the cloud server may allocate all three tasks to the terminal device so that the terminal device executes them, which reduces the load of the cloud server. If the video rendering capability of the terminal device does not meet the requirement of at least one of the image blurring, image denoising, and image sharpening processing tasks, the video rendering capability of the terminal device does not meet the three tasks as a whole; in this case, the cloud server may allocate all three tasks to itself and execute them, thereby reducing the load of the terminal device.
Optionally, if it is determined that the optimal video rendering coordination configuration is a video rendering coordination configuration in which a plurality of image rendering tasks are allocated to the terminal device, sending identifiers of the plurality of image rendering tasks and first indication information to the terminal device; the first indication information is used for indicating the terminal equipment to process the image rendering tasks.
Optionally, if it is determined that the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are cooperatively allocated to the terminal device and the cloud server, sending identification and second indication information of a second image rendering task to the terminal device; the second image rendering task is an image rendering task which needs to be allocated to the terminal device in the multiple image rendering tasks, and the second indication information is used for indicating the terminal device to render the second image rendering task.
Optionally, when the cloud server issues the image rendering task to the terminal device, information of the task may also be issued to the terminal device, where the information of the task includes at least one of the following items, but is not limited to this: type, name, size, number of regions, region range, threshold, etc. of the image rendering algorithm.
Illustratively, the code implementation of the cloud server issuing the image rendering task may be as follows:
An image rendering task, wherein the algorithm corresponding to the task is an image sharpening algorithm aiming at an image area:
{
    "render_task":{
        "version":"1.0",
        "renders":"1"
    },
    "render1":{
        "type":"1",
        "name":"unsharp masking",
        "scale":"100",
        "regions":"2",
        "region1":"0,0,33,33",
        "region2":"67,67,100,100"
    },
    "render1_args":{
        "threshold":"0",
        "amount":"50",
        "radius":"5"
    }
}
Illustratively, the code implementation of the cloud server issuing the image rendering task may be as follows:
An image rendering task, wherein the corresponding algorithms are an image sharpening algorithm for an image area and a full-image HDR algorithm:
{
    "render_task":{
        "version":"1.0",
        "renders":"2"
    },
    "render1":{
        "type":"1",
        "name":"unsharp masking",
        "scale":"100",
        "regions":"2",
        "region1":"0,0,33,33",
        "region2":"67,67,100,100"
    },
    "render1_args":{
        "threshold":"0",
        "amount":"50",
        "radius":"5"
    },
    "render2":{
        "type":"2",
        "name":"hdr",
        "scale":"100",
        "regions":"1",
        "region1":"0,0,100,100"
    }
}
It should be understood that the explanation of each data structure in these codes can refer to Table 4, and details are not repeated herein.
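As an illustration of how the terminal side might consume such a render_task message, the sketch below parses the structure shown above and applies each algorithm to its regions after decoding. The algorithm implementations are placeholder stubs, and the interpretation of the region coordinates as percentages of the frame is an assumption made only for this sketch.

import json

def sharpen(frame, region, args):
    # Placeholder for an unsharp-masking implementation (threshold/amount/radius in args).
    return frame

def apply_hdr(frame, region, args):
    # Placeholder for an HDR enhancement implementation.
    return frame

# Rendering algorithm types follow the enumeration in Table 3:
# "1" = sharpening processing algorithm, "2" = HDR enhancement capability algorithm.
ALGORITHMS = {"1": sharpen, "2": apply_hdr}

def apply_render_task(frame, message_text):
    msg = json.loads(message_text)
    for i in range(1, int(msg["render_task"]["renders"]) + 1):
        render = msg["render%d" % i]
        args = msg.get("render%d_args" % i, {})
        algorithm = ALGORITHMS[render["type"]]
        for r in range(1, int(render["regions"]) + 1):
            # Each region is given as "x1,y1,x2,y2".
            region = [int(v) for v in render["region%d" % r].split(",")]
            frame = algorithm(frame, region, args)
    return frame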
In summary, the present application provides a video rendering coordination method, where a cloud server may determine an optimal video rendering coordination configuration according to a video rendering capability of a terminal device, for example, the cloud server may allocate a plurality of image rendering tasks to the cloud server or the terminal device, or allocate the plurality of image rendering tasks to the cloud server and the terminal device in a coordination manner. Therefore, under the limited cloud server computing resources, idle computing resources of the terminal equipment can be fully utilized, and high-quality cloud game image quality experience can be provided for users.
Fig. 10 is a flowchart of a video rendering coordination method according to an embodiment of the present application, and as shown in fig. 10, the method includes:
S1010: sending a video rendering capability request to the terminal device;
S1020: receiving a video rendering capability response of the terminal device, the video rendering capability response including the video rendering capability of the terminal device;
S1030: acquiring encoding attributes of a plurality of image rendering tasks;
S1040: determining the optimal video rendering cooperative configuration according to the encoding attributes of the plurality of image rendering tasks and the video rendering capability of the terminal device.
It should be noted that, for the explanation on the same steps in the embodiment corresponding to fig. 10 and the embodiment corresponding to fig. 4, reference may be made to the above description, and details are not repeated herein.
It should be understood that, generally, the image processing capability of the cloud server is much stronger than that of the terminal device, and for an image rendering task with higher encoding complexity, higher encoding quality requirement, or higher encoding delay requirement, the image rendering task can be generally executed only by the cloud server, so the following alternatives are provided in the present application to allocate the image rendering task, but not limited thereto:
Optionally, the cloud server determines, according to the encoding attributes of the plurality of image rendering tasks, whether there is at least one image rendering task among them that needs to be allocated to the cloud server. If the at least one image rendering task exists, the at least one image rendering task is allocated to the cloud server, and it is determined whether there are remaining image rendering tasks other than the at least one image rendering task among the plurality of image rendering tasks. If the at least one image rendering task does not exist, the optimal video rendering cooperative configuration is determined according to the video rendering capability of the terminal device. If there are remaining image rendering tasks, the optimal video rendering cooperative configuration for the remaining image rendering tasks is determined according to the video rendering capability of the terminal device.
It should be understood that, with respect to how to allocate the remaining image rendering tasks to at least one of the cloud server and the terminal device only according to the video rendering capability of the terminal device, reference may be made to a method of allocating a plurality of image rendering tasks to at least one of the cloud server and the terminal device according to the video rendering capability of the terminal device, which is not described herein again.
Optionally, if the encoding attribute of a third image rendering task in the plurality of image rendering tasks meets the corresponding preset condition, determining that the third image rendering task is an image rendering task needing to be allocated to the cloud server; and if the encoding attribute of a fourth image rendering task in the plurality of image rendering tasks does not meet the corresponding preset condition, determining that the fourth image rendering task is not the image rendering task needing to be allocated to the cloud server.
Optionally, the encoding property is any one of, but not limited to: coding complexity, coding quality and coding delay.
It should be understood that the greater the encoding complexity, encoding quality requirement, or encoding delay requirement of an image rendering task, the more difficult the task is to process; therefore, the encoding attributes are used to determine whether an image rendering task should be allocated to the cloud server.
Optionally, the preset condition corresponding to the encoding complexity is to determine whether the encoding complexity of the third image rendering task or the fourth image rendering task reaches the preset encoding complexity. And judging whether the coding quality of the third image rendering task or the fourth image rendering task reaches the preset coding quality according to the preset condition corresponding to the coding quality. And judging whether the coding time delay of the third image rendering task or the fourth image rendering task reaches the preset coding time delay or not according to the preset condition corresponding to the coding time delay.
Optionally, the preset condition may be negotiated between the cloud server and the terminal device, may also be predefined, and may also be specified by the cloud server, or specified by the terminal device, or specified by an application layer of the terminal device, which is not limited in this application.
Optionally, the preset encoding complexity may be negotiated between the cloud server and the terminal device, may also be predefined, and may also be specified by the cloud server, or specified by the terminal device, or specified by an application layer of the terminal device, which is not limited in this application.
Optionally, the preset encoding quality may be negotiated between the cloud server and the terminal device, may also be predefined, and may also be specified by the cloud server, or specified by the terminal device, or specified by an application layer of the terminal device, which is not limited in this application.
Optionally, the preset coding delay may be negotiated between the cloud server and the terminal device, may also be predefined, and may also be specified by the cloud server, or specified by the terminal device, or specified by an application layer of the terminal device, which is not limited in this application.
Illustratively, assume that for a 1080p@60fps video source, sharpening, denoising, and blurring currently need to be performed on a certain image frame; that is, there are an image sharpening processing task, an image denoising processing task, and an image blurring processing task. Assume that the encoding complexity of the image sharpening processing task is greater than the preset encoding complexity, while the encoding complexity of the image denoising processing task and the image blurring processing task is lower than the preset encoding complexity; on this basis, the cloud server may allocate the image sharpening processing task to itself. Further, assume that the terminal device has the software and hardware capabilities for image denoising and image blurring, the processing speed of the image denoising algorithm adopted by the terminal device can be kept consistent with the 1080p resolution and 60fps frame rate of the video source, and the single-frame processing delay of the image denoising algorithm is less than the preset delay, but the processing speed of the image blurring algorithm adopted by the terminal device cannot be kept consistent with the 1080p resolution and 60fps frame rate of the video source. On this basis, the cloud server may allocate the image denoising processing task to the terminal device and allocate the image blurring processing task to itself.
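The two-stage allocation in this embodiment can be sketched as follows, again reusing the meets_task_requirement sketch from earlier; the way the encoding attribute and its threshold are represented here is an assumption made only for illustration.

def allocate_with_encoding_attributes(device, tasks, thresholds,
                                      resolution, fps, preset_delay_ms):
    cloud_tasks, remaining = [], []

    # Stage 1: tasks whose encoding attribute (complexity, quality or delay requirement)
    # reaches the corresponding preset threshold are kept on the cloud server,
    # e.g. the image sharpening processing task in the example above.
    for task in tasks:
        if task.encoding_attribute_value >= thresholds[task.encoding_attribute]:
            cloud_tasks.append(task)
        else:
            remaining.append(task)

    # Stage 2: the remaining tasks are allocated according to the video rendering
    # capability of the terminal device, e.g. denoising to the terminal device and
    # blurring back to the cloud server in the example above.
    terminal_tasks = []
    for task in remaining:
        if meets_task_requirement(device, task, resolution, fps, preset_delay_ms):
            terminal_tasks.append(task)
        else:
            cloud_tasks.append(task)

    return terminal_tasks, cloud_tasks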
In summary, the present application provides a video rendering coordination method, where a cloud server may allocate a plurality of image rendering tasks to a server or a terminal device according to coding attributes of the plurality of image rendering tasks and a video rendering capability of the terminal device, or cooperatively allocate the plurality of image rendering tasks to the server and the terminal device, so that at least one of the server and the terminal device processes the plurality of image rendering tasks. For the image rendering tasks needing to be processed by the cloud server, the image rendering tasks can be allocated to the cloud server, and the rest of the image rendering tasks except the image rendering tasks can be allocated according to the video rendering capability of the terminal equipment. The video rendering coordination method can fully utilize idle computing resources of the terminal equipment under the limited computing resources of the cloud server, so that better cloud game image quality experience can be provided for users, image rendering tasks needing to be processed by the cloud server can be ensured to be executed by the cloud server, and the image quality effect is further ensured because the performance of the cloud server is stronger.
Fig. 11 is a schematic diagram of a video rendering coordination apparatus according to an embodiment of the present application, and as shown in fig. 11, the apparatus includes: the video rendering capability determining method comprises a sending module 1110, a receiving module 1120 and a determining module 1130, wherein the sending module 1110 is used for sending a video rendering capability request to a terminal device; the receiving module 1120 is configured to receive a video rendering capability response of the terminal device, where the video rendering capability response includes: video rendering capability of the terminal device; the determining module 1130 is configured to determine an optimal video rendering cooperative configuration according to the video rendering capability of the terminal device; the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device or a cloud server, or a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to a terminal device and a cloud server in a cooperative manner.
Optionally, the determining module 1130 is specifically configured to: if the terminal equipment has complete video rendering capability for the plurality of image rendering tasks, determining that the optimal video rendering cooperative configuration is the video rendering cooperative configuration for distributing the plurality of image rendering tasks to the terminal equipment; if the terminal equipment has local video rendering capability aiming at the plurality of image rendering tasks, determining that the optimal video rendering cooperative configuration is a video rendering cooperative configuration for cooperatively distributing the plurality of image rendering tasks to the terminal equipment and the cloud server; and if the terminal equipment does not have the video rendering capability, determining the optimal video rendering cooperative configuration as the video rendering cooperative configuration for distributing the plurality of image rendering tasks to the cloud server.
Optionally, the apparatus further includes a judging module 1140 configured to judge whether the terminal device has the software and hardware capability to execute the first image rendering task; if the terminal device does not have the software and hardware capability to execute the first image rendering task, determine that the video rendering capability of the terminal device does not meet the requirement of the first image rendering task; if the terminal device has the software and hardware capability to execute the first image rendering task, judge whether the terminal device can complete rendering of the first image rendering task at the frame rate of the video source to which the plurality of image rendering tasks belong; if the terminal device cannot complete rendering of the first image rendering task at the frame rate of the video source, determine that the video rendering capability of the terminal device does not meet the requirement of the first image rendering task; and if the terminal device can complete rendering of the first image rendering task at the frame rate of the video source, determine that the video rendering capability of the terminal device meets the requirement of the first image rendering task.
Optionally, the judging module 1140 is specifically configured to: judge whether the processing speed of the image rendering algorithm adopted by the terminal device is consistent with (i.e., can keep up with) the resolution and frame rate of the video source to which the plurality of image rendering tasks belong; if the processing speed of the image rendering algorithm adopted by the terminal device is consistent with the resolution and frame rate of the video source, determine that the terminal device can complete rendering of the first image rendering task; and if the processing speed of the image rendering algorithm adopted by the terminal device is not consistent with the resolution and frame rate of the video source, determine that the terminal device cannot complete rendering of the first image rendering task.
Optionally, the judging module 1140 is specifically configured to: judge whether the processing speed of the image rendering algorithm adopted by the terminal device is consistent with the resolution and frame rate of the video source to which the plurality of image rendering tasks belong, and whether the single-frame processing delay of the image rendering algorithm is smaller than the preset delay; if the processing speed of the image rendering algorithm adopted by the terminal device is consistent with the resolution and frame rate of the video source and the single-frame processing delay of the image rendering algorithm is smaller than the preset delay, determine that the terminal device can complete rendering of the first image rendering task; and if the processing speed of the image rendering algorithm adopted by the terminal device is not consistent with the resolution and frame rate of the video source, or the single-frame processing delay of the image rendering algorithm is greater than or equal to the preset delay, determine that the terminal device cannot complete rendering of the first image rendering task.
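The per-task capability check performed by the judging module 1140 may be sketched as follows. The TerminalReport fields and the meets_requirement function are assumed names for the information carried in the video rendering capability response; the disclosure does not prescribe this data structure or these field names.

```python
# Non-limiting sketch of the per-task capability check; all names are assumptions.
from dataclasses import dataclass

@dataclass
class TerminalReport:
    has_sw_hw_capability: bool    # terminal supports the algorithm in software/hardware
    supported_width: int          # maximum resolution the algorithm sustains in real time
    supported_height: int
    supported_fps: float
    single_frame_delay_ms: float  # single-frame processing delay of the algorithm

def meets_requirement(report: TerminalReport, src_w: int, src_h: int,
                      src_fps: float, preset_delay_ms: float) -> bool:
    if not report.has_sw_hw_capability:
        return False
    keeps_up = (report.supported_width >= src_w and
                report.supported_height >= src_h and
                report.supported_fps >= src_fps)
    # The delay condition corresponds to the variant above in which the
    # single-frame processing delay must also stay below a preset delay.
    return keeps_up and report.single_frame_delay_ms < preset_delay_ms

# e.g. meets_requirement(report, 1920, 1080, 60.0, preset_delay_ms=16.6)
```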
Optionally, the sending module 1110 is further configured to: if the determining module 1130 determines that the optimal video rendering cooperative configuration is the video rendering cooperative configuration in which the plurality of image rendering tasks are allocated to the terminal device, send identifiers of the plurality of image rendering tasks and first indication information to the terminal device; the first indication information is used to instruct the terminal device to process the plurality of image rendering tasks.
Optionally, the sending module 1110 is further configured to: if the determining module 1130 determines that the optimal video rendering cooperative configuration is the video rendering cooperative configuration in which the plurality of image rendering tasks are cooperatively allocated to the terminal device and the cloud server, send an identifier of a second image rendering task and second indication information to the terminal device; the second image rendering task is an image rendering task, among the plurality of image rendering tasks, that needs to be allocated to the terminal device, and the second indication information is used to instruct the terminal device to render the second image rendering task.
Optionally, the apparatus further includes an obtaining module 1150 configured to obtain encoding attributes of the plurality of image rendering tasks; accordingly, the determining module 1130 is specifically configured to determine the optimal video rendering cooperative configuration according to the encoding attributes of the plurality of image rendering tasks and the video rendering capability of the terminal device.
Optionally, the determining module 1130 is specifically configured to: judge, according to the encoding attributes of the plurality of image rendering tasks, whether at least one image rendering task that needs to be allocated to the cloud server exists among the plurality of image rendering tasks; if the at least one image rendering task exists, allocate the at least one image rendering task to the cloud server and judge whether there are remaining image rendering tasks, other than the at least one image rendering task, among the plurality of image rendering tasks; if the at least one image rendering task does not exist, determine the optimal video rendering cooperative configuration according to the video rendering capability of the terminal device; and if the remaining image rendering tasks exist, determine the optimal video rendering cooperative configuration for the remaining image rendering tasks according to the video rendering capability of the terminal device.
Optionally, the determining module 1130 is specifically configured to: if the encoding attribute of a third image rendering task among the plurality of image rendering tasks meets the corresponding preset condition, determine that the third image rendering task is an image rendering task that needs to be allocated to the cloud server; and if the encoding attribute of a fourth image rendering task among the plurality of image rendering tasks does not meet the corresponding preset condition, determine that the fourth image rendering task is not an image rendering task that needs to be allocated to the cloud server.
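A sketch of this encoding-attribute-based pre-allocation is given below. It assumes a meets_preset_condition helper (a possible form of which is sketched after the attribute descriptions that follow) encapsulating the per-attribute judgment; the function and variable names are hypothetical, not the disclosed implementation.

```python
# Assumed helper names; tasks whose encoding attribute meets the preset condition
# are fixed to the cloud server, and only the remaining tasks are decided later
# according to the terminal device's video rendering capability.
def partition_by_encoding_attribute(tasks, meets_preset_condition):
    cloud_only, remaining = [], []
    for task in tasks:
        # "third image rendering task": condition met -> must run on the cloud server
        # "fourth image rendering task": condition not met -> eligible for the terminal
        (cloud_only if meets_preset_condition(task) else remaining).append(task)
    return cloud_only, remaining
```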
Optionally, the encoding attribute is any one of: encoding complexity, encoding quality and encoding delay.
Optionally, if the encoding attribute is encoding complexity, judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets the corresponding preset condition includes: judging whether the encoding complexity of the third image rendering task or the fourth image rendering task reaches the preset encoding complexity.
Optionally, if the encoding attribute is encoding quality, judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets the corresponding preset condition includes: judging whether the encoding quality of the third image rendering task or the fourth image rendering task reaches the preset encoding quality.
Optionally, if the encoding attribute is encoding delay, judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets the corresponding preset condition includes: judging whether the encoding delay of the third image rendering task or the fourth image rendering task reaches the preset encoding delay.
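The per-attribute preset conditions described above may be sketched as follows; the threshold values in PRESET and the task fields (attribute, value) are hypothetical examples rather than values taken from the disclosure.

```python
# Hypothetical thresholds for the three preset conditions; values are assumptions.
PRESET = {"complexity": 0.8, "quality_db": 40.0, "delay_ms": 20.0}

def meets_preset_condition(task) -> bool:
    if task.attribute == "complexity":   # encoding complexity reaches the preset complexity
        return task.value >= PRESET["complexity"]
    if task.attribute == "quality":      # encoding quality reaches the preset quality
        return task.value >= PRESET["quality_db"]
    if task.attribute == "delay":        # encoding delay reaches the preset delay
        return task.value >= PRESET["delay_ms"]
    return False
```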
Optionally, the plurality of image rendering tasks correspond to a plurality of image rendering algorithms, and the image rendering tasks and the image rendering algorithms are in one-to-one correspondence.
Optionally, the plurality of image rendering algorithms include at least two of: a sharpening processing algorithm, a noise reduction processing algorithm and a blurring processing algorithm.
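As a minimal, hypothetical sketch of this one-to-one correspondence, each task identifier can be mapped to exactly one algorithm implementation; the placeholder functions below merely stand in for real sharpening, noise reduction and blurring algorithms and are not part of the disclosure.

```python
# Placeholder algorithms; each image rendering task runs exactly one algorithm.
def sharpen(frame):  return frame   # stands in for a sharpening processing algorithm
def denoise(frame):  return frame   # stands in for a noise reduction processing algorithm
def blur(frame):     return frame   # stands in for a blurring processing algorithm

ALGORITHM_FOR_TASK = {"sharpen": sharpen, "denoise": denoise, "blur": blur}

def render(frame, assigned_task_ids):
    # Execute, in order, the algorithm corresponding to each assigned task.
    for task_id in assigned_task_ids:
        frame = ALGORITHM_FOR_TASK[task_id](frame)
    return frame
```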
It should be understood that the apparatus embodiments and the method embodiments may correspond to each other, and similar descriptions may refer to the method embodiments. Specifically, the apparatus shown in Fig. 11 may perform the foregoing method embodiments, and the foregoing and other operations and/or functions of the modules in the apparatus are respectively used to implement the corresponding flows in those methods; for brevity, details are not repeated here.
The apparatus of the embodiments of the present application is described above from the perspective of functional modules with reference to the drawings. It should be understood that the functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments in the present application may be completed by integrated logic circuits of hardware in a processor and/or by instructions in the form of software, and the steps of the methods disclosed in the embodiments of the present application may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in the decoding processor. Optionally, the software modules may be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments in combination with its hardware.
Fig. 12 is a schematic block diagram of an electronic device provided in an embodiment of the present application.
As shown in fig. 12, the electronic device may include:
a memory 1210 and a processor 1220, where the memory 1210 is configured to store a computer program and transfer the program code to the processor 1220. In other words, the processor 1220 may call and run the computer program from the memory 1210 to implement the methods in the embodiments of the present application.
For example, the processor 1220 may be configured to perform the above-described method embodiments according to instructions in the computer program.
In some embodiments of the present application, the processor 1220 may include, but is not limited to:
general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and the like.
In some embodiments of the present application, the memory 1210 includes, but is not limited to:
volatile memory and/or non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM) or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM) and direct Rambus RAM (DR RAM).
In some embodiments of the present application, the computer program may be partitioned into one or more modules, which are stored in the memory 1210 and executed by the processor 1220 to perform the methods provided herein. The one or more modules may be a series of computer program instruction segments capable of performing certain functions, the instruction segments describing the execution of the computer program in the electronic device.
As shown in fig. 12, the electronic device may further include:
a transceiver 1230, the transceiver 1230 being connectable to the processor 1220 or memory 1210.
The processor 1220 may control the transceiver 1230 to communicate with other devices, and specifically, may transmit information or data to the other devices or receive information or data transmitted by the other devices. The transceiver 1230 may include a transmitter and a receiver. The transceiver 1230 may further include an antenna, and the number of antennas may be one or more.
It should be understood that the various components in the electronic device are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
The present application also provides a computer storage medium having stored thereon a computer program which, when executed by a computer, enables the computer to perform the method of the above-described method embodiments. In other words, the present application also provides a computer program product containing instructions, which when executed by a computer, cause the computer to execute the method of the above method embodiments.
When the above embodiments are implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or a data center, integrating one or more usable media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a digital video disc (DVD)) or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of modules is merely a logical function division, and there may be other division manners in actual implementation; for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, apparatuses or modules, and may be in electrical, mechanical or other forms.
Modules described as separate components may or may not be physically separate, and components displayed as modules may or may not be physical modules; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. For example, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and all the changes or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

1. A video rendering coordination method, comprising:
sending a video rendering capability request to the terminal equipment;
receiving a video rendering capability response of the terminal device, wherein the video rendering capability response comprises: video rendering capability of the terminal device;
determining an optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment;
the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to the terminal device or the cloud server, or a video rendering cooperative configuration in which the plurality of image rendering tasks are allocated to the terminal device and the cloud server in a cooperative manner.
2. The method according to claim 1, wherein the determining an optimal video rendering coordination configuration according to the video rendering capability of the terminal device comprises:
if the terminal equipment has full video rendering capability for the image rendering tasks, determining that the optimal video rendering collaborative configuration is a video rendering collaborative configuration for distributing the image rendering tasks to the terminal equipment;
if the terminal equipment has local video rendering capability for the image rendering tasks, determining that the optimal video rendering cooperative configuration is a video rendering cooperative configuration for cooperatively distributing the image rendering tasks to the terminal equipment and the cloud server;
and if the terminal equipment does not have video rendering capability, determining that the optimal video rendering cooperative configuration is a video rendering cooperative configuration for distributing the plurality of image rendering tasks to the cloud server.
3. The method of claim 2, wherein determining whether the video rendering capability of the terminal device meets the requirements of the first image rendering task comprises:
judging whether the terminal equipment has the software and hardware capability of executing the first image rendering task;
if the terminal equipment does not have the software and hardware capability of executing the first image rendering task, determining that the video rendering capability of the terminal equipment does not meet the requirement of the first image rendering task;
if the terminal equipment has the software and hardware capability of executing the first image rendering task, judging whether the terminal equipment can finish rendering the first image rendering task under the condition of the frame rate of a video source to which the plurality of image rendering tasks belong;
if the terminal equipment cannot finish rendering the first image rendering task under the condition of the frame rate of the video source to which the plurality of image rendering tasks belong, determining that the video rendering capability of the terminal equipment does not meet the requirement of the first image rendering task;
and if the terminal equipment can finish rendering the first image rendering task under the condition of the frame rate of the video source to which the plurality of image rendering tasks belong, determining that the video rendering capability of the terminal equipment meets the requirement of the first image rendering task.
4. The method according to claim 3, wherein the determining whether the terminal device can complete the rendering of the first image rendering task at a frame rate of a video source to which the plurality of image rendering tasks belong comprises:
judging whether the processing speed of an image rendering algorithm adopted by the terminal equipment is consistent with the resolution and the frame rate of the video source under the condition that the frame rates of the video sources to which the image rendering tasks belong;
if the processing speed of the image rendering algorithm adopted by the terminal equipment is consistent with the resolution and the frame rate of the video source, determining that the terminal equipment can finish rendering the first image rendering task;
and if the processing speed of the image rendering algorithm adopted by the terminal equipment is inconsistent with the resolution of the video source and the frame rate, determining that the terminal equipment cannot finish rendering the first image rendering task.
5. The method according to claim 3, wherein the determining whether the terminal device can complete the rendering of the first image rendering task at a frame rate of a video source to which the plurality of image rendering tasks belong comprises:
judging whether the processing speed of an image rendering algorithm adopted by the terminal equipment is consistent with the resolution and the frame rate of the video source and whether the single-frame processing delay of the image rendering algorithm is smaller than a preset delay or not under the condition of the frame rate of the video source to which the image rendering tasks belong;
if the processing speed of the image rendering algorithm adopted by the terminal equipment is consistent with the resolution of the video source and the frame rate, and the single-frame processing time delay of the image rendering algorithm is smaller than the preset time delay, determining that the terminal equipment can finish rendering the first image rendering task;
and if the processing speed of the image rendering algorithm adopted by the terminal equipment is inconsistent with the resolution of the video source and the frame rate, or the single-frame processing time delay of the image rendering algorithm is greater than or equal to the preset time delay, determining that the terminal equipment cannot finish rendering the first image rendering task.
6. The method of any of claims 2-5, further comprising:
if the optimal video rendering cooperative configuration is determined to be the video rendering cooperative configuration for distributing the image rendering tasks to the terminal equipment, sending identifiers and first indication information of the image rendering tasks to the terminal equipment;
the first indication information is used for indicating the terminal equipment to process the plurality of image rendering tasks.
7. The method of any of claims 2-5, further comprising:
if the optimal video rendering cooperative configuration is determined to be a video rendering cooperative configuration in which the plurality of image rendering tasks are cooperatively allocated to the terminal device and the cloud server, sending an identifier and second indication information of a second image rendering task to the terminal device;
the second image rendering task is an image rendering task which needs to be allocated to the terminal device in the plurality of image rendering tasks, and the second indication information is used for indicating the terminal device to render the second image rendering task.
8. The method according to claim 1, wherein before determining the optimal video rendering coordination configuration according to the video rendering capability of the terminal device, the method further comprises:
acquiring coding attributes of the plurality of image rendering tasks;
the determining an optimal video rendering cooperative configuration according to the video rendering capability of the terminal device includes:
and determining the optimal video rendering cooperative configuration according to the coding attributes of the image rendering tasks and the video rendering capability of the terminal equipment.
9. The method according to claim 8, wherein the determining an optimal video rendering coordination configuration according to the encoding properties of the plurality of image rendering tasks and the video rendering capability of the terminal device comprises:
judging whether at least one image rendering task needing to be allocated to the cloud server exists in the image rendering tasks according to the encoding attributes of the image rendering tasks;
if the at least one image rendering task exists in the plurality of image rendering tasks, the at least one image rendering task is distributed to the cloud server, and whether the rest image rendering tasks except the at least one image rendering task exist in the plurality of image rendering tasks is judged;
if the at least one image rendering task does not exist in the plurality of image rendering tasks, determining an optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment;
and if the remaining image rendering tasks exist in the plurality of image rendering tasks, determining an optimal video rendering cooperative configuration for the remaining image rendering tasks according to the video rendering capability of the terminal device.
10. The method according to claim 9, wherein the determining whether there is at least one image rendering task in the plurality of image rendering tasks that needs to be allocated to the cloud server according to the encoding attributes of the plurality of image rendering tasks comprises:
if the encoding attribute of a third image rendering task in the plurality of image rendering tasks meets a corresponding preset condition, determining the third image rendering task as the image rendering task needing to be allocated to the cloud server;
and if the encoding attribute of a fourth image rendering task in the plurality of image rendering tasks does not meet the corresponding preset condition, determining that the fourth image rendering task is not the image rendering task needing to be allocated to the cloud server.
11. The method of claim 10, wherein the encoding property is any one of: coding complexity, coding quality and coding delay.
12. The method of claim 11, wherein if the coding property is coding complexity; judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets a corresponding preset condition, including:
and judging whether the coding complexity of the third image rendering task or the fourth image rendering task reaches a preset coding complexity.
13. The method of claim 11, wherein if the coding property is coding quality; judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets a corresponding preset condition, including:
and judging whether the coding quality of the third image rendering task or the fourth image rendering task reaches the preset coding quality.
14. The method of claim 11, wherein if the coding property is coding delay; judging whether the encoding attribute of the third image rendering task or the fourth image rendering task meets a corresponding preset condition, including:
and judging whether the coding time delay of the third image rendering task or the fourth image rendering task reaches a preset coding time delay.
15. The method of any of claims 1-5, wherein the plurality of image rendering tasks correspond to a plurality of image rendering algorithms;
wherein, the image rendering tasks and the image rendering algorithms have a one-to-one correspondence relationship.
16. The method of claim 15, wherein the plurality of image rendering algorithms comprises at least two of: a sharpening processing algorithm, a noise reduction processing algorithm and a blurring processing algorithm.
17. A video rendering coordination method, comprising:
acquiring a video rendering capability of a terminal device from an operating system of the terminal device;
sending the video rendering capability of the terminal device to a cloud server, so that the cloud server determines an optimal video rendering cooperative configuration according to the video rendering capability of the terminal device; the optimal video rendering cooperative configuration is a video rendering cooperative configuration which allocates a plurality of image rendering tasks to the terminal device or the cloud server, or a video rendering cooperative configuration which allocates the plurality of image rendering tasks to the terminal device and the cloud server;
and receiving the optimal video rendering cooperative configuration and sending the optimal video rendering cooperative configuration to the terminal equipment.
18. A video rendering coordination method, comprising:
sending a video rendering capability of a terminal device to a client, so that a cloud server corresponding to the client determines an optimal video rendering cooperative configuration according to the video rendering capability of the terminal device; the optimal video rendering cooperative configuration is a video rendering cooperative configuration which allocates a plurality of image rendering tasks to the terminal device or the cloud server, or a video rendering cooperative configuration which allocates the plurality of image rendering tasks to the terminal device and the cloud server;
and acquiring the optimal video rendering cooperative configuration from the client, and performing rendering operation on the plurality of image rendering tasks according to the optimal video rendering cooperative configuration.
19. A video rendering coordination apparatus, comprising:
the sending module is used for sending a video rendering capability request to the terminal equipment;
a receiving module, configured to receive a video rendering capability response of the terminal device, where the video rendering capability response includes: video rendering capability of the terminal device;
the determining module is used for determining the optimal video rendering cooperative configuration according to the video rendering capability of the terminal equipment;
the optimal video rendering cooperative configuration is a video rendering cooperative configuration in which a plurality of image rendering tasks are allocated to the terminal device or the cloud server, or a video rendering cooperative configuration in which the plurality of image rendering tasks are allocated to the terminal device and the cloud server in a cooperative manner.
20. An electronic device, comprising:
a processor and a memory for storing a computer program, the processor for invoking and executing the computer program stored in the memory to perform the method of any one of claims 1 to 18.
CN202210101688.7A 2022-01-27 2022-01-27 Video rendering coordination method, device and equipment Pending CN114205359A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210101688.7A CN114205359A (en) 2022-01-27 2022-01-27 Video rendering coordination method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210101688.7A CN114205359A (en) 2022-01-27 2022-01-27 Video rendering coordination method, device and equipment

Publications (1)

Publication Number Publication Date
CN114205359A true CN114205359A (en) 2022-03-18

Family

ID=80658886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210101688.7A Pending CN114205359A (en) 2022-01-27 2022-01-27 Video rendering coordination method, device and equipment

Country Status (1)

Country Link
CN (1) CN114205359A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023191710A1 (en) * 2022-03-31 2023-10-05 脸萌有限公司 End-cloud collaboration media data processing method and apparatus, device and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120110185A1 (en) * 2010-10-29 2012-05-03 Cisco Technology, Inc. Distributed Hierarchical Rendering and Provisioning of Cloud Services
CN106447755A (en) * 2016-09-27 2017-02-22 上海斐讯数据通信技术有限公司 Animation rendering system
CN109173244A (en) * 2018-08-20 2019-01-11 贵阳动视云科技有限公司 Game running method and device
CN110393921A (en) * 2019-08-08 2019-11-01 腾讯科技(深圳)有限公司 Processing method, device, terminal, server and the storage medium of cloud game
CN111818120A (en) * 2020-05-20 2020-10-23 北京元心科技有限公司 End cloud user interaction method and system, corresponding equipment and storage medium
CN112473130A (en) * 2020-11-26 2021-03-12 成都数字天空科技有限公司 Scene rendering method and device, cluster, storage medium and electronic equipment
CN112565884A (en) * 2020-11-27 2021-03-26 北京达佳互联信息技术有限公司 Image processing method, image processing device, terminal, server and storage medium
CN112614202A (en) * 2020-12-24 2021-04-06 北京元心科技有限公司 GUI rendering display method, terminal, server, electronic device and storage medium
CN112669428A (en) * 2021-01-06 2021-04-16 南京亚派软件技术有限公司 BIM (building information modeling) model rendering method based on server and client cooperation
CN113313807A (en) * 2021-06-28 2021-08-27 完美世界(北京)软件科技发展有限公司 Picture rendering method and device, storage medium and electronic device
CN113613066A (en) * 2021-08-03 2021-11-05 天翼爱音乐文化科技有限公司 Real-time video special effect rendering method, system, device and storage medium
CN113838184A (en) * 2020-06-08 2021-12-24 华为技术有限公司 Rendering method, device and system

Similar Documents

Publication Publication Date Title
CN114501062B (en) Video rendering coordination method, device, equipment and storage medium
US10924783B2 (en) Video coding method, system and server
CN111314741B (en) Video super-resolution processing method and device, electronic equipment and storage medium
CN113457160B (en) Data processing method, device, electronic equipment and computer readable storage medium
US20210316213A1 (en) Apparatus and method for providing streaming video by application program
CN112843676B (en) Data processing method, device, terminal, server and storage medium
CN106162232A (en) video playing control method and device
CN110290398B (en) Video issuing method and device, storage medium and electronic equipment
CN114339412B (en) Video quality enhancement method, mobile terminal, storage medium and device
US20240098316A1 (en) Video encoding method and apparatus, real-time communication method and apparatus, device, and storage medium
CN111327921A (en) Video data processing method and device
CN113766270A (en) Video playing method, system, server, terminal equipment and electronic equipment
CN114205359A (en) Video rendering coordination method, device and equipment
CN116567346A (en) Video processing method, device, storage medium and computer equipment
US20160142723A1 (en) Frame division into subframes
US20230018087A1 (en) Data coding method and apparatus, and computer-readable storage medium
WO2011135554A1 (en) Method and apparatus for allocating content components to different hardware interfaces
CN116244231A (en) Data transmission method, device and system, electronic equipment and storage medium
KR102231875B1 (en) Apparatus and method for providing streaming video or application program
WO2023142714A1 (en) Video processing collaboration method, apparatus, device, and storage medium
CN104221393A (en) Content adaptive video processing
CN115550690B (en) Frame rate adjusting method, device, equipment and storage medium
CN114615546B (en) Video playing method and device, electronic equipment and storage medium
CN110069570B (en) Data processing method and device
CN112887742B (en) Live stream processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40071910

Country of ref document: HK

RJ01 Rejection of invention patent application after publication

Application publication date: 20220318