CN112533059B - Image rendering method and device, electronic equipment and storage medium - Google Patents

Image rendering method and device, electronic equipment and storage medium

Info

Publication number
CN112533059B
CN112533059B (application CN202011308285.7A)
Authority
CN
China
Prior art keywords
key frame
image
frame data
target
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011308285.7A
Other languages
Chinese (zh)
Other versions
CN112533059A (en)
Inventor
袁俊晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011308285.7A priority Critical patent/CN112533059B/en
Publication of CN112533059A publication Critical patent/CN112533059A/en
Application granted granted Critical
Publication of CN112533059B publication Critical patent/CN112533059B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402 Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs

Abstract

The application discloses an image rendering method and apparatus, an electronic device, and a storage medium. The image rendering method comprises the following steps: receiving image data to be decoded for a target cloud application, where the image data comprises at least key frame data; detecting, while decoding the image data, whether the key frame data is abnormal; determining abnormal key frame data as target key frame data, and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain standby key frame data corresponding to the target key frame data, calling that standby key frame data from the target cloud application terminal; and rendering an application image of the target cloud application based on the image data and the called standby key frame data. This scheme can render the application image of the cloud application normally even when the network condition is poor or the delay is large.

Description

Image rendering method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to an image rendering method, an image rendering device, electronic equipment and a storage medium.
Background
More and more users watch videos online through terminals such as mobile phones, tablet computers and personal computers, and with the development of mobile terminals and network technologies, games can also present their content to users through image rendering. In cloud gaming, the player's terminal does not need strong graphics and data processing capabilities; it only needs basic streaming-media playback capability and the ability to capture the player's input instructions and send them to the cloud server.
However, when the network condition is poor or the delay is large, video data may be lost, which can cause the game played on the player's terminal to stutter.
Disclosure of Invention
The application provides an image rendering method and apparatus, an electronic device, and a storage medium, which can render application images of cloud applications normally even when the network condition is poor or the delay is large.
The application provides an image rendering method, which comprises the following steps:
receiving image data to be decoded by a target cloud application, wherein the image data at least comprises key frame data;
detecting whether the key frame data is abnormal or not when the image data is decoded;
determining abnormal key frame data as target key frame data, and detecting whether a target cloud application terminal contains standby key frame data corresponding to the target key frame data;
when the target cloud application terminal is detected to contain standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal;
rendering an application image of the target cloud application based on the image data and the called standby key frame data.
Correspondingly, the present application also provides an image rendering apparatus, comprising:
the receiving module is used for receiving image data to be decoded by a target cloud application, and the image data at least comprises key frame data;
the detection module is used for detecting whether the key frame data are abnormal or not when the image data are decoded;
the determining module is used for determining the abnormal key frame data as target key frame data and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data;
the calling module is used for calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal when the target cloud application terminal is detected to contain the standby key frame data corresponding to the target key frame data;
and the rendering module is used for rendering the application image of the target cloud application based on the image data and the called standby key frame data.
Optionally, in some embodiments of the present application, the rendering module includes:
the replacing submodule is used for replacing abnormal key frame data in the image data by using the called standby key frame data to obtain replaced image data;
a rendering submodule for rendering an application image of the target cloud application based on the replaced image data.
Optionally, in some embodiments of the present application, the rendering sub-module includes:
the determining unit is used for determining currently decoded key frame data in the replaced image data to obtain current key frame data;
the decoding unit is used for decoding the standby key frame data to obtain a standby application picture corresponding to the standby key frame data if the current key frame data is the standby key frame data, wherein the standby application picture is an application picture only containing target image elements;
and the rendering unit is used for restoring the target image element in the application picture by using the standby application picture so as to render the application image of the target cloud application.
Optionally, in some embodiments of the present application, the rendering unit includes:
a determining subunit, configured to determine a non-target image element that needs to be loaded;
an obtaining subunit, configured to obtain the non-target image element from the target cloud application according to a current network intensity value;
and the rendering subunit is configured to render the application image of the target cloud application based on the acquired non-target image element and the target image element in the standby application screen.
Optionally, in some embodiments of the present application, the obtaining subunit includes:
the network detection unit is used for detecting the current network intensity value;
the first policy determining unit is used for determining an element acquisition policy as a first acquisition policy when the network strength value is greater than or equal to a preset value, and acquiring the non-target image element from the target cloud application based on the first acquisition policy;
and the second policy determining unit is used for determining an element acquisition policy as a second acquisition policy when the network strength value is smaller than a preset value, and acquiring the non-target image element from the target cloud application based on the second acquisition policy.
Optionally, in some embodiments of the present application, the first policy determining unit is specifically configured to:
determining the image resolution of the target image element to obtain a first image resolution;
and acquiring non-target image elements with the image resolution being the first image resolution from the target cloud application.
Optionally, in some embodiments of the present application, the second policy determining unit is specifically configured to:
determining the image resolution corresponding to the second acquisition strategy to obtain a second image resolution;
obtaining non-target image elements with an image resolution of a second image resolution from the target cloud application.
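The two acquisition policies above can be sketched as a simple resolution-selection rule. The threshold value, the fallback resolution, and all function names below are illustrative assumptions, not part of the disclosed implementation:

```python
# Hypothetical sketch of the network-strength-based element-acquisition
# policy: above a preset threshold, non-target elements are fetched at
# the same resolution as the target elements (first policy); below it,
# at a policy-defined lower resolution (second policy).

NETWORK_THRESHOLD = 50                 # preset network-strength value (assumed units)
SECOND_POLICY_RESOLUTION = (640, 360)  # assumed reduced resolution for weak networks

def choose_element_resolution(network_strength, target_resolution):
    """Return the resolution at which non-target image elements are fetched."""
    if network_strength >= NETWORK_THRESHOLD:
        # First acquisition policy: match the target elements' resolution.
        return target_resolution
    # Second acquisition policy: use the lower, policy-defined resolution.
    return SECOND_POLICY_RESOLUTION
```

At the boundary (`network_strength == NETWORK_THRESHOLD`), the first policy applies, matching the "greater than or equal to" condition in the description.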
Optionally, in some embodiments, the invoking module is specifically configured to:
extracting a data tag corresponding to the target key frame data;
and calling standby key frame data corresponding to the target key frame data from the target cloud application terminal according to the extracted data tag.
Optionally, in some embodiments of the present application, the apparatus further includes a downloading module, where the downloading module is specifically configured to:
and downloading standby key frame data corresponding to the target cloud application.
In this application, the cloud application runs on a server. After receiving image data to be decoded for a target cloud application, where the image data comprises at least key frame data, it is detected while decoding the image data whether the key frame data is abnormal. Abnormal key frame data is then determined as target key frame data, and it is detected whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data. When it does, the standby key frame data corresponding to the target key frame data is called from the target cloud application terminal, and finally an application image of the target cloud application is rendered based on the image data and the called standby key frame data. This scheme can therefore render the application image of the cloud application normally when the network condition is poor or the delay is large.
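The end-to-end flow summarized above can be sketched as follows. The frame representation (dicts with `type`, `tag`, `ok` fields) and the standby store are assumptions for illustration only, not the disclosed data format:

```python
# Illustrative sketch of the overall method: walk the incoming frames,
# and when a key frame is abnormal, substitute the locally stored
# standby key frame (if one exists) before rendering.

def render_stream(frames, standby_store):
    """frames: list of dicts with 'type', 'tag', 'ok' fields.
    standby_store: maps a key-frame tag to standby key-frame data.
    Returns the list of frames actually used for rendering."""
    rendered = []
    for frame in frames:
        if frame["type"] == "key" and not frame["ok"]:
            # Abnormal key frame: try the local standby copy.
            standby = standby_store.get(frame["tag"])
            if standby is not None:
                rendered.append(standby)
                continue
        rendered.append(frame)
    return rendered
```

When no standby copy exists, the abnormal frame is passed through unchanged; the description's later embodiments govern what happens in that case.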
Drawings
In order to more clearly illustrate the technical solutions in the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1a is a scene schematic diagram of an image rendering method provided by the present application;
FIG. 1b is a schematic flowchart of an image rendering method provided in the present application;
FIG. 2a is another schematic flow chart of an image rendering method provided in the present application;
FIG. 2b is a schematic diagram of an image rendering system provided herein;
FIG. 2c is a schematic diagram of a game screen in the image rendering method provided by the present application;
FIG. 2d is another schematic diagram of an image rendering system provided herein;
FIG. 3a is a schematic structural diagram of an image rendering apparatus provided in the present application;
FIG. 3b is a schematic structural diagram of an image rendering apparatus provided in the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the present application will be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The application provides an image rendering method, an image rendering device, electronic equipment and a storage medium.
The image rendering device may be specifically integrated in a terminal, and the terminal may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, or the like, but is not limited thereto. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
The cloud application may be a cloud game or a cloud phone. Cloud gaming, also called gaming on demand, is an online gaming technology based on cloud computing. It enables thin clients with relatively limited graphics processing and data computing capabilities to run high-quality games. In a cloud-gaming scenario, the game runs not on the player's game terminal but on a cloud server, which renders the game scene into a video and audio stream and transmits it to the player's terminal over the network. The player's terminal therefore does not need strong graphics and data processing capabilities; it only needs basic streaming-media playback capability and the ability to capture the player's input instructions and send them to the cloud server. A cloud phone applies cloud computing technology to network terminal services: it is a smartphone deeply integrated with network services, realizing cloud services through a cloud server, and it can implement numerous functions over the network by means of its own system and the network terminals erected by the manufacturer.
For example, referring to FIG. 1a and taking a cloud game as an example, the image rendering apparatus is integrated in a terminal 10. When the user starts the cloud game, the terminal 10 acquires the image data to be decoded of the cloud game from the server 20 in real time, where the image data includes at least key frame data. The terminal 10 then decodes the acquired image data and, while decoding, detects whether the key frame data is abnormal. The terminal 10 determines abnormal key frame data as target key frame data and detects whether it locally contains standby key frame data corresponding to the target key frame data. When the terminal 10 detects that it locally contains such standby key frame data, it calls the standby key frame data corresponding to the target key frame data, and finally the terminal 10 renders a game image of the cloud game based on the image data and the called standby key frame data.
With this image rendering method, when the target cloud application terminal is detected to contain standby key frame data corresponding to the target key frame data, that standby key frame data is called from the target cloud application terminal. Even when the network condition is poor or the delay is large, the game image of the cloud game can be rendered based on the image data and the called standby key frame data, avoiding the stutter of the application image that lost key frame data would otherwise cause when running the cloud application.
The following are detailed below. It should be noted that the description sequence of the following embodiments is not intended to limit the priority sequence of the embodiments.
An image rendering method comprises: receiving image data to be decoded for a target cloud application; detecting, while decoding the image data, whether the key frame data is abnormal; determining abnormal key frame data as target key frame data and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain such standby key frame data, calling it from the target cloud application terminal; and rendering an application image of the target cloud application based on the image data and the called standby key frame data.
Referring to fig. 1b, fig. 1b is a schematic flowchart of an image rendering method provided in the present application. The specific flow of the image rendering method may be as follows:
101. receiving image data to be decoded by a target cloud application.
The image data includes at least key frame data.
cloud applications are a subset of the concept of "cloud computing", which is the embodiment of cloud computing technology at the application layer. The biggest difference between cloud applications and cloud computing is that cloud computing exists as a macro technology development concept, and cloud applications are products that directly face customers to solve practical problems. The working principle of cloud application is a novel application that the using mode of local installation and local operation of traditional software is changed into a service of instant taking and using, and a remote server cluster is connected and controlled through the internet or a local area network to complete service logic or operation tasks. The main carrier of the cloud application is internet technology, and the interface is essentially the integration of technologies such as HTML5, Javascript, or Flash in the presentation form of a Thin Client (Thin Client) or a Smart Client (Smart Client).
To reduce the amount of data during transmission, the server currently encodes the application images of the cloud application into image data, whose format may be AVC (Advanced Video Coding, the H.264 coding standard) or HEVC (High Efficiency Video Coding, the H.265 coding standard); the image data may further include predicted frame data. Taking the H.264 coding standard as an example, several image frames are grouped together to form a group of pictures (GOP). The first image frame of a GOP must be a key frame, which ensures that the GOP can be decoded independently without referring to other images. The key frame is also called an I frame (intra-coded frame); an I frame is a full-frame compression-coded frame produced by intra-frame coding. Full-frame image information is compression-coded and transmitted, and during decoding a complete image is reconstructed using only the data of the I frame. The I frame describes the details of the image background and the moving subject; that is, it exploits only the spatial correlation within a single frame, not the temporal correlation between frames.
An I frame uses intra-frame compression without motion compensation; because it does not depend on other frames, it is both a random-access entry point and a reference frame for decoding. Predicted frames include forward predicted frames and bidirectional predicted frames. A forward predicted frame, also called a P frame (predicted frame), represents the difference between this frame and the previous key frame (or P frame). During decoding, the difference defined by this frame is superimposed on the previously buffered picture to generate the final picture; that is, a P frame carries no complete picture data, only the difference from the previous frame's picture. A P frame uses an I frame as its reference frame: the predicted value and motion vector of a given point in the P frame are found in the I frame, and the prediction difference and the motion vector are transmitted together. At the receiving end, the predicted value of that point is found in the I frame according to the motion vector and added to the difference value to obtain the sample value of the point, thereby reconstructing the complete P frame. A P frame refers only to the I frame or P frame nearest to it. A bidirectional predicted frame, also called a B frame (bi-directional predicted frame), likewise uses inter-frame coding, with bidirectional temporal prediction: a B frame uses the preceding I or P frame and the following P frame as reference frames, finds the predicted value and two motion vectors of a given point, and transmits the prediction difference and the motion vectors.
The receiving end (such as the terminal of this application) finds (calculates) the predicted value in the two reference frames according to the motion vectors and sums it with the difference value to obtain the sample value of the point, thereby reconstructing the complete B frame. A B frame is thus predicted from the preceding I or P frame and the following P frame, and transmits the prediction error and motion vectors between itself and both reference frames. Because a B frame only reflects the change of the moving subject between its two reference frames, its prediction is more accurate. It should be noted that a B frame is not used as a reference when decoding other frames, so it does not propagate decoding errors.
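The I/P-frame relationship described above can be illustrated with a toy difference-coding sketch. Real H.264 uses motion-compensated macroblock prediction, not raw per-pixel differences, so this is a simplification of the principle only:

```python
# Toy illustration of I/P frames: a P frame carries only per-pixel
# differences from its reference frame, and the decoder reconstructs it
# by adding those differences back onto the buffered reference.

def encode_p_frame(reference, current):
    """Store only the difference between the current frame and its reference."""
    return [cur - ref for cur, ref in zip(current, reference)]

def decode_p_frame(reference, diff):
    """Superimpose the transmitted difference on the buffered reference."""
    return [ref + d for ref, d in zip(reference, diff)]

i_frame = [10, 20, 30, 40]   # key frame: complete picture data
p_source = [12, 20, 28, 40]  # next picture
diff = encode_p_frame(i_frame, p_source)
assert decode_p_frame(i_frame, diff) == p_source
```

The sketch also shows why a lost I frame is so damaging: without `reference`, none of the dependent difference frames can be reconstructed, which is exactly the failure the standby key frame mechanism addresses.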
Specifically, the image data to be decoded may be obtained from the server through a wired network or a wireless network, as determined by the actual situation, which is not described further here.
It should be noted that when the user starts or installs the target cloud application for the first time, the standby key frame data corresponding to the target cloud application may be downloaded; alternatively, after the target cloud application is installed, the standby key frame data corresponding to the target cloud application may be downloaded when the network condition is good.
102. While image decoding is performed on the image data, it is detected whether the key frame data is abnormal.
To facilitate the storage and transmission of video content, it is usually necessary to reduce its volume, that is, to compress the original video images; a compression algorithm is also called an encoding format for short. Video encoding compresses the original video images into a binary byte stream through prediction, transformation, quantization, reordering, and entropy coding in sequence. Entropy-coding methods include context-based adaptive variable-length coding (CAVLC) and context-based adaptive binary arithmetic coding (CABAC).
In the H.264 standard coding system, a video image exhibits the following characteristics after prediction, transformation, and quantization coding: the 4x4 blocks of residual data are sparse, with non-zero coefficients concentrated mainly in the low-frequency part while high-frequency coefficients are mostly zero; after the quantized data is scanned, the non-zero coefficient values near the DC component are larger, and most non-zero values at high-frequency positions are +1 or -1; and the numbers of non-zero coefficients of adjacent 4x4 blocks are correlated. CAVLC exploits these characteristics of the coded residual data, adaptively selects different code tables, and performs lossless entropy coding of the residual data with less coded data, further reducing the redundancy and correlation of the coded data and improving the compression efficiency of H.264.
CABAC coding aims to compress once more from the perspective of probability. The H.264 standard divides the data that may appear in one image slice into 399 context models, each with its own context serial number, and each different character indexes its own probability lookup table according to the corresponding context model. After a character is received, the serial number of its context model is found first; the probability lookup table corresponding to the character is then determined from that serial number; and the probability estimate of the model found by context modeling drives the adaptive binary arithmetic coder. The probability estimates are those updated in the previous context-modeling phase. After each binary value is coded, the probability estimate is adjusted according to the binary symbol just coded. This is a special case of arithmetic coding: it works the same way as general arithmetic coding, except that the binary arithmetic coding sequence contains only the symbols "0" and "1", and the only probabilities involved are P(0) and P(1). Through these steps, the CABAC coding process is completed.
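The adaptive probability update at the heart of CABAC-style coding can be sketched with a counts-based estimator. The real standard uses a finite-state probability table per context model rather than raw counts, so this is only an illustration of the adaptation idea:

```python
# Simplified sketch of per-context adaptive probability estimation:
# each context model keeps its own estimate of P(0)/P(1), adjusted
# after every coded binary symbol, as described above.

class ContextModel:
    def __init__(self):
        self.counts = [1, 1]  # Laplace-smoothed counts of 0s and 1s

    def p_one(self):
        """Current estimate of P(1) for this context."""
        return self.counts[1] / sum(self.counts)

    def update(self, bit):
        """Adjust the estimate after coding one binary symbol."""
        self.counts[bit] += 1

ctx = ContextModel()
for bit in [1, 1, 1, 0, 1]:
    ctx.update(bit)
# After seeing mostly 1s, the model assigns 1 a higher probability.
assert ctx.p_one() > 0.5
```

An arithmetic coder driven by such a model spends fewer bits on the more probable symbol, which is where the compression gain over fixed-probability coding comes from.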
Specifically, the server transmits the encoded binary byte stream to the user's terminal device over the network, and the terminal device decodes the byte stream after acquiring it. For example, the server may compress video images in the H.264 encoding format; upon receiving the compressed video content sent by the server, the terminal must decompress (decode) it. In video codec terms, the encoder encodes multiple images to produce a GOP, and during playback the decoder reads a GOP, decodes it, and then reads the pictures for rendering and display. A GOP is a group of continuous pictures consisting of one key frame and several reference frames; there is only one key frame per group, and it is a complete picture, while forward predicted frames and bidirectional predicted frames record changes relative to it. Therefore the key frame can be decoded independently, a forward predicted frame depends on the previous frame image for decoding, and a bidirectional predicted frame depends on both the previous and the next frame image.
For image rendering, it is most important to find the key frame in the image data and decode it to render the application image of the cloud application. How to extract the key frame's information is therefore important for the image rendering task; one method of extracting key frames is based on motion analysis.
In the motion-analysis method, the optical flow of object motion is analyzed, where the optical flow is calculated from the correlation of each pixel between video frames. Given two temporally adjacent frames as input, the optical flow gives the displacement of each pixel such that, after the displacement, each pixel's position is consistent with its position in the next frame; the video frame with the least optical-flow motion is selected each time as the extracted key frame. The key frame extraction method is not limited in this application and is chosen according to the actual situation.
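The least-motion selection rule above can be sketched using total frame difference as a crude stand-in for optical-flow magnitude; a real implementation would compute dense optical flow (e.g. with a vision library), so treat this as an illustration of the selection criterion only:

```python
# Sketch of motion-analysis key-frame selection: pick the frame with
# the least motion relative to its predecessor. Frames are flat lists
# of pixel intensities for simplicity.

def motion_magnitude(prev, curr):
    """Sum of absolute per-pixel changes: a crude optical-flow proxy."""
    return sum(abs(c - p) for c, p in zip(curr, prev))

def select_key_frame(frames):
    """Return the index of the frame with the least motion relative to
    its predecessor (frame 0 has no predecessor and is skipped)."""
    best_idx, best_motion = 1, float("inf")
    for i in range(1, len(frames)):
        m = motion_magnitude(frames[i - 1], frames[i])
        if m < best_motion:
            best_idx, best_motion = i, m
    return best_idx
```

Low-motion frames make good standby key frames because they are dominated by the static image elements (maps, rooms) that the later embodiments single out.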
Therefore, while decoding the key frame data, it can be detected whether key frame data has been lost due to network faults or large delay, which would affect the subsequent rendering of the application image; when abnormal key frame data is detected, step 103 is executed.
103. And determining the abnormal key frame data as target key frame data, and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data.
For example, when some key frame data is lost due to network jitter, the abnormal key frame data is determined as target key frame data, and it is detected whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; step 104 is then performed.
104. And when the target cloud application terminal is detected to contain the standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal.
For example, if the target cloud application terminal (i.e., the local device) contains standby key frame data corresponding to the target key frame data, the data tag corresponding to the target key frame data is extracted, and the standby key frame data corresponding to that data tag is then called from the target cloud application terminal.
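Steps 103 and 104 can be sketched as a tag-indexed lookup in a locally pre-downloaded standby cache. The class, method names, and tag format are illustrative assumptions, not the disclosed data structures:

```python
# Hypothetical sketch: the abnormal key frame's data tag is extracted
# and used to look up standby key-frame data downloaded in advance
# (e.g. at first launch, as the description notes).

class StandbyKeyFrameCache:
    def __init__(self):
        self._store = {}

    def download(self, tag, data):
        """Pre-fetch standby key-frame data for a given tag."""
        self._store[tag] = data

    def contains(self, tag):
        return tag in self._store

    def call(self, tag):
        """Return the standby key frame for this tag, or None."""
        return self._store.get(tag)

def recover_key_frame(abnormal_frame, cache):
    tag = abnormal_frame["tag"]  # extract the data tag (step 104)
    if cache.contains(tag):      # detection of step 103
        return cache.call(tag)   # substitute the standby data
    return None                  # no local standby: cannot recover
```

The tag lookup is what makes the substitution safe: only a standby frame prepared for exactly this key frame position replaces the lost data.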
It should be noted that the standby key frame is a key frame acquired locally in advance from the server, and the image elements corresponding to the standby key frame are preset by the server or by operation and maintenance staff. For example, the image elements corresponding to the standby key frame are static image elements, such as a house, a table, a chair, or a fence. That is, the server may collect candidate video data of a plurality of candidate videos and identify the key frame corresponding to the static image elements in each candidate image group to obtain the standby key frame, and the terminal may download the standby key frame data corresponding to the target cloud application from the server. It should also be noted that, in a cloud game, dynamic image elements such as running characters require the server and the client to interact and return data in real time because of the large amount of picture matching, numerical calculation, and the like involved; for static image elements such as maps and rooms, the positions of the static image elements in the cloud game are usually fixed, so the server and the client do not need to interact and return data in real time. Therefore, in order to avoid an error between the restored image group and the original image group (i.e., the restored image elements being inconsistent with the actual image elements), the static image elements are determined as the target image elements in the present application.
105. Rendering an application image of the target cloud application based on the image data and the called standby key frame data.
For example, the application image of the target cloud application may be rendered based on the called standby key frame data. For instance, when the network condition is poor or the delay is large, the called standby key frame data is used to replace the abnormal key frame data in the image data so as to render the application image of the target cloud application. That is, optionally, in some embodiments, the step "rendering the application image of the target cloud application based on the image data and the called standby key frame data" may specifically include:
(31) replacing abnormal key frame data in the image data by using the called standby key frame data to obtain replaced image data;
(32) rendering an application image of the target cloud application based on the replaced image data.
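Steps (31)–(32) above can be illustrated with a minimal sketch. The dictionary layout of the frames (`tag`, `is_key`, `abnormal`, `payload` fields) is a hypothetical representation chosen for illustration, not the actual bitstream format:

```python
def replace_abnormal_key_frames(image_data, standby_cache):
    """Step (31): for each key frame flagged abnormal, substitute the
    locally cached standby key frame data stored under the same data
    tag; untouched frames pass through unchanged.  The returned list is
    the 'replaced image data' that step (32) renders from."""
    replaced = []
    for frame in image_data:
        if frame["is_key"] and frame["abnormal"] and frame["tag"] in standby_cache:
            frame = {**frame,
                     "payload": standby_cache[frame["tag"]],
                     "abnormal": False,
                     "standby": True}  # mark so step (42) can detect it
        replaced.append(frame)
    return replaced
```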
In the case of a cloud mobile phone, if the application image of the cloud mobile phone is the boot interface of the cloud mobile phone, the boot interface may be rendered based on the replaced image data. For example, the standby key frame data is decoded to obtain a standby application picture, and the standby application picture is then used to restore the target image element in the application picture so as to render the application image of the target cloud application. That is, optionally, in some embodiments, the step "rendering the application image of the target cloud application based on the replaced image data" may specifically include:
(41) determining currently decoded key frame data in the replaced image data to obtain current key frame data;
(42) if the current key frame data is the standby key frame data, decoding the standby key frame data to obtain a standby application picture corresponding to the standby key frame data;
(43) and restoring the target image element in the application picture by using the standby application picture so as to render the application image of the target cloud application.
It can be appreciated that the standby application picture is an application picture that contains only the target image element, since the standby application picture is decoded from the standby key frame. In a cloud game, however, the game image includes both static images (such as houses, flowers, and defense towers) and dynamic images (such as game characters, game vehicles, and game mechanisms). Since the cloud application terminal cannot predict the change of a dynamic image, such as a game character operated by the player, in some embodiments the static image elements are determined as the target image elements. That is, after the abnormal key frame data in the image data is replaced to obtain the replaced image data, the non-target image elements that need to be acquired from the server are determined, for example, the dynamic image elements in the game image, such as a game character or a game vehicle controlled by the player; the application image of the target cloud application is then rendered based on the acquired non-target image elements and the target image elements of the standby application picture. That is, optionally, in some embodiments, the step "restoring the target image element in the application picture by using the standby application picture so as to render the application image of the target cloud application" may specifically include:
(51) determining non-target image elements needing to be loaded;
(52) acquiring non-target image elements from the target cloud application according to the current network intensity value;
(53) and rendering the application image of the target cloud application based on the acquired non-target image elements and the target image elements in the standby application picture.
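Steps (51)–(53) above can be loosely illustrated as follows. Representing the application picture as a dictionary of named elements is an assumption made purely for illustration:

```python
def compose_application_image(standby_elements, fetched_elements):
    """Step (53): overlay the dynamic non-target elements fetched from
    the server onto the static target elements recovered from the
    standby application picture.  Fetched elements win on conflicts
    because they reflect the live game state."""
    composed = dict(standby_elements)   # static: house, map, walls, ...
    composed.update(fetched_elements)   # dynamic: character, vehicle, ...
    return composed
```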
In this embodiment, since the non-target image elements need to be acquired from the target cloud application, the network intensity value at the time of acquisition needs to be considered, and different element acquisition strategies correspond to different environments (that is, different network intensity values). That is, optionally, in some embodiments, the step "acquiring the non-target image elements from the target cloud application according to the current network intensity value" may specifically include:
(61) detecting a current network strength value;
(62) when the network intensity value is larger than or equal to a preset value, determining an element acquisition strategy as a first acquisition strategy, and acquiring non-target image elements from a target cloud application based on the first acquisition strategy;
(63) and when the network intensity value is smaller than a preset value, determining an element acquisition strategy as a second acquisition strategy, and acquiring non-target image elements from the target cloud application based on the second acquisition strategy.
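Steps (61)–(63) above amount to a threshold comparison. A minimal sketch, with all parameter names assumed for illustration:

```python
def choose_acquisition_policy(network_strength, preset_value,
                              first_resolution, second_resolution):
    """Compare the detected network strength value against the preset
    value: at or above it, use the first acquisition strategy (same
    resolution as the target image elements); below it, fall back to
    the second strategy, whose lower resolution keeps rendering fluent
    on a weak network."""
    if network_strength >= preset_value:
        return ("first", first_resolution)
    return ("second", second_resolution)
```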
For example, when the network intensity value is greater than or equal to the preset value, non-target image elements at the image resolution corresponding to the first acquisition strategy are acquired; similarly, when the network intensity value is smaller than the preset value, non-target image elements at the image resolution corresponding to the second acquisition strategy are acquired. That is, optionally, the step "acquiring non-target image elements from the target cloud application based on the first acquisition strategy" is specifically: determining the image resolution of the target image elements to obtain a first image resolution, and acquiring non-target image elements whose image resolution is the first image resolution from the target cloud application. The step "acquiring non-target image elements from the target cloud application based on the second acquisition strategy" is specifically: determining the image resolution corresponding to the second acquisition strategy to obtain a second image resolution, and acquiring non-target image elements whose image resolution is the second image resolution from the target cloud application.
The image resolution refers to the amount of information stored in an image, that is, how many pixels there are per inch of the image; the unit of resolution is PPI (Pixels Per Inch), and the image resolution may also be expressed as "horizontal pixel count × vertical pixel count". The image resolution of the pre-obtained standby image elements may be higher than, or the same as, the image resolution of the application image; for example, the image resolution of the video to be played may be 720 × 576, while the image resolution of the pre-obtained standby image elements may be 1920 × 1080 or 720 × 576. This may be selected according to the actual situation and is not described again here.
The preset value can be set according to actual requirements; for example, it can be set according to the memory occupied by the target image elements. When the network intensity value is greater than or equal to the preset value, non-target image elements with the same resolution as the standby image can be obtained from the server.

When the network intensity value is smaller than the preset value, non-target image elements with a resolution lower than that of the standby image can be obtained according to the second acquisition strategy. The second acquisition strategy, which may be preset, specifies the image resolution at which non-target image elements are downloaded from the server when the network intensity is smaller than the preset value; in order to improve the fluency of image rendering, the second image resolution is smaller than the image resolution of the standby image elements.
Image rendering depends on network communication and is affected by network delay; in the present application, when network communication quality is poor, the standby key frame data corresponding to the target key frame data may be called, and the application image may be restored from the called standby key frame data.
In the present application, after image data to be decoded of the target cloud application is received, it is detected during image decoding whether the key frame data is abnormal; abnormal key frame data is then determined as target key frame data, and it is detected whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain such standby key frame data, the standby key frame data corresponding to the target key frame data is called from the target cloud application terminal; and finally the application image of the target cloud application is rendered based on the image data and the called standby key frame data. When the network condition is poor or the delay is large, the game image of the cloud game can thus be rendered based on the image data and the called standby key frame data, avoiding stuttering of the application image caused by the loss of key frame data when the cloud application is run.
The method according to the examples is further described in detail below by way of example.
In the present embodiment, the image rendering apparatus will be described by taking an example in which it is specifically integrated in a terminal.
Referring to fig. 2a, an image rendering method may include the following specific processes:
201. the terminal receives image data to be decoded by the target cloud application.
The image data at least includes key frame data. Specifically, the terminal may obtain video data of a video to be played from the server through a wired or wireless network, and the format of the video data may be AVC (Advanced Video Coding, i.e., the H.264 coding standard) or HEVC (High Efficiency Video Coding, i.e., the H.265 coding standard).
In addition, when the user starts or installs the target cloud application for the first time, the terminal may download the standby key frame data corresponding to the target cloud application; of course, the terminal may also download the standby key frame data corresponding to the target cloud application when the network condition is good after the target cloud application is installed.
It should be noted that the standby key frame is a key frame acquired locally in advance from the server, and the image elements corresponding to the standby key frame are preset by the server or by operation and maintenance staff. For example, the image elements corresponding to the standby key frame are static image elements, such as a house, a table, a chair, or a fence. That is, the server may collect candidate video data of a plurality of candidate videos and identify the key frame corresponding to the static image elements in each candidate image group to obtain the standby key frame, and the terminal may download the standby key frame data corresponding to the target cloud application from the server.
202. When the terminal decodes the image data, whether the key frame data is abnormal is detected.
In order to facilitate storage and transmission of video content, it is generally necessary to reduce the volume of the video content, that is, to compress the original video image; the compression algorithm is also referred to as the encoding format. For example, the server transmits the encoded binary byte stream to the user's terminal device through the network, and the terminal device acquires and then decodes the binary byte stream. When decoding the key frame data, the terminal device may detect whether key frame data is missing due to network failure or large delay, and when an abnormality in the key frame data is detected, step 203 is executed.
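The detection described above can be sketched as a scan over decoded frame types: P/B frames arriving before any I frame has been seen signal key frame loss, since they have no reference frame to decode against. This is an illustrative simplification, not the decoder's actual logic:

```python
def detect_key_frame_loss(frame_types):
    """Scan a sequence of decoded frame types ('I', 'P', 'B') and
    return the positions of P/B frames that arrived without a
    preceding I frame -- the symptom of key frame loss caused by
    network failure or large delay."""
    missing = []
    have_key = False
    for i, ftype in enumerate(frame_types):
        if ftype == "I":
            have_key = True           # reference frame is available
        elif not have_key:
            missing.append(i)         # P/B frame with nothing to reference
    return missing
```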
203. The terminal determines the abnormal key frame data as target key frame data, and detects whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data.
For example, specifically, when some key frame data are lost due to network jitter, the terminal determines abnormal key frame data as target key frame data, detects whether the target cloud application terminal includes standby key frame data corresponding to the target key frame data, and then executes step 204.
204. And when the terminal detects that the target cloud application terminal contains the standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal.
For example, specifically, if the target cloud application terminal (i.e., local) includes the backup key frame data corresponding to the target key frame data, the terminal invokes the backup key frame data corresponding to the target key frame data from the target cloud application terminal.
205. And rendering the application image of the target cloud application by the terminal based on the image data and the called standby key frame data.
For example, the terminal may render the application image of the target cloud application based on the called standby key frame data; for instance, when the network condition is poor or the delay is large, the terminal replaces the abnormal key frame data in the image data with the called standby key frame data to render the application image of the target cloud application.
Further, after the terminal replaces the abnormal key frame data in the image data to obtain the replaced image data, the terminal determines the non-target image elements that need to be acquired from the server, and then renders the application image of the target cloud application based on the acquired non-target image elements and the target image elements of the standby application picture.
the terminal detects whether key frame data are abnormal or not when the terminal decodes the image data after receiving the image data to be decoded of the target cloud application, then determines the abnormal key frame data as the target key frame data, detects whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data or not, calls the standby key frame data corresponding to the target key frame data from the target cloud application terminal when the terminal detects that the target cloud application terminal contains the standby key frame data corresponding to the target key frame data, and finally renders the application image of the target cloud application based on the image data and the called standby key frame data, and calls the standby key frame data corresponding to the target key frame data from the target cloud application terminal when the terminal detects that the target cloud application terminal contains the standby key frame data corresponding to the target key frame data, under the condition that the network condition is not good or the delay is large, the terminal can render the game image of the cloud game based on the image data and the called standby key frame data, and the phenomenon that the application image is blocked when the cloud application is operated due to the loss of the key frame data is avoided.
In order to further explain the image rendering scheme of the present application, the present application provides an image rendering system (hereinafter referred to as the playing system), described here by taking a cloud game as an example. A cloud game (Cloud gaming), which may also be referred to as gaming on demand, is an online game technology based on cloud computing. Referring to fig. 2b, the cloud game performs all image-quality rendering work on the server and encodes the result into corresponding streaming media data. Key frame data of all static resources of the game, such as maps, walls, rooms, bridges, and trees, needs to be extracted during encoding and decoding, and which key frame data to preset is selected according to the scene settings of different games. A background developer extracts the game key frame data during game encoding and decoding and stores the extracted key frame data on the server, so that each cloud game client can obtain the data locally through a network request. When the cloud game application is started, it requests the preprocessed game key frame data from the server through an interface, and the acquired key frame data is stored in a local cache or a disk file.
For example, referring to fig. 2c, the cloud game screen includes a first game control 221, a second game control 222, a third game control 223, a game character 224, and a house 225. The position of the house 225 in the game is fixed, so a background developer extracts the key frame data corresponding to the house in the multiplayer online shooting game during game encoding and decoding; a user downloads the key frame data corresponding to the house together when downloading the client of the multiplayer online shooting game, and when the user subsequently starts the cloud game client, the key frame data corresponding to the house can be loaded locally.
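The preloading described above (download once, then serve from the local cache or disk file) might be sketched as follows; the `fetch` callback and the JSON cache file are stand-ins for the real server interface and storage format:

```python
import json
import os

def preload_key_frames(fetch, cache_path):
    """Return the preset key frame data, downloading it at most once.

    `fetch` stands in for the interface through which the client
    requests preprocessed game key frame data from the server;
    `cache_path` is the local disk file that caches it between
    sessions, so later launches need no network round trip."""
    if os.path.exists(cache_path):           # downloaded in an earlier session
        with open(cache_path) as f:
            return json.load(f)
    data = fetch()                           # e.g. on first launch
    with open(cache_path, "w") as f:         # persist for later sessions
        json.dump(data, f)
    return data
```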
Further, when the cloud game is started, the terminal establishes a link with the server and maintains a long-link communication state to acquire the streaming media data rendered by the server. The local codec acquires the streaming data from the server, decodes it with the coding and decoding algorithm, and parses the data into groups of GOP data, each containing I/P/B frame data. The frame data is rendered through a frame buffer: with the I frame in the GOP as the key frame, the P/B frames are rendered, with reference to the I frame data, into picture data frame by frame, and the player synchronously plays the pictures from the frame buffer data, thereby achieving the effect of playing the media data. When key frame data of the streaming media data is lost due to the network or other reasons and the local decoder cannot decode normally, the GOP data is abnormal: with the key frame lost, the P/B frames cannot be rendered into picture data frame by frame in the frame buffer with reference to the I frame data, and an abnormality of the key frame data is detected. At this point, it is necessary to detect whether preset key frame data exists in the local cache; if not, decoding fails and the player cannot synchronize the data, causing a garbled ("splash") screen. If, however, the relevant preset key frame data exists in the local cache, the preset key frame data can be taken out of the local cache and rendering restarted in combination with the current frame buffer data, achieving normal image quality so that the player can synchronize the data pictures and continue playing.
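The compensation step described above, splicing a locally cached key frame into a GOP whose opening I frame was lost, can be sketched as follows. The GOP representation is hypothetical: each frame is a dict carrying a `type` and a `gop_tag` used to look up the standby cache.

```python
def repair_gop(gop, standby_key_frames):
    """If the I frame that should open this GOP was lost in transit,
    splice in the preset key frame downloaded earlier from the server
    so the decoder can keep resolving P/B frames against it.  Returns
    (possibly repaired GOP, True) on success, or (GOP, False) when no
    standby copy exists -- the case that ends in a garbled screen."""
    if gop and gop[0]["type"] == "I":
        return gop, True                       # GOP intact, nothing to do
    tag = gop[0]["gop_tag"] if gop else None
    if tag in standby_key_frames:
        return [standby_key_frames[tag]] + gop, True
    return gop, False                          # decoding will fail
```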
It should be noted that when the user starts or installs the application for the first time or the network condition is good, the key frame data of the relevant map and scene (i.e. the key frame data corresponding to the target image element) is preset or downloaded locally. When some image quality key frames in the cloud game are lost due to network jitter and encoding and decoding cannot be carried out through the matching of I/P/B frames, locally preset key frame data can be used for compensation.
For another example, taking a live cloud game scene as an example, please refer to fig. 2d, where the live cloud game system includes a broadcaster a, a viewer b and a live server c, the image rendering device provided in the present application may be integrated in the broadcaster a and/or the viewer b, and for convenience of description, the following description specifically describes that the image rendering device may be integrated in the broadcaster a as an example.
Specifically, the anchor terminal a sends a request to the live broadcast server c according to the live broadcast user's instruction to start the game and the live broadcast, and the live broadcast server c determines and renders the game picture according to the request of the anchor terminal a and issues the rendered game picture. The anchor terminal a translates the live broadcast user's operations on the cloud game into operation instructions and uploads them to the live broadcast server c, together with the audio data recorded at the client. The anchor terminal a also records video of the live broadcast user during the operation of the cloud game, synthesizes it with the recorded audio to obtain audio and video, and sends the audio and video to the live broadcast server c. After receiving the audio and video, the live broadcast server c sends them to the viewer terminals b on the live broadcast platform so that the viewers on the live broadcast platform can play and watch them simultaneously.
During the live broadcast, the viewer terminal b may send gifts to the anchor terminal a to realize interaction between the audience and the anchor. When receiving a gift sent by the viewer terminal b, the anchor terminal a displays the image corresponding to the received gift: the anchor terminal a receives the image data corresponding to the gift prop sent by the live broadcast server c, detects whether the key frame data is abnormal when decoding the image data, determines the abnormal key frame data as target key frame data, and detects whether the anchor terminal a contains standby key frame data corresponding to the target key frame data. In the present application, the anchor terminal a may download in advance the key frame data corresponding to the props and the key frame data corresponding to the game buildings in the cloud game, so that when the network of the anchor terminal a jitters, the anchor terminal a can render the game image of the cloud game and the prop images corresponding to the props according to the standby key frame data.
In order to better implement the image rendering method of the present application, the present application further provides an image rendering device (rendering device for short) based on the foregoing image rendering method. The terms are the same as those in the image rendering method, and specific implementation details can refer to the description in the method embodiment.
Referring to fig. 3a, fig. 3a is a schematic structural diagram of an image rendering apparatus provided in the present application, where the rendering apparatus may include a receiving module 301, a detecting module 302, a determining module 303, a calling module 304, and a rendering module 305, which may specifically be as follows:
a receiving module 301, configured to receive image data to be decoded by a target cloud application.
The image data at least includes key frame data. Specifically, the receiving module 301 may obtain video data of a video to be played from the server through a wired or wireless network, and the format of the video data may be AVC (Advanced Video Coding, i.e., the H.264 coding standard) or HEVC (High Efficiency Video Coding, i.e., the H.265 coding standard).
Optionally, in some embodiments, referring to fig. 3b, the rendering apparatus may further include a downloading module 306, where the downloading module 306 may be specifically configured to: and downloading standby key frame data corresponding to the target cloud application.
The detecting module 302 is configured to detect whether the key frame data is abnormal when the image data is decoded.
The detection module 302 obtains the binary byte stream sent by the server, and then decodes the binary byte stream, and when the detection module 302 decodes the key frame data, it can detect whether the key frame data is missing due to poor network condition or large delay, and then sends the detection result to the determination module 303.
The determining module 303 is configured to determine the abnormal key frame data as target key frame data, and detect whether the target cloud application terminal includes standby key frame data corresponding to the target key frame data.
For example, specifically, when some key frame data are lost due to network jitter, the determining module 303 determines abnormal key frame data as target key frame data, detects whether the target cloud application terminal includes standby key frame data corresponding to the target key frame data, and then sends a detection result to the invoking module 304.
The calling module 304 is configured to, when it is detected that the target cloud application terminal includes the standby key frame data corresponding to the target key frame data, call the standby key frame data corresponding to the target key frame data from the target cloud application terminal.
For example, specifically, if the target cloud application terminal includes the backup key frame data corresponding to the target key frame data, the calling module 304 calls the backup key frame data corresponding to the target key frame data from the target cloud application terminal.
Optionally, in some embodiments, the invoking module 304 is specifically configured to: and extracting a data tag corresponding to the target key frame data, and calling standby key frame data corresponding to the target key frame data from the target cloud application terminal according to the extracted data tag.
A rendering module 305, configured to render an application image of the target cloud application based on the image data and the called standby key frame data.
For example, specifically, the rendering module 305 may render the application image of the target cloud application based on the called standby key frame data, for example, in a case that the terminal is in a bad network condition or has a large delay, the rendering module 305 replaces the abnormal key frame data in the image data with the called standby key frame data to render the application image of the target cloud application.
Optionally, in some embodiments, the rendering module 305 may specifically include:
the replacing submodule is used for replacing abnormal key frame data in the image data by using the called standby key frame data to obtain replaced image data;
and the rendering submodule is used for rendering the application image of the target cloud application based on the replaced image data.
Optionally, in some embodiments, the rendering sub-module may specifically include:
the determining unit is used for determining currently decoded key frame data in the replaced image data to obtain current key frame data;
the decoding unit is used for decoding the standby key frame data to obtain a standby application picture corresponding to the standby key frame data if the current key frame data is the standby key frame data, wherein the standby application picture is an application picture only containing target image elements;
and the rendering unit is used for restoring the target image element in the application picture by using the standby application picture so as to render the application image of the target cloud application.
Optionally, in some embodiments, the rendering unit may specifically include:
a determining subunit, configured to determine a non-target image element that needs to be loaded;
the obtaining subunit is configured to obtain a non-target image element from the target cloud application according to the current network intensity value;
and the rendering subunit is used for rendering the application image of the target cloud application based on the acquired non-target image elements and the target image elements in the standby application picture.
Optionally, in some embodiments, the obtaining subunit may specifically include:
the network detection unit is used for detecting the current network intensity value;
the first strategy determining unit is used for determining the element obtaining strategy as a first obtaining strategy when the network strength value is larger than or equal to a preset value, and obtaining non-target image elements from the target cloud application based on the first obtaining strategy;
and the second strategy determining unit is used for determining the element obtaining strategy as a second obtaining strategy when the network strength value is smaller than a preset value, and obtaining the non-target image elements from the target cloud application based on the second obtaining strategy.
Optionally, in some embodiments, the first policy determining unit may specifically be configured to: determining the image resolution of the target image element to obtain a first image resolution, and acquiring the non-target image element with the image resolution being the first image resolution from the target cloud application.
Optionally, in some embodiments, the second policy determining unit may specifically be configured to: and determining the image resolution corresponding to the second acquisition strategy to obtain a second image resolution, and acquiring the non-target image element with the image resolution being the second image resolution from the target cloud application.
In the present application, after the receiving module 301 receives image data to be decoded of the target cloud application, the detection module 302 detects whether the key frame data is abnormal while the image data is being decoded; the determining module 303 determines abnormal key frame data as target key frame data and detects whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; when it is detected that the target cloud application terminal contains such standby key frame data, the calling module 304 calls the standby key frame data corresponding to the target key frame data from the target cloud application terminal; and the rendering module 305 renders the application image of the target cloud application based on the image data and the called standby key frame data. When the network condition is poor or the delay is large, the rendering apparatus provided by the present application can render the game image of the cloud game based on the image data and the called standby key frame data, avoiding stuttering of the application image caused by the loss of key frame data when the cloud application is run.
In addition, the present application further provides an electronic device. Fig. 4 shows a schematic structural diagram of the electronic device. Specifically:
the electronic device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. Those skilled in the art will appreciate that the configuration shown in fig. 4 does not limit the electronic device, which may include more or fewer components than shown, combine some components, or arrange the components differently. Wherein:
the processor 401 is the control center of the electronic device. It connects the various parts of the electronic device through various interfaces and lines, and performs the functions of the electronic device and processes its data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area: the program storage area may store the operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the electronic device, and the like. Further, the memory 402 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The electronic device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 is logically connected to the processor 401 through a power management system, so that charging, discharging, and power consumption management are realized through the power management system. The power supply 403 may also include one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
The electronic device may further include an input unit 404, and the input unit 404 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the electronic device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application program stored in the memory 402, thereby implementing various functions as follows:
receiving image data to be decoded by a target cloud application; detecting, when the image data is decoded, whether the key frame data is abnormal; determining the abnormal key frame data as target key frame data, and detecting whether a target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain the standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal; and rendering an application image of the target cloud application based on the image data and the called standby key frame data.
For the implementation of the above operations, reference may be made to the foregoing embodiments; details are not repeated herein.
The method comprises: after receiving the image data to be decoded of the target cloud application, detecting whether the key frame data is abnormal while the image data is decoded; then determining the abnormal key frame data as target key frame data and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain such standby key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal; and finally rendering an application image of the target cloud application based on the image data and the called standby key frame data. When the network condition is poor or the delay is large, the game image of the cloud game can thus still be rendered based on the image data and the called standby key frame data, avoiding the stutter of the application image that loss of key frame data would otherwise cause when the cloud application is operated.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions, or by associated hardware controlled by the instructions; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, the present application provides a storage medium having stored therein a plurality of instructions that can be loaded by a processor to perform the steps of any of the image rendering methods provided herein. For example, the instructions may perform the steps of:
receiving image data to be decoded by a target cloud application; detecting, when the image data is decoded, whether the key frame data is abnormal; determining the abnormal key frame data as target key frame data, and detecting whether a target cloud application terminal contains standby key frame data corresponding to the target key frame data; when the target cloud application terminal is detected to contain the standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal; and rendering an application image of the target cloud application based on the image data and the called standby key frame data.
For the implementation of the above operations, reference may be made to the foregoing embodiments; details are not repeated herein.
Wherein the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
Since the instructions stored in the storage medium can execute the steps in any image rendering method provided by the present application, the beneficial effects that any image rendering method provided by the present application can achieve can be achieved, for details, see the foregoing embodiments, and are not described herein again.
The image rendering method and apparatus, electronic device, and storage medium provided by the present application have been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiments is only intended to help understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. An image rendering method, comprising:
downloading standby key frame data corresponding to the target cloud application;
receiving image data to be decoded of a target cloud application, wherein the cloud application runs on a server, and the image data at least comprises key frame data;
detecting whether the key frame data is abnormal or not when the image data is decoded;
determining abnormal key frame data as target key frame data, and detecting whether a target cloud application terminal contains standby key frame data corresponding to the target key frame data;
when the target cloud application terminal is detected to contain standby key frame data corresponding to the target key frame data, calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal;
rendering an application image of the target cloud application based on the image data and the called standby key frame data.
2. The method of claim 1, wherein the rendering the application image of the target cloud application based on the image data and the invoked standby key frame data comprises:
replacing abnormal key frame data in the image data by using the called standby key frame data to obtain replaced image data;
rendering an application image of the target cloud application based on the replaced image data.
3. The method of claim 2, wherein the rendering the application image of the target cloud application based on the replaced image data comprises:
determining currently decoded key frame data in the replaced image data to obtain current key frame data;
if the current key frame data is standby key frame data, decoding the standby key frame data to obtain a standby application picture corresponding to the standby key frame data, wherein the standby application picture is an application picture only containing target image elements;
restoring the target image elements in the application picture by using the standby application picture so as to render the application image of the target cloud application.
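Claims 2 and 3 describe replacing the abnormal key frame and then restoring the target image elements from the decoded standby application picture. A minimal sketch of that restoration step, under the assumption (not stated in the patent) that pictures are modeled as mappings from element names to element data and that the standby picture holds only the target elements:

```python
def restore_with_standby(app_picture, standby_picture):
    """Overlay the target image elements decoded from the standby key
    frame onto the (partially decoded) application picture.

    Both pictures are modeled as dicts mapping element names to element
    data; the standby picture contains only the target image elements.
    """
    restored = dict(app_picture)          # keep non-target elements as-is
    restored.update(standby_picture)      # standby target elements win
    return restored
```

The copy keeps the original picture unchanged, so a failed render can be retried against the unmodified input.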
4. The method of claim 3, wherein restoring the target image element in the application screen using the standby application screen to render the application image of the target cloud application comprises:
determining non-target image elements needing to be loaded;
acquiring the non-target image elements from the target cloud application according to the current network intensity value;
and rendering the application image of the target cloud application based on the acquired non-target image elements and the target image elements in the standby application picture.
5. The method of claim 4, wherein obtaining the non-target image element from the target cloud application according to the current network strength value comprises:
detecting a current network strength value;
when the network intensity value is larger than or equal to a preset value, determining an element acquisition strategy as a first acquisition strategy, and acquiring the non-target image elements from the target cloud application based on the first acquisition strategy;
and when the network intensity value is smaller than a preset value, determining an element acquisition strategy as a second acquisition strategy, and acquiring the non-target image elements from the target cloud application based on the second acquisition strategy.
6. The method of claim 5, wherein the acquiring the non-target image elements from the target cloud application based on the first acquisition strategy comprises:
determining the image resolution of the target image element to obtain a first image resolution;
and acquiring non-target image elements with the image resolution being the first image resolution from the target cloud application.
7. The method according to claim 5, wherein the acquiring the non-target image elements from the target cloud application based on the second acquisition strategy comprises:
determining the image resolution corresponding to the second acquisition strategy to obtain a second image resolution;
obtaining, from the target cloud application, non-target image elements having an image resolution that is a second image resolution.
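The strategy selection of claims 5 to 7 amounts to a threshold test on the network strength value plus a per-strategy resolution choice. A sketch, assuming the preset value and the two resolutions are plain numbers (the patent leaves their concrete values open):

```python
PRESET_STRENGTH = 50  # assumed threshold; the patent does not fix a value


def choose_strategy(network_strength):
    # Network strength at or above the preset value -> first acquisition
    # strategy (claim 5); below it -> second acquisition strategy.
    return "first" if network_strength >= PRESET_STRENGTH else "second"


def pick_resolution(strategy, first_resolution, second_resolution):
    # The first strategy fetches non-target elements at the target
    # element's resolution (claim 6); the second uses the strategy's own,
    # typically lower, resolution (claim 7).
    return first_resolution if strategy == "first" else second_resolution
```

Splitting strategy choice from resolution choice mirrors the claims: the threshold decides *which* strategy applies, and each strategy then fixes the resolution of the elements to fetch.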
8. The method according to any one of claims 1 to 7, wherein the calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal includes:
extracting a data tag corresponding to the target key frame data;
and calling standby key frame data corresponding to the target key frame data from the target cloud application terminal according to the extracted data tag.
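The tag-based call of claim 8 can be read as a keyed lookup into the pre-downloaded standby store. A sketch, assuming the data tag sits in a frame header (the actual layout is not specified in the patent):

```python
def extract_data_tag(key_frame):
    # Assumed layout: the data tag is carried in the frame header.
    return key_frame["header"]["tag"]


def call_standby(standby_store, key_frame):
    # Look up the standby key frame data by the extracted tag; returns
    # None when no standby copy was downloaded for this tag.
    return standby_store.get(extract_data_tag(key_frame))
```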
9. The method according to any one of claims 1 to 7, further comprising, before receiving the image data to be decoded of the target cloud application:
and downloading standby key frame data corresponding to the target cloud application at a preset moment.
10. An image rendering apparatus, comprising: the downloading module is used for downloading standby key frame data corresponding to the target cloud application;
the receiving module is used for receiving image data to be decoded by a target cloud application, and the image data at least comprises key frame data;
the detection module is used for detecting whether the key frame data are abnormal or not when the image data are decoded;
the determining module is used for determining the abnormal key frame data as target key frame data and detecting whether the target cloud application terminal contains standby key frame data corresponding to the target key frame data;
the calling module is used for calling the standby key frame data corresponding to the target key frame data from the target cloud application terminal when the target cloud application terminal is detected to contain the standby key frame data corresponding to the target key frame data;
and the rendering module is used for rendering the application image of the target cloud application based on the image data and the called standby key frame data.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the steps of the image rendering method according to any of claims 1-9 are implemented when the program is executed by the processor.
12. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the steps of the image rendering method according to any one of claims 1 to 9.
CN202011308285.7A 2020-11-20 2020-11-20 Image rendering method and device, electronic equipment and storage medium Active CN112533059B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011308285.7A CN112533059B (en) 2020-11-20 2020-11-20 Image rendering method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011308285.7A CN112533059B (en) 2020-11-20 2020-11-20 Image rendering method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112533059A CN112533059A (en) 2021-03-19
CN112533059B true CN112533059B (en) 2022-03-08

Family

ID=74981777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011308285.7A Active CN112533059B (en) 2020-11-20 2020-11-20 Image rendering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112533059B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115473957B (en) * 2021-06-10 2023-11-14 荣耀终端有限公司 Image processing method and electronic equipment
CN113727174A (en) * 2021-07-14 2021-11-30 深圳市有为信息技术发展有限公司 Method and device for controlling vehicle satellite positioning system video platform to play and electronic equipment
CN113521728A (en) * 2021-07-23 2021-10-22 北京字节跳动网络技术有限公司 Cloud application implementation method and device, electronic equipment and storage medium
CN114071190B (en) * 2021-11-16 2023-10-31 北京百度网讯科技有限公司 Cloud application video stream processing method, related device and computer program product
CN114513647B (en) * 2022-01-04 2023-08-29 聚好看科技股份有限公司 Method and device for transmitting data in three-dimensional virtual scene
CN115661145B (en) * 2022-12-23 2023-03-21 海马云(天津)信息技术有限公司 Cloud application bad frame detection method and device, electronic equipment and storage medium

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
FR2961646B1 (en) * 2010-06-16 2012-06-08 Peugeot Citroen Automobiles Sa FRAME CONTROL DEVICE AND METHOD FOR USE BY AN ELECTRONIC MEMBER OF A COMMUNICATION NETWORK, BASED ON TYPES OF FUNCTIONS USING PARAMETERS CONTAINED IN THESE FRAMES
FR2970174B1 (en) * 2011-01-10 2013-07-05 Oreal COLORING OR LIGHTENING PROCESS USING A RICH BODY COMPOSITION COMPRISING AT LEAST 20 CARBON ALCOHOL, COMPOSITIONS AND DEVICE
CN108012161B (en) * 2017-11-10 2021-10-01 广州华多网络科技有限公司 Video live broadcast method, system and terminal equipment
CN110225347A (en) * 2019-06-24 2019-09-10 北京大米科技有限公司 Method of transmitting video data, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112533059A (en) 2021-03-19

Similar Documents

Publication Publication Date Title
CN112533059B (en) Image rendering method and device, electronic equipment and storage medium
US11039144B2 (en) Method and apparatus for image coding and decoding through inter-prediction
TWI622288B (en) Video decoding method
JP4996603B2 (en) Video game system using pre-encoded macroblocks
CN110198492B (en) Video watermark adding method, device, equipment and storage medium
CN105262825A (en) SPICE cloud desktop transporting and displaying method and system on the basis of H.265 algorithm
CN111901666B (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP2009502069A (en) System, method and medium for switching compression levels in an image streaming system
CN112073737A (en) Re-encoding predicted image frames in live video streaming applications
US20200296422A1 (en) Image encoding and decoding method, apparatus, and system, and storage medium
CN107079159B (en) Method and device for parallel video decoding based on multi-core system
CN110891195B (en) Method, device and equipment for generating screen image and storage medium
EP4189964A1 (en) Supporting view direction based random access of bitstream
CN110401835B (en) Image processing method and device
CN114640849B (en) Live video encoding method, device, computer equipment and readable storage medium
CN117354524B (en) Method, device, equipment and computer medium for testing coding performance of encoder
CN113038277B (en) Video processing method and device
KR20060016947A (en) Mpeg video encoding system and method for the same
CN116980619A (en) Video processing method, device, equipment and storage medium
CN116761002A (en) Video coding method, virtual reality live broadcast method, device, equipment and medium
WO2023059689A1 (en) Systems and methods for predictive coding
CN114697666A (en) Screen coding method, screen decoding method and related device
KR20150006465A (en) Mechanism for facilitating cost-efficient and low-latency encoding of video streams
CN116843773A (en) Image data processing method, system, electronic device and storage medium
CN115733988A (en) Video data processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40041037

Country of ref document: HK

GR01 Patent grant