CN115379235A - Image decoding method and device based on buffer pool, readable medium and electronic equipment - Google Patents


Info

Publication number
CN115379235A
Authority
CN
China
Prior art keywords: frame, buffer pool, decoding, sending speed, packet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211033430.4A
Other languages
Chinese (zh)
Inventor
黄永铖
陈思佳
曹洪彬
曹健
杨小祥
宋美佳
张佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202211033430.4A
Publication of CN115379235A

Classifications

    • H — Electricity › H04 — Electric communication technique › H04N — Pictorial communication, e.g. television
    • H04N 19/44 — Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/423 — Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
    • H04N 21/44004 — Processing of video elementary streams involving video buffer management, e.g. video decoder buffer or video display buffer

Abstract

The application belongs to the technical field of cloud computing, and relates to an image decoding method and apparatus based on a buffer pool, a computer-readable medium, and an electronic device. The method is applied to a client and includes the following steps: acquiring an encoded code stream, and constructing a Frame object corresponding to a single-frame image based on the encoded code stream; applying for storage space in a buffer pool based on the Frame object, and storing a Frame packet corresponding to the single-frame image in the buffer pool; and inputting the Frame packet to a decoding end at a maximum frame sending speed, where the maximum frame sending speed is obtained by detecting the dynamic capability of the decoding end with a test code stream having the same parameters as the single-frame image and then adjusting based on that dynamic capability. By caching data packets in the buffer pool and inputting the Frame packets to the decoding end at the maximum frame sending speed, the method avoids performance degradation at the decoding end, relieves network congestion, achieves stable flow control, and improves the decoding performance of the decoding end.

Description

Image decoding method and device based on buffer pool, readable medium and electronic equipment
Technical Field
The application belongs to the technical field of cloud computing, and particularly relates to an image decoding method based on a buffer pool, an image decoding device based on the buffer pool, a computer readable medium and an electronic device.
Background
With the rapid development of network technology, end users place ever higher demands on the performance of terminal devices. For example, when watching videos or playing games, users expect clear, smooth pictures without stuttering, which imposes extremely strict requirements on video codec delay: when the video frame rate is 60 fps (frames per second), the codec delay must be less than 16 ms; when the frame rate is 120 fps, it must be less than 8 ms; and so on.
At present, the data fed into the codec is generally not managed during video encoding and decoding: however much data arrives from the network is immediately passed into the chip for processing. If a large number of data packets arrive from the network simultaneously, the upper-layer framework does not regulate them and all packets are fed directly into the chip. The chip's workload then becomes excessive and the chip heats up, its working efficiency drops, codec delay increases, and the viewing experience ultimately suffers, with severe stuttering and lag in video playback.
Disclosure of Invention
The present application aims to provide a buffer pool-based image decoding method, a buffer pool-based image decoding apparatus, a computer-readable medium, and an electronic device, which can overcome the chip performance degradation, severe decoding delay, and video stuttering caused in the related art when a large number of data packets simultaneously enter a chip for decoding.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided a buffer pool-based image decoding method, including: acquiring an encoded code stream, and constructing a Frame object corresponding to a single-frame image based on the encoded code stream; applying for storage space in a buffer pool based on the Frame object, and storing a Frame packet corresponding to the single-frame image in the buffer pool; and inputting the Frame packet to a decoding end at a maximum frame sending speed, where the maximum frame sending speed is obtained by detecting the dynamic capability of the decoding end with a test code stream having the same parameters as the single-frame image and then adjusting based on that dynamic capability.
According to an aspect of an embodiment of the present application, there is provided a buffer pool-based image decoding apparatus, including: an object construction module, configured to acquire an encoded code stream and construct a Frame object corresponding to a single-frame image based on the encoded code stream; a space application module, configured to apply for storage space in a buffer pool based on the Frame object and store a Frame packet corresponding to the single-frame image in the buffer pool; and a decoding module, configured to input the Frame packet to a decoding end at a maximum frame sending speed, where the maximum frame sending speed is obtained by detecting the dynamic capability of the decoding end with a test code stream having the same parameters as the single-frame image and then adjusting based on that dynamic capability.
In some embodiments of the present application, the object construction module is configured to: parse the encoded code stream to obtain frame information corresponding to the single-frame image, where the frame information includes a frame type, a timestamp, a frame size, and an index number; and construct the Frame object from the frame information and the memory address corresponding to the single-frame image.
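The Frame object described here pairs the parsed frame information with the memory address of the frame data. A minimal Python sketch of such an object follows; the field and function names are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FrameInfo:
    """Per-frame metadata parsed from the encoded code stream (hypothetical field names)."""
    frame_type: str   # e.g. "IDR" or "P"
    timestamp: int    # presentation timestamp, e.g. in milliseconds
    frame_size: int   # size of the encoded frame data in bytes
    index: int        # index (sequence) number of the frame

@dataclass
class Frame:
    """A Frame object: parsed frame information plus the address of the frame data."""
    info: FrameInfo
    memory_address: int  # location of the encoded single-frame data

def build_frame_object(frame_info: FrameInfo, memory_address: int) -> Frame:
    # The Frame object couples metadata with the data's memory address, so the
    # buffer pool can later be asked for frame_size plus metadata-sized space.
    return Frame(info=frame_info, memory_address=memory_address)
```

The decoding client would build one such object per parsed frame before applying for buffer pool space.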
In some embodiments of the present application, the buffer pool comprises unapplied unused space, applied unused space, and used space, and the Frame object comprises frame information and a memory address corresponding to the single-frame image. The space application module is configured to: send a storage space application instruction to the buffer pool based on the memory address, and determine the size of the Frame packet from the frame size in the frame information plus the size of the frame information itself; when the size of the Frame packet is less than or equal to the size of the unapplied unused space, obtain space from the unapplied unused space, sized to the Frame packet, as application space; and when the remaining applied unused space is greater than or equal to the size of the Frame packet, store the Frame packet in the applied unused space.
In some embodiments of the present application, the space application module is further configured to: when the size of the Frame packet is greater than the size of the unapplied unused space, wait for used space to be released until the unapplied unused space is greater than or equal to the size of the Frame packet; and when the remaining applied unused space is smaller than the size of the Frame packet, wait for used space to be released until the remaining applied unused space is greater than or equal to the size of the Frame packet.
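The three-region space application flow of these two embodiments can be modeled in miniature as follows. The class and method names are illustrative only; a real implementation would grow the applied region in larger chunks and block on a condition variable rather than returning a flag:

```python
class BufferPool:
    """Toy model of the three-region buffer pool (all sizes in bytes).

    Regions: unapplied unused, applied unused, and used. Names and growth
    policy are illustrative assumptions, not the patent's implementation.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.applied = 0   # bytes moved from "unapplied unused" into "applied unused"
        self.used = 0      # bytes of applied space currently holding Frame packets

    @property
    def unapplied_unused(self) -> int:
        return self.capacity - self.applied

    @property
    def applied_unused(self) -> int:
        return self.applied - self.used

    def try_store(self, packet_size: int) -> bool:
        """Apply for space and store one Frame packet; False means 'wait for a release'."""
        if self.applied_unused < packet_size:
            # Grow the applied region out of the unapplied unused space.
            if self.unapplied_unused < packet_size:
                return False  # caller must wait until used space is released
            self.applied += packet_size
        self.used += packet_size
        return True

    def release(self, packet_size: int):
        """A decoded frame's packet leaves the pool; its space becomes applied unused."""
        self.used -= packet_size
```

The two `False`/wait paths correspond to the two waiting conditions described in the embodiment above.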
In some embodiments of the present application, the space application module is further configured to: store the Frame packets in the buffer pool in order according to the timestamps and/or index numbers in their Frame objects.
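Ordered storage by timestamp and/or index number might be sketched as a sorted insert; representing a Frame packet as a dict with `timestamp` and `index` keys is an assumption for illustration:

```python
import bisect

def insert_in_order(pool, frame_packet):
    """Keep buffered Frame packets ordered by (timestamp, index) on insertion.

    pool: list of dicts with "timestamp" and "index" keys (illustrative shape).
    """
    keys = [(p["timestamp"], p["index"]) for p in pool]
    pos = bisect.bisect(keys, (frame_packet["timestamp"], frame_packet["index"]))
    pool.insert(pos, frame_packet)
```

Keeping the packets ordered means the decoding end always receives frames in presentation order regardless of network arrival order.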
In some embodiments of the present application, the parameters include a set frame rate, and the decoding module includes: a first input unit, configured to input the test code stream to the decoding end at the set frame rate for decoding, so as to obtain a reference output frame rate and a reference single-frame average decoding delay; a first comparison unit, configured to determine a first frame rate threshold from the set frame rate and a first coefficient, and compare the reference output frame rate with the first frame rate threshold; a first determining unit, configured to take the set frame rate as the maximum frame sending speed when the reference output frame rate is smaller than the first frame rate threshold; a second input unit, configured to increase the set frame rate to obtain a frame sending speed when the reference output frame rate is greater than or equal to the first frame rate threshold, and input the test code stream to the decoding end at that frame sending speed for decoding, so as to obtain an output frame rate and a single-frame average decoding delay; and a first calculation unit, configured to determine the maximum frame sending speed from the frame sending speed, the reference single-frame average decoding delay, the output frame rate, and the single-frame average decoding delay.
In some embodiments of the present application, the first calculation unit includes: a second comparison unit, configured to determine a second frame rate threshold from the frame sending speed and a second coefficient, and compare the output frame rate with the second frame rate threshold; a second determining unit, configured to take the frame sending speed as the maximum frame sending speed when the output frame rate is less than the second frame rate threshold; and a second calculation unit, configured to determine the maximum frame sending speed from the frame sending speed, the reference single-frame average decoding delay, and the single-frame average decoding delay when the output frame rate is greater than or equal to the second frame rate threshold.
In some embodiments of the present application, the second calculation unit includes: a third comparison unit, configured to compare the reference single-frame average decoding delay with the single-frame average decoding delay; a third determining unit, configured to take the frame sending speed as the maximum frame sending speed when the single-frame average decoding delay is greater than the reference single-frame average decoding delay; and a fourth calculating unit, configured to determine the maximum frame sending speed from the frame sending speed when the single-frame average decoding delay is less than or equal to the reference single-frame average decoding delay.
In some embodiments of the present application, the fourth calculating unit includes: a fourth comparison unit, configured to compare the frame sending speed with a frame rate threshold; a fourth determining unit, configured to take the frame sending speed as the maximum frame sending speed when the frame sending speed is greater than the frame rate threshold; an updating unit, configured to update the frame sending speed when the frame sending speed is less than or equal to the frame rate threshold, and input the test code stream to the decoding end at the updated frame sending speed for decoding, so as to obtain an updated output frame rate and an updated single-frame average decoding delay; and a fifth calculating unit, configured to determine the maximum frame sending speed from the updated frame sending speed, the reference single-frame average decoding delay, the updated output frame rate, and the updated single-frame average decoding delay.
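The probing procedure these units describe — decode the test code stream at increasing frame sending speeds until the output frame rate or the per-frame delay degrades, or a ceiling is reached — can be sketched as a loop. The coefficients, step size, and cap below are placeholder values, and `decode_probe` stands in for actually feeding the test code stream to the decoder:

```python
def probe_max_send_speed(decode_probe, set_rate, c1=0.9, c2=0.9,
                         rate_cap=240, step=30):
    """Probe the decoder's dynamic capability for the maximum frame sending speed.

    decode_probe(rate) -> (output_frame_rate, avg_decode_delay_ms) simulates
    feeding the test code stream at `rate` fps. All constants are illustrative.
    """
    ref_out, ref_delay = decode_probe(set_rate)
    if ref_out < c1 * set_rate:
        # Decoder cannot even keep up with the set frame rate.
        return set_rate
    rate = set_rate + step
    while True:
        out, delay = decode_probe(rate)
        if out < c2 * rate:
            return rate          # output frame rate dropped below the threshold
        if delay > ref_delay:
            return rate          # per-frame decoding delay started to climb
        if rate > rate_cap:
            return rate          # probing ceiling reached
        rate += step             # decoder still healthy: probe faster
```

Each branch of the loop mirrors one of the determining/calculating units above: threshold miss, delay growth, ceiling hit, or another probing iteration.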
In some embodiments of the present application, the buffer pool-based image decoding apparatus further comprises: an acquisition module, configured to acquire the network congestion duration and the number of image frames waiting to apply for storage space in the buffer pool; and an emptying module, configured to detect, when the network congestion duration is longer than a preset duration and the number of image frames is greater than a preset threshold, whether an IDR frame exists among the Frame packets stored in the buffer pool, and to empty the buffer pool according to the detection result.
In some embodiments of the present application, the emptying module is configured to: when an IDR frame exists in the buffer pool, obtain the target index number corresponding to the IDR frame; discard the Frame packets in the buffer pool whose index numbers are smaller than the target index number; and send the Frame packet corresponding to the IDR frame from the buffer pool to the decoding end.
In some embodiments of the present application, the emptying module is configured to: when no IDR frame exists in the buffer pool, send an IDR frame acquisition request to a server; and upon receiving the IDR frame sent by the server in response to the request, empty the buffer pool and send the Frame packet containing the IDR frame from the buffer pool to the decoding end.
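The congestion-triggered emptying logic of the last three embodiments — flush up to a buffered IDR frame, or request a fresh one from the server — might look like this in outline. The thresholds and the server callback are illustrative stand-ins:

```python
def flush_on_congestion(pool_packets, congestion_ms, waiting_frames,
                        max_congestion_ms=200, max_waiting=30,
                        request_idr_from_server=None):
    """Empty the buffer pool when congestion persists, resuming from an IDR frame.

    pool_packets: list of (index, frame_type) tuples, oldest first.
    Returns the new pool contents. Thresholds are placeholder values.
    """
    if congestion_ms <= max_congestion_ms or waiting_frames <= max_waiting:
        return pool_packets  # congestion within limits: no flush needed
    # Look for an IDR frame already buffered.
    idr = next(((i, t) for i, t in pool_packets if t == "IDR"), None)
    if idr is not None:
        target_index = idr[0]
        # Drop everything older than the IDR frame; decoding restarts from it.
        return [(i, t) for i, t in pool_packets if i >= target_index]
    # No IDR buffered: ask the server for one and start over from it.
    new_idr = request_idr_from_server()
    return [new_idr]
```

Restarting from an IDR frame is what makes dropping the older packets safe, since an IDR frame can be decoded without reference to anything discarded.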
According to an aspect of the embodiments of the present application, there is provided a computer readable medium, on which a computer program is stored, which when executed by a processor implements a buffer pool based image decoding method as in the above technical solution.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the buffer pool based image decoding method as in the above technical solution via executing the executable instructions.
According to an aspect of the embodiments of the present application, there is provided a computer program product, which includes computer instructions, when the computer instructions are run on a computer, cause the computer to execute the buffer pool based image decoding method as in the above technical solution.
According to the buffer pool-based image decoding method of the present application, after the encoded code stream is obtained, a Frame object corresponding to the single-frame image is constructed from it, an instruction is sent to the buffer pool based on the Frame object to apply for storage space, and once the application succeeds, a Frame packet corresponding to the single-frame image is stored in the buffer pool. The Frame packets stored in the buffer pool are then input to the decoding end for decoding at the maximum frame sending speed, which is obtained by detecting the dynamic capability of the decoding end with a test code stream having the same parameters as the single-frame image and then adjusting based on that dynamic capability. On one hand, the method buffers the data packets arriving from the network in the buffer pool during network congestion, preventing a flood of data from hitting the decoding end and degrading its performance. On the other hand, probing the load capacity of the decoding end yields the maximum frame sending speed at which its decoding performance is highest; feeding the Frame packets from the buffer pool to the decoding end at that speed not only relieves network congestion but also achieves stable flow control and improves the decoding performance of the decoding end.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 schematically shows a structural diagram of a system architecture to which the buffer pool-based image decoding method in the embodiment of the present application is applied.
Fig. 2 schematically shows a flow chart of steps of a buffer pool-based image decoding method in an embodiment of the present application.
Fig. 3 schematically shows a structural diagram of a buffer pool in the embodiment of the present application.
Fig. 4 schematically illustrates a flowchart of applying for storage space in the buffer pool to cache a Frame packet in the embodiment of the present application.
Fig. 5 schematically shows a flowchart of obtaining the maximum frame sending speed in the embodiment of the present application.
Fig. 6 schematically shows a flowchart of determining the maximum frame sending speed from the output frame rate and the frame sending speed in the embodiment of the present application.
Fig. 7 schematically shows a flowchart of determining the maximum frame sending speed from the reference single-frame average decoding delay and the single-frame average decoding delay in the embodiment of the present application.
Fig. 8 schematically shows a flowchart of determining the maximum frame sending speed from the frame sending speed in the embodiment of the present application.
Fig. 9 schematically shows a block diagram of a buffer pool-based image decoding apparatus according to an embodiment of the present application.
FIG. 10 schematically illustrates a block diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the embodiments of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
In the related art, with the development of audio and video technology, the codec-delay requirements in audio/video frameworks have become ever stricter, especially in scenarios such as cloud gaming and video calls. Taking an Android terminal as an example, Android currently invokes the chip's codec capability to encode and decode a code stream through the MediaCodec interface developed by Google, and must rely on the chip's hardware codec capability to achieve low-delay coding. When a large number of data packets arrive from the network simultaneously, the upper-layer framework does not regulate them and feeds them directly into the chip; the chip's workload becomes excessive, it heats up, its working efficiency drops, codec delay increases, the user perceives severe picture stuttering, and the user experience degrades.
Whether on an Android terminal or a terminal running another operating system, encoding and decoding are performed by the encoder and decoder in a chip. When the upper-layer framework does not manage the flood of data packets caused by network congestion, the same problems arise: reduced chip working efficiency, severe codec delay, and poor user experience.
To address these issues, the embodiments of the present application provide a buffer pool-based image decoding method that can be applied in any video-related scenario, such as gaming, video calls, and live streaming. Before explaining the method in detail, terms used in the present application are explained below.
1. Encoding: the process of converting information from one form or format into another; in a computer programming context, also referred to as code.
2. Decoding: the inverse of encoding, i.e. the process of restoring encoded information to its original, pre-encoding form.
3. Frame sending speed: the speed at which data frames are fed into the codec module in the chip within the audio/video framework; at 60 fps, for example, 60 frames of data are sent into the chip every second.
4. Set frame rate: in a cloud game, the server encodes the game picture at the frame rate set by the user, and the client must decode it at the same rate.
5. Frame packet: the type of data cached in the buffer pool, containing the code stream data, frame type, timestamp, frame size, index number, and resolution of the corresponding single-frame image.
6. Single-frame average decoding delay: the decoding delay obtained by averaging the decoding time of each frame over the decoding process.
7. Frame rate: the frequency, in frames, at which bitmap images appear continuously on the display.
8. Cloud game: based on cloud computing technology, the game runs on a remote server; the terminal client needs no download or installation and terminal configuration is irrelevant, so even computation-heavy games can be played wherever there is a network connection.
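Two of the quantities defined above reduce to one-line formulas: the single-frame average decoding delay, and the per-frame delay budget implied by a frame rate (the 16 ms figure quoted earlier for 60 fps rounds 1000/60 ≈ 16.7 ms down). A small sketch:

```python
def avg_decode_delay_ms(per_frame_delays_ms):
    """Single-frame average decoding delay: mean of each frame's decode time."""
    return sum(per_frame_delays_ms) / len(per_frame_delays_ms)

def required_delay_budget_ms(frame_rate_fps):
    """Per-frame codec delay budget implied by a frame rate, e.g. 60 fps -> ~16.7 ms."""
    return 1000.0 / frame_rate_fps
```

Doubling the frame rate halves the per-frame budget, which is why 120 fps demands codec delay under roughly 8 ms.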
Next, an exemplary system architecture to which the technical solution of the present application is applied will be explained.
Fig. 1 schematically shows a block diagram of an exemplary system architecture to which the solution of the present application applies.
As shown in fig. 1, the system architecture 100 may include a client 101, a server 102, and a network 103. The client 101 may be any of various electronic devices with a display screen, such as a smartphone, tablet computer, notebook computer, desktop computer, smart TV, or smart in-vehicle terminal. The number of clients 101 differs by scenario: in a cloud game scenario there may be a single client 101 through which the user plays, whereas in a video call scenario there are multiple clients 101, the user of each performing the call through his or her own client, and so on. The server 102 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing cloud computing services. The network 103 may be a communication medium of any connection type capable of providing a communication link between the client 101 and the server 102, such as a wired or wireless communication link.
The system architecture in the embodiments of the present application may have any number of clients, networks, and servers, as desired for the implementation. For example, a server may be a server group consisting of a plurality of server devices. In addition, the technical scheme provided by the embodiment of the application can be applied to the client 101.
In an embodiment of the present application, a chip carrying a codec module is mounted in the client 101, and a buffer pool is also provided in the client 101. The buffer pool caches one or more Frame packets, generated by processing the data packets containing image frames that the network delivers to the client 101. After the chip's load capacity has been probed and the maximum frame sending speed of the buffer pool corresponding to the chip's highest decoding performance has been obtained through dynamic capability adjustment, the Frame packets in the buffer pool can be sent to the decoding end at that maximum frame sending speed for decoding, yielding image frames for rendering and display.
In an embodiment of the present application, the dynamic capability detection of the chip's load capacity proceeds as follows. First, the client's set frame rate in the current scenario is taken as the frame sending speed, and the test code stream is input to the decoding end for decoding to determine a reference output frame rate, a reference single-frame average decoding delay, and related data. Then the frame sending speed is re-determined based on the set frame rate, and the test code stream is input to the decoding end at the re-determined speed to obtain the decoding output frame rate and single-frame average decoding delay at that speed. Finally, the maximum frame sending speed is determined from the re-determined frame sending speed, the resulting output frame rate and single-frame average decoding delay, and the reference output frame rate and reference single-frame average decoding delay corresponding to the set frame rate.
In an embodiment of the present application, the image decoding method based on the buffer pool in the present application may be used in scenes such as a cloud game, a video call, and a live broadcast, and accordingly, the server 102 may be a cloud server providing a cloud computing service, that is, the present application relates to cloud storage and cloud computing technologies.
A distributed cloud storage system (hereinafter, referred to as a storage system) refers to a storage system that integrates a large number of storage devices (storage devices are also referred to as storage nodes) of different types in a network through application software or application interfaces to cooperatively work by using functions such as cluster application, grid technology, and a distributed storage file system, and provides a data storage function and a service access function to the outside.
At present, a storage method of a storage system is as follows: logical volumes are created, and when created, each logical volume is allocated physical storage space, which may be the disk composition of a certain storage device or of several storage devices. The client stores data on a certain logical volume, that is, the data is stored on a file system, the file system divides the data into a plurality of parts, each part is an object, the object not only contains the data but also contains additional information such as data identification (ID, ID entry), the file system writes each object into a physical storage space of the logical volume, and the file system records storage location information of each object, so that when the client requests to access the data, the file system can allow the client to access the data according to the storage location information of each object.
The process by which the storage system allocates physical storage space to the logical volume is specifically as follows: the physical storage space is divided in advance into stripes according to a set of capacity estimates for the objects to be stored in the logical volume (the estimates often leave a large margin relative to the capacity of the objects actually stored) and the Redundant Array of Independent Disks (RAID) configuration; one logical volume can be understood as one stripe, and physical storage space is thereby allocated to the logical volume.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by a large number of computers, enabling various application systems to obtain computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". To the user, resources in the "cloud" appear infinitely expandable: they can be obtained at any time, used on demand, expanded at any time, and paid for according to use.
As a basic capability provider of cloud computing, a cloud computing resource pool (referred to as an IaaS (Infrastructure as a Service) platform for short) is established, and multiple types of virtual resources are deployed in the resource pool for external clients to use selectively.
According to logical function division, a PaaS (Platform as a Service) layer can be deployed on the IaaS (Infrastructure as a Service) layer, and a SaaS (Software as a Service) layer can be deployed on the PaaS layer; SaaS can also be deployed directly on IaaS. PaaS is a platform on which software runs, such as a database or a web container. SaaS refers to various kinds of business software, such as web portals and bulk SMS services. Generally speaking, SaaS and PaaS are upper layers relative to IaaS.
The following describes in detail the buffer pool-based image decoding method, the buffer pool-based image decoding apparatus, the computer-readable medium, and the electronic device provided in the present application with reference to the detailed embodiments.
Fig. 2 schematically illustrates a flowchart of steps of a buffer pool-based image decoding method in an embodiment of the present application, where the buffer pool-based image decoding method is executed by a client, and the client may specifically be the client 101 in fig. 1. As shown in fig. 2, the method for decoding an image based on a buffer pool in the embodiment of the present application may mainly include steps S210 to S230 as follows.
Step S210: acquiring a coded code stream, and constructing a Frame object corresponding to a single-Frame image based on the coded code stream;
step S220: applying for a storage space in a buffer pool based on the Frame object, and storing a Frame packet corresponding to the single-Frame image in the buffer pool;
step S230: and inputting the Frame packet to a decoding end at a maximum Frame sending speed, wherein the maximum Frame sending speed is obtained by adjusting based on the dynamic capability after the dynamic capability of the decoding end is detected according to a test code stream with the same parameters as the single-Frame image.
In the buffer pool-based image decoding method provided in the embodiment of the present application, after an encoded code stream is obtained, a Frame object corresponding to a single-frame image is constructed based on the code stream, and an instruction is sent to the buffer pool based on the Frame object to apply for storage space in the buffer pool. After the storage space is successfully applied for, a Frame packet corresponding to the single-frame image is stored in the buffer pool, and the Frame packet is then input to the decoding end at the maximum frame sending speed, where the maximum frame sending speed is obtained by performing dynamic capability detection on the decoding end according to a test code stream having the same parameters as the single-frame image and adjusting based on the detected capability. On one hand, the method can buffer data packets transmitted over the network into the buffer pool when the network is congested, preventing the performance of the decoding end from being degraded by the impact of a large amount of data; on the other hand, by dynamically detecting the load capacity of the decoding end, the maximum frame sending speed at which the decoding performance of the chip is highest can be obtained. When Frame packets in the buffer pool are input to the decoding end at this maximum frame sending speed, network congestion is relieved, stable flow control is achieved, and the decoding performance of the decoding end is improved.
The following describes in detail a specific implementation of each method step of the buffer pool-based image decoding method.
In step S210, an encoded code stream is acquired, and a Frame object corresponding to a single Frame image is constructed based on the code stream.
In an embodiment of the present application, in the field of video encoding and decoding, each frame of the images forming a video is generally encoded, the data packets formed by encoding are then sent to the client in sequence, each data packet is parsed by the client to obtain the image frame therein, and finally rendering and playing are performed. Correspondingly, in the embodiment of the present application, when a large number of data packets generated during network congestion are buffered through the buffer pool, the data packets may be disassembled into a plurality of Frame packets each corresponding to a single-frame image, and all the Frame packets are then arranged in order and transmitted sequentially to the decoding end for decoding. The decoding end may specifically be a chip, with decoding implemented by a decoding module arranged in the chip.
In an embodiment of the present application, before applying for a storage space to a buffer pool and caching a Frame packet in the buffer pool, an encoded code stream needs to be acquired, and a Frame object and a Frame packet corresponding to a single-Frame image are constructed according to the encoded code stream.
A frame of an image can be divided into a plurality of slices, each slice into a plurality of macroblocks, and each macroblock into a plurality of sub-blocks, so that a large image can be decomposed into small blocks convenient for spatial encoding. When the image frames forming a video are transmitted, the video is transmitted as a binary stream composed of a plurality of NALU units (Network Abstraction Layer Units). Each NALU unit comprises three parts: a start code, an NALU header, and an NALU payload. The start code identifies the beginning of a NALU unit; the NALU header contains a type parameter, the NAL type; and the NALU payload is RBSP (Raw Byte Sequence Payload) data containing the encoded data of the video. Therefore, after receiving a data packet containing single-frame image information transmitted over the network, the client can extract the NALU header from the data packet to obtain the NALU type, splice the RBSP data in the data packet to form the code stream corresponding to the single-frame image, and finally parse the RBSP data according to the obtained NALU type to obtain the frame information corresponding to the single-frame image. The frame information specifically comprises the frame type, timestamp, frame size, and sequence number, and may further comprise information such as the resolution. Frame types are divided into intra-coded frames (I frames), forward predictive coded frames (P frames), and bidirectional predictive interpolated coded frames (B frames); I frames further include the instantaneous decoding refresh (IDR) frame, which is the first image frame in an image sequence.
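The NALU parsing step described above can be sketched as follows. This is a minimal illustrative assumption about the client's parsing logic (Annex-B start codes and a one-byte H.264-style NALU header), not the patent's exact implementation.

```python
# Split an Annex-B binary stream into NALU units and read the NAL type from
# each one-byte H.264-style NALU header (low 5 bits; 5 = IDR slice).

def split_nalus(stream: bytes):
    """Yield (nal_type, payload) for each NALU delimited by a start code."""
    positions = []  # (offset of start code, start-code length)
    i = 0
    while i < len(stream):
        if stream[i:i + 4] == b"\x00\x00\x00\x01":
            positions.append((i, 4))
            i += 4
        elif stream[i:i + 3] == b"\x00\x00\x01":
            positions.append((i, 3))
            i += 3
        else:
            i += 1
    for n, (pos, sc_len) in enumerate(positions):
        start = pos + sc_len
        end = positions[n + 1][0] if n + 1 < len(positions) else len(stream)
        nal_type = stream[start] & 0x1F   # low 5 bits of the NALU header
        yield nal_type, stream[start + 1:end]  # RBSP-like payload
```

In a real decoder the payload would additionally be de-escaped (emulation-prevention bytes removed) before RBSP parsing; that step is omitted here for brevity.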
After the frame information corresponding to a single-frame image is obtained, a Frame object can be constructed from the frame information and the memory address corresponding to the single-frame image, and storage space in the buffer pool is applied for based on the Frame object. If the storage space in the buffer pool is successfully applied for, the Frame packet corresponding to the single-frame image can be buffered in the buffer pool to wait for transmission to the decoding end for decoding; otherwise, the Frame packet continues to wait until buffer space is obtained. The Frame packet includes the encoded code stream and the frame information corresponding to the single-frame image, that is, the encoded code stream, frame type, timestamp, frame size, and sequence number corresponding to the single-frame image, and may further include information such as the resolution.
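The Frame object and Frame packet described above might be modeled as follows; the field names here are assumptions chosen for illustration, not names from the patent.

```python
from dataclasses import dataclass

@dataclass
class FrameInfo:
    frame_type: str        # "I", "P", or "B"; IDR is a special I frame
    timestamp: int         # presentation timestamp
    frame_size: int        # size of the encoded code stream, in bytes
    seq: int               # sequence / index number
    resolution: str = "720P"

@dataclass
class FrameObject:
    info: FrameInfo
    address: int           # memory address of the single-frame data

@dataclass
class FramePacket:
    info: FrameInfo
    bitstream: bytes       # encoded code stream of this single-frame image
```

The Frame object carries only the metadata and address used to apply for pool space, while the Frame packet carries the actual bitstream that is buffered and later decoded.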
In step S220, a storage space in a buffer pool is applied for based on the Frame object, and a Frame packet including the code stream and the Frame information is stored in the buffer pool.
In an embodiment of the present application, the Frame object includes the frame information and memory address corresponding to a single-frame image, and the buffer pool is disposed in the client and corresponds to the client's memory, so a storage space application instruction may be sent to the buffer pool according to the memory address to apply for storage space, the storage space being memory space. When the storage space is applied for, whether enough storage space exists in the buffer pool can be judged according to the frame size in the frame information and the size of the frame information, where the frame size corresponds to the size of the encoded code stream of the single-frame image.
In an embodiment of the present application, a buffer pool object is further stored in the client; the buffer pool object stores the cached Frame packets and the state information of the entire buffer pool. The storage space of the buffer pool is divided into three parts: as shown in fig. 3, the buffer pool 300 includes a used space 301, an applied unused space 302, and an un-applied unused space 303. The used space 301 stores Frame packets that have been input to the decoding end but not yet decoded; if a decoding error occurs, the Frame packet can be retrieved from the used space 301 again according to its index number for decoding, and once decoding is completed, the Frame packet is deleted from the used space 301. The applied unused space 302 stores Frame packets for which storage space has been applied but which have not yet been input to the decoding end; when a Frame packet is input to the decoding end, it is transferred from the applied unused space 302 to the used space 301. The un-applied unused space 303 is space waiting to be applied for. In the embodiment of the present application, a plurality of Frame packets may be sent to the decoding end at the same time for decoding, and accordingly a plurality of Frame packets may be stored in the used space 301 and the applied unused space 302 at the same time; as shown in fig. 3, three Frame packets are stored in the used space 301 and two in the applied unused space 302.
Accordingly, the state information of the buffer pool includes: the total space size, the size of the used space, the size of the un-applied unused space, the size of the applied unused space, the number of saved frames, the number of IDR frames, and the like. The total space size equals the sum of the sizes of the used space, the un-applied unused space, and the applied unused space, and depends on the resolution of the code stream in the current scene; the total space size is measured in frames. For example, when the resolution of the code stream in the current scene is 720P, the total space size may be set to 100 frames and the used space to at most 50 frames, that is, the buffer pool can store 100 Frame packets in total and the used space can store at most 50 Frame packets; when the resolution is 1080P, the total space size may be set to 70 frames and the used space to at most 35 frames, that is, the buffer pool can store 70 Frame packets in total and the used space at most 35. Further, the size of the usable space may be determined from the total space size and the size of the used space, that is, as the sum of the sizes of the applied unused space and the un-applied unused space; for example, when the resolution of the code stream is 720P the usable space is 50 frames, and when the resolution is 1080P the usable space is 65 frames. Meanwhile, the sizes of the applied unused space and the un-applied unused space may be allocated from the usable space according to actual needs; for example, when the usable space is 50 frames, the applied unused space and the un-applied unused space may both be set to 25 frames, and so on, which is not specifically limited in this application.
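The bookkeeping above (total = used + applied-unused + un-applied-unused, usable = total minus used) can be sketched as a small state class; the field names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class BufferPoolState:
    total: int                 # total capacity, in frames (e.g. 100 at 720P)
    used: int = 0              # packets sent to the decoder, not yet decoded
    applied_unused: int = 0    # packets buffered but not yet sent

    @property
    def unapplied_unused(self) -> int:
        # whatever capacity is neither used nor already applied for
        return self.total - self.used - self.applied_unused

    @property
    def usable(self) -> int:
        # usable space = applied unused + un-applied unused = total - used
        return self.total - self.used
```

With a 1080P pool of 70 frames, 5 frames awaiting decode, and 10 frames buffered, the usable space is 65 frames, matching the 1080P figure quoted above.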
In an embodiment of the present application, when storage space is applied for from the buffer pool, it is applied for in units of frames, and since the size of each Frame packet differs, the size of the applied storage space also differs. Fig. 4 schematically illustrates a flow of applying for storage space in the buffer pool to cache a Frame packet. As shown in fig. 4, in step S401, it is determined whether the size of the Frame packet is smaller than or equal to the size of the un-applied unused space; if yes, step S402 is executed, and if no, step S403 is executed. In step S402, a corresponding application space is obtained from the un-applied unused space according to the size of the Frame packet. In step S403, the release of the used space is awaited until the size of the un-applied unused space is greater than or equal to the size of the Frame packet. In step S404, it is determined whether the size of the remaining space in the applied unused space is greater than or equal to the size of the Frame packet; if yes, step S405 is executed, and if no, step S406 is executed. In step S405, the Frame packet is put into the applied unused space. In step S406, the release of the used space is awaited until the size of the remaining space in the applied unused space is greater than or equal to the size of the Frame packet.
When a new Frame packet arrives, space is first applied for from the un-applied unused space. If the size of the Frame packet is smaller than or equal to the un-applied unused space, part or all of the un-applied unused space is allocated to the Frame packet as an application space. It is then judged whether the size of the remaining space in the applied unused space is greater than or equal to the size of the Frame packet: if so, the Frame packet is stored in the applied unused space and the application space is released; if not, the release of the used space is awaited, with Frame packets cached in the applied unused space being transferred to the used space, until the remaining space in the applied unused space is greater than or equal to the size of the Frame packet, so that the Frame packet in the application space can be transferred to the applied unused space and the application space released. If the size of the Frame packet is larger than the un-applied unused space, the release of the used space is awaited: Frame packets in the applied unused space are transferred to the used space while Frame packets in application spaces are transferred to the applied unused space, releasing those application spaces, until the un-applied unused space is greater than or equal to the size of the Frame packet, at which point part or all of it can be allocated to the Frame packet as an application space. It should be noted that the size of the un-applied unused space in the present application is greater than or equal to the theoretical maximum size of a Frame packet, which ensures that a corresponding application space can always be allocated to a new Frame packet.
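The admission decision in the Fig. 4 flow can be sketched as a pure function. Sizes are in abstract units, and the "wait" outcomes stand in for blocking on the release of used space rather than actually blocking; this is an illustrative reduction of the flow, not the patent's implementation.

```python
def admit(frame_size: int, unapplied_free: int, applied_free: int) -> str:
    """Decide what the pool does with a new Frame packet of `frame_size`."""
    if frame_size > unapplied_free:
        # Step S403: wait until used space is released and more
        # un-applied unused space becomes available.
        return "wait_for_unapplied_space"
    if frame_size > applied_free:
        # Step S406: application space granted, but wait before the packet
        # can actually be stored in the applied unused space.
        return "wait_for_applied_space"
    # Steps S402 + S405: allocate application space and store the packet.
    return "store_in_applied_unused_space"
```

The two wait branches correspond to the two loops in Fig. 4: one before an application space can be granted, one before the granted packet can be moved into the applied unused space.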
In an embodiment of the application, when Frame packets are stored in the buffer pool, they may be sorted according to the timestamp and/or index number contained in the Frame object of each Frame packet, stored in the buffer pool in order, and then sent to the decoding end in order, so that the video images displayed by the client are consecutive. It should be noted that when the source end (server) that sends the data packets containing the image frames has no fault, the timestamp and index number in a Frame packet usually correspond, and the client receives the data packets in time order, so the Frame packets can be sorted by index number or timestamp alone; but when the source end (server) has a fault, the transmission order may become disordered, so the Frame packets need to be sorted by timestamp and index number simultaneously to guarantee that they are sent to the decoding end in order for parsing.
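The combined ordering described above can be expressed as a single sort key over timestamp and index number, so that arrivals disordered by a source-end fault are still handed to the decoder in sequence. Packets are modeled here as (timestamp, index) tuples for illustration.

```python
def order_packets(packets):
    """Sort Frame packets by timestamp first, then by index number."""
    return sorted(packets, key=lambda p: (p[0], p[1]))
```

Sorting on the pair rather than on either field alone resolves ties when two packets share a timestamp and also corrects index-order anomalies introduced by a faulty source end.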
In step S230, the Frame packet is input to a decoding end at a maximum Frame rate, where the maximum Frame rate is obtained by performing dynamic capability detection on the decoding end according to a test code stream having the same parameters as the single-Frame image and then adjusting based on the dynamic capability.
In an embodiment of the present application, while buffering Frame packets, the buffer pool also sends the buffered Frame packets to the decoding end for decoding. However, in order to achieve stable flow control, improve the decoding performance of the decoding end, and at the same time improve the transmission and decoding efficiency of the Frame packets, the Frame packets need to be sent to the decoding end at a high and stable frame sending speed.
Generally, a chip as a decoding end increases the operating frequency with the increase of the frame sending speed, so as to increase the decoding speed of the chip.
In an embodiment of the application, when dynamic capability detection is performed, the parameter configuration corresponding to the image frames in the current scene first needs to be acquired, a test code stream with the same parameter configuration is then obtained according to that configuration, and finally detection is performed based on the test code stream.
Fig. 5 schematically illustrates a flowchart of obtaining the maximum frame sending speed. As shown in fig. 5, in step S501, the parameters corresponding to the image frames in the current scene are obtained, the parameters specifically including the set frame rate; in step S502, a test code stream having the same parameters is obtained according to the parameters; in step S503, the test code stream is input to the decoding end at the set frame rate to obtain the reference output frame rate and the reference single-frame average decoding delay; in step S504, a first frame rate threshold is determined according to the set frame rate and a first coefficient, and the reference output frame rate is compared with the first frame rate threshold; in step S505, when the reference output frame rate is less than the first frame rate threshold, the set frame rate is taken as the maximum frame sending speed; in step S506, when the reference output frame rate is greater than or equal to the first frame rate threshold, the set frame rate is increased to obtain a frame sending speed; in step S507, the test code stream is input to the decoding end at that frame sending speed to obtain the output frame rate and the single-frame average decoding delay; in step S508, the maximum frame sending speed is determined based on the frame sending speed, the reference single-frame average decoding delay, the output frame rate, and the single-frame average decoding delay.
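The probing loop of Fig. 5 can be sketched as follows. Here `decode_probe(speed)` is a hypothetical callback that feeds the test code stream at `speed` fps and returns (output frame rate, single-frame average decoding delay); the 0.9 coefficient and the 240 fps cap follow example values given elsewhere in this text, and the three-level judgment is compressed into one loop body for illustration.

```python
FIRST_COEFF = 0.9  # example value from the [0.85, 0.95] interval

def find_max_send_speed(set_frame_rate, decode_probe, step=10, speed_cap=240):
    ref_out, ref_delay = decode_probe(set_frame_rate)       # S503
    if ref_out < FIRST_COEFF * set_frame_rate:              # S504 / S505
        return set_frame_rate
    speed = set_frame_rate + step                           # S506
    while True:
        out, delay = decode_probe(speed)                    # S507
        # Three-level judgment (S508; detailed in Figs. 6-8):
        if out < FIRST_COEFF * set_frame_rate:              # level 1
            return speed
        if delay > ref_delay:                               # level 2
            return speed
        if speed > speed_cap:                               # level 3
            return speed
        speed += step                                       # update and loop
```

Each pass raises the probe speed by a fixed increment until one of the three levels fires, mirroring the loop-until-found behavior described for the levels below.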
The test code stream in step S502 may be an encoded code stream transmitted by the server in real time that has the same resolution as the current video and parameters such as the set frame rate.
In step S504, the first coefficient may specifically be any value in the interval [0.85, 0.95], for example 0.9; that is, when the reference output frame rate is less than 90% of the set frame rate, the current frame sending speed is considered to be the maximum frame sending speed. This is because, under normal conditions, the output frame rate of the decoding end is close to the set frame rate, but under network congestion, the performance of the decoding end drops, causing its output frame rate to drop. If the output frame rate falls to a certain proportion of the frame sending speed or below, the decoding performance of the decoding end has already started to weaken, so the frame sending speed under that condition can be taken as the maximum frame sending speed at which Frame packets in the buffer pool are sent to the decoding end for decoding, performing decoding at higher efficiency while guaranteeing the decoding performance of the decoding end, so as to relieve network congestion.
The reference output frame rate, reference single-frame average decoding delay, output frame rate, and single-frame average decoding delay obtained in steps S503 and S507 may be obtained by gathering statistics on decoding results over a preset duration, for example by decoding the encoded code stream for 2 seconds; statistics may also be gathered over other decoding durations, which is not specifically limited in this embodiment of the present application.
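The two statistics could be computed from per-frame decode timings over the fixed window, for example as below; the window length and the measurement shape are illustrative assumptions.

```python
def probe_stats(decode_times_ms, window_s=2.0):
    """Output frame rate = frames decoded / window; delay = mean per-frame
    decode time over the same window."""
    frames = len(decode_times_ms)
    out_fps = frames / window_s
    avg_delay_ms = sum(decode_times_ms) / frames if frames else 0.0
    return out_fps, avg_delay_ms
```

For instance, 120 frames decoded in a 2-second window, each taking 10 ms, yields an output frame rate of 60 fps and a single-frame average decoding delay of 10 ms.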
In step S506, when the frame sending speed is determined, the set frame rate may be increased by a fixed increment or a random increment. For example, when the set frame rate is 60 fps, it may be increased by a fixed increment such as 10 fps, 20 fps, or 30 fps, or by a different increment each time.
In an embodiment of the present application, when the reference output frame rate is greater than or equal to the first frame rate threshold, the maximum frame sending speed may be determined according to the increased frame sending speed, the reference single-frame average decoding delay, and the output frame rate and single-frame average decoding delay corresponding to the increased frame sending speed. Specifically, the maximum frame sending speed may be obtained through judgments at three levels: the first level determines the maximum frame sending speed according to the output frame rate and the increased frame sending speed, the second level according to the reference single-frame average decoding delay and the single-frame average decoding delay, and the third level according to the increased frame sending speed itself.
In one embodiment of the present application, when the maximum frame sending speed is determined according to these three levels, the processing logic of the three levels is interrelated. For example, the maximum frame sending speed may first be sought at the first level; if it cannot be determined at the first level, it is sought at the second level; correspondingly, if it cannot be determined at the second level, it is sought at the third level. Further, when the maximum frame sending speed cannot be determined at the third level either, the current frame sending speed is updated, that is, increased by a fixed or random increment to obtain an updated frame sending speed, and the maximum frame sending speed is determined according to the updated frame sending speed; this process loops until the maximum frame sending speed is obtained.
It should be noted that although the above embodiment determines the maximum frame sending speed according to the judgment flows of the first, second, and third levels in sequence, the embodiment of the present application is not limited to this order; that is, the maximum frame sending speed may be determined according to the processing logic of the first, second, and third levels in any order. For example, whether to use the processing logic of the first level may be decided according to the judgment result of the second level, whether to use the processing logic of the third level according to the judgment result of the first level, and so on. Correspondingly, if the maximum frame sending speed cannot be determined from the last judgment result, the frame sending speed is updated again, and the maximum frame sending speed is determined according to the updated frame sending speed and its corresponding parameters, until the maximum frame sending speed is obtained.
Next, taking the order of the processing logic of the first level, the second level, and the third level as an example, the flow of determining the maximum frame sending speed at three different levels in the embodiment of the present application is described in detail.
Fig. 6 schematically shows a flowchart for determining the maximum frame sending speed according to the output frame rate and the frame sending speed. As shown in fig. 6, in step S601, a second frame rate threshold is determined according to the set frame rate and a second coefficient, and the output frame rate is compared with the second frame rate threshold; in step S602, when the output frame rate is less than the second frame rate threshold, the frame sending speed is taken as the maximum frame sending speed; in step S603, when the output frame rate is greater than or equal to the second frame rate threshold, the maximum frame sending speed is determined according to the frame sending speed, the reference single-frame average decoding delay, and the single-frame average decoding delay.
The second coefficient is likewise an arbitrary value in the interval [0.85, 0.95], and may be the same as or different from the first coefficient, which is not specifically limited in this embodiment of the present application.
Fig. 7 is a schematic diagram illustrating a process of determining a maximum frame rate according to a reference single frame average decoding delay and a single frame average decoding delay, as shown in fig. 7, in step S701, comparing the reference single frame average decoding delay with the single frame average decoding delay; in step S702, when the average decoding delay of the single frame is greater than the average decoding delay of the reference single frame, the frame sending speed is taken as the maximum frame sending speed; in step S703, when the average decoding delay of the single frame is less than or equal to the average decoding delay of the reference single frame, the maximum frame transmission speed is determined according to the frame transmission speed.
The reference single-frame average decoding delay is obtained by sending the test code stream to the decoding end for decoding at the set frame rate. When the current frame sending speed is obtained by increasing the frame rate on the basis of the set frame rate, and the single-frame average decoding delay obtained by sending the test code stream to the decoding end at the current frame sending speed is greater than the reference single-frame average decoding delay, the decoding end is approaching its load limit at the current frame sending speed, so the current frame sending speed can be taken as the maximum frame sending speed.
Fig. 8 is a schematic diagram illustrating a flow of determining a maximum frame sending speed according to the frame sending speed, as shown in fig. 8, in step S801, the frame sending speed is compared with a frame rate threshold; in step S802, when the frame sending speed is greater than the frame rate threshold, the frame sending speed is taken as the maximum frame sending speed; in step S803, when the frame sending speed is less than or equal to the frame rate threshold, updating the frame sending speed, and inputting the test code stream to the decoding end for decoding according to the updated frame sending speed, so as to obtain an updated output frame rate and an updated single-frame average decoding delay; in step S804, the maximum frame transmission speed is determined according to the updated frame transmission speed, the reference single frame average decoding delay, the updated output frame rate, and the updated single frame average decoding delay.
The frame rate threshold may be the maximum output frame rate of the decoding end; for example, when the resolution is 1080P and the maximum output frame rate of the decoding end is 240 fps, once the frame sending speed increases to 240 fps or more, it may be directly determined to be the maximum frame sending speed. Since the maximum output frame rate of the decoding end varies with the resolution, when the frame rate threshold is determined, the maximum output frame rate corresponding to the decoding end may be determined as the frame rate threshold according to the resolution of the video in the current scene, and the frame sending speed is then judged against the determined frame rate threshold.
In an embodiment of the present application, after the maximum frame sending speed is determined, it may be set as the maximum release speed of the buffer pool. When the speed at which the network transmits data packets is greater than the maximum frame sending speed of the buffer pool, the buffer pool sends the buffered Frame packets to the decoding end at the maximum release speed until the network congestion disappears. When the Frame packets cached in the buffer pool are transmitted to the decoding end, they are sent in order of their index numbers, so that the decoding end can decode the code streams corresponding to the Frame packets in order to obtain the image frames for rendering and display.
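The release behavior above can be sketched as a drain loop that sends packets in index order and never faster than the measured maximum frame sending speed; `send` is a hypothetical callback that hands a packet to the decoding end.

```python
import time

def drain(pool, send, max_fps):
    """Send buffered packets to the decoder, index-ordered, rate-capped."""
    interval = 1.0 / max_fps
    for pkt in sorted(pool, key=lambda p: p[0]):  # p[0] is the index number
        send(pkt)
        time.sleep(interval)  # cap the release rate at max_fps
```

A production implementation would pace against a monotonic clock rather than sleeping a fixed interval, but the fixed sleep keeps the rate-capping idea visible.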
In an embodiment of the present application, if the network congestion phenomenon is severe and all the space of the buffer pool is occupied, the network congestion may be relieved by emptying the buffer pool.
In an embodiment of the present application, the degree of network congestion may be judged according to the network congestion duration and the number of image frames waiting to apply for storage space in the buffer pool: when the network congestion duration is greater than a preset duration and the number of image frames waiting for storage space is greater than a preset threshold, the buffer pool is emptied to relieve the congestion. The preset duration and preset threshold may be set according to actual needs; for example, the preset duration may be set to 5 s and the preset threshold to 30 frames, that is, when the network congestion lasts longer than 5 s and more than 30 image frames are waiting to apply for storage space in the buffer pool, the network congestion may be relieved by emptying the buffer pool.
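The severity test reduces to a two-condition predicate; the defaults below are the example values quoted above (5 s, 30 frames) and would be tuned in practice.

```python
def should_flush(congestion_s, waiting_frames, max_s=5.0, max_waiting=30):
    """Flush only when congestion has lasted longer than `max_s` seconds AND
    more than `max_waiting` frames are waiting for pool space."""
    return congestion_s > max_s and waiting_frames > max_waiting
```

Requiring both conditions avoids flushing on a brief burst (long queue, short duration) or a slow trickle (long duration, short queue), either of which the pool can absorb without discarding frames.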
In one embodiment of the present application, different emptying manners may be adopted according to whether an IDR frame exists in the buffer pool. Specifically, whether an Instantaneous Decoding Refresh (IDR) Frame exists among the Frame packets stored in the buffer pool may be detected, and the buffer pool is emptied according to the detection result. When an IDR Frame exists among the Frame packets stored in the buffer pool, the target index number corresponding to the IDR Frame is acquired, all Frame packets cached in the buffer pool with index numbers smaller than the target index number are discarded, and the Frame packet corresponding to the IDR Frame is directly sent to the decoding end. When no IDR Frame exists among the Frame packets stored in the buffer pool, the client sends an IDR Frame acquisition request to the server; upon receiving the IDR Frame sent by the server in response to the request, all Frame packets in the buffer pool are emptied and the Frame packet corresponding to the received IDR Frame is sent to the decoding end.
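The two emptying branches can be sketched as one function. This is an illustrative model only: packets are modelled as dicts keyed by index number with an `is_idr` flag, and `request_idr_from_server` stands in for the client's IDR acquisition request; all names are hypothetical.

```python
def flush_buffer_pool(packets, request_idr_from_server):
    """packets: dict mapping index number -> Frame packet (a dict with an
    'is_idr' flag). Returns (packets_kept, packet_for_decoder).

    With a buffered IDR: drop every packet whose index is smaller than the
    IDR's target index and hand the IDR packet to the decoder.
    Without one: request a fresh IDR from the server and empty the pool."""
    idr_indices = [i for i, p in packets.items() if p.get("is_idr")]
    if idr_indices:
        target = min(idr_indices)  # earliest IDR in the pool
        kept = {i: p for i, p in packets.items() if i >= target}
        return kept, kept[target]
    # no IDR buffered: ask the server for one, then discard everything
    idr_packet = request_idr_from_server()
    return {}, idr_packet
```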
Emptying the Frame packets in the buffer pool increases the available space in the buffer pool, so that more Frame packets can be cached and the network congestion is relieved. Although emptying the Frame packets in the buffer pool has a slight impact on the video display effect, this impact is negligible compared with the stutter caused by network congestion.
In an embodiment of the application, relieving network congestion keeps the network smooth, which ensures that the decoding end can steadily digest the Frame packets sent by the buffer pool, so that the buffer pool does not accumulate Frame packets.
The buffer pool-based image decoding method in the embodiments of the present application can be applied to any scene involving image frame processing, such as cloud games, video calls, live streaming, and video playback. Since such scenes have high requirements on picture smoothness, and network congestion causes the picture to stutter and seriously affects the experience of players, call parties, and viewers, the buffer pool-based image decoding method in the embodiments of the present application can be used to cache and decode the received data packets, thereby guaranteeing the quality and smoothness of the game and video pictures.
Next, taking a cloud game scene as an example, the image decoding method based on the buffer pool in the embodiment of the present application will be specifically described.
A cloud game is a game that runs on a remote server; as long as a network connection exists, the client can play games with a very large amount of computation without downloading or installing them and without regard to terminal configuration. In terms of data transmission, the remote server encodes the image information of each image frame of the cloud game to form a data packet corresponding to each image frame and sends the data packet to the client over the network. After receiving the data packet, the client splits it into Frame packets, caches the Frame packets in the buffer pool, and sends them to the decoding end at a certain frame sending speed for decoding to obtain the corresponding image information; finally, rendering is performed according to the obtained image information, so that a clear and smooth picture is seen on the display interface of the client.
When the client receives a data packet, splits it into Frame packets, and caches them in the buffer pool, the NALU type is obtained by extracting the NALU header in the data packet; the RBSP data contained in the multiple NALU units of the data packet are spliced to form the code stream corresponding to an image Frame, and NALU parsing is performed on the code stream according to the NALU type to obtain the Frame information corresponding to the image Frame, where the Frame information specifically includes the Frame type, timestamp, Frame size, and Index, and may further include information such as resolution. A Frame object can then be constructed from the Frame information and the memory address of the Frame, and storage space is applied for in the buffer pool based on the Frame object. The buffer pool is divided into three parts: unapplied unused space, applied unused space, and used space. When the size of the unapplied unused space is greater than or equal to the size of the Frame packet corresponding to the image Frame, part or all of the unapplied unused space is allocated to the Frame packet as application space according to the size of the Frame packet, and whether the size of the remaining space in the applied unused space is greater than or equal to the size of the Frame packet is detected; if so, the Frame packet is stored in the applied unused space and the application space is released; if not, the release of used space is awaited until the size of the remaining space in the applied unused space is greater than or equal to the size of the Frame packet. When the size of the unapplied unused space is smaller than the size of the Frame packet, the release of used space is awaited until the size of the unapplied unused space is greater than or equal to the size of the Frame packet.
The Frame packet is constructed according to the code stream corresponding to the image Frame and the Frame information.
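The three-region accounting described above can be sketched as a small class. This is a simplified model under stated assumptions (sizes in bytes, blocking waits modelled as a boolean result, class and method names hypothetical), not the patent's implementation.

```python
class BufferPool:
    """Minimal sketch of the three regions: unapplied unused space,
    applied unused space, and used space."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.applied = 0   # applied for, not yet holding a stored packet
        self.used = 0      # holding stored Frame packets

    @property
    def unapplied(self):
        return self.capacity - self.applied - self.used

    def apply(self, size):
        """Reserve size bytes as application space; False means the
        caller must wait for used space to be released first."""
        if size <= self.unapplied:
            self.applied += size
            return True
        return False

    def store(self, size):
        """Write a Frame packet: move size bytes from applied to used."""
        assert size <= self.applied
        self.applied -= size
        self.used += size

    def release(self, size):
        """Decoder consumed a packet: free size bytes of used space."""
        assert size <= self.used
        self.used -= size
```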
When the network is congested, a plurality of Frame packets are cached in the buffer pool, and in order to relieve the network congestion and improve the decoding performance of the decoding end, the buffer pool can send the cached Frame packets to the decoding end for decoding at the maximum frame sending speed. The maximum frame sending speed is the maximum release speed of the buffer pool, and can be obtained by performing dynamic capability detection on the load capacity of the decoding end and then adjusting based on the detected capability. Specifically, configuration parameters such as the resolution and set frame rate corresponding to the current cloud game can be acquired; then a test code stream with the same configuration parameters can be obtained from the server in real time, the test code stream is input to the decoding end for decoding at the set frame rate, and the output frame rate and single-frame average decoding delay over 2s of decoding are counted as the reference output frame rate and reference single-frame average decoding delay. The reference output frame rate is then compared with a first frame rate threshold, where the first frame rate threshold is the product of the set frame rate and a first coefficient, and the first coefficient is any value in the interval [0.85,0.95]. When the reference output frame rate is smaller than the first frame rate threshold, the set frame rate is taken as the maximum frame sending speed; when the reference output frame rate is greater than or equal to the first frame rate threshold, the set frame rate is increased to obtain a frame sending speed, and the test code stream is input to the decoding end at that frame sending speed for decoding, so as to count the output frame rate and single-frame average decoding delay over 2s of decoding. When the set frame rate is increased, it may be increased by a fixed increment or a random increment. Finally, the output frame rate is compared with the frame sending speed multiplied by a second coefficient, the single-frame average decoding delay is compared with the reference single-frame average decoding delay, and the frame sending speed is compared with the frame rate threshold, so as to determine the maximum frame sending speed according to the comparison results. The second coefficient is also an arbitrary value in the interval [0.85,0.95], and may be the same as or different from the first coefficient.
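The probing procedure above can be sketched as a loop. This is a simplified model under stated assumptions: `decode_test` is a hypothetical callback that decodes the 2s test code stream at a given frame sending speed and returns the measured output frame rate and single-frame average decoding delay; the coefficients default to 0.9 (within [0.85,0.95]) and a fixed increment of 30fps is assumed.

```python
def probe_max_send_speed(decode_test, set_fps, fps_cap,
                         c1=0.9, c2=0.9, step=30):
    """decode_test(send_fps) -> (output_fps, avg_delay) for the 2 s test
    stream. Starts at the configured frame rate and raises the frame
    sending speed until the output frame rate or per-frame decoding delay
    degrades, or the decoder's maximum output frame rate is exceeded."""
    out_fps, ref_delay = decode_test(set_fps)
    if out_fps < c1 * set_fps:       # decoder already saturated
        return set_fps
    send_fps = set_fps
    while True:
        send_fps += step             # fixed increment (could be random)
        out_fps, delay = decode_test(send_fps)
        if out_fps < c2 * send_fps:  # output can no longer keep up
            return send_fps
        if delay > ref_delay:        # per-frame latency is rising
            return send_fps
        if send_fps > fps_cap:       # beyond the frame rate threshold
            return send_fps
```

For instance, with a decoder whose output saturates at 200fps and a 240fps cap, probing from a 60fps set frame rate stops at 240fps, since at that speed the measured output (200fps) falls below 0.9 times the sending speed.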
After the maximum Frame sending speed is determined, the Frame packet in the buffer pool is sent to the decoding end at the maximum Frame sending speed for decoding, and the Frame packet is rendered according to image data obtained by decoding to form a cloud game interface.
Further, when the network congestion duration is longer than the preset duration and the number of image frames waiting to apply for storage space in the buffer pool is larger than the preset threshold, the buffer pool can be emptied to relieve the decoding pressure caused by network congestion. When the buffer pool is emptied, if the Frame packets stored in the buffer pool contain an IDR Frame, the Frame packets whose index numbers are smaller than the target index number corresponding to the IDR Frame are discarded, and the Frame packet containing the IDR Frame is sent directly to the decoding end for decoding; if the Frame packets stored in the buffer pool do not contain an IDR Frame, a new IDR Frame is requested from the server, and when the Frame packet containing the IDR Frame is received, the buffer pool is emptied and that Frame packet is sent to the decoding end for decoding.
In the buffer pool-based image decoding method of the embodiments of the present application, after the encoded code stream is obtained, a Frame object corresponding to a single-frame image is constructed according to the encoded code stream; an instruction is sent to the buffer pool based on the Frame object to apply for storage space in the buffer pool; after the Frame object successfully applies for storage space, a Frame packet containing the code stream and Frame information is stored in the buffer pool; and the Frame packets stored in the buffer pool are then input to the decoding end at the maximum frame sending speed, where the maximum frame sending speed is obtained by performing dynamic capability detection on the decoding end according to a test code stream with the same parameters as the single-frame image and then adjusting based on the detected capability. On the one hand, the present application can buffer the data packets transmitted over the network into the buffer pool when the network is congested, so as to prevent a large amount of data from impacting the decoding end and degrading its performance; on the other hand, the maximum frame sending speed that maximizes the decoding performance of the decoding end can be obtained through dynamic capability detection of the decoding end's load capacity. When the Frame packets in the buffer pool are input to the decoding end at the maximum frame sending speed, not only can network congestion be relieved, but flow stability control is also achieved, improving the decoding performance of the decoding end.
It should be noted that although the various steps of the methods in this application are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the shown steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
The following describes embodiments of an apparatus of the present application, which can be used to perform the buffer pool based image decoding method in the above embodiments of the present application. Fig. 9 schematically shows a block diagram of a buffer pool-based image decoding apparatus according to an embodiment of the present application. As shown in fig. 9, the buffer pool-based image decoding apparatus 900 includes: the object constructing module 910, the space applying module 920 and the decoding module 930 specifically:
an object constructing module 910, configured to obtain an encoded code stream, and construct a Frame object corresponding to a single Frame image based on the encoded code stream; a space application module 920, configured to apply for a storage space in a buffer pool based on the Frame object, and store a Frame packet corresponding to the single-Frame image in the buffer pool; the decoding module 930 is configured to input the Frame packet to a decoding end at a maximum Frame sending speed, where the maximum Frame sending speed is obtained by performing dynamic capability detection on the decoding end according to a test code stream having the same parameter as the single Frame image and then adjusting based on the dynamic capability.
In some embodiments of the present application, based on the above technical solutions, the object constructing module 910 is configured to: analyzing the coding code stream to acquire frame information corresponding to the single-frame image, wherein the frame information comprises a frame type, a timestamp, a frame size and a serial number; and constructing the Frame object according to the Frame information and the memory address corresponding to the single-Frame image.
In some embodiments of the present application, the buffer pool comprises: an unapplied unused space, an applied unused space, and a used space; the Frame object comprises Frame information and a memory address corresponding to the single-Frame image; based on the above technical solution, the space application module 920 is configured to: send a storage space application instruction to the buffer pool based on the memory address, and determine the size of the Frame packet according to the Frame size in the Frame information and the size of the Frame information; when the size of the Frame packet is smaller than or equal to the size of the unapplied unused space, obtain space in the unapplied unused space as an application space according to the size of the Frame packet; and when the size of the remaining space in the applied unused space is larger than or equal to the size of the Frame packet, store the Frame packet in the applied unused space.
In some embodiments of the present application, based on the above technical solutions, the space application module 920 is further configured to: when the size of the Frame packet is larger than the size of the unapplied unused space, wait for the release of the used space until the size of the unapplied unused space is larger than or equal to the size of the Frame packet; and when the size of the remaining space in the applied unused space is smaller than the size of the Frame packet, wait for the release of the used space until the size of the remaining space in the applied unused space is larger than or equal to the size of the Frame packet.
In some embodiments of the present application, based on the above technical solutions, the space application module 920 is further configured to: and storing the Frame packets in the buffer pool in sequence according to the time stamps and/or the index numbers in the Frame objects.
In some embodiments of the present application, the parameter comprises setting a frame rate; based on the above technical solution, the decoding module 930 includes: the first input unit is used for inputting the test code stream to the decoding end at the set frame rate for decoding so as to obtain a reference output frame rate and a reference single-frame average decoding delay; a first comparison unit, configured to determine a first frame rate threshold according to the set frame rate and a first coefficient, and compare the reference output frame rate with the first frame rate threshold; a first determining unit, configured to set the set frame rate as the maximum frame sending speed when the reference output frame rate is smaller than the first frame rate threshold; the second input unit is used for increasing the set frame rate to acquire a frame sending speed when the reference output frame rate is greater than or equal to the first frame rate threshold, and inputting the test code stream to the decoding end at the frame sending speed for decoding to acquire an output frame rate and a single-frame average decoding delay; and the first calculation unit is used for determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay, the output frame rate and the single-frame average decoding delay.
In some embodiments of the present application, based on the above technical solutions, the first computing unit includes: the second comparison unit is used for determining a second frame rate threshold according to the frame sending speed and a second coefficient and comparing the output frame rate with the second frame rate threshold; a second determining unit configured to set the frame transmission speed as the maximum frame transmission speed when the output frame rate is less than the second frame rate threshold; and the second calculation unit is used for determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay and the single-frame average decoding delay when the output frame rate is greater than or equal to the second frame rate threshold.
In some embodiments of the present application, based on the above technical solutions, the second calculating unit includes: a third comparison unit, configured to compare the reference single-frame average decoding delay with the single-frame average decoding delay; a third determining unit, configured to use the frame sending speed as the maximum frame sending speed when the average decoding delay of the single frame is greater than the average decoding delay of the reference single frame; and the fourth calculating unit is used for determining the maximum frame sending speed according to the frame sending speed when the single frame average decoding delay is less than or equal to the reference single frame average decoding delay.
In some embodiments of the present application, based on the above technical solutions, the third determining unit includes: the fourth comparison unit is used for comparing the frame sending speed with a frame rate threshold value; a fourth determination unit configured to take the frame sending speed as the maximum frame sending speed when the frame sending speed is greater than the frame rate threshold; the updating unit is used for updating the frame sending speed when the frame sending speed is less than or equal to the frame rate threshold value, and inputting the test code stream to the decoding end for decoding at the updated frame sending speed so as to obtain an updated output frame rate and an updated single-frame average decoding delay; and the fifth calculating unit is used for determining the maximum frame sending speed according to the updated frame sending speed, the reference single-frame average decoding delay, the updated output frame rate and the updated single-frame average decoding delay.
In some embodiments of the present application, based on the above technical solution, the buffer pool-based image decoding apparatus 900 further includes: the acquisition module is used for acquiring network congestion duration and the number of image frames waiting for applying the storage space of the buffer pool; and the emptying module is used for detecting whether an IDR Frame exists in the Frame packet stored in the buffer pool or not when the network congestion time is longer than a preset time and the number of the image frames is larger than a preset threshold value, and emptying the buffer pool according to the detection result.
In some embodiments of the present application, based on the above technical solutions, the emptying module is configured to: when the IDR Frame exists in the buffer pool, acquire a target index number corresponding to the IDR Frame; and discard the Frame packets stored in the buffer pool whose index numbers are smaller than the target index number, and send the Frame packet corresponding to the IDR Frame from the buffer pool to the decoding end.
In some embodiments of the present application, based on the above technical solutions, the emptying module is configured to: when the IDR frame does not exist in the buffer pool, sending an IDR frame acquisition request to a server; and receiving the IDR Frame sent by the server in response to the IDR Frame acquisition request, emptying the buffer pool, and sending the Frame packet containing the IDR Frame from the buffer pool to the decoding end.
The specific details of the buffer pool-based image decoding apparatus provided in the embodiments of the present application have been described in detail in the corresponding method embodiments, and are not described herein again.
Fig. 10 schematically shows a block diagram of a computer system structure of an electronic device for implementing the embodiments of the present application, which may be the client 101 or the server 102 shown in fig. 1.
It should be noted that the computer system 1000 of the electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the application scope of the embodiments of the present application.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. Various programs and data necessary for system operation are also stored in the random access memory 1003. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other via a bus 1004. An Input/Output (I/O) interface 1005 is also connected to the bus 1004.
In some embodiments, the following components are connected to the input/output interface 1005: an input portion 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 1008 including a hard disk and the like; and a communication section 1009 including a network interface card such as a local area network card, modem, or the like. The communication section 1009 performs communication processing via a network such as the internet. The driver 1010 is also connected to the input/output interface 1005 as necessary. A removable medium 1011 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1010 as necessary, so that a computer program read out therefrom is mounted into the storage section 1008 as necessary.
In particular, according to embodiments of the present application, the processes described in the various method flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication part 1009 and/or installed from the removable medium 1011. When the computer program is executed by the cpu 1001, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to make an electronic device execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An image decoding method based on a buffer pool is applied to a client, and comprises the following steps:
acquiring a coded code stream, and constructing a Frame object corresponding to a single-Frame image based on the coded code stream;
applying for a storage space in a buffer pool based on the Frame object, and storing a Frame packet corresponding to the single-Frame image in the buffer pool;
and inputting the Frame packet to a decoding end at a maximum Frame sending speed, wherein the maximum Frame sending speed is obtained by detecting the dynamic capacity of the decoding end according to a test code stream with the same parameters as the single-Frame image and adjusting based on the dynamic capacity.
2. The method of claim 1, wherein the constructing a Frame object corresponding to a single-Frame image based on the encoded codestream comprises:
analyzing the coding code stream to acquire frame information corresponding to the single-frame image, wherein the frame information comprises a frame type, a timestamp, a frame size and a serial number;
and constructing the Frame object according to the Frame information and the memory address corresponding to the single Frame image.
3. The method of claim 1, wherein the buffer pool comprises: an unapplied unused space, an applied unused space, and a used space; the Frame object comprises Frame information and a memory address corresponding to the single-Frame image;
the applying for the storage space in the buffer pool based on the Frame object and storing the Frame packet corresponding to the single-Frame image in the buffer pool comprises:
sending a storage space application instruction to the buffer pool based on the memory address, and determining the size of the Frame packet according to the Frame size in the Frame information and the size of the Frame information;
when the size of the Frame packet is smaller than or equal to the size of the unapplied unused space, obtaining space in the unapplied unused space as an application space according to the size of the Frame packet; and
and when the size of the residual space in the applied unused space is larger than or equal to the size of the Frame packet, storing the Frame packet in the applied unused space.
4. The method of claim 3, further comprising:
when the size of the Frame packet is larger than the size of the unapplied unused space, waiting for the release of the used space until the size of the unapplied unused space is larger than or equal to the size of the Frame packet; and
and when the size of the residual space in the applied unused space is smaller than the size of the Frame packet, waiting for the release of the used space until the size of the residual space in the applied unused space is larger than or equal to the size of the Frame packet.
5. The method of claim 4, further comprising:
and storing the received Frame packets in the buffer pool in sequence according to the time stamps and/or the index numbers in the Frame objects.
6. The method of claim 1, wherein the parameter comprises setting a frame rate;
after the dynamic capability detection is carried out on the decoding end according to the test code stream with the same parameters as the single-frame image, the maximum frame sending speed is obtained based on the dynamic capability adjustment, and the method comprises the following steps:
inputting the test code stream to the decoding end at the set frame rate for decoding so as to obtain a reference output frame rate and a reference single-frame average decoding delay;
determining a first frame rate threshold according to the set frame rate and a first coefficient, and comparing the reference output frame rate with the first frame rate threshold;
when the reference output frame rate is smaller than the first frame rate threshold, taking the set frame rate as the maximum frame sending speed;
when the reference output frame rate is greater than or equal to the first frame rate threshold, increasing the set frame rate to obtain a frame sending speed, and inputting the test code stream to the decoding end at the frame sending speed for decoding to obtain an output frame rate and a single-frame average decoding delay;
determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay, the output frame rate, and the single-frame average decoding delay.
7. The method of claim 6, wherein the determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay, the output frame rate, and the single-frame average decoding delay comprises:
determining a second frame rate threshold according to the frame sending speed and a second coefficient, and comparing the output frame rate with the second frame rate threshold;
when the output frame rate is smaller than the second frame rate threshold, taking the frame sending speed as the maximum frame sending speed;
when the output frame rate is greater than or equal to the second frame rate threshold, determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay, and the single-frame average decoding delay.
8. The method of claim 7, wherein the determining the maximum frame sending speed according to the frame sending speed, the reference single-frame average decoding delay, and the single-frame average decoding delay comprises:
comparing the reference single-frame average decoding delay with the single-frame average decoding delay;
when the reference single-frame average decoding delay is greater than the single-frame average decoding delay, taking the frame sending speed as the maximum frame sending speed;
when the reference single-frame average decoding delay is less than or equal to the single-frame average decoding delay, determining the maximum frame sending speed according to the frame sending speed.
9. The method of claim 8, wherein the determining the maximum frame sending speed according to the frame sending speed comprises:
comparing the frame sending speed with a frame rate threshold;
when the frame sending speed is greater than the frame rate threshold value, taking the frame sending speed as the maximum frame sending speed;
when the frame sending speed is less than or equal to the frame rate threshold, updating the frame sending speed, and inputting the test code stream to the decoding end at the updated frame sending speed for decoding so as to obtain an updated output frame rate and an updated single-frame average decoding delay;
determining the maximum frame sending speed according to the updated frame sending speed, the reference single-frame average decoding delay, the updated output frame rate, and the updated single-frame average decoding delay.
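Claims 6–9 together describe one iterative probing loop: feed the test code stream faster and faster until either the output frame rate falls behind, the per-frame decoding delay stops degrading, or a configured ceiling is reached. A minimal Python sketch under stated assumptions — the `decode` callback, the coefficients `c1`/`c2`, the growth factor `step`, and the `rate_cap` ceiling are all illustrative stand-ins, not values from the patent:

```python
def probe_max_send_speed(decode, set_rate, c1=0.9, c2=0.9,
                         rate_cap=240.0, step=1.5):
    """Probe the decoder's maximum frame sending speed (claims 6-9).

    `decode(rate)` is a hypothetical helper that feeds the test code
    stream at `rate` fps and returns (output_frame_rate, avg_delay).
    """
    # Claim 6: decode at the set frame rate to get the reference figures.
    ref_out, ref_delay = decode(set_rate)
    if ref_out < c1 * set_rate:
        return set_rate                 # decoder already saturated
    send_rate = set_rate * step         # increase the set frame rate
    while True:
        out, delay = decode(send_rate)
        if out < c2 * send_rate:        # claim 7: output falls behind
            return send_rate
        if ref_delay > delay:           # claim 8: per-frame delay improved
            return send_rate
        if send_rate > rate_cap:        # claim 9: past the frame rate threshold
            return send_rate
        send_rate *= step               # claim 9: update and probe again
```

With an idealized decoder that holds a constant delay and caps its output at 120 fps, the loop keeps raising the probing speed until the output-frame-rate test finally fails.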
10. The method according to any one of claims 1-9, further comprising:
acquiring a network congestion duration and the number of image frames waiting to apply for storage space in the buffer pool; and
when the network congestion duration is longer than a preset duration and the number of image frames is greater than a preset threshold, detecting whether an Instantaneous Decoding Refresh (IDR) Frame exists among the Frame packets stored in the buffer pool, and emptying the buffer pool according to the detection result.
11. The method of claim 10, wherein the detecting whether an IDR Frame exists among the Frame packets stored in the buffer pool, and emptying the buffer pool according to the detection result, comprises:
when the IDR frame exists in the buffer pool, acquiring a target index number corresponding to the IDR frame;
discarding the Frame packets in the buffer pool whose index numbers are smaller than the target index number, and sending the Frame packet corresponding to the IDR Frame from the buffer pool to the decoding end.
12. The method of claim 10, wherein the detecting whether an IDR Frame exists among the Frame packets stored in the buffer pool, and emptying the buffer pool according to the detection result, comprises:
when the IDR frame does not exist in the buffer pool, sending an IDR frame acquisition request to a server;
receiving the IDR Frame sent by the server in response to the IDR Frame acquisition request, emptying the buffer pool, and sending the Frame packet containing the IDR Frame from the buffer pool to the decoding end.
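Claims 10–12 can be read as one flush routine: under sustained congestion, restart decoding from an IDR frame, either one already buffered or one freshly requested from the server. A minimal Python sketch; the thresholds and the `request_idr`/`send_to_decoder` callbacks are hypothetical stand-ins for the server request and the decoder input queue:

```python
def flush_on_congestion(pool, congestion_ms, waiting_frames,
                        request_idr, send_to_decoder,
                        max_congestion_ms=200, max_waiting=30):
    """Empty the buffer pool around an IDR frame (claims 10-12).

    `pool` is a list of dicts with 'index' and 'is_idr' keys, ordered
    by index number; both threshold defaults are illustrative.
    """
    # Claim 10: flush only when BOTH congestion conditions hold.
    if congestion_ms <= max_congestion_ms or waiting_frames <= max_waiting:
        return pool
    idr = next((p for p in pool if p["is_idr"]), None)
    if idr is not None:
        # Claim 11: drop everything older than the buffered IDR,
        # then feed the IDR packet to the decoding end.
        pool[:] = [p for p in pool if p["index"] >= idr["index"]]
        send_to_decoder(idr)
    else:
        # Claim 12: no IDR buffered -> request one from the server,
        # empty the pool, and restart from the received IDR packet.
        idr = request_idr()
        pool.clear()
        pool.append(idr)
        send_to_decoder(idr)
    return pool
```

Restarting at an IDR frame matters because frames that reference discarded predecessors cannot be decoded; the IDR boundary is the earliest point from which decoding is self-contained.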
13. An image decoding apparatus based on a buffer pool, configured at a client, comprising:
the object construction module is used for acquiring a coded code stream and constructing a Frame object corresponding to a single-Frame image based on the coded code stream;
the space application module is used for applying for a storage space in a buffer pool based on the Frame object and storing a Frame packet corresponding to the single-Frame image in the buffer pool;
a decoding module, configured to input the Frame packet to a decoding end at a maximum Frame sending speed, wherein the maximum Frame sending speed is obtained by performing dynamic capability detection on the decoding end according to a test code stream having the same parameters as the single-Frame image and adjusting based on the dynamic capability.
14. A computer-readable medium, on which a computer program is stored which, when executed by a processor, implements the buffer-pool-based image decoding method of any one of claims 1 to 12.
15. An electronic device, comprising:
a processor; and
a memory to store instructions;
wherein the processor executes the instructions stored in the memory to implement the buffer pool based image decoding method of any one of claims 1 to 12.
CN202211033430.4A 2022-08-26 2022-08-26 Image decoding method and device based on buffer pool, readable medium and electronic equipment Pending CN115379235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211033430.4A CN115379235A (en) 2022-08-26 2022-08-26 Image decoding method and device based on buffer pool, readable medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211033430.4A CN115379235A (en) 2022-08-26 2022-08-26 Image decoding method and device based on buffer pool, readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115379235A true CN115379235A (en) 2022-11-22

Family

ID=84068105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211033430.4A Pending CN115379235A (en) 2022-08-26 2022-08-26 Image decoding method and device based on buffer pool, readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115379235A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115550690A (en) * 2022-12-02 2022-12-30 腾讯科技(深圳)有限公司 Frame rate adjusting method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
US10681097B2 (en) Methods and systems for data transmission
US8397269B2 (en) Fast digital channel changing
US7523482B2 (en) Seamless digital channel changing
CN111135569A (en) Cloud game processing method and device, storage medium and electronic equipment
US11870829B2 (en) Methods and systems for data transmission
CN110582012B (en) Video switching method, video processing device and storage medium
Tizon et al. MPEG-4-based adaptive remote rendering for video games
WO2023040825A1 (en) Media information transmission method, computing device and storage medium
US20140215017A1 (en) Prioritized side channel delivery for download and store media
JP2022545623A (en) Prediction-Based Drop Frame Handling Logic in Video Playback
CN111726657A (en) Live video playing processing method and device and server
CN112866746A (en) Multi-path streaming cloud game control method, device, equipment and storage medium
CN113905257A (en) Video code rate switching method and device, electronic equipment and storage medium
CN115379235A (en) Image decoding method and device based on buffer pool, readable medium and electronic equipment
Gutierrez-Aguado et al. Cloud-based elastic architecture for distributed video encoding: Evaluating H. 265, VP9, and AV1
US10609111B2 (en) Client-driven, ABR flow rate shaping
WO2023071469A1 (en) Video processing method, electronic device and storage medium
CN115767149A (en) Video data transmission method and device
CN112468818B (en) Video communication realization method and device, medium and electronic equipment
US11622135B2 (en) Bandwidth allocation for low latency content and buffered content
CN115225902A (en) High-resolution VR cloud game solution method based on scatter coding and computer equipment
US20190245749A1 (en) Optimizing cloud resources for abr systems
KR102072615B1 (en) Method and Apparatus for Video Streaming for Reducing Decoding Delay of Random Access in HEVC
TR2021020846A2 (en) A METHOD TO PLAY HIGH-QUALITY VIDEOS ON OLD DEVICES
CN117979062A (en) Real-time video stream transmission method and device based on coded stream reference count

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination