CN115278288B - Display processing method and device, computer equipment and readable storage medium - Google Patents


Info

Publication number
CN115278288B
CN115278288B · Application CN202211163467.9A
Authority
CN
China
Prior art keywords: image, code stream, decoding, parameter, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211163467.9A
Other languages
Chinese (zh)
Other versions
CN115278288A (en)
Inventor
王俊凯
张丹
刘剑
魏定强
陈超
王胜韬
李松桔
余颖
金泗涛
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority claimed from CN202211163467.9A
Publication of CN115278288A
Application granted
Publication of CN115278288B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Embodiments of the present application disclose a display processing method and apparatus, a computer device, and a readable storage medium. The method includes: detecting a demand for screen-projection display of a live broadcast picture on a terminal device, and acquiring code stream data of the live broadcast picture from the terminal device; acquiring a target control parameter related to the first opening time of the code stream data, and analyzing the encoding process of the code stream data according to the target control parameter to obtain encoding parameters of the code stream data; generating decoding parameters of the code stream data based on the encoding parameters, and decoding with the decoding parameters to obtain decoded images of the corresponding image frames; and determining, among the decoded images so obtained, the reference number of decoded images whose rendering has not finished, determining an image rendering speed from that number, and rendering and displaying the unrendered decoded images at that speed. This effectively reduces the delay of screen-projection display and improves screen-projection display efficiency.

Description

Display processing method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a display processing method and apparatus, a computer device, and a readable storage medium.
Background
With the development of internet technology, live video has been widely adopted, in novel presentation forms, in scenarios such as online video conferences, live game events, and online classrooms. During a live broadcast, the push end (i.e., the device that collects the data) typically packages the data using some transmission protocol and pushes it to the network, while the playback end pulls the stream from the network for display, so that the pictures and audio on the source device are projected onto the playback end in real time. Practice shows that a certain delay exists between the playback end and the push end; an excessive delay directly harms the real-time quality of the broadcast and thus the live experience. How to reduce the screen-projection display delay during push-pull streaming is therefore a hot topic of current research.
Disclosure of Invention
The embodiments of the present application provide a display processing method and apparatus, a computer device, and a readable storage medium, which can reduce the delay of screen-projection display during push-pull streaming, achieve a low-delay screen-projection effect, improve screen-projection display efficiency, and keep the delay consistently low in long-duration playback scenarios.
In one aspect, an embodiment of the present application provides a display processing method, including:
detecting a requirement for screen-casting display of a live broadcast picture in terminal equipment, and acquiring code stream data of the live broadcast picture from the terminal equipment; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment;
acquiring a target control parameter related to the first opening time of the code stream data, and analyzing the coding process of the code stream data according to the target control parameter to obtain a coding parameter of the code stream data;
generating decoding parameters of the code stream data based on the coding parameters, and decoding by adopting the decoding parameters to obtain a decoded image of a corresponding image frame in the code stream data; displaying any decoded image after finishing image rendering;
and determining the reference number of the decoded images which are not subjected to image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference number so as to render and display the decoded images which are not subjected to image rendering according to the image rendering speed.
In one aspect, an embodiment of the present application provides a display processing apparatus, including:
the acquisition module is used for detecting the requirement of screen projection display on a live broadcast picture in terminal equipment and acquiring code stream data of the live broadcast picture from the terminal equipment; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment;
the analysis module is used for acquiring a target control parameter related to the first opening time of the code stream data and analyzing the coding process of the code stream data according to the target control parameter to obtain a coding parameter of the code stream data;
the decoding module is used for generating decoding parameters of the code stream data based on the coding parameters and obtaining a decoding image of a corresponding image frame in the code stream data by adopting the decoding parameters for decoding; displaying any decoded image after finishing image rendering;
and the rendering module is used for determining the reference number of the decoded images which are not subjected to image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference number so as to perform image rendering and display on the decoded images which are not subjected to image rendering according to the image rendering speed.
Accordingly, an embodiment of the present application provides a computer device, including: a processor, a memory, a network interface, an input device, and an output device; the processor is connected with the memory and the network interface, wherein the network interface is used for providing a network communication function, the memory is used for storing program codes, the input device is used for receiving input instructions to generate signal input related to the setting and function control of the computer device, the output device is used for outputting data information, and the processor is used for calling the program codes to execute the display processing method in the embodiment of the application.
Accordingly, embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, where the computer program includes program instructions, and when the program instructions are executed by a processor, the display processing method in embodiments of the present application is executed.
In the embodiments of the present application, in response to a demand for screen-projection display of a live broadcast picture on a terminal device, the code stream data obtained by encoding that live broadcast picture is acquired. A target control parameter related to the first opening time of the code stream data is then obtained, and the encoding process of the code stream data is analyzed based on this parameter to obtain the encoding parameters, from which the decoding parameters of the code stream data are derived. The image frames in the code stream data are decoded with these decoding parameters to obtain decoded images, which can be displayed after rendering. The reference number of decoded images whose rendering has not finished is then determined, an image rendering speed is determined from that number, and the decoded images are rendered and displayed at the determined speed.
In this process, the target control parameter controls how long the code stream data is analyzed: the smaller its value, the less time is spent analyzing the encoding process based on it, so decoding of the code stream data can start sooner. This shortens the first opening time, reduces the first-open delay, speeds up the display of the first projected frame, and thus reduces the screen-projection display delay. In the rendering stage, the image rendering speed is not held constant but is determined from the current decoding and rendering conditions; specifically, it is determined from the number of decoded images whose rendering has not finished, so that the rendering speed matches the backlog of decoded images awaiting rendering. This achieves a dynamic balance between rendering speed and decoding speed: when too many decoded images remain unrendered, a higher rendering speed raises rendering efficiency, further reducing the delay with which the terminal device's live broadcast picture is projected and improving live real-time performance. Because the number of decoded images awaiting rendering changes dynamically, it is monitored continuously throughout the rendering stage and the corresponding rendering speed is updated accordingly.
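The patent does not name a concrete parameter. Purely as an illustration, if the playback side were built on an FFmpeg-based demuxer, the "target control parameter related to the first opening time" would correspond to the stream-analysis options below — smaller values mean less data is inspected before decoding starts:

```python
# Hypothetical low-latency option set. The FFmpeg option names
# (probesize, analyzeduration, fflags=nobuffer) are real; the chosen
# values are assumptions for illustration only.
LOW_LATENCY_OPTIONS = {
    "probesize": 32 * 1024,      # bytes read to probe the input format
    "analyzeduration": 100_000,  # microseconds spent analyzing streams
    "fflags": "nobuffer",        # do not buffer frames before output
}

def analysis_budget_bytes(options: dict) -> int:
    """Upper bound on the data read during format probing. FFmpeg's
    default probesize is 5 MB, which delays the first decoded frame."""
    return int(options.get("probesize", 5_000_000))
```

With the small probe budget, the parser hands control to the decoder far sooner than with the defaults, which is exactly the first-open effect the paragraph above describes.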
Drawings
FIG. 1a is an architecture diagram of a display processing system according to an embodiment of the present application;
fig. 1b is a schematic diagram of an audio and video data processing link provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a display processing method according to an embodiment of the present application;
FIG. 3a is a diagram illustrating an exemplary decoding and rendering relationship provided by an embodiment of the present application;
FIG. 3b is a schematic view of an exemplary live game event provided by an embodiment of the present application;
FIG. 3c is a schematic diagram of an exemplary game screen projection display provided by an embodiment of the present application;
fig. 4 is a schematic flowchart of another display processing method provided in the embodiment of the present application;
FIG. 5a is a diagram illustrating results of an exemplary target program execution according to an embodiment of the present application;
FIG. 5b is a diagram illustrating a setting of a low latency parameter according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another display processing method provided in the embodiment of the present application;
fig. 7a is a schematic diagram illustrating function calls involved in a frame dropping process according to an embodiment of the present application;
fig. 7b is a schematic diagram illustrating an example of code content of a capture frame end indicator according to an embodiment of the present application;
FIG. 7c is a flowchart illustrating an exemplary setting of a rendering speed of an image according to an embodiment of the present application;
FIG. 7d is a diagram illustrating initialization of parameters of a sound processing library according to an embodiment of the present application;
fig. 7e is a schematic flowchart of smoothing sound according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a display processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application provide a display processing method. A terminal device displays a live broadcast picture; when that picture is to be pushed and then projected onto a screen, the terminal device encodes it into code stream data and sends (i.e., pushes) the code stream data to a server associated with a playback device (a computer device). The playback device obtains the code stream data from that server (i.e., pulls the stream), and in this way acquires the code stream data of the live broadcast picture from the terminal device. In one embodiment, the value of the target control parameter related to the first opening time obtained by the playback device is a value that keeps the first opening time low, so the first open is accelerated, and on that basis the playback device can, to some extent, improve the display process of the code stream data.
In addition, the playback device can generate decoding parameters for the code stream data based on the encoding parameters, and decode the code stream data with these generated parameters to obtain decoded images for the corresponding image frames. Decoding with these parameters yields more accurate and complete decoded images, so the playback device can present the live broadcast picture faithfully after rendering them. During rendering, the total number of decoded images whose rendering has not finished (i.e., the reference number) is determined, and the image rendering speed is derived from it. This total reflects how many decoded images have accumulated awaiting rendering in the rendering stage; setting the rendering speed adaptively from this backlog matches it well to the decoding speed, achieves a dynamic balance between decoding and rendering, and keeps the playback device's delay relative to the terminal device low when displaying the live broadcast picture. In one implementation, when the reference number of unrendered decoded images is large, a large backlog of decoded images has accumulated, which would create a large display-time gap between the playback device and the terminal device and hence a high delay.
In another implementation, when the reference number of unrendered decoded images is small, few decoded images have accumulated awaiting rendering, the display-time gap between the playback device and the terminal device is low, and the conventional image rendering speed can be used while keeping the delay low. The decoded images awaiting rendering are then rendered and displayed at the determined speed, so that the terminal device's live broadcast picture is shown on the playback device smoothly and in real time, giving a better live broadcast effect. It will be understood that, because the number of unrendered decoded images changes dynamically, the image rendering speed can adapt throughout the rendering stage, ensuring that the low delay of the projected display persists.
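The two cases above condense into a small backlog-driven speed selector. This is only a sketch: the 5-frame threshold and the 1.5× speed-up factor are illustrative assumptions, not values from the patent.

```python
def choose_render_speed(unrendered: int, normal_fps: float = 30.0,
                        threshold: int = 5, speedup: float = 1.5) -> float:
    """Pick an image rendering speed from the reference number of
    decoded images whose rendering has not yet finished."""
    if unrendered > threshold:
        # Large backlog: render faster to drain it and cut the delay.
        return normal_fps * speedup
    # Small backlog: the conventional rendering speed keeps delay low.
    return normal_fps
```

Because the backlog changes dynamically, such a selector would be re-evaluated for each batch of pending frames, mirroring the continuous monitoring described above.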
The above steps produce the following effects. When a frame of the live broadcast picture is displayed on the terminal device, the corresponding frame can be displayed on the playback device; measured from the moment the same frame is shown, the time difference between the playback device displaying a given frame and the terminal device displaying it stays low. Measured instead by which frames are on screen at the same instant, the gap between the frame shown on the playback device and the frame shown on the terminal device also stays small: for example, at the moment the terminal device displays frame 5 of the live broadcast picture, the playback device displays frame 3 — a gap of 2 frames — and a small frame gap indicates a low screen-projection delay. When either the time difference or the frame gap is small, the terminal device and the playback device can be regarded as displaying the picture almost in synchrony.
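The 2-frame example above converts to a time delay once a frame rate is fixed; a short helper (the 60 fps figure is an assumed value, not one stated in the patent):

```python
def projection_delay_ms(terminal_frame: int, player_frame: int,
                        fps: float) -> float:
    """Delay of the playback device behind the terminal device, in
    milliseconds, computed from the on-screen frame gap."""
    return (terminal_frame - player_frame) * 1000.0 / fps

# Terminal shows frame 5 while the player shows frame 3: a 2-frame gap.
# At an assumed 60 fps this is roughly 33 ms of screen-projection delay.
```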
In one embodiment, when the live broadcast picture is a game picture, i.e., the content of a currently running game shown on the terminal device's screen, the terminal device encodes the displayed game picture into code stream data comprising a plurality of image frames and sends it to the playback device. When the playback device detects a demand for screen-projection display of the game picture on the terminal device, it acquires the code stream data sent by the terminal device, analyzes it based on the acquired target control parameter to obtain the corresponding encoding parameters, generates decoding parameters from them, and decodes the code stream data of the game picture with those decoding parameters. It then determines an image rendering speed from the number of unrendered decoded images among those already decoded, renders the decoded images at that speed, and displays the corresponding game picture. The game picture displayed by the terminal device is thus projected onto the playback device in real time, achieving low-delay projection of the game screen and ensuring that spectators in a live game scenario see the play in real time.
It will be understood that the above display processing scheme can be applied in a variety of live broadcast scenarios besides game live broadcasts, such as live sports events and live shopping, with the live broadcast picture corresponding to the scenario at hand. This application imposes no limitation in this respect.
The architecture of the display processing system provided in the embodiments of the present application will be described below with reference to the accompanying drawings.
Referring to FIG. 1a, FIG. 1a is a schematic diagram of a display processing system according to an embodiment of the present application. As shown in FIG. 1a, the display processing system includes a terminal device 11 and a playback device 12, between which a communication connection is established in a wired or wireless manner. The terminal device 11 includes, but is not limited to, a smartphone, a computer, an intelligent voice interaction device, a smart household appliance, a vehicle-mounted terminal, an aircraft, or other devices; this application does not limit the type or number of terminal devices. In one implementation, the playback device 12 may include a processing device and a display device that provides a display function, or only a display device that provides a display function. The display device may include a director console (a software or hardware device that live-edits multiple channels of video or television signal and forwards the video content the viewer ultimately sees). The processing device may be a server: an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a CDN (Content Delivery Network), big data, and artificial intelligence platforms, but is not limited thereto. This application does not limit the number of servers.
In an embodiment, the playback device 12 and the terminal device 11 may be devices on the same local area network and may exchange real-time audio/video data based on a streaming media protocol, such as RTMP (Real Time Messaging Protocol), thereby implementing push-stream screen projection: the terminal device 11 pushes the stream based on the streaming media protocol, and the playback device 12 pulls the stream and displays it based on the same protocol, so that the live broadcast picture of the terminal device 11 is presented on the playback device 12. Owing to factors such as the network environment and the parameter settings involved in decoding, the live broadcast picture displayed by the playback device 12 has a certain delay relative to the terminal device 11.
In one implementation, to provide the basic push-pull streaming function, a target streaming media service may be integrated into the playback device 12. Streaming media refers to media formats played over the internet or an intranet by streaming transmission and may include audio, video, multimedia files, and so on; streaming media data is characterized by real-time and isochronous delivery, and the target streaming media service is the service provided to transmit it. For example, the target streaming media service may be one developed specifically for push-pull-stream screen-projection display, e.g., obtained by secondary development of the service program of an existing open-source streaming media service (such as the livego service). In one implementation, when the playback device 12 includes a processing device and a display device, the service program of the target streaming media service may be integrated into the processing device; when the playback device includes only a display device, it may be integrated into the display device. Integrating the target streaming media service into the playback device 12 provides the pull-stream display service.
With the target streaming media service, when the terminal device pushes the stream it can directly use a stream identifier it sets for the code stream data itself, without applying for one from another device (such as a server that forwards code stream data). By querying the stream identifier set by the terminal device, the playback device can accurately pull and display the code stream data pushed by the corresponding terminal device. Moreover, since the step of applying for a stream identifier from another device is eliminated, stream pushing is accelerated, the code stream data reaches the playback device sooner, the display of the corresponding live broadcast picture is sped up, and the delay is reduced.
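The self-assigned stream identifier described above can be pictured as a small in-memory registry inside the integrated streaming service. This is a sketch only; the actual key handling of a livego-derived service differs.

```python
from typing import Optional

class StreamRegistry:
    """Maps each terminal device to the stream identifier it chose for
    its own code stream data, removing the round trip to an external
    identifier-issuing server on the push path."""

    def __init__(self) -> None:
        self._ids: dict[str, str] = {}

    def publish(self, device: str, stream_id: str) -> None:
        # Push side: the terminal registers the identifier it set itself.
        self._ids[device] = stream_id

    def resolve(self, device: str) -> Optional[str]:
        # Pull side: the playback device queries by terminal device.
        return self._ids.get(device)
```

The point of the design is that `publish` is local to the push path: no request leaves the LAN before the stream can start, which is where the latency saving comes from.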
On this basis, the terminal device 11 may capture a picture displayed on its screen (such as a game picture) or other live pictures via its recording function, take the captured picture as the live broadcast picture, and encode it into code stream data. Optionally, the terminal device may push the stream based on a streaming media transmission protocol such as RTMP, where pushing means sending the content packaged during the capture stage (i.e., the code stream data) to the playback device 12. In one implementation, when the playback device 12 includes a processing device and a display device, the terminal device 11 sends the code stream data to the processing device; when the processing device detects a demand for screen-projection display of the terminal device's live broadcast picture, it acquires the corresponding code stream data, processes it (including analysis, decoding, and so on), and then displays the live broadcast picture through the display device. When the playback device 12 includes only a display device providing a display function, the terminal device 11 may send the code stream data directly to the display device; when a processing module in the display device detects the demand for screen-projection display, it acquires and processes the code stream data (including analysis, decoding, and so on) and then displays the live broadcast picture on the display device's screen.
Experiments show that, by applying the display processing scheme in this display processing system, the picture difference between what the terminal device displays and what the playback device displays via stream pulling can be kept within 1 second. Specifically, with a streaming media transmission protocol and push-pull streaming over a local area network, a millisecond-level low-delay screen-projection effect can ultimately be presented.
It will be appreciated that the display processing system described above also fits the schematic of the audio/video data processing link shown in FIG. 1b. The terminal device may capture screen data (MediaProjection) to obtain the live broadcast picture displayed on it (for example, a game picture), project the picture onto a virtual display (VirtualDisplay), feed it to the encoder through the MediaCodec input surface, encode it with the H.264 coding standard, and send the resulting code stream data over RTMP. The playback device receives the code stream data over RTMP, decodes it using the same H.264 standard used for encoding, and presents the captured live broadcast picture after video rendering and playback. The display processing method provided by this application analyzes, decodes, and renders the code stream data with playback-side optimizations of parameters and processing mechanisms, including first-open time optimization, decoding-time optimization, and rendering optimization, so that a low first-open delay is achieved, decoding and rendering efficiency are improved, and the delay stays low in long-duration playback scenarios.
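The link in FIG. 1b can be summarized stage by stage. Every per-stage figure below is an illustrative placeholder (not a measurement from the patent), chosen only to show how a sub-second budget decomposes across capture, encode, transport, decode, and render:

```python
# Assumed per-stage latencies (ms) for the FIG. 1b link; all numbers
# are placeholders, not values reported in the patent.
PIPELINE_MS = [
    ("MediaProjection capture", 16),    # ~one frame interval at 60 fps
    ("H.264 encode (MediaCodec)", 10),
    ("RTMP push over the LAN", 20),
    ("RTMP pull and demux", 20),
    ("H.264 decode", 10),
    ("Render and display", 16),
]

def end_to_end_ms(stages) -> int:
    """Total glass-to-glass delay if the stages run back to back."""
    return sum(ms for _, ms in stages)
```

Under these assumed figures the stages sum to well under a second, which is the budget the first-open, decoding, and rendering optimizations above are each trying to protect.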
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a display processing method according to an embodiment of the present application. The method may be performed by the playback device described above and includes the following steps S201 to S204.
S201, detecting the requirement of screen projection display of the live broadcast picture in the terminal equipment, and acquiring code stream data of the live broadcast picture from the terminal equipment.
The live broadcast picture is a picture being displayed in the terminal device or a picture acquired by the terminal device in real time. In some modes, the live video can be a game video about a target game displayed when the target game is run in the terminal device, wherein the target game can be any game installed in the terminal device or a cloud game without installation. The live broadcast picture may also be a video picture provided by an application program being played when the application program is run by the terminal device, or may also be a real-time picture acquired by the terminal device through an acquisition device, such as a video recorded by starting a shooting function.
In some live scenes, in order to better view the real-time situation, the live broadcast picture displayed on a terminal device generally needs to be projected in real time onto another playback device for display, so the terminal device has a screen-projection display demand (i.e., a demand for projecting and displaying its live broadcast picture). For example, in a live game scene, the game picture displayed on a contestant's terminal device usually needs to be projected onto a larger screen in real time so that the commentator can explain it and viewers can watch the real-time game situation; the terminal device running the target game then has a screen-projection display demand. In one implementation, the terminal device may start a screen-projection display function to generate the screen-projection display demand and, based on that demand, encode the live broadcast picture to be projected into code stream data; optionally, the live broadcast picture may be encapsulated into code stream data of a corresponding format, such as mp4 (an encapsulation format), flv (Flash Video, a streaming media format), or m3u8 (a playlist file for audio and video chunks). The terminal device may then send the encapsulated code stream data to a playback device (i.e., push the stream). When the playback device consists only of a display device, the terminal device can send the code stream data to a processing module contained in the display device, and the processing module performs the subsequent processing on the code stream data and displays it on the screen of the display device.
When the terminal device starts the screen-projection display function, the playback device can detect the demand for screen-projection display of the live broadcast picture in the terminal device (hereinafter referred to as the screen-projection display demand). For example, when the terminal device starts the screen-projection display function, it broadcasts a screen-projection display message to playback devices in the same local area network, and when a playback device receives this message it determines that the terminal device has a screen-projection display demand. After detecting the demand, the playback device can project and display the live broadcast picture of the terminal device according to the process introduced in the embodiments of the application. To realize the screen-projection display, the playback device first acquires the code stream data of the live broadcast picture sent by the terminal device, where the code stream data comprises one or more image frames, any of which is obtained by the terminal device encoding a corresponding live broadcast picture. One image frame can contain all or part of the coding information of one live broadcast picture, the image frames are arranged according to the corresponding encoding order to form the code stream data, and during subsequent processing the playback device can decode them in order and display the corresponding live broadcast pictures.
S202, acquiring a target control parameter related to the first opening time of the code stream data, and analyzing the coding process of the code stream data according to the target control parameter to obtain a coding parameter of the code stream data.
The first-open duration refers to the time from starting to decode the first image frame of the code stream data to displaying the first decoded image on the playback device. Put more plainly, it is the time taken to display the first-screen image on the playback device after screen-projection display is triggered, i.e., the first-screen display duration of the playback device. For example, the commonly cited "first screen within a second" means that the first-screen image is displayed within 1 second (i.e., 1 s). The first-open duration directly affects the viewing experience: the shorter it is, the faster the first-screen image appears on the playback device and the better the experience, whereas an overly long first-open duration degrades the experience. Optimizing the first-open duration can therefore greatly improve the experience.
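As a rough illustration of why the parsing stage dominates the first-open duration, it can be treated as the sum of the parse, first-decode and first-render times; the figures below are hypothetical examples, not measurements from the application:

```python
def first_open_duration_ms(parse_ms, first_decode_ms, first_render_ms):
    """First-open duration: time from starting to decode the first image
    frame until the first decoded image is displayed on the playback device."""
    return parse_ms + first_decode_ms + first_render_ms

# With a default ~5 s stream analysis, the first screen appears late:
slow = first_open_duration_ms(5000, 40, 16)
# With a reduced analysis budget, the same pipeline opens within a second:
fast = first_open_duration_ms(150, 40, 16)
```

Under these assumed stage times the first-open duration drops from over five seconds to about 0.2 seconds, which is why the parsing stage is the optimization target of step S202.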
The parameter value of the target control parameter related to the first-open duration of the code stream data can be used to control the first-open duration; keeping this parameter value low keeps the first-open delay low, speeds up the first open, and improves the display process of the live broadcast picture. In one implementation, parsing the code stream data involves determining its encapsulation format and extracting the corresponding information according to the protocol conventions of that format, and the time overhead of these operations is relatively large, which lengthens the first-open duration. A delay-reduction function may therefore be enabled to reduce the first-open duration. The parameter value of the target control parameter is obtained by optimizing the value of the corresponding control parameter after the delay-reduction function is enabled, where optimizing specifically means reducing the value: with a reduced value, only a small amount of code stream data is read, and valid code stream data can be parsed in a short time. In addition, the target control parameter can also control the amount of data to be read when parsing the code stream data; keeping this value small means less code stream data is read yet still effectively parsed, further improving the efficiency of decoding and parsing the code stream data.
And analyzing the coding process of the code stream data according to the target control parameters, so that the coding parameters of the code stream data can be obtained. Optionally, the target control parameter may specify to read a certain amount of code stream data and analyze the read code stream data within a certain time, that is, the encoding parameter may be obtained by analyzing a part of the code stream data. The information indicated by the encoding parameters includes, but is not limited to, one or more of the following: coding information (such as adopted coding standard and type of the code stream data), frame rate, code rate, duration and the like, which are involved in coding the code stream data. Based on the information indicated by the encoding parameters, decoding can be performed in a similar manner at the decoding stage of the encoded stream data, so that the live pictures in the terminal device are completely presented in the playing device.
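As a minimal sketch of this step, the snippet below derives basic encoding parameters (coding standard, frame rate, code rate) from a small sample of probed frames; the frame-record fields and the simple averaging are illustrative assumptions, not the parsing logic of an actual demuxer:

```python
def parse_encoding_params(probe_frames):
    """Estimate encoding parameters from a bounded sample of probed frames:
    frame rate from average frame spacing, code rate from bytes over time."""
    total_bytes = sum(f["size"] for f in probe_frames)
    total_ms = sum(f["duration_ms"] for f in probe_frames)
    return {
        "codec": probe_frames[0]["codec"],  # e.g. the H.264 coding standard
        "frame_rate": round(1000 * len(probe_frames) / total_ms),
        "bitrate_kbps": round(8 * total_bytes / total_ms),
    }

# 30 probed frames of ~33 ms each are already enough to recover 30 fps:
sample = [{"codec": "h264", "size": 4000, "duration_ms": 33}] * 30
params = parse_encoding_params(sample)
```

The point of the sketch is that a bounded prefix of the stream, rather than the whole file, already yields the information the decoding stage needs.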
S203, generating decoding parameters of the code stream data based on the coding parameters, and decoding by adopting the decoding parameters to obtain a decoded image of a corresponding image frame in the code stream data.
The encoding parameters are parameters describing how the terminal device encodes the live broadcast picture, including but not limited to: an encoding frame-rate parameter, an encoding-standard indication parameter, a duration parameter, and the like. In one mode, the decoding parameters generated from the encoding parameters may adopt the same parameter values and the same types as the corresponding encoding parameters. For example, if the encoding parameters include an encoding code rate and an encoding frame rate, the decoding parameters correspondingly include a decoding code rate and a decoding frame rate with the same values. Encoding and decoding audio/video data can be understood as inverse processes: whatever mode is used for encoding, the corresponding mode is used for decoding, so that the original live broadcast picture is restored as far as possible. For example, if the frame rate used for encoding is 30 frames/second (i.e., 30 fps), decoding is likewise performed at 30 fps. In another implementation, to better match the decoding capability of the playback device itself, the playback device may adjust the parameter values corresponding to the encoding parameters and then use the adjusted values as the decoding parameters.
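The mirror relationship between encoding and decoding parameters, including the optional adjustment to the playback device's own capability, can be sketched as follows; the capability cap `max_decode_fps` is a hypothetical illustration, not a parameter named in the application:

```python
def decoding_params_from(encoding_params, max_decode_fps=None):
    """Decoding mirrors encoding: reuse the encoder's parameter values,
    optionally capped to what the playback device can sustain."""
    params = dict(encoding_params)
    if max_decode_fps is not None:
        params["frame_rate"] = min(params["frame_rate"], max_decode_fps)
    return params

enc = {"codec": "h264", "frame_rate": 30, "bitrate_kbps": 970}
dec = decoding_params_from(enc)         # identical values: decode at 30 fps
capped = decoding_params_from(enc, 25)  # a weaker device decodes at 25 fps
```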
After the decoding parameters are generated, the code stream data can be decoded according to the decoding parameters to obtain a decoded image corresponding to the image frame contained in the code stream data, and a live broadcast picture corresponding to the decoded image is restored as much as possible. One or more frames of decoded images can be decoded for one image frame in the code stream data. And displaying any decoded image after finishing image rendering. That is, for any decoded image, image rendering processing needs to be performed, and the image is displayed in the playback device after the image rendering processing is completed. Therefore, decoding and image rendering are two operations, if the decoding speed and the image rendering speed are increased, the playing device can display the live broadcast picture more quickly, and the live broadcast picture displayed in the terminal device is displayed in the playing device in time, so that the time length from the terminal device to the playing device for pushing the stream and throwing the screen is reduced, and low-delay playing is realized.
In one implementation, to improve decoding efficiency, the delay-reduction function may be enabled before playing the video, and corresponding auxiliary parameters may be set after it is enabled, so as to accelerate decoding from multiple dimensions, such as the hardware level and the software level, and improve the decoding efficiency of the playback device. For example, the audio/video time synchronization mechanism (a processing mechanism for aligning audio and video to keep them synchronized) may be cancelled: when the playback device and the terminal device are on the same local area network, the time error between audio and video is usually small and is barely perceptible when the video is displayed on the playback device, so after audio frames and video frames (i.e., image frames) are decoded, no time-synchronization processing is needed, which reduces the time overhead of aligning audio and video and speeds up decoding. As another example, setting hardware-accelerated decoding parameters allows a hardware device (such as a graphics processor) to be called to speed up decoding.
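The auxiliary parameters described above can be pictured as a small configuration change applied when the delay-reduction function is enabled; the flag names below are hypothetical stand-ins for a player's real options, not an actual API:

```python
def low_delay_decoder_config(base_config):
    """Apply the delay-reduction auxiliary parameters to a decoder config."""
    cfg = dict(base_config)
    cfg["av_sync"] = False    # cancel the audio/video time synchronization mechanism
    cfg["hw_accel"] = True    # hand decoding to a hardware device such as a GPU
    cfg["buffer_frames"] = 0  # do not queue frames ahead of the decoder
    return cfg

cfg = low_delay_decoder_config(
    {"av_sync": True, "hw_accel": False, "buffer_frames": 3})
```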
And S204, determining the reference quantity of the decoded images which are not finished with image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference quantity so as to perform image rendering and display on the decoded images which are not finished with image rendering according to the image rendering speed.
Decoding of the code stream data proceeds in parallel with image rendering of decoded images: the rendering stage continuously consumes decoded images while the decoding stage continuously decodes new image frames into new decoded images. While one frame of decoded image is being rendered, one or more further decoded images are generally waiting to be rendered, and during long playback the number of decoded images waiting for rendering gradually grows with playing time, i.e., decoded images with unfinished rendering accumulate. The total number of decoded images with unfinished rendering, among all decoded images obtained by decoding with the decoding parameters, is the reference number of decoded images with unfinished rendering. In the present application, the decoded images with unfinished rendering include decoded images waiting for rendering; for convenience of description, these are referred to below simply as decoded images to be rendered. It can be understood that, after the code stream data is decoded into decoded images, each frame is arranged in decoding order and displayed after rendering, i.e., the decoded image corresponding to the image frame decoded first is rendered first, as shown in the schematic diagram of the relationship between decoding and rendering in fig. 3a.
Further, an image rendering rate may be determined based on the reference number, and the image rendering rate may be an initial rendering rate or a rendering rate after adjusting the default rendering rate. The image rendering speed is set based on the reference number of decoded images for which image rendering is not completed, and the image rendering speed can be made to match the current actual rendering situation, thereby efficiently performing image rendering. From the perspective of the whole image rendering stage, the image rendering speed can follow the change of the reference number of the decoded images which are not rendered, and dynamically change when the condition is met, so that different numbers of decoded images are adaptively processed, the rendering efficiency is improved, and overhigh time delay caused by excessive accumulation of the decoded images is avoided.
The image rendering may be performed on the decoded image for which the image rendering is not completed according to the determined image rendering speed, so that the image rendering speed corresponds to a speed of the decoding process. In an implementation manner, as the number of the decoded images to be rendered is gradually increased along with the advance of the decoding process, if the reference number of the decoded images of which the image rendering is not completed is greater than the number threshold, it indicates that more decoded images are not rendered, so that the time delay of the live broadcast picture displayed by the playing device relative to the live broadcast picture displayed in the terminal device is increased, and the viewing experience is affected. In order to reduce the time delay as much as possible, the image rendering speed can be increased, the decoding speed is kept unchanged, the rendering speed is increased to accelerate the processing of the decoded images, so that the number of the decoded images which are not finished with the image rendering can be gradually reduced, and the picture difference between the live broadcast pictures displayed by the playing device and the terminal device is made as small as possible. When the image rendering speed is accelerated and the number of unfinished image renderings is smaller than the number threshold, the image rendering speed can be adjusted to be lower, so that the low time delay of the playing equipment is ensured, the processing resources required by rendering are saved, and the decoding processing speed and the image rendering speed reach a certain balance.
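The threshold-driven speed adjustment described in this step can be sketched as a single decision function; the threshold and boost factor are illustrative tuning values, not figures from the application:

```python
def adjust_render_fps(pending_frames, base_fps, threshold, boost=1.5):
    """Raise the rendering rate while the backlog of decoded-but-unrendered
    images exceeds the threshold; return to the base rate once it drains."""
    if pending_frames > threshold:
        return int(base_fps * boost)  # drain the backlog faster than decoding
    return base_fps

fast = adjust_render_fps(pending_frames=12, base_fps=30, threshold=8)  # boosted
slow = adjust_render_fps(pending_frames=3, base_fps=30, threshold=8)   # base rate
```

Calling this on every render tick makes the rendering speed follow the backlog in real time, which is the dynamic balance between decoding and rendering that the step describes.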
The display processing method provided by the application can be applied to various live broadcast scenes, such as game live broadcast, shopping live broadcast, sports event live broadcast, concert live broadcast and the like, without limitation. When applied to a live game event scene, the method can cover various offline micro-event scenes (i.e., game events in which a small number of devices participate in the same local area network), such as cinema events, enterprise events, coffee-bar events, and the like. The frame content of the live broadcast picture differs across live scenes; for example, in a live game event scene, the live broadcast picture is the game picture of a target game displayed by the terminal device running that game. The game picture on the terminal device can be projected and displayed on the playback device with low delay between the two devices, and the commentator explains the game picture displayed by the playback device, i.e., effectively the picture from the terminal device, so that the audience hears commentary that remains real-time and consistent with the visual content. For example, fig. 3b shows scene diagrams of a live game event, where (1) in fig. 3b shows a cinema scene and (2) in fig. 3b shows a coffee-bar scene; a background service (specifically, a streaming media service) pushes the stream from the terminal device to the playback device, the playback device pulls and plays the stream, the picture runs smoothly, and the delay is less than 0.2 seconds (i.e., between 100 and 200 milliseconds (ms)).
As shown in fig. 3c, the game picture of the mobile phone is projected onto a computer, and the skill-use countdowns displayed on the two devices show that the picture on the computer lags the mobile phone by 0.2 seconds: the countdown after release of the skill marked 310 in the computer's game picture is 0.5 seconds, while the countdown after release of the skill marked 311 in the mobile phone's game picture is 0.3 seconds.
According to the display processing method provided by the embodiment of the application, in response to the screen-projection display demand for the live broadcast picture in the terminal device, the code stream data obtained by encoding the terminal device's live broadcast picture is acquired. A target control parameter related to the first-open duration of the code stream data is obtained, and the encoding process of the code stream data is analyzed based on the target control parameter to obtain the encoding parameters and, from them, the decoding parameters of the code stream data. The decoding parameters are then used to decode the image frames in the code stream data into decoded images, which can be displayed after rendering. The reference number of decoded images with unfinished rendering can then be determined, the image rendering speed determined from that reference number, and the decoded images rendered and displayed at the determined speed.
In this process, analyzing the encoding process of the code stream data based on the target control parameter related to the first-open duration limits the parsing time of the code stream data and accelerates the decoding process to some extent, so the first-open duration is kept at a low delay. In the rendering stage, the image rendering speed can be determined from the number of decoded images whose rendering is not yet complete; matching the rendering speed to the number of decoded images awaiting rendering achieves a dynamic balance between rendering speed and decoding speed. When too many decoded images are awaiting rendering, rendering efficiency can be raised through a correspondingly higher rendering speed, further reducing the delay of screen-projection display of the terminal device's live broadcast picture and improving real-time performance. Because the number of decoded images awaiting rendering changes in real time, it can be continuously monitored during the rendering stage to determine the corresponding rendering speed, so that low delay is sustained in long-duration playback scenarios.
Referring to fig. 4, fig. 4 is a schematic flowchart of another display processing method according to an embodiment of the present disclosure. The method may be performed by the playback device described above. The display processing method includes the following steps S401 to S406.
S401, detecting the requirement of screen projection display of the live broadcast picture in the terminal equipment, and acquiring code stream data of the live broadcast picture from the terminal equipment.
S402, acquiring a target control parameter related to the first opening time of the code stream data.
In one embodiment, the code stream data is parsed by calling a target program. The target program is a computer program for parsing audio/video coded data. The playback device may call the target program, parse the code stream data according to the parsing policy indicated by the target program, and decode the code stream data based on the information obtained by parsing. In one implementation, to display the acquired code stream data in real time after decoding, a player technology may be used, such as FFME (a player built on FFmpeg.AutoGen, a wrapper library used to package and call FFMPEG; FFMPEG is a set of open-source computer programs that can record, convert and stream digital audio and video). The target program may be FFMPEG, and the playback device may parse and decode the code stream data based on FFMPEG and convert it into a corresponding audio/video data stream for playback.
Before calling the target program to parse the code stream data, the target control parameter may be obtained as follows: obtain the decoding functions that need to be called when the target program parses the code stream data, and determine, among the called decoding functions, the target decoding function that gives rise to the first-open duration, where the first-open duration is the time from starting to decode the code stream data to displaying the first-screen image; then obtain, from the target decoding function, the reference field used during the parsing, and take that reference field as the target control parameter related to the first-open duration of the code stream data.
Specifically, when the target program is called to analyze and process the bitstream data, the playing device needs to call corresponding decoding functions, and when the target program is called to analyze the bitstream data including the image frames, the execution time consumption of some decoding functions may be longer, so that the analysis time consumption of the bitstream data may be increased, a longer initial opening time may be generated, and the initial opening delay is higher. Among the called decoding functions, a decoding function that causes the first-open time to be generated is a target decoding function, and it can be understood that the first-open time here refers to a time from when decoding processing is started on the coded stream data to when the first-screen image (i.e., the first-screen image) is displayed, and the first-open delay refers to that the first-open time is not zero. Further, the playing device may obtain a reference field used in the analysis processing in the target decoding function, where the reference field may be used as a target control parameter related to the first start time of the code stream data.
For example, when the target program is FFMPEG, avformat_find_stream_info (a function for reading stream information) reads a certain number of bytes of code stream data to analyze basic information of the stream, such as coding information, duration, code rate and frame rate. This can be controlled by two parameters, probesize and analyzeduration: probesize controls the amount of data read, and analyzeduration controls how long data is read for. Analysis of the parsing process of the code stream data shows that avformat_find_stream_info takes a long time; specifically, analyzing the information contained in the code stream data is time-consuming, and since the start of decoding depends on the information obtained by this analysis, decoding of the first-screen image always waits for the analysis to finish, making the first-open duration long. In the target program execution result shown in fig. 5a, as indicated by 501, the default value of the maximum analysis duration field max_analyze_duration in the avformat_find_stream_info function is 5 s, and this value is much larger than the value of the analysis duration field, i.e., max_analyze_duration >> analyzeduration, where >> denotes "much larger than". The amount of data to read, indicated by the probesize field, defaults to 5000000 bytes, i.e., about 5 MB (megabytes). Reading and analyzing such a large amount of data takes a relatively long time, increasing the first-open duration and resulting in a relatively high first-open delay. Therefore, the probesize field and the analyzeduration field can be used as the target control parameters related to the first-open duration.
The initial opening time is shortened by optimizing the parameter value of the target control parameter, and the lower initial opening delay is kept.
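A common way to express this optimization with an FFMPEG-based player is to override the default probing options when opening the stream. The dictionary below is a hedged sketch of such overrides: the reduced values are tuning choices, and the `nobuffer` format flag is a standard FFMPEG option rather than something specified in the application:

```python
# Default probing options (probesize defaults to ~5 MB) versus the reduced,
# low-latency overrides discussed above.
DEFAULT_OPTIONS = {"probesize": 5000000}
LOW_LATENCY_OVERRIDES = {
    "probesize": 4096,     # probe only a few KiB of code stream data
    "analyzeduration": 0,  # do not wait to accumulate data for analysis
    "fflags": "nobuffer",  # hand packets to the decoder immediately
}

def demuxer_options(defaults, overrides):
    """Merge low-latency overrides over the default demuxer options."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

opts = demuxer_options(DEFAULT_OPTIONS, LOW_LATENCY_OVERRIDES)
```

In a real player these key/value pairs would be passed to the demuxer when the stream is opened; here they simply illustrate how the optimized parameter values replace the slow defaults.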
In one embodiment, the target control parameter related to the first-on duration is obtained in an initialization stage after the delay reduction function is determined to be enabled, and the initialization stage is further used for obtaining the encoding parameter of the code stream data. Specifically, before playing a video, the playing device may start the delay reduction function when detecting an instruction to perform screen projection display on a live broadcast picture of the terminal device, acquire the target control parameter at an initialization stage after the delay reduction function is started, analyze and process a coding process of the code stream data based on the target control parameter, and obtain a corresponding coding parameter. The implementation of analyzing the encoding process of the code stream data according to the target control parameter based on the enabling of the delay reduction function to obtain the encoding parameter of the code stream data may include the following steps S403 to S404.
S403, in the initialization stage, determining the size of the data quantity to be read from the code stream data according to the parameter value of the target control parameter.
In the initialization stage, the data size of the partial code stream data read from all the code stream data may be determined according to the parameter value of the target control parameter, for example, the size indicated by the parameter value is directly determined as the data size required to be read.
In one implementation, the target control parameter includes a frame-probing parameter and a duration parameter, where the parameter value of the frame-probing parameter indicates the number of frames of data read. Frame probing refers to the detection processing of one or more image frames contained in the code stream data, and can be understood as reading image frames from the code stream data. The parameter value of the frame-probing parameter can indicate the number of image frames read from the code stream data; for example, a probesize value of 4096 means a frame-probing size of 4096 bytes, so 4096 bytes of code stream data are obtained from the code stream data, and if one image frame occupies 1024 bytes, this corresponds to 4 image frames. When the frame-probing parameter is not adjusted, its default value is 5000000 bytes (about 5 MB). The parameter value of the duration parameter indicates the data duration of the read data, where data duration refers to the time length of the code stream data read, for example 100 seconds of code stream data. The duration parameter corresponds to a maximum analysis duration parameter, which indicates the maximum time allowed for analyzing the read code stream data. For example, when the duration parameter analyzeduration defaults to 0, the default value of the maximum analysis duration parameter max_analyze_duration is 5 seconds.
Based on the content included in the target control parameter, determining the amount of data to be read from the code stream data according to the parameter value of the target control parameter may be implemented as follows: the amount of data indicated by the parameter values of the frame-probing parameter and the duration parameter is taken as the amount of data to be read from the code stream data; the number of frames of analysis data read from the code stream data equals the number of frames indicated by the parameter value of the frame-probing parameter, and the data duration of the analysis data equals the data duration indicated by the parameter value of the duration parameter.
Specifically, the analysis data is the data of a specified amount read from the code stream data, and its amount has two dimensions: one is that the number of frames of the analysis data may equal the number of frames indicated by the parameter value of the frame-probing parameter; the other is that the data duration of the analysis data equals the data duration indicated by the parameter value of the duration parameter, where data duration refers to the time span the analysis data occupies within the code stream data, for example reading 10 seconds of data from 100 seconds (s) of code stream data. Combining the two dimensions, the analysis data might be, for example, partial code stream data with a data duration of 10 seconds and a frame count of 50. In another implementation, the amount of analysis data read from the code stream data may equal the amount indicated by the frame-probing parameter or by the duration parameter, depending on which parameter's limit the playback device reaches first: for example, 5000000 bytes of analysis data or 100 seconds of analysis data are to be read, and when either criterion is met first, analysis data of the corresponding amount is obtained from the code stream data according to that criterion and probing stops.
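The "whichever limit is reached first" reading rule can be sketched as a loop that stops at either the byte budget (probesize) or the duration budget (analyzeduration); the frame sizes and durations below are illustrative:

```python
def read_analysis_data(frames, probesize_bytes, analyze_duration_ms):
    """Collect probe frames until either the byte budget or the duration
    budget would be exceeded, whichever criterion is reached first."""
    taken, used_bytes, used_ms = [], 0, 0
    for size_bytes, duration_ms in frames:
        if (used_bytes + size_bytes > probesize_bytes
                or used_ms + duration_ms > analyze_duration_ms):
            break  # one of the two budgets is exhausted: stop probing
        taken.append((size_bytes, duration_ms))
        used_bytes += size_bytes
        used_ms += duration_ms
    return taken

frames = [(1024, 33)] * 20  # twenty 1 KiB frames, ~33 ms apart
probe = read_analysis_data(frames, probesize_bytes=4096,
                           analyze_duration_ms=1000)
```

With these numbers the 4096-byte budget is exhausted first, so only four frames are probed even though the duration budget would allow many more.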
It can be understood that the parameter values of the probe frame parameter and the duration parameter contained in the target control parameter are optimized values; specifically, first-open within one second can be achieved by reducing the values of these two parameters. For example, audio and video data are encapsulated in a source file of a corresponding format, and FFMPEG reads part of the source file in avformat_find_stream_info (a function for reading stream information) to analyze file information; the amount read is controlled by two options, the probe size probesize and the analysis duration analyzeduration, both defined in libavformat/options_table.h.
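The "whichever limit is reached first" probing described above can be sketched as follows. This is a minimal, hypothetical model (the frame source, byte and time budgets, and per-frame sizes are all illustrative, not taken from FFMPEG or any real player):

```python
def read_analysis_data(stream, probe_size_bytes, analyze_duration_s,
                       frame_size_bytes=1000, fps=25):
    """Collect analysis frames until either the byte budget (probesize)
    or the time budget (analyzeduration) would be exceeded."""
    frames = []
    bytes_read = 0
    duration = 0.0
    for frame in stream:
        if bytes_read + frame_size_bytes > probe_size_bytes:
            break  # byte limit reached first: stop probing
        if duration + 1.0 / fps > analyze_duration_s:
            break  # duration limit reached first: stop probing
        frames.append(frame)
        bytes_read += frame_size_bytes
        duration += 1.0 / fps
    return frames, bytes_read, duration
```

Shrinking either budget shortens the probing stage and therefore the first-open time, at the cost of having less data from which to infer the stream information.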
S404, according to the determined data amount, reading the corresponding amount of analysis data from the code stream data, and analyzing the encoding process of the analysis data to obtain the corresponding encoding parameters.

After the amount of analysis data to be read is determined, the playing device may read that amount of analysis data from the code stream data and analyze its encoding process to obtain the corresponding encoding parameters, such as an encoding frame rate parameter, an encoding bit rate parameter, a duration parameter, and so on. In this way, only a small amount of code stream data is read from the whole for analysis, which is sufficient to obtain the basic information of the code stream; reducing the time spent reading data accelerates the start of decoding and shortens the first-open duration. In this process, the parameter value of the target control parameter in the originally time-consuming decoding function is an empirical value, that is, an optimal value obtained by repeatedly adjusting and testing the parameter, which keeps the first-open time at its lowest while complete information is still analyzed. Under this empirical value, based on the data amount indicated by the target control parameter, the information needed for decoding can be analyzed accurately in less time, improving the acquisition and analysis efficiency of decoding information, entering the decoding stage sooner, raising the overall processing speed, and reducing the screen-casting display delay. After the encoding process of the analysis data is analyzed, the resulting encoding parameters can be used as the encoding parameters of the code stream data.
In one embodiment, the initialization stage after the delay reduction function is enabled further specifies analysis parameters to be referred to during analysis. The analysis parameters include one or both of a cache parameter and an optimization parameter: the parameter value of the cache parameter indicates that the analysis data is not cached, and the parameter value of the optimization parameter indicates that analysis is performed after useless frames in the analysis data have been optimized away.
Specifically, by setting the cache parameter to indicate that the analysis data is not cached, the read analysis data is used only for analysis and is not displayed; the same code stream data is therefore not read repeatedly for analysis, new code stream data can be read while analysis is in progress, and the decoding rate increases. For example, in FFMPEG, a cache parameter FlagNoBuffer may be set to 1 to indicate that the analysis data is not buffered. Further, the optimization parameter serves as an optimization for raising the analysis rate and indicates that useless frames in the analysis data are to be optimized away, for example by discarding them. In one implementation, analyzing the encoding process of the analysis data includes: performing useless-frame optimization on the read analysis data based on the parameter value of the optimization parameter contained in the analysis parameters, and then directly analyzing the encoding process of the optimized analysis data. The useless frames in the analysis data are the image frames that need not be referred to when analyzing the encoding process of the analysis data.

Due to incidental factors, image frames with incomplete information may be read from the analysis data; these can be regarded as useless frames, since the encoding process of the analysis data can be analyzed without reference to them. Useless frames read from the code stream data can be discarded to optimize the analysis data, and the remaining data analyzed directly. Discarding useless frames both concentrates resources on analyzing the valid data in the code stream and speeds up the analysis, avoiding wasted resources. For example, in FFMPEG, an optimization parameter FlagDiscardCorrupt = 1 may be set to indicate that image frames with incomplete information, i.e. useless frames, are discarded.
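The useless-frame optimization described above can be sketched minimally, assuming frames are represented as dictionaries carrying a hypothetical "corrupt" flag (the representation is illustrative, not a real decoder structure):

```python
def optimize_analysis_data(frames, discard_corrupt=True):
    """Drop frames whose information is incomplete before analysis,
    in the spirit of the FlagDiscardCorrupt = 1 setting above."""
    if not discard_corrupt:
        return list(frames)
    return [f for f in frames if not f.get("corrupt", False)]
```

Only the surviving frames are then fed to the encoding-process analysis, so no time is spent on frames that could not be referenced anyway.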
In one implementation, when the delay reduction function is enabled, besides optimizing the parameters used in the initialization stage, auxiliary decoding parameters used in the decoding stage can also be set to speed up the decoding process; the decoding stage here is the stage in which decoding begins. In one implementation, the auxiliary decoding parameters may include at least one of: a time synchronization mechanism parameter, a hardware acceleration parameter, a low-delay decoding parameter, and a cache indication parameter. Through the corresponding parameter settings, decoding acceleration can be achieved, speeding up the decoding of code stream data and improving decoding efficiency. Specifically: a time synchronization mechanism parameter IsTimeSyncDisabled = 1 indicates that the time synchronization mechanism is cancelled, for example for a local area network live stream, so that audio and video need no synchronization processing; a hardware acceleration parameter EnableFastDecoding = true indicates that hardware acceleration is enabled; a low-delay decoding parameter EnableLowDelayDecoding = true indicates that the low-delay decoding function is enabled; and a cache indication parameter adjusts the audio/video buffer to its minimum, for example setting the audio buffer amount to 1 (AudioBlockCache = 1) and the video buffer amount to 1 (VideoBlockCache = 1), which raises the speed of acquiring image frames one at a time. The minimum playback buffer ratio MinimumPlaybackBufferPercentage can also be set, for example to 0.1%, reducing the memory occupied and the time spent preloading pictures.
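The auxiliary decoding parameters above can be gathered into one configuration. The names follow the text of this embodiment; a real player's API would expose its own option names:

```python
# Illustrative auxiliary decoding parameters for the decoding stage.
aux_decoding_params = {
    "IsTimeSyncDisabled": 1,         # cancel A/V time synchronization (e.g. LAN live stream)
    "EnableFastDecoding": True,      # enable hardware-accelerated decoding
    "EnableLowDelayDecoding": True,  # enable the low-delay decoding path
    "AudioBlockCache": 1,            # minimum audio buffer amount
    "VideoBlockCache": 1,            # minimum video buffer amount
    "MinimumPlaybackBufferPercentage": 0.1,  # minimum playback buffer ratio, in percent
}
```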
For example, once the delay reduction function shown in fig. 5b is enabled, optimized parameters are involved in both the initialization phase and the decoding phase.
In summary, by tuning the basic parameters required for analysis, first-open low delay is achieved through optimized frame probing and accelerated decoding; and by setting parameters of different dimensions, such as the cache parameter, the time synchronization mechanism parameter, and the hardware acceleration parameter, the parameters used by the playing device are optimized along multiple dimensions, effectively improving the efficiency with which the playing device decodes the code stream data and renders the decoded images.
S405, generating decoding parameters of the code stream data based on the coding parameters, and decoding by adopting the decoding parameters to obtain a decoded image of a corresponding image frame in the code stream data.
S406, from the decoded images obtained by decoding with the decoding parameters, determining the reference number of decoded images for which image rendering has not been completed, and determining the image rendering speed according to this reference number, so that the decoded images not yet rendered are rendered and displayed at that speed.
With the display processing scheme provided by the embodiments of this application, a target program can be used both to analyze the code stream data (that is, to analyze its encoding process) and to decode it; by analyzing how the target program processes the code stream data, the parameters responsible for first-open delay can be identified and used as the target control parameters. In the initialization stage, the optimized parameter values of the target control parameters are used to read the corresponding amount of analysis data, which is then analyzed to obtain the encoding parameters of the code stream data. Because the parameter values are optimized, the analysis efficiency of the code stream data is greatly improved, so decoding can begin sooner, the first-open speed increases, and the first-open delay decreases. In addition, one or both of the cache parameter and the optimization parameter referred to during analysis can be set in the initialization stage after the delay reduction function is enabled, which raises the analysis speed while preserving analysis accuracy, improves overall processing efficiency, reduces the delay before the live broadcast picture is finally displayed on the cast screen, and improves screen-casting display efficiency.
Referring to fig. 6, fig. 6 is a schematic flowchart of another display processing method according to an embodiment of the present disclosure. The method may be performed by the playback device described above. The display processing method includes the following steps S601 to S606.
S601, upon detecting a requirement for screen-casting display of the live broadcast picture on the terminal device, acquiring code stream data of the live broadcast picture from the terminal device.

S602, acquiring the target control parameter related to the first-open time of the code stream data, and analyzing the encoding process of the code stream data according to the target control parameter to obtain the encoding parameters of the code stream data.

S603, generating decoding parameters of the code stream data based on the encoding parameters, and decoding with the decoding parameters to obtain decoded images of the corresponding image frames in the code stream data.
In one embodiment, the one or more image frames contained in the code stream data are arranged in order of their corresponding playing times, and when the decoded images of the corresponding image frames are obtained, the image frames in the code stream data are decoded sequentially in that order. Specifically, the image frames contained in the code stream data are arranged according to the playing time sequence on the terminal device; when the image frames are decoded, they are likewise decoded in that arrangement order, so that the image frame corresponding to the game picture played first on the terminal device is also decoded first on the playing device. The decoded images of the corresponding image frames are thus obtained, then rendered and displayed in sequence, completing video playback. Any decoded image obtained from the code stream data is displayed after its image rendering finishes, and the rendering and display of the decoded images also proceed in sequence.
Decoding with the decoding parameters to obtain the decoded images of the corresponding image frames in the code stream data includes: while decoding the image frames of the code stream data in sequence with the decoding parameters, after the decoded image of one image frame is obtained, obtaining the frame end symbol of that image frame; after the frame end symbol is obtained, fetching the next image frame from the code stream data based on the arrangement order and decoding it to obtain the corresponding decoded image.

In this process, after the decoded image of one image frame is obtained, the frame end symbol of that frame can be obtained, and frame splitting between the current image frame and the next image frame is performed according to this frame end symbol; the next image frame is the one immediately after the current image frame in the arrangement order, where the arrangement order refers to the playing order and may also correspond to the encoding order. The next image frame is then fetched and decoded. The frame end symbol is an identifier indicating that decoding of one image frame has finished; splitting frames by detecting the frame end symbol, rather than the next frame's start symbol, saves one frame of delay and improves decoding efficiency.
This is because frame splitting usually treats the start code of the next frame (i.e. the next image frame) as the end of the current frame: a frame not yet marked as fully decoded cannot be passed on for image rendering, so the splitter normally waits for the next frame's start code to confirm that the current frame has ended, and this waiting costs one frame of delay. For example, in FFMPEG, frame splitting is based on taking the next frame's start code as the current frame's end; the start code is typically 0x00 0x00 0x00 0x01 or 0x00 0x00 0x01, and waiting for it introduces a one-frame delay. If the frame end symbol is used for splitting instead, there is no need to wait for the next frame's start code, and the one-frame delay is eliminated: frames are split by detecting the frame end symbol in place of the next frame's start code. Fig. 7a illustrates the function calls involved in frame splitting. The whole call flow is av_read_frame -> read_frame_internal -> parse_packet -> av_parser_parse2 -> parser_parse -> ff_combine_frame. av_read_frame is called to read frame data, specifically an image frame; it calls the internal method read_frame_internal, which calls the parse_packet method, and parse_packet calls av_parser_parse2 to interpret the packet, recording the frame offset. The original start code of the next frame is changed into the end code of the current frame, and the offset length of the next frame's original start code is removed. av_parser_parse2 then calls the parser_parse method, from which the step of searching for the next frame's start code as the current frame's end is removed, and finally the ff_combine_frame framing method of parser.c is called, replacing the start code with mark flag bits as the frame end.
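The one-frame delay can be made concrete with two toy splitters. This is a simplified sketch, not FFMPEG code: the first emits a frame only when the next frame's Annex-B start code (0x00 0x00 0x00 0x01 or 0x00 0x00 0x01) is seen, so the last frame stays pending; the second uses a per-packet end-of-frame flag (in the spirit of the mark flag) and emits each frame immediately:

```python
START_CODES = (b"\x00\x00\x00\x01", b"\x00\x00\x01")

def split_by_next_start_code(data):
    """Classic splitting: the current frame ends only when the NEXT
    frame's start code arrives, so the final frame is never emitted
    until more data follows it."""
    frames, start, i = [], None, 0
    while i < len(data):
        sc = next((s for s in START_CODES if data.startswith(s, i)), None)
        if sc:
            if start is not None:
                frames.append(data[start:i])  # previous frame ends here
            start = i
            i += len(sc)
        else:
            i += 1
    return frames  # last frame still pending: the one-frame delay

def split_by_end_marker(packets):
    """Marker-based splitting: each packet carries an end-of-frame flag,
    so a complete frame is emitted with no lookahead."""
    frames, current = [], b""
    for payload, end_of_frame in packets:
        current += payload
        if end_of_frame:
            frames.append(current)
            current = b""
    return frames
```

With two frames of input, the start-code splitter returns only the first frame, while the marker-based splitter returns both, illustrating why replacing the next start code with a frame end symbol saves one frame of latency.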
Fig. 7b shows an example of the code that acquires the end of frame. In FFMPEG, the frame-end mark flag can be obtained directly in the rtp_parse_packet_internal method (a method for parsing packets) of the rtpdec.c file (a file in FFMPEG for processing audio and video data). By recording this flag bit, the start code of the next frame is no longer needed, further accelerating decoding and improving decoding efficiency.
In one embodiment, after decoding with the decoding parameters to obtain the decoded images of the corresponding image frames in the code stream data, the method further includes: storing the obtained decoded images in a buffer, and displaying the decoded images in the buffer after rendering. Specifically, for each image frame in the code stream data, once it has been decoded into a decoded image, the decoded image may be stored in the buffer. In one implementation, the decoded images may be stored in the buffer sequentially in decoding order, so that they can be selected in that order at rendering time. In other words, the decoded images are kept in a buffer queue and are stored and read according to a first-in first-out rule, so that they are rendered in order and the game pictures are displayed correctly.
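The buffer between the decoding and rendering sides can be sketched as a plain FIFO queue; the class name and methods here are illustrative, not part of any real player API:

```python
from collections import deque

class DecodedImageBuffer:
    """FIFO buffer: the decoding side pushes decoded images in decoding
    order, the rendering side pops the oldest undisplayed image."""
    def __init__(self):
        self._queue = deque()

    def push(self, image):
        self._queue.append(image)      # called by the decoding thread

    def pop(self):
        return self._queue.popleft()   # called by the rendering thread

    def count(self):
        return len(self._queue)        # undisplayed decoded images
```

The count() value is exactly the "reference number" used in S604 below: the total number of decoded images whose rendering has not yet completed.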
Based on the above-described buffering of the decoded images, a specific manner of determining the reference number of decoded images for which image rendering is not completed may be as described in S604 below.
S604, after the rendering and display of any frame of decoded image is finished, calling back to the buffer to obtain the total number of decoded images it contains, and taking this total as the reference number of decoded images for which image rendering has not been completed.
After one frame of decoded image has been rendered and displayed, the playing device has played one frame of the live broadcast picture; by calling back to the buffer after each frame is played, the total number of decoded images currently stored in the buffer can be known in real time, and this total serves as the reference number of decoded images whose rendering is not yet complete. The playing device may run two independent threads, a decoding thread and a rendering thread, for decoding the code stream data and for image rendering respectively: decoded images produced by the decoding thread are stored in the buffer, and the rendering thread fetches from the buffer, in sequence, the decoded image following the one currently rendered and renders it. Decoding and rendering are thus two workers, with decoded images stored in the buffer in queue form. Because rendering has a larger time overhead than decoding, the number of decoded images accumulated in the buffer queue gradually grows as the playing device plays for a long time, increasing the reception accumulation delay and opening a large time difference between the playing device and the terminal device. To avoid this, when the number of decoded images accumulated in the buffer queue grows beyond a certain amount, an acceleration mechanism can be used to raise the image rendering speed and reduce the backlog. One implementation of determining the image rendering speed according to the reference number is described in S605 below.
S605, comparing the reference number with a preset number; when the reference number is greater than the preset number, accelerating the normal rendering speed and taking the accelerated rendering speed as the image rendering speed.

The preset number is the critical number at which decoding and image rendering stay dynamically balanced and the playing device keeps a low delay relative to the terminal device; the reference number is the total number of decoded images whose rendering is not yet complete. Before determining the image rendering speed, the playback device compares the reference number against the preset number to judge whether the backlog of unrendered decoded images exceeds the critical number. When the reference number is greater than the preset number, the number of decoded images awaiting rendering is considered too large, and the delay may be excessive. At the accelerated rendering speed, decoded images are rendered faster and the backlog is consumed more quickly, while the slight fast-forwarding of the picture caused by the acceleration is imperceptible to the user. It can be understood that at the normal rendering speed the playback device plays the live frame at the normal playback speed, which may be 1.0; when the image rendering speed is accelerated, the playback speed accelerates correspondingly.
S606, when the reference number is less than or equal to the preset number, rendering at the normal rendering speed.

When the reference number is less than or equal to the preset number, the backlog of unrendered decoded images does not exceed the critical number, and the decoded images can be rendered at the normal rendering speed. In one implementation, rendering at the normal rendering speed in this case includes: when the reference number is less than or equal to the preset number, if the current rendering speed is the accelerated rendering speed, judging whether the reference number is less than a target number; and when the reference number is determined to be less than the target number, adjusting the accelerated rendering speed back to the normal rendering speed and rendering at the normal rendering speed.

Specifically, a reference number at or below the preset number means the backlog of unrendered decoded images is relatively small; if the current rendering speed is the accelerated rendering speed, the backlog has been shrinking over time. To keep the live broadcast picture close to synchronous across devices, the target number is used to judge whether the backlog has fallen to the specified level under the accelerated speed. If the reference number is below the target number, continuing to render at the accelerated speed would waste processing resources, since the expected low delay has already been reached; therefore the accelerated rendering speed is adjusted back to the normal rendering speed, and the remaining decoded images are rendered and displayed at the normal speed. At the normal rendering speed, the playing device plays at the normal playback speed.
Based on the above description of the image rendering speed, and for better understanding, refer to the flow chart for setting the image rendering speed shown in fig. 7c. The buffer number refers to the decoded images stored in the buffer whose rendering is incomplete; the playing speed reflects the image rendering speed; the normal playing speed is 1, the accelerated playing speed is 1.1, the preset number is 40, and the target number is 10. The flow is roughly as follows. The playing device starts playing the live broadcast picture displayed by the terminal device, enables the acceleration mechanism, and executes the following logic. First, it is determined whether the play position has changed, that is, whether a frame has been played (for example, the play progress bar has moved); the playing device's processing logic calls back to the buffer after each frame is played, and the buffer number (PacketBufferCount) is obtained through this playback callback. It is then determined whether PacketBufferCount is greater than 40, with different handling in each case. If the buffer number is greater than 40, it is judged whether the playing speed (SpeedRatio) is not 1.1; if not, the playing speed is set to 1.1, and if it already is 1.1, that speed is maintained. If the buffer number is less than or equal to 40, it is determined whether the buffer number is less than 10 and the playing speed is greater than 1; if both hold, the playing speed is set to 1, and if the buffer number is within the range of 10 to 40, the current playing speed, whether 1 or 1.1, is maintained.
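The per-frame callback logic of fig. 7c reduces to a small decision function. This is a sketch of the flow just described, using the concrete thresholds from the text (40, 10) and speeds (1.0, 1.1) as defaults:

```python
def choose_playback_speed(buffer_count, current_speed,
                          upper=40, lower=10, normal=1.0, fast=1.1):
    """Decide the playback (and hence rendering) speed after each frame:
    speed up when the backlog exceeds `upper`, drop back to normal once
    it falls below `lower`, otherwise keep the current speed (hysteresis
    between `lower` and `upper` avoids oscillating)."""
    if buffer_count > upper:
        return fast
    if buffer_count < lower and current_speed > normal:
        return normal
    return current_speed
```

The gap between the two thresholds is what keeps the speed stable: a backlog of, say, 25 frames leaves the speed wherever it currently is rather than toggling every frame.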
Since the setting of the playback speed directly determines the image rendering speed, setting the playback speed amounts to setting the image rendering speed. Through this optimization of the playing device's buffering, the number of decoded images awaiting rendering is continually pulled back: the total number of unrendered decoded images in the buffer is monitored in real time, and the image rendering speed is set to the normal or the accelerated rendering speed according to that total, optimizing the reception accumulation delay and sustaining low delay. Experiments show that with this optimized buffering, the delay between the playing device and the terminal device remains at the second level throughout.
In one embodiment, since optimizing the reception accumulation delay may accelerate image rendering beyond the normal speed, and the playback speed rises with it, the sound may become higher or sharper. To keep the pitch of the sound unchanged while image rendering is accelerated, the corresponding audio data may be processed as follows. In one implementation, the code stream data contains one or more image frames and may further contain the audio frames corresponding to those image frames. It can be understood that the code stream data includes a video code stream and an audio code stream: the video code stream contains one or more image frames, the audio code stream contains one or more audio frames, and an alignment relationship exists between image frames and audio frames, with one image frame corresponding to one or more audio frames.
Based on this, the processing of the audio data may include the following (1) to (3): (1) decoding the code stream data according to the decoding parameters to obtain a decoding result, obtaining the decoded audio corresponding to the audio frames, and obtaining the initial output parameters of the decoded audio.

The code stream data contains an audio code stream, which can be analyzed and decoded in sequence. The result of decoding the code stream data according to the decoding parameters may include the decoded audio obtained from the audio frames together with its initial output parameters, so that both the decoded audio contained in the code stream data and the corresponding initial output parameters are obtained. The initial output parameters are attribute information describing the audio output, and may include one or both of the number of audio channels and the audio sampling frequency.
In one implementation, the initial output parameters of the decoded audio may be obtained as follows: acquiring a sound processing library and obtaining from it the reference parameters for audio output, the sound processing library being obtained when the player kernel is initialized; determining the parameter values of the reference parameters from the decoding parameters, and taking the reference parameters with those values as the initial output parameters of the decoded audio.

The sound processing library may be soundport.dll, which supports setting pitch, speed, and playback frequency. The sound processing library may be loaded when the kernel of the player (i.e. a processing module or device contained in the playing device) is initialized, and the values of the corresponding parameters initialized. Specifically, the reference parameters required for audio output are obtained from the sound processing library, the reference parameters including one or both of the number of audio channels and the audio sampling frequency; the values of the same-type parameters in the decoding parameters, which include values for the number of audio channels and the audio sampling frequency, are then used as the parameter values of the reference parameters; and the reference parameters with these values serve as the initial output parameters of the decoded audio. For example, as shown in fig. 7d, when the sound processing library is loaded, channel initialization initializes the number of audio channels (setChannel) and sampling frequency initialization initializes the audio sampling frequency (setSampleRate). If the decoding parameters specify 2 audio channels and an audio sampling frequency of 5000, the reference parameters may be initialized with 2 channels and a sampling frequency of 5000.
(2) Acquiring the display speed of the decoded images, and smoothing the decoded audio based on the display speed and the initial output parameters to obtain the smoothed decoded audio.

When the display speed is an accelerated image rendering speed, the audio playback speed is accelerated as well. To keep the pitch of the audio unchanged after the speed change, the decoded audio can be smoothed based on the display speed and the initial output parameters, ensuring that the sound stays smooth during the brief acceleration.
Optionally, the display speed of the decoded image is the determined image rendering speed, which may be either the normal rendering speed or an accelerated rendering speed. The initial output parameters include one or both of the number of audio channels and the audio sampling frequency, together with the corresponding parameter values.
In one implementation, when the display speed is an accelerated image rendering speed, the smoothing processing of the audio data includes: determining, according to the display speed, a target speed at which audio output is performed from the decoded audio, and acquiring a reference audio block from the decoded audio based on the target speed; acquiring a sound speed adjustment function, and calling the sound speed adjustment function, in combination with the audio sampling frequency, to adjust the data volume of the reference audio block to a preset data volume; and taking the reference audio block with the preset data volume as the decoded audio obtained after smoothing. The reference audio block with the preset data volume is audio whose pitch is kept unchanged and whose sound speed is increased.
First, the target speed at which audio is output may be determined based on the display speed; for example, the display speed may be directly used as the target speed. A certain amount of decoded audio data may then be obtained from the decoded audio as the reference audio block based on the target speed. The playing device can use the sound speed adjustment function provided by the sound processing library to set the sound speed of the reference audio block to the target speed, determine the preset data volume based on the audio sampling frequency and the number of audio channels, and then adjust the data volume of the reference audio block to the preset data volume. The audio sampling frequency of the resulting reference audio block is unchanged while its sound speed is accelerated, so audio with an unchanged pitch and an accelerated sound speed is obtained.
Taking the sound processing library as soundport (a kind of sound processing library) as an example, the sound can be kept at its original smoothness during the buffering acceleration process by integrating the soundport technology. A schematic flow chart of smoothing sound is shown in fig. 7e. In the interaction between sound rendering and the sound processing library, when the sound renderer in the playing device reads the decoded audio corresponding to an audio frame, if the rendering speed of the decoded audio is not the normal rendering speed, for example the overall speed is not 1 (i.e., the playing speed is not 1), the playing device may first call the sound speed adjustment function to set the target speed. The sound data (here, the decoded audio) can then be input to the sound processing library by calling the soundport.putsamples_i16 function repeatedly until the specified data size (SampleToRequest) is reached. The sound data input to the sound processing library is smoothed by the processor to obtain sound data whose speed is changed but whose pitch is not; control returns to the calling layer after processing, while the processed sound data remains stored in the processor. The playing device can call the soundport.receivesamples_i16 function to read the processed sound data from the processor, and the processed data can be added to the sound buffer queue to wait for subsequent rendering, so that the played audio finally presents a smoother sound perception effect.
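The put/receive call pattern described above can be sketched as follows. ToyProcessor is a deliberately simplified stand-in for the real sound processor: it shortens audio by plain decimation, which (unlike the described library, which time-stretches) would also shift the pitch. Only the call sequence of fig. 7e is illustrated; the method and attribute names are assumptions modeled on the text, not the real soundport interface.

```python
class ToyProcessor:
    """Toy stand-in for the sound processor; illustrates the call
    pattern only, not the pitch-preserving algorithm."""
    def __init__(self):
        self.tempo = 1.0
        self.buf = []

    def set_tempo(self, tempo):          # analogous to the sound speed function
        self.tempo = tempo

    def put_samples_i16(self, samples):  # analogous to soundport.putsamples_i16
        self.buf.extend(samples)

    def receive_samples_i16(self):       # analogous to soundport.receivesamples_i16
        # naive decimation: keep 1/tempo of the samples (shifts pitch,
        # unlike the real library, which keeps pitch constant)
        n_out = int(len(self.buf) / self.tempo)
        out = [self.buf[int(i * self.tempo)] for i in range(n_out)]
        self.buf = []
        return out


def smooth_audio(processor, pcm_blocks, tempo):
    """Feed decoded audio blocks to the processor and collect the
    speed-changed output, mirroring the flow in fig. 7e."""
    processor.set_tempo(tempo)           # set the target speed first
    for block in pcm_blocks:
        processor.put_samples_i16(block) # input the sound data in turn
    return processor.receive_samples_i16()
```

At a tempo of 2.0, feeding 1000 samples yields 500 samples back, i.e., the same audio played in half the time.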
In another implementation, when the display speed is an image rendering speed without acceleration processing, the data volume of the output audio to be acquired from the decoded audio is determined according to the parameter value corresponding to the number of audio channels and the parameter value corresponding to the audio sampling frequency; decoded audio of the corresponding data volume is acquired as the audio to be output, and the audio to be output is taken as the decoded audio obtained after smoothing. Specifically, the image rendering speed without acceleration processing may be the normal rendering speed. The data volume of the output audio may be determined based on the parameter values corresponding to the number of audio channels and the audio sampling frequency, and decoded audio of that data volume may be obtained as the audio to be output, which serves as the smoothed decoded audio. Since the display speed is the normal rendering speed, the pitch, speed, and the like of the audio do not change, and the audio to be output may therefore be output directly after rendering.
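A minimal sketch of the normal-speed path: the amount of decoded audio to fetch for one output chunk follows directly from the initial output parameters. The chunk duration and the 16-bit PCM sample size (2 bytes per sample) are assumptions for the example; the text fixes neither value.

```python
def output_chunk_bytes(channels, sample_rate, duration_s, bytes_per_sample=2):
    """Data volume of the output audio to acquire for one chunk at normal
    speed, derived from the channel count and sampling frequency."""
    return int(channels * sample_rate * duration_s * bytes_per_sample)
```

With the earlier example parameters (2 channels, 5000 Hz) and an assumed 20 ms chunk, 400 bytes of decoded audio would be fetched per chunk.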
(3) And in the process of displaying the decoded image, outputting audio according to the decoded audio after the smoothing processing.
In the process of displaying the decoded image, the playing device can output audio according to the smoothed decoded audio. The audio data smoothed in the corresponding manner may be variable-speed, pitch-invariant audio data; outputting such data after rendering presents a relatively smooth sound perception effect and reduces sound stutter. When the smoothed audio data is audio to be output whose pitch, speed, and the like are unchanged, the output audio is sound at the normal playing speed.
In summary, the display processing scheme provided by the embodiment of the present application may set the image rendering speed based on the number of decoded images to be rendered in the decoding stage. Specifically, when the total number of decoded images to be rendered is too large (i.e., the reference number is greater than the preset number), the normal image rendering speed may be accelerated to a slightly faster image rendering speed, and image rendering of the decoded images may proceed at the accelerated rendering speed, thereby avoiding the excessive delay caused by too many decoded images accumulating while waiting for image rendering. When the total number of decoded images to be rendered is small (the reference number is smaller than the preset number), the decoded images can be rendered at the normal rendering speed, so that rendering resources are saved while low delay is ensured. On this basis, because the playing speed also increases when the image rendering speed is accelerated, the sound speed changes as well; by smoothing the decoded audio, the present application can ensure that the pitch does not change during accelerated playing, obtaining variable-speed, pitch-invariant audio and achieving a smooth sound perception effect.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a display processing apparatus according to an embodiment of the present disclosure. The display processing device may be a computer program (including program code) running in the playback apparatus, for example, the display processing device is an application software; the display processing device can be used for executing corresponding steps in the method provided by the embodiment of the application. As shown in fig. 8, the display processing apparatus 800 may include at least one of the following modules: an acquisition module 801, an analysis module 802, a decoding module 803, a rendering module 804, a storage module 805, and a smoothing module 806.
An obtaining module 801, configured to detect a demand for screen-casting display of a live broadcast picture in a terminal device, and obtain code stream data of the live broadcast picture from the terminal device; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment;
the analysis module 802 is configured to obtain a target control parameter related to the first start time of the code stream data, and analyze the coding process of the code stream data according to the target control parameter to obtain a coding parameter of the code stream data;
a decoding module 803, configured to generate a decoding parameter of the code stream data based on the encoding parameter, and decode the code stream data by using the decoding parameter to obtain a decoded image of a corresponding image frame in the code stream data; any decoded image is displayed after its image rendering is completed;
the rendering module 804 is configured to determine a reference number of the decoded images of the uncompleted image rendering from the decoded images obtained by decoding using the decoding parameters, and determine an image rendering speed according to the reference number, so as to perform image rendering and display on the decoded images of the uncompleted image rendering according to the image rendering speed.
In one embodiment, the target control parameter related to the first opening duration is obtained in an initialization stage after the delay reduction function is determined to be enabled, and the initialization stage is further used for obtaining the coding parameter of the code stream data; the analysis module 802 is specifically configured to: in the initialization stage, determine the amount of data to be read from the code stream data according to the parameter value of the target control parameter; read analysis data of the corresponding amount from the code stream data, and analyze the coding process of the analysis data to obtain the corresponding coding parameter; and take the obtained coding parameter as the coding parameter of the code stream data.
In one embodiment, the target control parameters comprise a frame detection parameter and a duration parameter, where a parameter value of the frame detection parameter is used for indicating the frame number of the data to be read, and a parameter value of the duration parameter is used for indicating the data duration of the data to be read; the analysis module 802 is specifically configured to: take the data quantity indicated by the parameter values of the frame detection parameter and the duration parameter as the data quantity to be read from the code stream data; the frame number of the analysis data read from the code stream data is equal to the frame number indicated by the parameter value of the frame detection parameter, and the data duration of the analysis data is equal to the data duration indicated by the parameter value of the duration parameter.
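One plausible reading of the embodiment above is that each target control parameter caps the analysis read, so the smaller of the two bounds wins. The sketch below illustrates that reading; the frame rate and average frame size are hypothetical stream properties introduced only to make the bound concrete, and are not values fixed by the patent.

```python
def analysis_read_size(probe_frames, probe_duration_s, fps, avg_frame_bytes):
    """Bound the analysis read by both target control parameters: at most
    probe_frames frames and at most probe_duration_s seconds of stream."""
    frames_by_duration = int(probe_duration_s * fps)   # frames the duration allows
    frames_to_read = min(probe_frames, frames_by_duration)
    return frames_to_read * avg_frame_bytes            # data quantity to read
```

For a 30 fps stream with 1000-byte frames, a 10-frame / 0.2 s setting reads 6 frames' worth of data: the duration parameter is the tighter bound.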
In one embodiment, the initialization stage after the delay reduction function is enabled further specifies analysis parameters to be referred to during analysis, where the analysis parameters include one or more of a cache parameter and an optimization parameter, a parameter value corresponding to the cache parameter is used to indicate that the analysis data is not cached, and a parameter value corresponding to the optimization parameter is used to indicate that analysis processing is performed after useless frames in the analysis data are optimized away; the analysis module 802 is further configured to: perform useless-frame optimization processing on the read analysis data based on the indication of the parameter value of the optimization parameter contained in the analysis parameters, and directly analyze the coding process of the optimized analysis data. A useless frame in the analysis data refers to an image frame that does not need to be referred to when the coding process of the analysis data is analyzed.
In one embodiment, the storage module 805 is configured to: after decoding parameters are adopted to decode and obtain a decoded image of a corresponding image frame in code stream data, storing the obtained decoded image in a buffer, and displaying the decoded image in the buffer after rendering is completed; a rendering module 804 to: and after any decoded image frame is rendered and displayed, calling back to the buffer to acquire the total number of the decoded images contained in the buffer, and taking the total number of the images as the reference number of the decoded images with uncompleted image rendering.
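The buffer interaction in the embodiment above can be sketched as follows: decoded images are queued after decoding, taken out when rendered, and the reference number is read back as the count of images still waiting. The class and method names are illustrative, not part of the patent's interface.

```python
import collections

class FrameBuffer:
    """Sketch of the buffer holding decoded images that await rendering."""
    def __init__(self):
        self.queue = collections.deque()

    def push(self, image):               # store a freshly decoded image
        self.queue.append(image)

    def pop_for_render(self):            # take the next image to render
        return self.queue.popleft()

    def reference_number(self):          # called back after each rendered frame
        return len(self.queue)
```

After any frame is rendered and displayed, `reference_number()` plays the role of the callback that returns the total number of decoded images still in the buffer.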
In one embodiment, the rendering module 804 is configured to: comparing the reference quantity with a preset quantity, accelerating the normal rendering speed when the reference quantity is greater than the preset quantity, and taking the accelerated rendering speed as the image rendering speed; and when the reference number is less than or equal to the preset number, performing rendering processing at a normal rendering speed.
In an embodiment, the rendering module 804 is specifically configured to: when the reference quantity is less than or equal to the preset quantity, if the current rendering speed is the accelerated rendering speed, judging whether the reference quantity is less than the target quantity; and when the reference quantity is determined to be smaller than the target quantity, adjusting the accelerated rendering speed to be a normal rendering speed, and performing rendering processing by adopting the normal rendering speed.
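The two-threshold decision in the embodiments above can be sketched as a small hysteresis controller: accelerate when the reference number exceeds the preset number, and fall back to the normal speed only once it drops below the (smaller) target number, so the speed does not oscillate around a single threshold. The 1.2x acceleration factor and the concrete threshold values are illustrative assumptions, not values fixed by the patent.

```python
NORMAL_SPEED = 1.0

def next_render_speed(current, reference, preset, target, accel=1.2):
    """Decide the image rendering speed from the reference number of
    decoded images whose rendering is not yet completed."""
    if reference > preset:               # too many frames queued: speed up
        return accel
    if current != NORMAL_SPEED and reference < target:
        return NORMAL_SPEED              # queue drained enough: back to normal
    return current                       # between target and preset: keep speed
```

With preset=5 and target=2, a queue of 10 triggers acceleration, a queue of 4 keeps the accelerated speed, and only a queue below 2 restores the normal speed.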
In one embodiment, the code stream data is analyzed by calling a target program; the analysis module 802 is configured to: acquire the decoding functions to be called when the target program is used to analyze the code stream data, and determine, from the called decoding functions, a target decoding function that contributes to the first opening time; the first opening time is the time from the start of decoding the code stream data to the display of the first screen image; and acquire, from the target decoding function, the reference field used during analysis processing, and take the reference field as the target control parameter related to the first opening time of the code stream data.
In one embodiment, one or more image frames contained in the code stream data are arranged in order of their corresponding playing times, and when the decoded image of a corresponding image frame in the code stream data is obtained, each image frame in the code stream data is decoded sequentially based on this arrangement order; the decoding module 803 is specifically configured to: in the process of sequentially decoding each image frame of the code stream data by using the decoding parameter, after the decoded image corresponding to one image frame of the code stream data is obtained, obtain the frame end symbol corresponding to that image frame; and after the frame end symbol is obtained, obtain the next image frame from the code stream data based on the arrangement order and decode it to obtain the corresponding decoded image.
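The sequential decoding loop above can be sketched as follows: frames are consumed in play order, and the next frame is fetched only after the frame end symbol of the previous one has been observed. The `decode` callable and the end-symbol value are illustrative stand-ins for the real decoder interface.

```python
def decode_stream(frames, decode, end_symbol="FRAME_END"):
    """Decode image frames strictly in their playing-time order, advancing
    only after each frame's end symbol is received."""
    images = []
    for frame in frames:                 # frames already sorted by play time
        image, marker = decode(frame)    # decode one image frame...
        if marker != end_symbol:         # ...and wait for its end symbol
            raise RuntimeError("frame end symbol not received")
        images.append(image)
    return images
```

The per-frame end symbol gives the loop a well-defined point at which it is safe to move on to the next image frame.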
In one embodiment, the codestream data includes audio frames corresponding to one or more image frames, respectively; a smoothing module 806 configured to: decoding the code stream data according to the decoding parameters to obtain a decoding processing result, acquiring a decoding audio corresponding to the audio frame, and acquiring initial output parameters corresponding to the decoding audio; acquiring the display speed of a decoded image, and smoothing the decoded audio based on the display speed and the initial output parameters to obtain the smoothed decoded audio; and in the process of displaying the decoded image, outputting audio according to the decoded audio after the smoothing processing.
In one embodiment, the smoothing module 806 is configured to: acquiring a sound processing library, and acquiring reference parameters for audio output from the sound processing library; the sound processing library is obtained when the kernel of the player is initialized; determining a parameter value of a reference parameter from the decoding parameters, and taking the reference parameter with the determined parameter value as an initial output parameter of the decoded audio; the reference parameter includes one or both of the number of audio channels and the audio sampling frequency.
In one embodiment, the display speed of the decoded image is the determined image rendering speed, and the initial output parameters comprise one or two of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the smoothing module 806 is specifically configured to: when the display speed is the accelerated image rendering speed, determining a target speed for audio output from the decoded audio according to the display speed, and acquiring a reference audio block from the decoded audio based on the target speed; acquiring a sound speed adjusting function, calling the sound speed adjusting function, combining audio sampling frequency and audio channel number, and performing data volume adjusting processing on a reference audio block so as to adjust the data volume of the reference audio block to a preset data volume; taking a reference audio block with a preset data volume as a decoded audio obtained after smoothing; the reference audio block with the preset data volume is audio with unchanged tone and accelerated sound speed.
In one embodiment, the display speed of the decoded image is the determined image rendering speed, and the initial output parameters comprise one or two of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the smoothing module 806 is specifically configured to: when the display speed is the image rendering speed without accelerated processing, determining the data volume of the output audio required to be acquired from the decoded audio according to the parameter value corresponding to the number of audio channels and the parameter value corresponding to the audio sampling frequency; and acquiring the decoded audio of the corresponding data volume as the audio to be output, and taking the audio to be output as the decoded audio obtained after smoothing processing.
It can be understood that the functions of the functional modules of the display processing apparatus described in the embodiment of the present application can be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process of the method can refer to the description related to the foregoing method embodiment, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application. The playback device here is a computer device, and may specifically include an input device 901, an output device 902, a processor 903, a memory 904, a network interface 905, and at least one communication bus 906. The processor 903 may be a Central Processing Unit (CPU). The processor may further include a hardware chip. The hardware chip may be an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or the like. The PLD may be a Field-Programmable Gate Array (FPGA), a Generic Array Logic (GAL), or the like.
The memory 904 may include a volatile memory, such as a Random-Access Memory (RAM); the memory 904 may also include a non-volatile memory, such as a flash memory, a Solid-State Drive (SSD), or at least one disk memory. The memory 904 may optionally be at least one storage device located remotely from the processor 903, and may also comprise a combination of the above types of memory. As shown in fig. 9, the memory 904, as a computer-readable storage medium, may include therein an operating system, a network communication module, an interface module, and a device control application program.
The network interface 905 may include a standard wired interface, a wireless interface (such as a WI-FI interface), and the network interface may be used to provide data communication functions as a communication interface; the communication bus 906 is responsible for connecting the various communication elements; the input device 901 receives an input instruction to generate a signal input related to object setting and function control of the terminal device, and in one embodiment, the input device 901 includes one or more of, but is not limited to, a touch panel, a physical Keyboard or a virtual Keyboard (Keyboard), a function key, a mouse, and the like; the output device 902 is configured to output data information, in this embodiment of the present application, the output device 902 may be configured to project a screen to Display a live frame, and the output device 902 may include a Display screen (Display) or other Display devices; the processor 903 is a control center of the terminal device, connects various parts of the entire terminal device by various interfaces and lines, and executes various functions by scheduling and running a computer program stored in the memory 904.
The processor 903 may be configured to call a computer program in the memory to perform the following operations: detecting a demand for screen projection display of a live broadcast picture in terminal equipment, and acquiring code stream data of the live broadcast picture from the terminal equipment; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment; acquiring a target control parameter related to the first opening time of the code stream data, and analyzing the coding process of the code stream data according to the target control parameter to obtain a coding parameter of the code stream data; generating decoding parameters of the code stream data based on the coding parameters, and decoding by adopting the decoding parameters to obtain a decoded image of a corresponding image frame in the code stream data; displaying any decoded image after finishing image rendering; and determining the reference number of the decoded images which are not subjected to image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference number so as to render and display the images of the decoded images which are not subjected to image rendering according to the image rendering speed.
In one embodiment, the target control parameter related to the first opening duration is obtained in an initialization stage after the delay reduction function is determined to be enabled, and the initialization stage is further used for obtaining the coding parameter of the code stream data; the processor 903 is specifically configured to: in the initialization stage, determine the amount of data to be read from the code stream data according to the parameter value of the target control parameter; read analysis data of the corresponding amount from the code stream data, and analyze the coding process of the analysis data to obtain the corresponding coding parameter; and take the obtained coding parameter as the coding parameter of the code stream data.
In one embodiment, the target control parameters include a frame detection parameter and a duration parameter, where a parameter value of the frame detection parameter is used for indicating the frame number of the data to be read, and a parameter value of the duration parameter is used for indicating the data duration of the data to be read; the processor 903 is specifically configured to: take the data quantity indicated by the parameter values of the frame detection parameter and the duration parameter as the data quantity to be read from the code stream data; the frame number of the analysis data read from the code stream data is equal to the frame number indicated by the parameter value of the frame detection parameter, and the data duration of the analysis data is equal to the data duration indicated by the parameter value of the duration parameter.
In one embodiment, the initialization stage after the delay reduction function is enabled further specifies analysis parameters to be referred to during analysis, where the analysis parameters include one or more of a cache parameter and an optimization parameter, a parameter value corresponding to the cache parameter is used to indicate that the analysis data is not cached, and a parameter value corresponding to the optimization parameter is used to indicate that analysis processing is performed after useless frames in the analysis data are optimized away; the processor 903 is further configured to: perform useless-frame optimization processing on the read analysis data based on the indication of the parameter value of the optimization parameter contained in the analysis parameters, and directly analyze the coding process of the optimized analysis data. A useless frame in the analysis data refers to an image frame that does not need to be referred to when the coding process of the analysis data is analyzed.
In one embodiment, the processor 903 is configured to: after decoding parameters are adopted to decode and obtain decoded images of corresponding image frames in code stream data, storing the obtained decoded images in a buffer, and displaying the decoded images in the buffer after rendering is completed; a processor 903 for: and after any decoded image frame is rendered and displayed, calling back to the buffer to acquire the total number of the decoded images contained in the buffer, and taking the total number of the images as the reference number of the decoded images with uncompleted image rendering.
In one embodiment, the processor 903 is configured to: comparing the reference quantity with a preset quantity, accelerating the normal rendering speed when the reference quantity is greater than the preset quantity, and taking the accelerated rendering speed as the image rendering speed; and when the reference number is less than or equal to the preset number, performing rendering processing at a normal rendering speed.
In one embodiment, the processor 903 is specifically configured to: when the reference quantity is less than or equal to the preset quantity, if the current rendering speed is the accelerated rendering speed, judging whether the reference quantity is less than the target quantity; and when the reference quantity is determined to be smaller than the target quantity, adjusting the accelerated rendering speed to be a normal rendering speed, and performing rendering processing by adopting the normal rendering speed.
In one embodiment, the code stream data is analyzed by calling a target program; the processor 903 is configured to: acquire the decoding functions to be called when the target program is used to analyze the code stream data, and determine, from the called decoding functions, a target decoding function that contributes to the first opening time; the first opening time is the time from the start of decoding the code stream data to the display of the first screen image; and acquire, from the target decoding function, the reference field used during analysis processing, and take the reference field as the target control parameter related to the first opening time of the code stream data.
In one embodiment, one or more image frames contained in the code stream data are arranged in order of their corresponding playing times, and when the decoded image of a corresponding image frame in the code stream data is obtained, each image frame in the code stream data is decoded sequentially based on this arrangement order; the processor 903 is specifically configured to: in the process of sequentially decoding each image frame of the code stream data by using the decoding parameter, after the decoded image corresponding to one image frame of the code stream data is obtained, obtain the frame end symbol corresponding to that image frame; and after the frame end symbol is obtained, obtain the next image frame from the code stream data based on the arrangement order and decode it to obtain the corresponding decoded image.
In one embodiment, the codestream data includes audio frames corresponding to one or more image frames, respectively; a processor 903 for: decoding the code stream data according to the decoding parameters to obtain a decoding processing result, acquiring decoding audio corresponding to the audio frame, and acquiring initial output parameters corresponding to the decoding audio; acquiring the display speed of the decoded image, and smoothing the decoded audio based on the display speed and the initial output parameters to obtain the smoothed decoded audio; and in the process of displaying the decoded image, outputting audio according to the decoded audio after the smoothing processing.
In one embodiment, the processor 903 is configured to: acquiring a sound processing library, and acquiring reference parameters for audio output from the sound processing library; the sound processing library is obtained when the kernel of the player is initialized; determining a parameter value of a reference parameter from the decoding parameters, and taking the reference parameter with the determined parameter value as an initial output parameter of the decoded audio; the reference parameter includes one or both of the number of audio channels and the audio sampling frequency.
In one embodiment, the display speed of the decoded image is the determined image rendering speed, and the initial output parameters comprise one or two of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the processor 903 is specifically configured to: when the display speed is the accelerated image rendering speed, determining a target speed when audio output is carried out according to the display speed, and acquiring a reference audio block from the decoded audio based on the target speed; acquiring a sound speed adjusting function, calling the sound speed adjusting function, combining audio sampling frequency and audio channel number, and performing data volume adjusting processing on a reference audio block so as to adjust the data volume of the reference audio block to a preset data volume; taking a reference audio block with a preset data volume as a decoded audio obtained after smoothing; the reference audio block with the preset data volume is audio with unchanged tone and accelerated sound speed.
In one embodiment, the display speed of the decoded image is the determined image rendering speed, and the initial output parameters comprise one or two of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the processor 903 is specifically configured to: when the display speed is the image rendering speed without accelerated processing, determining the data volume of the output audio required to be acquired from the decoded audio according to the parameter value corresponding to the number of audio channels and the parameter value corresponding to the audio sampling frequency; and acquiring the decoded audio of the corresponding data volume as the audio to be output, and taking the audio to be output as the decoded audio obtained after smoothing processing.
It should be understood that the computer device 900 described in this embodiment of the present application can perform the description of the display processing method in the corresponding embodiments above, and can also perform the description of the display processing apparatus 800 in the embodiment corresponding to fig. 8, which is not repeated here. The beneficial effects of the same method are likewise not repeated.
In addition, it should be further noted that an exemplary embodiment of the present application further provides a storage medium, where the storage medium stores a computer program of the foregoing display processing method, and the computer program includes program instructions. When one or more processors load and execute the program instructions, the description of the display processing method in the foregoing embodiments can be implemented, which is not repeated here; the beneficial effects of the same method are likewise not repeated. It will be understood that the program instructions may be deployed to be executed on one computer device, or on multiple computer devices that are capable of communicating with each other.
The computer-readable storage medium may be the display processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash memory card (flash card), and the like provided on the computer device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the computer device. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
In one aspect of the application, a computer program product or computer program is provided, the computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided by one aspect of the embodiments of the present application.
The steps in the method of the embodiment of the application can be sequentially adjusted, combined and deleted according to actual needs.
The modules in the device can be combined, divided and deleted according to actual needs.
While the invention has been described with reference to a number of embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A display processing method, characterized in that the method comprises:
detecting a requirement for screen-casting display of a live broadcast picture in terminal equipment, and acquiring code stream data of the live broadcast picture from the terminal equipment; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment;
acquiring a target control parameter related to a first opening duration of the code stream data, and analyzing the encoding process of the code stream data according to the target control parameter to obtain an encoding parameter of the code stream data; the first opening duration is: the duration from starting to decode the code stream data until the first-screen image is displayed; the target control parameter related to the first opening duration is used for analyzing the size of the amount of data that needs to be read from the code stream data, the target control parameter comprises a probe-frame parameter and a duration parameter, a parameter value of the probe-frame parameter is used for indicating the number of frames of the read data, and a parameter value of the duration parameter is used for indicating the data duration of the read data;
generating decoding parameters of the code stream data based on the coding parameters, and decoding by adopting the decoding parameters to obtain a decoded image of a corresponding image frame in the code stream data; displaying any decoded image after finishing image rendering;
and determining the reference number of the decoded images which are not subjected to image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference number so as to render and display the decoded images which are not subjected to image rendering according to the image rendering speed.
2. The method of claim 1, wherein the target control parameter related to the first opening duration is obtained in an initialization stage after a delay reduction function is determined to be enabled, the initialization stage being further used for obtaining the encoding parameter of the code stream data; the analyzing the encoding process of the code stream data according to the target control parameter to obtain the encoding parameter of the code stream data comprises:
in the initialization stage, determining the size of the data volume needing to be read from the code stream data according to the parameter value of the target control parameter;
reading analysis data with corresponding data quantity from the code stream data according to the data quantity, and analyzing and processing the coding process of the analysis data to obtain corresponding coding parameters; and the obtained coding parameters are used as the coding parameters of the code stream data.
3. The method of claim 2, wherein the determining the size of the data amount to be read from the codestream data according to the parameter value of the target control parameter comprises:
taking the data amount indicated by the parameter value of the probe-frame parameter and the parameter value of the duration parameter as the data amount to be read from the code stream data;
wherein the number of frames of the analysis data read from the code stream data is equal to the number of frames indicated by the parameter value of the probe-frame parameter, and the data duration of the analysis data is equal to the data duration indicated by the parameter value of the duration parameter.
4. The method of claim 2, wherein an initialization phase after the delay reduction function is enabled further specifies analysis parameters to be referred to for analysis, the analysis parameters including one or more of a buffer parameter and an optimization parameter, a parameter value corresponding to the buffer parameter is used to indicate that the analysis data is not buffered, and a parameter value corresponding to the optimization parameter is used to indicate that analysis processing is performed after useless frames in the analysis data are optimized; the analysis processing of the coding process of the analysis data comprises the following steps:
performing useless-frame optimization processing on the read analysis data based on the indication of the parameter value of the optimization parameter contained in the analysis parameters, and directly performing analysis processing on the encoding process of the optimized analysis data; the useless frames in the analysis data refer to: image frames that are not needed as reference when the encoding process of the analysis data is analyzed.
5. The method of claim 1, wherein after the decoding using the decoding parameters to obtain the decoded image of the corresponding image frame in the code stream data, the method further comprises: storing the obtained decoded image in a buffer, wherein the decoded image in the buffer is displayed after rendering is finished;
the determining the reference number of the decoded images of the uncompleted image rendering from the decoded images obtained by decoding with the decoding parameters includes:
and after the rendering and display of any decoded image are finished, performing a callback to the buffer to obtain the total number of decoded images contained in the buffer, and taking the total number as the reference number of decoded images for which image rendering is not finished.
6. The method of claim 1, wherein said determining an image rendering rate from said reference number comprises:
comparing the reference quantity with a preset quantity, accelerating the normal rendering speed when the reference quantity is greater than the preset quantity, and taking the accelerated rendering speed as the image rendering speed;
and when the reference quantity is less than or equal to the preset quantity, rendering processing is carried out at the normal rendering speed.
7. The method of claim 6, wherein performing the rendering process at the normal rendering speed when the reference number is less than or equal to the preset number comprises:
when the reference quantity is less than or equal to the preset quantity, if the current rendering speed is the accelerated rendering speed, judging whether the reference quantity is less than the target quantity;
and when the reference quantity is determined to be smaller than the target quantity, adjusting the accelerated rendering speed to the normal rendering speed, and performing rendering processing by adopting the normal rendering speed.
8. The method of claim 1, wherein the codestream data is parsed by calling a target program; the acquiring of the target control parameter related to the first opening time of the code stream data includes:
acquiring decoding functions to be called when the target program is adopted to analyze and process the code stream data, and determining, from the called decoding functions, a target decoding function that causes the first opening duration;
and acquiring, from the target decoding function, a reference field used when the analysis processing is performed, and taking the reference field as the target control parameter related to the first opening duration of the code stream data.
9. The method according to claim 1, wherein one or more image frames included in the code stream data are sequentially arranged according to a corresponding playing time sequence, and when a decoded image of a corresponding image frame in the code stream data is obtained, the decoding processing is sequentially performed on each image frame of the code stream data based on the arrangement sequence; the decoding of the decoding parameters to obtain the decoded image of the corresponding image frame in the code stream data includes:
in the process of sequentially decoding each image frame of the code stream data by adopting the decoding parameters, after a decoded image corresponding to one image frame of the code stream data is obtained, a frame end symbol corresponding to the one image frame is obtained;
and after the frame end symbol corresponding to the image frame is obtained, obtaining a next image frame of the image frame from the code stream data based on the arrangement sequence, and decoding the next image frame to obtain a corresponding decoded image.
10. The method of claim 1, wherein the codestream data includes audio frames corresponding to the one or more image frames, respectively; the method further comprises the following steps:
decoding the code stream data according to the decoding parameters to obtain a decoding processing result, acquiring a decoding audio corresponding to the audio frame, and acquiring initial output parameters corresponding to the decoding audio;
acquiring the display speed of the decoded image, and smoothing the decoded audio based on the display speed and the initial output parameter to obtain the smoothed decoded audio;
and in the process of displaying the decoded image, performing audio output according to the decoded audio after the smoothing processing.
11. The method of claim 10, wherein the obtaining initial output parameters corresponding to the decoded audio comprises:
acquiring a sound processing library, and acquiring reference parameters for audio output from the sound processing library; the sound processing library is obtained when a kernel of the player is initialized;
determining a parameter value of the reference parameter from the decoding parameters, and taking the reference parameter with the determined parameter value as an initial output parameter of the decoded audio; the reference parameter includes one or both of an audio channel number and an audio sampling frequency.
12. The method of claim 10, wherein the display speed of the decoded image is a determined image rendering speed, and the initial output parameters include one or both of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the smoothing the decoded audio based on the display speed and the initial output parameter to obtain a smoothed decoded audio, comprising:
when the display speed is the accelerated image rendering speed, determining a target speed for audio output from the decoded audio according to the display speed, and acquiring a reference audio block from the decoded audio based on the target speed;
acquiring a sound speed adjusting function, calling the sound speed adjusting function and combining the audio sampling frequency to adjust the data volume of the reference audio block to a preset data volume;
taking a reference audio block with a preset data volume as a decoded audio obtained after smoothing; and the reference audio block with the preset data volume is audio with the unchanged tone and the accelerated sound speed.
13. The method of claim 10, wherein the display speed of the decoded image is a determined image rendering speed, and the initial output parameters include one or both of the number of audio channels and the audio sampling frequency, and corresponding parameter values; the smoothing the decoded audio based on the display speed and the initial output parameter to obtain a smoothed decoded audio, including:
when the display speed is the image rendering speed without accelerated processing, determining the data volume of the output audio required to be acquired from the decoded audio according to the parameter value corresponding to the audio channel number and the parameter value corresponding to the audio sampling frequency;
and acquiring the decoded audio of the corresponding data volume as the audio to be output, and taking the audio to be output as the decoded audio obtained after smoothing processing.
14. A display processing apparatus, characterized by comprising:
the acquisition module is used for detecting the requirement of screen projection display on a live broadcast picture in terminal equipment and acquiring code stream data of the live broadcast picture from the terminal equipment; the code stream data comprises one or more image frames, and any image frame is obtained by encoding a corresponding live broadcast picture by the terminal equipment;
the analysis module is used for acquiring a target control parameter related to a first opening duration of the code stream data, and analyzing the encoding process of the code stream data according to the target control parameter to obtain an encoding parameter of the code stream data; the first opening duration is: the duration from starting decoding processing on the code stream data until the first-screen image is displayed; the target control parameter related to the first opening duration is used for analyzing the size of the amount of data that needs to be read from the code stream data, the target control parameter comprises a probe-frame parameter and a duration parameter, a parameter value of the probe-frame parameter is used for indicating the number of frames of the read data, and a parameter value of the duration parameter is used for indicating the data duration of the read data;
the decoding module is used for generating decoding parameters of the code stream data based on the coding parameters and obtaining a decoding image of a corresponding image frame in the code stream data by adopting the decoding parameters for decoding; displaying any decoded image after finishing image rendering;
and the rendering module is used for determining the reference number of the decoded images which are not finished with image rendering from the decoded images obtained by decoding by adopting the decoding parameters, and determining the image rendering speed according to the reference number so as to perform image rendering and display on the decoded images which are not finished with image rendering according to the image rendering speed.
15. A computer device, comprising: a processor, a memory, a network interface, an input device, and an output device;
the processor is coupled to the memory, the network interface, the input device, and the output device, wherein the network interface is configured to provide network communication functionality, the memory is configured to store program code, the input device is configured to receive input instructions to generate signal inputs related to settings and function control of the computer device, the output device is configured to output data information, and the processor is configured to invoke the program code to perform the display processing method of any of claims 1-13.
16. A computer-readable storage medium, characterized in that it stores a computer program comprising program instructions which, when executed by a processor, perform the display processing method of any one of claims 1-13.
CN202211163467.9A 2022-09-23 2022-09-23 Display processing method and device, computer equipment and readable storage medium Active CN115278288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211163467.9A CN115278288B (en) 2022-09-23 2022-09-23 Display processing method and device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN115278288A CN115278288A (en) 2022-11-01
CN115278288B true CN115278288B (en) 2022-12-20

Family

ID=83757738

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4921218B2 (en) * 2007-03-27 2012-04-25 キヤノン株式会社 Image forming apparatus and control method thereof
US8284082B2 (en) * 2010-10-27 2012-10-09 Sling Media Pvt. Ltd. Dynamic encode setting adjustment
US9584787B1 (en) * 2012-06-08 2017-02-28 Amazon Technologies, Inc. Performance optimization for streaming video
US10616086B2 (en) * 2012-12-27 2020-04-07 Navidia Corporation Network adaptive latency reduction through frame rate control
US10229540B2 (en) * 2015-12-22 2019-03-12 Google Llc Adjusting video rendering rate of virtual reality content and processing of a stereoscopic image
CN110049347B (en) * 2019-04-11 2021-10-22 广州虎牙信息科技有限公司 Method, system, terminal and device for configuring images on live interface
CN111882626B (en) * 2020-08-06 2023-07-14 腾讯科技(深圳)有限公司 Image processing method, device, server and medium
KR102317873B1 (en) * 2021-04-02 2021-10-26 신형호 A System Providing Fast Video Rendering Service

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code; Ref country code: HK; Ref legal event code: DE; Ref document number: 40076033; Country of ref document: HK