CN110213636B - Method and device for generating video frame of online video, storage medium and equipment

Method and device for generating video frame of online video, storage medium and equipment

Info

Publication number
CN110213636B
CN110213636B (application CN201810398279.1A)
Authority
CN
China
Prior art keywords
rendering
task
thread
video
data stream
Prior art date
Legal status
Active
Application number
CN201810398279.1A
Other languages
Chinese (zh)
Other versions
CN110213636A (en)
Inventor
向晨宇
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201810398279.1A
Publication of CN110213636A
Application granted
Publication of CN110213636B

Classifications

    • G06F 9/5038 Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • H04N 21/2187 Live feed
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction, or content or additional data rendering, involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/4314 Generation of visual interfaces involving specific graphical features for fitting data in a restricted space on the screen, e.g. EPG data in a rectangular grid
    • H04N 21/44012 Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/4884 Data services for displaying subtitles

Abstract

The embodiment of the application discloses a method, a device, a storage medium and equipment for generating video frames of an online video, belonging to the field of computer technology. The method comprises the following steps: receiving a video data stream, where the video data stream contains different types of sub data streams and each sub data stream has a different stream identifier; for each sub data stream, creating a task thread and a task queue, and adding the data segments in the sub data stream that belong to the same video frame to the task queue as tasks; for each task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result; and rendering the processing results to obtain a video frame. The method and the device can improve the performance of the terminal and the smoothness of picture playback.

Description

Method and device for generating video frame of online video, storage medium and equipment
Technical Field
The embodiments of the application relate to the field of computer technology, and in particular to a method, a device, a storage medium and equipment for generating video frames of an online video.
Background
A live broadcast room supports multi-person online voice communication and video communication. Each live room includes at least one anchor user, and may further include at least one audience user once such users enter the room. During the live broadcast, online interaction may occur between anchor users, and between anchor users and audience users, including but not limited to sending barrages (bullet-screen comments) and giving gifts. The live client then needs to combine three different types of sub data streams, namely the barrage sub data stream, the gift sub data stream, and the video sub data stream captured by the anchor, into one video for playback. The live client here includes the live client of the anchor user and/or the live client of an audience user.
In the related art, a live client receives a video data stream, where the video data stream contains different types of sub data streams and each sub data stream has a different stream identifier; the client creates a single task thread and a corresponding task queue for each sub data stream, and adds the data segments in each sub data stream that belong to the same video frame, as tasks, to the task queue corresponding to that sub data stream; the task thread renders the tasks in the task queues in turn to obtain rendering results; and the hardware is notified to render the rendering results to obtain a video frame.
Because the single task thread serves all task queues, while it processes a task from one queue the tasks in the other queues preempt its processing resources; the task thread then interrupts the current task to process tasks from other queues, so the picture of the current task cannot be completely displayed in the video frame, which makes picture playback unsmooth.
Disclosure of Invention
The embodiments of the application provide a method, a device, a storage medium and equipment for generating video frames of an online video, which solve the problem that tasks in different task queues preempt the processing resources of the same task thread and thereby make picture playback unsmooth. The technical solution is as follows:
in one aspect, a method for generating a video frame of an online video is provided, where the method includes:
receiving a video data stream, wherein the video data stream contains different types of sub data streams, and each sub data stream has a different stream identifier;
for each sub data stream, creating a task thread and a task queue, and adding a data segment belonging to the same video frame in the sub data stream as a task into the task queue;
for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result;
and rendering each processing result to obtain a video frame.
In one aspect, an apparatus for generating a video frame of an online video is provided, the apparatus including:
the receiving module is used for receiving video data streams, wherein the video data streams contain different types of sub data streams, and each sub data stream has different stream identifications;
the creating module is used for creating a task thread and a task queue for each sub data stream received by the receiving module, and adding the data segments in the sub data stream that belong to the same video frame to the task queue as tasks;
the processing module is used for, for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result;
and the generating module is used for rendering each processing result obtained by the processing module to obtain a video frame.
In one aspect, a method for playing video frames of a live video is provided, for use in a live client, the method comprising the following steps:
acquiring a live video stream, wherein the live video stream comprises a live picture data stream and an interactive data stream, the interactive data stream comprises at least one of a gift data stream and a barrage data stream, and the live picture data stream, the gift data stream and the barrage data stream have different stream identifiers;
for each data stream, creating a task thread and a task queue, and adding a data segment belonging to the same live video frame in the data stream as a task into the task queue;
for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result;
rendering each processing result to obtain a live broadcast video frame, wherein the live broadcast video frame comprises a live broadcast picture and interactive data;
and playing the live video frame.
In one aspect, a video frame playing apparatus for live video is provided, where the apparatus is used in a live client, and the apparatus includes:
the acquisition module is used for acquiring a live video stream, wherein the live video stream comprises a live picture data stream and an interactive data stream, the interactive data stream comprises at least one of a gift data stream and a barrage data stream, and the live picture data stream, the gift data stream and the barrage data stream have different stream identifiers;
the creating module is used for creating a task thread and a task queue for each data stream obtained by the obtaining module, and adding a data segment belonging to the same video frame in the data stream into the task queue as a task;
the processing module is used for, for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result;
the generation module is used for rendering each processing result obtained by the processing module to obtain a live video frame, and the live video frame comprises a live frame and interactive data;
and the playing module is used for playing the live video frames generated by the generating module.
In one aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or an instruction set is stored and is loaded and executed by a processor to implement the video frame generation method for an online video described above, or the video frame playing method for a live video described above.
In one aspect, an electronic device is provided, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the video frame generation method for an online video described above, or the video frame playing method for a live video described above.
The technical solutions provided by the embodiments of the application have at least the following beneficial effects:
Because a task preempts the processing resources of the task thread that processes it, creating one task queue per task thread avoids the problem of tasks in different task queues preempting the processing resources of the same task thread; this avoids the degraded performance and overheating of the terminal caused by resource preemption, and thus improves the performance of the terminal. In addition, each task thread can process every task in its queue, so the picture of each task can be completely rendered into the video frame, which improves the smoothness of picture playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoids stutters in picture playback, and further improves its smoothness.
Drawings
To make the technical solutions in the embodiments of the application clearer, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the application; other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is an interface schematic diagram of an e-sports live client provided in accordance with some exemplary embodiments;
FIG. 2 is a schematic illustration of a live room interface provided in accordance with some demonstrative embodiments;
FIG. 3 is a block diagram of a video frame generation system provided in accordance with some exemplary embodiments;
FIG. 4 is a flowchart of a method for generating video frames of an online video according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating interaction between a runloop and a task thread according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for generating video frames of an online video according to another embodiment of the present application;
FIG. 7 is a flow diagram of task processing and rendering provided by another embodiment of the present application;
FIG. 8 is a block diagram illustrating a process flow for exiting the background according to another embodiment of the present application;
fig. 9 is a flowchart of a method for playing a video frame of a live video according to an embodiment of the present application;
fig. 10 is a block diagram illustrating a structure of a video frame generation apparatus for an online video according to an embodiment of the present application;
fig. 11 is a block diagram illustrating a video frame playing apparatus for live video according to an embodiment of the present application;
fig. 12 is a block diagram of a terminal according to still another embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, an application scenario of the embodiments of the present application will be explained.
The embodiments of the application are applied to scenes in which at least two different types of data are combined into an online video for playback. Two typical scenes are described below:
1. live broadcast scene
The live broadcast scenes include a live broadcast room live broadcast scene and a television live broadcast scene, and the two live broadcast scenes are described below.
1) Live broadcast scene of live broadcast room
After an anchor user enters the live room through the anchor's live client and audience users enter the live room through their own live clients, online interaction can take place between anchor users, and between anchor users and audience users; the interaction modes include but are not limited to sending barrages and giving gifts. The live client then needs to combine three different types of sub data streams, namely the barrage sub data stream, the gift sub data stream, and the video sub data stream captured by the anchor, into one video in order to display the interaction. The live client here includes the live client of the anchor user and/or the live client of an audience user.
The live client may be a live client for game live broadcast, or a live client for entertainment programs such as singing-and-dancing live broadcast and talk shows; this embodiment does not limit the type of the live client.
Taking an e-sports live client as an example, please refer to fig. 1, which shows a tab page interface of a "King"-type game. "BA passivity" and "penguin" in fig. 1 are two live rooms; when the user clicks "BA passivity", the user enters the "BA passivity" live room and jumps from the tab page interface to the live room interface shown in fig. 2. Referring to fig. 2, the upper area plays the video of the game being played in the "BA passivity" live room, the middle area displays barrages, and the lower area displays the gift types.
It should be noted that fig. 2 shows the three types of data in three different areas of the interface; in an actual implementation, barrages and gifts can also be overlaid on the video picture.
2) Live television scene
In addition to program data, a television application can play interactive data such as two-dimensional codes and SMS numbers carrying the program producer's contact information, and the television application can combine different types of data, such as program data and interactive data, into one video for playback.
2. Video application playback scenario
In addition to video data, a video playing application can play interactive data such as barrages, advertisement links related to the video data, and advertising copy, and the video playing application can combine these different types of data into one video for playback.
In the embodiments of the application, a task thread and a task queue are created for each type of sub data stream, so that when tasks are processed, tasks in different task queues do not preempt one another's task-thread processing resources; each task can therefore be processed completely, which ensures the smoothness of picture playback.
The system architecture of the embodiments of the present application is described next.
Please refer to fig. 3, which illustrates a schematic structural diagram of a video frame generation system according to an embodiment of the present application. The video frame generation system includes at least one terminal 310 and a server 320. The terminal 310 establishes a connection with the server 320 through a wired network or a wireless network.
The terminal 310 is a device having a data transceiving function, such as a smart phone. The terminal 310 may be installed with the above-mentioned live client, tv client, video playing client, etc.
Taking a live client installed in the terminal 310 as an example, live clients can be divided into an anchor client used by anchor users and an audience client used by audience users. An anchor user has both interaction permission and live-video upload permission: the anchor client uploads interactive data and locally captured video data to the server, the server forwards them to the anchor clients and audience clients, and the anchor client can receive and display the video data and interactive data sent by the server. An audience user has interaction permission only: the audience client uploads interactive data to the server, the server forwards it to the anchor clients and audience clients, and the audience client can receive and display the video data and interactive data sent by the server. In other respects, the client used by an anchor user and the client used by an audience user are substantially identical.
The server 320 may be one server, multiple servers, or a cloud computing center. In other words, the server 320 may be implemented by a single server; or by a combination of multiple servers that undertake the same or different functions, such as a server for registration and login, a server for storing user avatars, a server for storing channel information and configuration information, and a server for storing pictures or videos; or by a cloud computing center, i.e. a virtual computing platform formed by a whole service cluster.
Fig. 3 illustrates two terminals 310 and one server 320.
Referring to fig. 4, a flowchart of a method for generating a video frame of an online video according to an embodiment of the present application is shown, where the method for generating a video frame of an online video may be applied to a client in a terminal shown in fig. 3. The video frame generation method of the online video comprises the following steps:
step 401, a video data stream is received, where the video data stream includes different types of sub-data streams, and each sub-data stream has a different stream identifier.
A video data stream is a stream of video that a server pushes to a client. For example, after a user enters a live broadcast room through a client, a server pushes a video data stream to the client; or after the user opens the television, the server pushes the video data stream to the client.
The video data streams of different videos contain different types of sub data streams; each sub data stream provides one element of the video, and each sub data stream has a different stream identifier by which it is distinguished. For example, the video data stream of a live room at least includes a video sub data stream, a barrage sub data stream, and a gift sub data stream; the video data stream of a video playing application at least includes a video sub data stream, a barrage sub data stream, and an advertisement sub data stream.
In this embodiment, the client first sends a video acquisition request to the server, and the server determines a video data stream according to the video acquisition request and sends the video data stream to the client.
In this embodiment, the stream identifier may be carried in the packet header of each IP (Internet Protocol) packet generated from the data in the video data stream. After receiving the IP packets, the client parses each packet header and, for each parsed stream identifier, assembles the data of the IP packets carrying that identifier into one sub data stream, finally dividing the video data stream into a plurality of sub data streams.
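For illustration only, this demultiplexing step can be sketched in Swift as follows. This is a minimal sketch, not the claimed implementation; the `Packet` type and its `streamID` field (parsed from the IP packet header) are assumptions, since the embodiment does not specify the packet layout.

```swift
import Foundation

// Hypothetical packet type: the embodiment only states that the stream
// identifier is carried in the header of each IP packet.
struct Packet {
    let streamID: String   // stream identifier parsed from the packet header
    let payload: Data      // the data carried by this packet
}

// Step 401, sketched: group incoming packets by stream identifier, so that
// one video data stream is divided into one sub data stream per identifier.
func demultiplex(_ packets: [Packet]) -> [String: Data] {
    var subStreams: [String: Data] = [:]
    for packet in packets {
        subStreams[packet.streamID, default: Data()].append(packet.payload)
    }
    return subStreams
}
```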
Step 402, for each sub data stream, a task thread and a task queue are created, and a data segment belonging to the same video frame in the sub data stream is added to the task queue as a task.
A task thread is a thread for processing tasks. A thread, also known as a lightweight process, is the smallest unit of a program execution flow.
The task queue is a queue for buffering tasks; each task is a data segment of the sub data stream belonging to the same video frame. The task queue remains valid as long as the live room is not closed; that is, while the live room is open, tasks can always be added to and read from the task queue.
In this embodiment, the client creates a task thread and a task queue for each sub data stream. For example, if the video data stream includes a video sub data stream, a barrage sub data stream, and a gift sub data stream, the client may create one task thread and one task queue for the video sub data stream, one task thread and one task queue for the barrage sub data stream, and one task thread and one task queue for the gift sub data stream.
For each sub data stream, the client may divide the sub data stream into a number of tasks and add them to the task queue corresponding to that sub data stream. One way to divide a sub data stream into tasks is to intercept data of a fixed duration from the sub data stream and use the resulting data segment as one task, where the data segment is the data of one video frame in the sub data stream. For example, if the fixed duration is 1 s, then after the sub data stream starts arriving, the data received within 0-1 s is intercepted and the resulting data segment is added to the task queue as one task; the data received within 1-2 s is intercepted and added as the next task; and so on.
It should be noted that, until the client terminates the video data stream, the stream is continuously divided into tasks that are added to the corresponding task queues. During this time, each task thread executes step 403 to process the tasks in its corresponding task queue.
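A minimal Swift sketch of step 402 follows, building on the `demultiplex` output above; the thread-safe queue shown here is an assumption, since the embodiment does not prescribe a queue implementation.

```swift
import Foundation

// A simple thread-safe queue of tasks; each task is the data segment of one
// video frame together with that frame's timestamp (used in step 403).
final class TaskQueue {
    private var tasks: [(timestamp: TimeInterval, segment: Data)] = []
    private let lock = NSLock()

    func push(timestamp: TimeInterval, segment: Data) {
        lock.lock(); defer { lock.unlock() }
        tasks.append((timestamp, segment))
    }

    // Remove and return the task with the earliest timestamp.
    func popEarliest() -> (timestamp: TimeInterval, segment: Data)? {
        lock.lock(); defer { lock.unlock() }
        guard let i = tasks.indices.min(by: { tasks[$0].timestamp < tasks[$1].timestamp })
        else { return nil }
        return tasks.remove(at: i)
    }
}

// Step 402, sketched: one task thread and one task queue per sub data stream.
// `subStreams` is the output of the demultiplexing sketch above; `body` is the
// per-thread task loop (see the run-loop sketch under step 403).
func startTaskThreads(for subStreams: [String: Data],
                      body: @escaping (TaskQueue) -> Void) -> [String: TaskQueue] {
    var queues: [String: TaskQueue] = [:]
    for streamID in subStreams.keys {
        let queue = TaskQueue()
        queues[streamID] = queue
        let thread = Thread { body(queue) }
        thread.name = "task-\(streamID)"
        thread.start()
    }
    return queues
}
```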
And step 403, for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result.
In this embodiment, when generating a task, the client may also use the timestamp of the video frame as the timestamp of the task, so that the order in which the task thread processes tasks can be determined from the timestamps. The process by which a task thread processes a task is introduced below.
Each task thread cyclically performs the following steps: search the corresponding task queue for the task with the earliest timestamp; start the runloop (run loop) corresponding to the task thread; register the task thread as an observer in the runloop mode, register the task as an input source in the runloop mode, and set a timing source in the runloop mode; then, while the runloop runs according to the timing source, use the task thread to decapsulate, decode and otherwise process the task to obtain a processing result.
The runloop comprises the input sources to be monitored, the timing source, and the registered observers to be notified; here, the input source is the registered task, the timing source is the duration for which the task thread processes one task, and the registered observer is the task thread. While the runloop runs, a mode is specified; the mode corresponds to certain input sources, and only the input sources corresponding to the current mode are monitored, while input sources not corresponding to the mode are paused and can only be monitored once a runloop is started in a mode matching them. In other words, one run of the runloop can only monitor one type of input source of interest.
In this embodiment, one runloop corresponds to one task thread, and after a task thread finishes processing a task, the next task may be registered in the runloop, so that the runloop (run loop) keeps the task thread running continuously. Referring to fig. 5, which shows an interaction diagram of the runloop and the task thread: Input Sources comprise port-based input sources and custom input sources, Timer Sources are the timing sources, handlePort handles a port-based input source, customSrc handles a user-defined input source, mySelector handles a perform-selector source, and timerFired handles the timing source.
It should be noted that tasks in the task queues corresponding to different task threads can be processed in parallel, which speeds up the processing of tasks.
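Since the embodiment describes a runloop-based platform, the task-thread loop of step 403 can be sketched with Foundation's RunLoop/CFRunLoop APIs as follows. This is a sketch under assumptions: `process` stands in for the decapsulation and decoding, and `submitToRenderThread` for the hand-off of step 604; neither name comes from the embodiment.

```swift
import Foundation

// Hypothetical stand-ins: the embodiment does not prescribe the decoder or
// the hand-off mechanism to the rendering thread (step 604).
func process(_ segment: Data) -> Data { segment }          // decapsulate + decode, elided
func submitToRenderThread(_ result: Data) { /* step 604 */ }

// Step 403, sketched: each task thread runs its own run loop.
func processLoop(_ queue: TaskQueue) {
    // Register an observer with this thread's run loop, per the description
    // above; here it merely reacts to run-loop state changes.
    let observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault, CFRunLoopActivity.allActivities.rawValue, true, 0
    ) { _, _ in
        // react to run-loop activity if needed
    }
    CFRunLoopAddObserver(CFRunLoopGetCurrent(), observer, .defaultMode)

    // Timing source: one tick per processing interval; each tick processes
    // the task with the earliest timestamp.
    let timer = Timer(timeInterval: 1.0, repeats: true) { _ in
        if let task = queue.popEarliest() {
            submitToRenderThread(process(task.segment))
        }
    }
    RunLoop.current.add(timer, forMode: .default)
    RunLoop.current.run()   // keeps the task thread alive between tasks
}
```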
And step 404, rendering each processing result to obtain a video frame.
The client may render each processing result to obtain a rendering result, and then notify the hardware to render the rendering results into the video frame. The process of rendering each processing result is described in steps 604-607 and is not repeated here.
The hardware may be a GPU (Graphics Processing Unit) or another processor; this embodiment is not limited in this respect. After the hardware renders the video frame, the video frame can be played at a certain frame rate, that is, displayed on the screen.
In summary, in the method for generating video frames of an online video provided by this embodiment of the application, because a task preempts the processing resources of the task thread that processes it, creating one task queue per task thread avoids the problem of tasks in different task queues preempting the processing resources of the same task thread; this avoids the degraded performance and overheating of the terminal caused by resource preemption, and thus improves the performance of the terminal. In addition, each task thread can process every task in its queue, so the picture of each task can be completely rendered into the video frame, which improves the smoothness of picture playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoids stutters in picture playback, and further improves its smoothness.
Referring to fig. 6, a flowchart of a method for generating a video frame of an online video according to another embodiment of the present application is shown, where the method for generating a video frame of an online video may be applied to a client in a terminal shown in fig. 3. The video frame generation method of the online video comprises the following steps:
step 601, receiving a video data stream, where the video data stream includes different types of sub data streams, and each sub data stream has a different stream identifier.
Step 602, for each sub-data stream, a task thread and a task queue are created, and a data segment belonging to the same video frame in the sub-data stream is added to the task queue as a task.
Step 603, for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result.
The implementation process of steps 601 to 603 is the same as the implementation process of steps 401 to 403, and is not described herein again.
And step 604, sending the obtained processing results to the rendering thread by using each task thread.
The rendering thread is a thread for rendering a processing result of a task.
For each task thread that processes tasks, after the task thread obtains a processing result, it sends the processing result to the rendering thread.
Step 605, each processing result is added to the video buffer by the rendering thread.
The rendering thread adds the processing results to the video buffer in the order in which the processing results were received, and two cases in the adding process will be described below.
1) When the rendering thread receives a processing result at the current moment, the rendering thread directly adds the processing result to the video buffer.
For example, suppose the list cached in the video buffer is: processing result of video task 1 -> processing result of video task 2 -> processing result of gift task 1 -> processing result of gift task 2. If the processing result received by the rendering thread at the current time is the processing result of video task 3, the list becomes: processing result of video task 1 -> processing result of video task 2 -> processing result of gift task 1 -> processing result of gift task 2 -> processing result of video task 3.
2) When the rendering thread receives at least two processing results at the current time:
and if the at least two processing results are determined to correspond to different time stamps, sequentially adding the at least two processing results to the video buffer in the order of the time stamps from early to late.
If it is determined that the at least two processing results correspond to the same timestamp, that is, at least two task threads send processing results corresponding to the same video frame to the rendering thread at the same time, then adding each processing result to the video buffer by using the rendering thread may include the following steps:
in step 6051, at least two processing results corresponding to the same video frame are received with the rendering thread.
In step 6052, the rendering thread is used to determine the processing priority of the sub-data stream corresponding to each processing result.
Since the rendering thread renders the processing results in the order in which they were added to the video buffer, the order in which the processing results are added to the video buffer needs to be determined according to the processing priorities of the sub data streams, to ensure that the processing results of the important tasks in the same video frame are rendered first.
In this embodiment, a priority may be set for each sub data stream according to the importance of that type of sub data stream in the video; the priority of a sub data stream is positively correlated with its importance, i.e. the more important the sub data stream, the higher its priority. Still taking a video data stream that includes a video sub data stream, a gift sub data stream and a barrage sub data stream as an example, the priority of the video sub data stream may be set highest and the priority of the barrage sub data stream lowest.
And step 6053, polling by using the rendering thread according to the processing priorities, and adding the processing result obtained in each round of polling to the video buffer.
For example, if the rendering thread receives the processing result of a video task and the processing result of a gift task at the same time, and the priority of the video sub data stream is higher than that of the gift sub data stream, the rendering thread first polls for a processing result of a video task and, on finding one, adds it to the video buffer; the rendering thread then polls for a processing result of a gift task and, on finding one, adds it to the video buffer.
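A sketch of the priority polling of steps 6052-6053 follows, assuming per-stream inboxes and the stream identifiers "video", "gift" and "barrage"; the identifiers and the `ProcessingResult` type are assumptions, while the priority order follows the example above (video highest, barrage lowest).

```swift
import Foundation

// Hypothetical result type; the embodiment only says a processing result
// carries the timestamp of its video frame.
struct ProcessingResult {
    let frameTimestamp: TimeInterval
    let data: Data
}

// Steps 6052-6053, sketched: when several results for the same video frame
// arrive at once, the rendering thread polls per-stream inboxes in priority
// order and appends whatever it finds to the video buffer, so that
// higher-priority results land (and are later rendered) first.
let priorityOrder = ["video", "gift", "barrage"]   // assumed identifiers

func drainByPriority(inboxes: inout [String: [ProcessingResult]],
                     into videoBuffer: inout [ProcessingResult]) {
    for streamID in priorityOrder {
        if let results = inboxes.removeValue(forKey: streamID) {
            videoBuffer.append(contentsOf: results)
        }
    }
}
```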
And 606, rendering each processing result in the video buffer area in sequence by using the rendering thread to obtain each rendering result.
Rendering each processing result in the video buffer in sequence by using the rendering thread to obtain the rendering results may include the following steps:
step 6061, the rendering context of the process corresponding to the client is assigned to the rendering thread.
The rendering context is used to generate the context environment used when the GPU renders the rendering results to the screen. One process corresponds to one rendering context; the process here is the process corresponding to the client, i.e. the process that created the task thread and the rendering thread.
Step 6062, a processing result is sequentially read from the video buffer using the rendering thread in the order in which the processing results are added to the video buffer.
And 6063, for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result.
Texture in computer graphics covers both texture in the usual sense of an object's surface, even when the surface shows uneven grooves, and the color patterns on a smooth surface of the object; in other words, texture refers to the patterns of an object's surface.
The intermediate data here may be RGB (Red Green Blue) data.
Still taking the list in step 605 as an example: the rendering context is assigned to the rendering thread; the rendering thread reads the processing result of video task 1, creates a texture, adds the texture to the rendering context, renders the processing result of video task 1 to obtain RGB data, and adds the RGB data to the texture to obtain the rendering result of video task 1; the rendering thread then reads the processing result of video task 2 and repeats the same steps to obtain the rendering result of video task 2; the rendering thread then reads the processing result of gift task 1 and repeats the same steps to obtain the rendering result of gift task 1; and so on.
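Steps 6061-6063 can be sketched with OpenGL ES on iOS as one plausible backing API; the embodiment speaks of a rendering context and textures without naming a library, so the `VideoBuffer` and `DecodedResult` types and the `renderToRGB` step are assumptions.

```swift
import Foundation
import OpenGLES

// Hypothetical minimal types for the sketch; the embodiment names a video
// buffer of processing results but no concrete representation.
struct DecodedResult {
    let width: Int, height: Int
    func renderToRGB() -> Data { Data() }   // produce intermediate RGB data, elided
}
final class VideoBuffer {
    private var items: [DecodedResult] = []
    func pop() -> DecodedResult? { items.isEmpty ? nil : items.removeFirst() }
}

// Steps 6061-6063, sketched.
func renderLoop(context: EAGLContext, buffer: VideoBuffer) {
    _ = EAGLContext.setCurrent(context)    // 6061: bind the process's rendering context to this thread
    while let result = buffer.pop() {      // 6062: read results in the order they were added
        var texture: GLuint = 0
        glGenTextures(1, &texture)         // 6063: create a texture in the context...
        glBindTexture(GLenum(GL_TEXTURE_2D), texture)
        let rgb = result.renderToRGB()     // ...render the result to RGB intermediate data...
        rgb.withUnsafeBytes { ptr in
            glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB,
                         GLsizei(result.width), GLsizei(result.height), 0,
                         GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE),
                         ptr.baseAddress)  // ...and attach the RGB data to the texture
        }
    }
}
```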
Referring to fig. 7, which shows a flowchart covering establishing the task queues, processing each task, polling according to the processing priorities, adding the polled processing results to the video buffer, and rendering.
Step 607, when each rendering result corresponding to one video frame is obtained, sending a rendering start instruction to the GPU, where the rendering start instruction is used to instruct the GPU to start rendering each rendering result corresponding to the video frame, so as to obtain the video frame.
In this embodiment, the client could notify the GPU to render and play each rendering result immediately after obtaining it; but since the other rendering results of the same video frame may not be ready at that moment, playback could be unsmooth. Optionally, therefore, the client notifies the GPU to render all rendering results corresponding to a video frame only after all of them have been obtained, and plays the video frame once it is obtained.
When the client receives a background-running instruction from the user, it exits to background running according to the instruction. In the related art, if the client does not close its threads, the threads continue to process tasks and notify the GPU to generate video frames for playback; since the terminal's operating system does not allow a client running in the background to continue playing video frames, the operating system crashes the client to stop the playback. If the client closes its threads instead, the user may forget that the client is in the background, which reduces the utilization rate of the client. In this embodiment, when the client exits to background running, step 608 is also performed to solve these problems.
Step 608, when the background operation instruction is received, sending a rendering stopping instruction to the GPU, where the rendering stopping instruction is used to instruct the GPU to stop rendering the rendering result.
In this embodiment, the client does not close its threads when running in the background: the task threads continue to process tasks and send the processing results to the rendering thread, and the rendering thread adds the processing results to the video buffer but instructs the GPU to stop rendering. The GPU therefore renders nothing, no video frame is played, and the operating system does not crash the client. In addition, since the client does not close its threads, threads such as the audio-processing thread can continue to output the video's sound to remind the user that the client is still running, which increases the utilization rate of the client.
Optionally, the method further includes: and when the background running instruction is received, informing the rendering thread to stop polling.
The purpose of polling is to add the processing results of the important tasks in the same video frame to the video buffer first, so that they are rendered first and the hardware renders and plays the resulting rendering results first. When the client runs in the background, the GPU does not render or play the rendering results, so the order in which the processing results are added to the video buffer no longer matters; that is, the rendering thread can add the at least two processing results corresponding to the same video frame to the video buffer in any order. At this point the client can notify the rendering thread to stop polling, to save resources.
It should be noted that the rendering thread starts polling on its own when it receives at least two processing results at the same time; no instruction is sent to the rendering thread to notify it to start polling.
Optionally, the method further includes: and when a background operation instruction is received, clearing texture data in the drawing context by using the rendering thread, and assigning the drawing context to the process.
When the client runs in the background, texture data also needs to be cleared, otherwise, the residual texture data will affect the video buffer.
In one possible implementation, when the client receives the background-running instruction, it notifies the rendering thread to stop polling; the rendering thread stops polling, clears the texture data in the drawing context, assigns the drawing context back to the process, and sends a stop-rendering instruction to the GPU to instruct it to stop rendering the rendering results. Please refer to the flow illustrated in fig. 8.
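The background-exit flow of fig. 8 can be sketched as follows, assuming UIKit's standard background notification; the `RenderThreadControl` facade and its methods are assumptions standing in for the rendering thread's stop-polling, texture-cleanup and stop-rendering actions described above.

```swift
import UIKit
import OpenGLES

// Hypothetical rendering-thread facade; the embodiment describes these
// actions but prescribes no API.
protocol RenderThreadControl {
    func stopPolling()                       // stop priority polling
    func stopSubmittingToGPU()               // the "stop rendering" instruction
    func perform(_ work: @escaping () -> Void)  // run work on the rendering thread
}

// Background-exit flow (step 608 and fig. 8), sketched: threads keep running,
// so audio can continue, but textures are cleared, the drawing context is
// handed back to the process, and the GPU side stops rendering.
func installBackgroundHandler(renderThread: RenderThreadControl,
                              textures: @escaping () -> [GLuint]) {
    NotificationCenter.default.addObserver(
        forName: UIApplication.didEnterBackgroundNotification,
        object: nil, queue: .main
    ) { _ in
        renderThread.stopPolling()
        renderThread.perform {
            var ids = textures()
            glDeleteTextures(GLsizei(ids.count), &ids)   // clear texture data in the drawing context
            _ = EAGLContext.setCurrent(nil)              // assign the drawing context back to the process
        }
        renderThread.stopSubmittingToGPU()
    }
}
```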
In one example, after the method provided by this embodiment is used, the occupancy rate of the CPU (Central Processing Unit) drops from the 38.9% measured without the method to 25.6%, and the memory occupancy drops from the 250 MB measured without the method to 190 MB, which greatly improves the performance of the terminal. In addition, before the method provided by this embodiment was used, the client had a crash rate of 0.5% when exiting to background running; after the method is used, the client no longer crashes when exiting to background running.
In summary, in the method for generating video frames of an online video provided by this embodiment of the application, because a task preempts the processing resources of the task thread that processes it, creating one task queue per task thread avoids the problem of tasks in different task queues preempting the processing resources of the same task thread; this avoids the degraded performance and overheating of the terminal caused by resource preemption, and thus improves the performance of the terminal. In addition, each task thread can process every task in its queue, so the picture of each task can be completely rendered into the video frame, which improves the smoothness of picture playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoids stutters in picture playback, and further improves its smoothness.
And when the background running instruction is received, informing the rendering thread to stop polling so as to save resources.
When a background operation instruction is received, the rendering thread is utilized to clear the texture data in the rendering context so as to avoid the influence of residual texture data on the video buffer.
Please refer to fig. 9, which shows a flowchart of a method for playing video frames of a live video according to another embodiment of the present application, where the method for playing video frames of a live video can be applied to a live client in a terminal shown in fig. 3. The video frame playing method of the live video comprises the following steps:
step 901, acquiring a live video stream, where the live video stream includes a live image data stream and an interactive data stream, the interactive data stream includes at least one of a gift data stream and a barrage data stream, and the live video stream, the gift data stream, and the barrage data stream have different stream identifications.
The live video stream is a video stream which is pushed to a live client by a server and is played in a live room.
In addition to the live picture data stream, the live video stream includes an interactive data stream generated when the anchor users and audience users interact. Optionally, the interactive data stream includes at least one of a gift data stream and a barrage data stream: the gift data stream is generated when gifts are given, and the barrage data stream is generated when barrages are sent.
In a possible implementation manner, a live broadcast client sends a live broadcast video acquisition request to a server, the server determines a live broadcast video stream according to the live broadcast video acquisition request, and the live broadcast video stream is sent to the live broadcast client.
Step 902, for each data stream, a task thread and a task queue are created, and data segments belonging to the same live video frame in the data stream are added to the task queue as a task.
In this embodiment, the live client creates a task thread and a task queue for each data stream. For example, if the live video stream includes a live image data stream, a bullet screen data stream, and a gift data stream, the live client may create a task thread and a task queue for the live image data stream, create a task thread and a task queue for the bullet screen data stream, and create a task thread and a task queue for the gift data stream.
For each data stream, the live client may divide the data stream into a number of tasks to add to a task queue corresponding to the data stream.
When dividing a data stream into a plurality of tasks, one possible implementation is to intercept data of a fixed duration from the data stream and use the resulting data segment as one task, where the data segment is the data of one live video frame in the data stream. For example, if the fixed duration is 1 s, then after the data stream starts arriving, the data received within 0-1 s is intercepted and the resulting data segment is added to the task queue as one task; the data received within 1-2 s is intercepted and added as the next task; and so on.
Step 903, for the task thread corresponding to a task queue that contains a task, processing the task by using the task thread to obtain a processing result.
The implementation process of step 903 is the same as the implementation process of step 403, and is not described herein again.
And step 904, sending the obtained processing result to the rendering thread by using each task thread.
For each task thread that processes tasks, after the task thread obtains a processing result, it sends the processing result to the rendering thread.
Step 905, adding each processing result to the video buffer by using the rendering thread.
The rendering thread adds the processing results to the video buffer in the order in which the processing results were received, and two cases in the addition process are explained below.
1) When the rendering thread receives a processing result at the current moment, the rendering thread directly adds the processing result to the video buffer.
2) When the rendering thread receives at least two processing results at the current time:
and if the at least two processing results are determined to correspond to different time stamps, sequentially adding the at least two processing results to the video buffer in the order of the time stamps from early to late.
If it is determined that the at least two processing results correspond to the same timestamp, that is, at least two task threads send processing results corresponding to the same live video frame to the rendering thread at the same time, then adding each processing result to the video buffer by using the rendering thread may include the following steps:
step 9051, receiving at least two processing results corresponding to the same live video frame with the rendering thread.
Step 9052, determining, by the rendering thread, a processing priority of the data stream corresponding to each processing result.
Since the rendering thread renders the processing results in the order in which they were added to the video buffer, in order to ensure that the processing results of the important tasks in the same live video frame are rendered first, the order in which the processing results are added to the video buffer may optionally be determined according to the processing priorities of the data streams.
In this embodiment, a priority may be set according to the importance of each data stream; the priority of a data stream is positively correlated with its importance, i.e. the more important the data stream, the higher its priority. Still taking a live video stream that includes a live picture data stream, a gift data stream and a barrage data stream as an example, the priority of the live picture data stream may be set highest and the priority of the barrage data stream lowest.
And step 9053, polling by using the rendering thread according to the processing priorities, and adding the processing result obtained in each round of polling to the video buffer.
For example, if the rendering thread receives the processing result of a live picture task and the processing result of a gift task at the same time, and the priority of the live picture data stream is higher than that of the gift data stream, the rendering thread first polls for a processing result of a live picture task and, on finding one, adds it to the video buffer; the rendering thread then polls for a processing result of a gift task and, on finding one, adds it to the video buffer.
And step 906, rendering each processing result in the video buffer area in sequence by using the rendering thread to obtain each rendering result.
The rendering process includes the following steps:
and 9061, assigning the rendering context of the process corresponding to the live broadcast client to the rendering thread.
The rendering context is used to generate the context environment used when the GPU renders the rendering results to the screen. One process corresponds to one rendering context; the process here is the process corresponding to the live client, i.e. the process that created the task thread and the rendering thread.
Step 9062, a rendering thread is used to sequentially read a processing result from the video buffer in the order in which the processing results were added to the video buffer.
And 9063, for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result.
And step 907, when all rendering results corresponding to one live video frame have been obtained, sending a rendering start instruction to the GPU, where the rendering start instruction is used to instruct the GPU to start rendering the rendering results corresponding to that live video frame to obtain the live video frame, and the live video frame comprises a live picture and interactive data.
In this embodiment, the live client could notify the GPU to render and play each rendering result immediately after obtaining it; but since the other rendering results of the same live video frame may not be ready at that moment, playback could be unsmooth. Optionally, therefore, the GPU is notified to render all rendering results corresponding to a live video frame only after all of them have been obtained, so as to obtain the live video frame.
Step 908, play the live video frame.
When the live broadcast client receives a background running instruction sent by the user, it switches to running in the background according to the instruction. In the related art, if the live broadcast client does not close its threads, the task threads and the rendering thread continue to process tasks and notify the GPU to generate live video frames for playing; since the operating system of the terminal does not allow a live broadcast client running in the background to continue playing live video frames, the operating system may crash the live broadcast client in order to stop it from playing. If, on the other hand, the live broadcast client closes its threads, the user may forget about the live broadcast client running in the background, which reduces its utilization rate. In this embodiment, when the live broadcast client exits to background running, step 909 is further executed to solve the above problems.
Step 909, when the background running instruction is received, a rendering stopping instruction is sent to the GPU, where the rendering stopping instruction is used to instruct the GPU to stop rendering the rendering results.
In this embodiment, the live broadcast client does not close its threads when running in the background: the task threads continue to process tasks and send the processing results to the rendering thread, and the rendering thread continues to add the processing results to the video buffer while instructing the GPU to stop rendering the rendering results. Since the GPU renders nothing, no video frame is played, and the operating system has no reason to crash the live broadcast client. In addition, because the live broadcast client does not close its threads, threads such as the audio processing thread can continue to output the sound of the video, reminding the user that the live broadcast client is still running and thereby improving its utilization rate.
Optionally, the method further includes: when the background running instruction is received, notifying the rendering thread to stop polling.
The purpose of polling is to add the processing result of the more important task in the same live video frame to the video buffer first, so that it is rendered first and the GPU renders and plays its rendering result first. When the live broadcast client runs in the background, the GPU neither renders nor plays the rendering results, so the order in which processing results are added to the video buffer no longer matters; that is, the rendering thread may add the at least two processing results corresponding to the same live video frame to the video buffer in any order. At this point, the live broadcast client can notify the rendering thread to stop polling so as to save resources.
It should be noted that the rendering thread starts polling on its own when it receives at least two processing results at the same time; the live broadcast client does not send an instruction to the rendering thread to tell it to start polling.
Optionally, the method further includes: when the background running instruction is received, clearing the texture data in the rendering context by using the rendering thread, and assigning the rendering context back to the process.
When the live broadcast client runs in the background, the texture data also needs to be cleared; otherwise, the residual texture data would affect the video buffer.
In one possible implementation, when the live broadcast client receives a background running instruction, it notifies the rendering thread to stop polling; the rendering thread stops polling, clears the texture data in the rendering context, assigns the rendering context back to the process, and sends a rendering stopping instruction to the GPU to instruct the GPU to stop rendering the rendering results.
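As a rough illustration of this sequence, the sketch below assumes an OpenGL ES/EGL backend, which this application does not mandate; in_background, frame_textures, OnBackgroundInstruction, and SendStopRenderInstruction are names introduced here, and the stop instruction itself is whatever signal the client uses toward the GPU.

```cpp
#include <atomic>
#include <vector>

#include <EGL/egl.h>
#include <GLES2/gl2.h>

// Hypothetical helper: delivers the rendering stopping instruction that
// tells the GPU to stop rendering the rendering results (stubbed here).
void SendStopRenderInstruction() { /* signal the GPU-facing layer */ }

std::atomic<bool> in_background{false};  // polled by the rendering thread

// Sketch of the background sequence: stop polling, clear the texture data
// in the rendering context, hand the context back to the process, and
// instruct the GPU to stop rendering.
void OnBackgroundInstruction(std::vector<GLuint>& frame_textures) {
    in_background.store(true);  // the rendering thread stops polling

    if (!frame_textures.empty()) {
        glDeleteTextures(static_cast<GLsizei>(frame_textures.size()),
                         frame_textures.data());
        frame_textures.clear();  // no residual texture data is left behind
    }

    // Releasing the context from this thread returns it to the process.
    eglMakeCurrent(eglGetCurrentDisplay(), EGL_NO_SURFACE, EGL_NO_SURFACE,
                   EGL_NO_CONTEXT);

    SendStopRenderInstruction();  // the GPU stops rendering the results
}
```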
In summary, in the video frame playing method for live video provided in the embodiment of the present application, because a task occupies the processing resources of the task thread that processes it, creating one task queue for each task thread prevents tasks in different task queues from contending for the processing resources of the same task thread. This avoids the performance degradation and overheating of the terminal caused by such contention and improves the terminal's performance. In addition, since each task thread can process each of its tasks, the picture of every task in a live video frame can be rendered completely, which improves the fluency of live video playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoiding stutter in the live picture and improving the smoothness of playback.
When the background running instruction is received, the rendering thread is notified to stop polling, which saves resources.
When the background running instruction is received, the rendering thread is used to clear the texture data in the rendering context, so that residual texture data does not affect the video buffer.
Referring to fig. 10, a block diagram of a video frame generation apparatus for online video according to an embodiment of the present application is shown, where the video frame generation apparatus for online video may be applied to a client in the terminal shown in fig. 3. The video frame generation apparatus for online video includes:
a receiving module 1010, configured to receive a video data stream, where the video data stream includes different types of sub data streams, and each sub data stream has a different stream identifier;
a creating module 1020, configured to create a task thread and a task queue for each sub data stream received by the receiving module 1010, and add a data segment belonging to the same video frame in the sub data stream as a task to the task queue;
a processing module 1030, configured to, for a task thread corresponding to a task queue having a task, process the task by using the task thread to obtain a processing result;
a generating module 1040, configured to render each processing result obtained by the processing module 1030, so as to obtain a video frame.
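Taken together, the receiving, creating, and processing modules amount to a per-stream worker pipeline. The C++ sketch below shows one plausible shape of such a pipeline; Task, ProcessingResult, Process, StreamWorker, and OnSegment are illustrative names rather than identifiers from this application.

```cpp
#include <condition_variable>
#include <cstdint>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Task {
    std::string stream_id;         // the sub data stream it came from
    std::vector<uint8_t> segment;  // data segment of one video frame
};

struct ProcessingResult { /* decoded data for the rendering thread */ };

// Hypothetical stand-in for decapsulating and decoding one task.
ProcessingResult Process(const Task&) { return {}; }

// One task thread plus one task queue per sub data stream, so tasks from
// different streams never occupy the same thread's processing resources.
class StreamWorker {
public:
    StreamWorker() : thread_([this] { Run(); }) {}
    ~StreamWorker() {
        { std::lock_guard<std::mutex> l(m_); done_ = true; }
        cv_.notify_one();
        thread_.join();
    }
    void Enqueue(Task t) {
        { std::lock_guard<std::mutex> l(m_); q_.push(std::move(t)); }
        cv_.notify_one();
    }

private:
    void Run() {
        for (;;) {
            std::unique_lock<std::mutex> l(m_);
            cv_.wait(l, [this] { return done_ || !q_.empty(); });
            if (q_.empty()) return;  // done_ was set and queue is drained
            Task t = std::move(q_.front());
            q_.pop();
            l.unlock();
            ProcessingResult r = Process(t);  // decapsulate + decode
            (void)r;  // handed to the rendering thread in the full flow
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Task> q_;
    bool done_ = false;
    std::thread thread_;
};

// The creating module's effect: a worker is created on first sight of a
// stream identifier, and every later segment of that stream goes to it.
std::map<std::string, StreamWorker> workers;

void OnSegment(Task t) {
    StreamWorker& w = workers[t.stream_id];  // creates the worker if absent
    w.Enqueue(std::move(t));
}
```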
Optionally, the generating module 1040 is further configured to:
sending the obtained processing results to the rendering thread by using each task thread;
adding each processing result to a video buffer area by using a rendering thread;
rendering each processing result in the video buffer area in sequence by using the rendering thread to obtain each rendering result;
and informing the GPU to render each rendering result to obtain a video frame.
Optionally, when there are at least two task threads sending processing results corresponding to the same video frame to the rendering thread at the same time, the generating module 1040 is further configured to:
receiving at least two processing results corresponding to the same video frame by using a rendering thread;
determining the processing priority of the sub-data stream corresponding to each processing result by using the rendering thread;
and polling by using the rendering thread according to the processing priority, and adding a processing result obtained by polling each time into the video buffer area.
Optionally, the generating module 1040 is further configured to notify the rendering thread to stop polling when the background running instruction is received.
Optionally, the generating module 1040 is further configured to:
assigning the rendering context of the process corresponding to the client to the rendering thread;
reading a processing result from the video buffer area in sequence according to the sequence of the processing result added to the video buffer area by using the rendering thread;
and for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result.
Optionally, the apparatus further comprises:
and the clearing module is used for clearing the texture data in the rendering context by using the rendering thread and assigning the rendering context back to the process when the background running instruction is received.
Optionally, the generating module 1040 is further configured to send a rendering start instruction to the GPU each time all rendering results corresponding to one video frame have been obtained, where the rendering start instruction is used to instruct the GPU to start rendering the rendering results corresponding to that video frame, so as to obtain the video frame.
Optionally, the apparatus further comprises:
and the sending module is used for sending a rendering stopping instruction to the GPU when the background running instruction is received, wherein the rendering stopping instruction is used for indicating the GPU to stop rendering the rendering result.
In summary, in the video frame generation apparatus for online video provided in the embodiment of the present application, because a task occupies the processing resources of the task thread that processes it, creating one task queue for each task thread prevents tasks in different task queues from contending for the processing resources of the same task thread. This avoids the performance degradation and overheating of the terminal caused by such contention and improves the terminal's performance. In addition, since each task thread can process each of its tasks, the picture of every task in a video frame can be rendered completely, which improves the smoothness of playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoiding stutter during playback and further improving smoothness.
When the background running instruction is received, the rendering thread is notified to stop polling, which saves resources.
When the background running instruction is received, the rendering thread is used to clear the texture data in the rendering context, so that residual texture data does not affect the video buffer.
Referring to fig. 11, a block diagram of a video frame playing apparatus for live video according to an embodiment of the present application is shown, where the video frame playing apparatus for live video can be applied to a live client in the terminal shown in fig. 3. The video frame playing apparatus for live video includes:
an obtaining module 1110, configured to obtain a live video stream, where the live video stream includes a live picture data stream and an interactive data stream, the interactive data stream includes at least one of a gift data stream and a barrage data stream, and the live picture data stream, the gift data stream, and the barrage data stream have different stream identifiers;
a creating module 1120, configured to create a task thread and a task queue for each data stream obtained by the obtaining module 1110, and add a data segment belonging to the same live video frame in the data stream as a task to the task queue;
the processing module 1130 is configured to, for a task thread corresponding to a task queue having a task, process the task by using the task thread to obtain a processing result;
a generating module 1140, configured to render each processing result obtained by the processing module 1130 to obtain a live video frame, where the live video frame includes a live frame and interactive data;
a playing module 1150, configured to play the live video frames generated by the generating module 1140.
Optionally, the generating module 1140 is further configured to:
sending the processing result obtained by each task thread to a rendering thread;
adding each processing result to a video buffer by using a rendering thread;
rendering each processing result in the video buffer area in sequence by using the rendering thread to obtain each rendering result;
and informing the GPU to render each rendering result to obtain a live video frame.
Optionally, when there are at least two task threads sending processing results corresponding to the same live video frame to the rendering thread at the same time, the generating module 1140 is further configured to:
receiving at least two processing results corresponding to the same live video frame by using a rendering thread;
determining the processing priority of the data stream corresponding to each processing result by using the rendering thread;
and polling by utilizing the rendering thread according to the processing priority, and adding a processing result obtained by polling each time into the video buffer.
Optionally, the generating module 1140 is further configured to notify the rendering thread to stop polling when the background running instruction is received.
Optionally, the generating module 1140 is further configured to:
assigning a rendering context of a process corresponding to a live broadcast client to a rendering thread;
reading a processing result from the video buffer area in sequence by using the rendering thread according to the sequence of the processing result added to the video buffer area;
and for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result.
Optionally, the apparatus further comprises:
and the clearing module is used for clearing the texture data in the rendering context by using the rendering thread and assigning the rendering context back to the process when the background running instruction is received.
Optionally, the generating module 1140 is further configured to send a rendering start instruction to the GPU each time all rendering results corresponding to one live video frame have been obtained, where the rendering start instruction is used to instruct the GPU to start rendering the rendering results corresponding to that live video frame, so as to obtain the live video frame.
Optionally, the apparatus further comprises:
and the sending module is used for sending a rendering stopping instruction to the GPU when the background running instruction is received, wherein the rendering stopping instruction is used for indicating the GPU to stop rendering the rendering result.
In summary, in the video frame playing apparatus for live video provided in the embodiment of the present application, because a task occupies the processing resources of the task thread that processes it, creating one task queue for each task thread prevents tasks in different task queues from contending for the processing resources of the same task thread. This avoids the performance degradation and overheating of the terminal caused by such contention and improves the terminal's performance. In addition, since each task thread can process each of its tasks, the picture of every task in a video frame can be rendered completely, which improves the smoothness of playback. Furthermore, processing tasks in parallel on multiple task threads speeds up task processing, avoiding stutter during playback and further improving smoothness.
When the background running instruction is received, the rendering thread is notified to stop polling, which saves resources.
When the background running instruction is received, the rendering thread is used to clear the texture data in the rendering context, so that residual texture data does not affect the video buffer.
Fig. 12 shows a block diagram of a terminal 1200 according to an exemplary embodiment of the present application. The terminal 1200 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 1200 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, terminal 1200 includes: a processor 1201 and a memory 1202.
The processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1201 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1201 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1202 can include one or more computer-readable storage media, which can be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is configured to store at least one instruction for execution by processor 1201 to implement a video frame generation method for online video and/or a video frame playback method for live video provided by method embodiments herein.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203 and at least one peripheral. The processor 1201, memory 1202, and peripheral interface 1203 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1203 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, touch display 1205, camera 1206, audio circuitry 1207, pointing component 1208, and power source 1209.
The peripheral interface 1203 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, memory 1202, and peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1201, the memory 1202 and the peripheral device interface 1203 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with a communication network and other communication devices by electromagnetic signals. The radio frequency circuit 1204 converts an electric signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1204 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals through at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generation mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1204 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1205 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1205 is a touch display screen, the display screen 1205 also has the ability to acquire touch signals on or over its surface. The touch signal may be input to the processor 1201 as a control signal for processing. At this point, the display screen 1205 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1205, providing the front panel of the terminal 1200; in other embodiments, there may be at least two display screens 1205, respectively disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display screen 1205 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1200. The display screen 1205 may even be arranged as a non-rectangular irregular figure, i.e., a shaped screen. The display screen 1205 may be an LCD (Liquid Crystal Display) panel or an OLED (Organic Light-Emitting Diode) panel.
Camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals into the processor 1201 for processing or inputting the electric signals into the radio frequency circuit 1204 to achieve voice communication. For stereo capture or noise reduction purposes, multiple microphones may be provided at different locations of terminal 1200. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is configured to locate the current geographic location of the terminal 1200 to implement navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 1209 is used to provide power to various components within the terminal 1200. The power source 1209 may be alternating current, direct current, disposable or rechargeable. When the power source 1209 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyro sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 can detect magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 1201 may control the touch display 1205 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. From the data collected by the gyro sensor 1212, the processor 1201 may perform the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 1213 may be disposed on a side bezel of terminal 1200 and/or an underlying layer of touch display 1205. When the pressure sensor 1213 is disposed on the side frame of the terminal 1200, the user's holding signal of the terminal 1200 can be detected, and the processor 1201 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at a lower layer of the touch display screen 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 1205. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 1214 is used for collecting a fingerprint of the user, and the processor 1201 identifies the user according to the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical button or vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical button or vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, the processor 1201 may control the display brightness of the touch display screen 1205 according to the ambient light intensity collected by the optical sensor 1215. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1205 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1205 is turned down. In another embodiment, the processor 1201 may also dynamically adjust the shooting parameters of the camera assembly 1206 according to the ambient light intensity collected by the optical sensor 1215.
The proximity sensor 1216, also known as a distance sensor, is typically disposed on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front surface of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually decreases, the processor 1201 controls the touch display screen 1205 to switch from the screen-on state to the screen-off state; when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually increases, the processor 1201 controls the touch display screen 1205 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 12 is not intended to be limiting of terminal 1200 and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components may be used.
An embodiment of the present application provides a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the video frame generation method for online video described above, or to implement the video frame playing method for live video described above.
An embodiment of the present application provides an electronic device including a processor and a memory, where the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the video frame generation method for online video described above, or to implement the video frame playing method for live video described above.
It should be noted that the division into the functional modules described above is merely illustrative for the video frame generation apparatus for online video when generating video frames and for the video frame playing apparatus for live video when playing video frames. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of each apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the video frame generation apparatus for online video provided in the above embodiments belongs to the same concept as the video frame generation method for online video, and the video frame playing apparatus for live video belongs to the same concept as the video frame playing method for live video; their specific implementation processes are detailed in the method embodiments and are not repeated here.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description should not be taken as limiting the embodiments of the present application, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A method for generating video frames of an online video, which is used in a client, the method comprising:
receiving a video data stream, wherein the video data stream contains different types of sub data streams, and each sub data stream has a different stream identifier; the video data stream at least comprises a video sub-data stream, a bullet screen sub-data stream and an advertisement sub-data stream;
for each sub data stream, creating a task thread and a task queue, and adding a data segment belonging to the same video frame in the sub data stream as a task into the task queue, wherein the data segment of each task is intercepted in the sub data stream according to a fixed time length, and the data segments of different tasks respectively correspond to different time periods in the sub data stream;
for a task thread corresponding to a task queue with a task, processing the task by using the task thread to obtain a processing result, wherein the processing comprises the following steps: decapsulating and decoding;
sending the processing result obtained by each task thread to a rendering thread;
adding each processing result to a video buffer by using the rendering thread;
assigning the rendering context of the process corresponding to the client to the rendering thread;
reading a processing result from the video buffer area in sequence according to the sequence of the processing result added to the video buffer area by using the rendering thread;
for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result;
informing a graphic processor GPU to render each rendering result to obtain a video frame,
wherein when there are at least two task threads sending processing results corresponding to the same video frame to the rendering thread at the same time, said adding, by the rendering thread, each processing result to a video buffer comprises:
receiving at least two processing results corresponding to the same video frame by using the rendering thread;
determining the processing priority of the sub-data stream corresponding to each processing result by using the rendering thread;
and polling is carried out by utilizing the rendering thread according to the processing priority, and one processing result obtained by each polling is added into the video buffer, so that the at least two processing results are added into the video buffer in descending order of processing priority.
2. The method of claim 1, further comprising:
and when a background running instruction is received, informing the rendering thread to stop polling.
3. The method of claim 1, further comprising:
and when a background running instruction is received, clearing texture data in the rendering context by using the rendering thread, and assigning the rendering context to the process.
4. The method of claim 1, wherein notifying a Graphics Processor (GPU) of rendering of each rendering result comprises:
and when each rendering result corresponding to one video frame is obtained each time, sending a rendering starting instruction to a Graphics Processing Unit (GPU), wherein the rendering starting instruction is used for instructing the GPU to start rendering each rendering result corresponding to the video frame, so as to obtain the video frame.
5. The method according to any one of claims 1 to 4, further comprising:
and when a background running instruction is received, sending a rendering stopping instruction to the GPU, wherein the rendering stopping instruction is used for instructing the GPU to stop rendering the rendering result.
6. A video frame playing method of a live video is used in a live client, and the method comprises the following steps:
acquiring a live video stream, wherein the live video stream comprises a live picture data stream and an interactive data stream, the interactive data stream at least comprises one of a gift data stream and a barrage data stream, and the live picture data stream, the gift data stream and the barrage data stream have different stream identifications;
for each data stream, creating a task thread and a task queue, and adding data segments belonging to the same live video frame in the data stream as a task into the task queue, wherein the data segment of each task is intercepted in the data stream according to fixed time length, and the data segments of different tasks respectively correspond to different time periods in the data stream;
for a task thread corresponding to a task queue with a task, processing the task by using the task thread to obtain a processing result, wherein the processing comprises: decapsulating and decoding;
sending the obtained processing results to a rendering thread by using each task thread;
adding each processing result to a video buffer by using the rendering thread;
assigning the rendering context of the process corresponding to the live broadcast client to the rendering thread;
reading a processing result from the video buffer area in sequence according to the sequence of the processing result added to the video buffer area by using the rendering thread;
for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result;
informing a Graphic Processing Unit (GPU) to render each rendering result to obtain a live video frame, wherein the live video frame comprises a live frame and interactive data;
the live video frame is played back and the live video frame is played back,
wherein when there are at least two task threads sending processing results corresponding to the same live video frame to the rendering thread at the same time, adding each processing result to a video buffer by using the rendering thread, comprises:
receiving at least two processing results corresponding to the same live video frame by using the rendering thread;
determining the processing priority of the data stream corresponding to each processing result by using the rendering thread;
and polling according to the processing priority by utilizing the rendering thread, and adding a processing result obtained by polling each time into the video buffer area, so that the at least two processing results are added into the video buffer area in the order from the highest processing priority to the lowest processing priority.
7. An apparatus for generating video frames of an online video, the apparatus being used in a client, the apparatus comprising:
the receiving module is used for receiving video data streams, wherein the video data streams contain different types of sub data streams, and each sub data stream has different stream identifications; the video data stream at least comprises a video sub-data stream, a bullet screen sub-data stream and an advertisement sub-data stream;
a creating module, configured to create a task thread and a task queue for each sub data stream received by the receiving module, and add a data segment belonging to the same video frame in the sub data stream as a task to the task queue, where the data segment of each task is intercepted in the sub data stream according to a fixed time duration, and the data segments of different tasks respectively correspond to different time periods in the sub data stream;
a processing module, configured to, for a task thread corresponding to a task queue having a task, process the task by using the task thread to obtain a processing result, where the processing includes: decapsulating and decoding;
the generating module is used for sending the processing result obtained by each task thread to the rendering thread; adding each processing result to a video buffer by using the rendering thread; assigning the rendering context of the process corresponding to the client to the rendering thread; reading a processing result from the video buffer area in sequence by using the rendering thread according to the sequence of the processing result added to the video buffer area; for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result; informing a graphic processor GPU to render each rendering result to obtain a video frame,
wherein when there are at least two task threads sending processing results corresponding to the same video frame to the rendering thread at the same time, adding each processing result to a video buffer by using the rendering thread comprises:
receiving at least two processing results corresponding to the same video frame by using the rendering thread;
determining the processing priority of the sub-data stream corresponding to each processing result by using the rendering thread;
and polling is carried out by utilizing the rendering thread according to the processing priority, and one processing result obtained by each polling is added into the video buffer, so that the at least two processing results are added into the video buffer in descending order of processing priority.
8. A video frame playing device of live video is used in a live client, and the device comprises:
the acquisition module is used for acquiring a live video stream, wherein the live video stream comprises a live picture data stream and an interactive data stream, the interactive data stream at least comprises one of a gift data stream and a barrage data stream, and the live picture data stream, the gift data stream and the barrage data stream have different stream identifications;
the creation module is used for creating a task thread and a task queue for each data stream obtained by the acquisition module, and adding a data segment belonging to the same live video frame in the data stream as a task into the task queue, wherein the data segment of each task is intercepted in the data stream according to fixed time length, and the data segments of different tasks respectively correspond to different time periods in the data stream;
a processing module, configured to, for a task thread corresponding to a task queue having a task, process the task by using the task thread to obtain a processing result, where the processing includes: decapsulating and decoding;
the generating module is used for sending the processing result obtained by each task thread to the rendering thread; adding each processing result to a video buffer by using the rendering thread; assigning the rendering context of the process corresponding to the live broadcast client to the rendering thread; reading a processing result from the video buffer area in sequence according to the sequence of the processing result added to the video buffer area by using the rendering thread; for each processing result, creating a texture by using the rendering thread, adding the texture into the rendering context, rendering the processing result to obtain intermediate data, and adding the intermediate data into the texture to obtain a rendering result; informing a Graphic Processing Unit (GPU) to render each rendering result to obtain a live video frame, wherein the live video frame comprises a live frame and interactive data;
a playing module for playing the live video frame generated by the generating module,
wherein when there are at least two task threads that send processing results corresponding to the same live video frame to the rendering thread at the same time, said adding, by the rendering thread, each processing result to a video buffer, comprises:
receiving at least two processing results corresponding to the same live video frame by using the rendering thread;
determining the processing priority of the data stream corresponding to each processing result by using the rendering thread;
and polling is carried out by utilizing the rendering thread according to the processing priority, and one processing result obtained by each polling is added into the video buffer, so that the at least two processing results are added into the video buffer in descending order of processing priority.
9. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video frame generation method of the online video according to any one of claims 1 to 5, or the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the video frame playing method of the live video according to claim 6.
10. An electronic device, comprising a processor and a memory, wherein the memory stores at least one instruction, and the instruction is loaded and executed by the processor to implement the video frame generation method of the online video according to any one of claims 1 to 5, or the instruction is loaded and executed by the processor to implement the video frame playing method of the live video according to claim 6.
CN201810398279.1A 2018-04-28 2018-04-28 Method and device for generating video frame of online video, storage medium and equipment Active CN110213636B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810398279.1A CN110213636B (en) 2018-04-28 2018-04-28 Method and device for generating video frame of online video, storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810398279.1A CN110213636B (en) 2018-04-28 2018-04-28 Method and device for generating video frame of online video, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN110213636A CN110213636A (en) 2019-09-06
CN110213636B (en) 2023-01-10

Family

ID=67778957

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810398279.1A Active CN110213636B (en) 2018-04-28 2018-04-28 Method and device for generating video frame of online video, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN110213636B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111400038A (en) * 2020-03-10 2020-07-10 山东汇贸电子口岸有限公司 Video and picture multi-resolution self-adaptive video watermarking method and system
CN113760394B (en) * 2020-06-03 2022-05-13 阿里巴巴集团控股有限公司 Data processing method and device, electronic equipment and storage medium
CN114071224B (en) * 2020-07-31 2023-08-25 腾讯科技(深圳)有限公司 Video data processing method, device, computer equipment and storage medium
CN112423111A (en) * 2020-11-05 2021-02-26 上海哔哩哔哩科技有限公司 Graphic engine and graphic processing method suitable for player
CN114697705B (en) * 2020-12-29 2024-03-22 深圳云天励飞技术股份有限公司 Video stream object processing method and device, video stream processing system and electronic equipment
CN112711477B (en) * 2021-03-29 2021-06-25 北京拓课网络科技有限公司 Method and device for switching application programs and electronic equipment
CN113360708A (en) * 2021-05-31 2021-09-07 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and storage medium
CN114157918A (en) * 2021-11-02 2022-03-08 统信软件技术有限公司 Media file playing method and device, computing equipment and readable storage medium
CN113923519B (en) * 2021-11-11 2024-02-13 深圳万兴软件有限公司 Video rendering method, device, computer equipment and storage medium
CN114339415B (en) * 2021-12-23 2024-01-02 天翼云科技有限公司 Client video playing method and device, electronic equipment and readable medium
CN114630184A (en) * 2022-03-23 2022-06-14 广州方硅信息技术有限公司 Video rendering method, device, equipment and computer readable storage medium
CN116761032B (en) * 2023-08-18 2024-04-23 荣耀终端有限公司 Video playing method, readable medium and electronic device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103593231B (en) * 2012-08-14 2016-06-01 腾讯科技(深圳)有限公司 A kind of data processing method, device and mobile terminal
CN105898538A (en) * 2015-12-14 2016-08-24 乐视网信息技术(北京)股份有限公司 Play method and device and mobile terminal equipment based on Android platform
CN105933641B (en) * 2016-05-18 2019-09-17 阿里巴巴集团控股有限公司 User Status based reminding method and device in video communication
CN106095506A (en) * 2016-06-14 2016-11-09 乐视控股(北京)有限公司 A kind of page loading method and device
US11197010B2 (en) * 2016-10-07 2021-12-07 Microsoft Technology Licensing, Llc Browser-based video decoder using multiple CPU threads
CN107493510B (en) * 2017-09-19 2020-03-17 武汉斗鱼网络科技有限公司 Live stream playing method and device in live broadcast room, computer storage medium and equipment
CN107911709A (en) * 2017-11-17 2018-04-13 广州酷狗计算机科技有限公司 live interface display method, device and terminal

Also Published As

Publication number Publication date
CN110213636A (en) 2019-09-06

Similar Documents

Publication Publication Date Title
CN110213636B (en) Method and device for generating video frame of online video, storage medium and equipment
CN108966008B (en) Live video playback method and device
CN111147878B (en) Stream pushing method and device in live broadcast and computer storage medium
CN109874043B (en) Video stream sending method, video stream playing method and video stream playing device
CN109660817B (en) Video live broadcast method, device and system
CN110213608B (en) Method, device, equipment and readable storage medium for displaying virtual gift
CN111093108B (en) Sound and picture synchronization judgment method and device, terminal and computer readable storage medium
CN110139116B (en) Live broadcast room switching method and device and storage medium
CN109413453B (en) Video playing method, device, terminal and storage medium
CN108174275B (en) Image display method and device and computer readable storage medium
CN112118477B (en) Virtual gift display method, device, equipment and storage medium
US20220191557A1 (en) Method for displaying interaction data and electronic device
CN109194972B (en) Live stream acquisition method and device, computer equipment and storage medium
CN111586431B (en) Method, device and equipment for live broadcast processing and storage medium
CN108600778B (en) Media stream transmitting method, device, system, server, terminal and storage medium
CN113490010B (en) Interaction method, device and equipment based on live video and storage medium
CN111918090A (en) Live broadcast picture display method and device, terminal and storage medium
CN111107389A (en) Method, device and system for determining live broadcast watching time length
CN111586444B (en) Video processing method and device, electronic equipment and storage medium
CN111669640B (en) Virtual article transfer special effect display method, device, terminal and storage medium
CN108579075B (en) Operation request response method, device, storage medium and system
CN111787347A (en) Live broadcast time length calculation method, live broadcast display method, device and equipment
CN114116053A (en) Resource display method and device, computer equipment and medium
CN110662105A (en) Animation file generation method and device and storage medium
CN110149491B (en) Video encoding method, video decoding method, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant