CN113596495B - Live broadcast push stream processing method and device, equipment and medium thereof - Google Patents

Live broadcast push stream processing method and device, equipment and medium thereof

Info

Publication number
CN113596495B
CN113596495B (application CN202110857018.3A)
Authority
CN
China
Prior art keywords
video
live
application program
live broadcast
frame
Prior art date
Legal status
Active
Application number
CN202110857018.3A
Other languages
Chinese (zh)
Other versions
CN113596495A
Inventor
廖国光
黄志义
杨力群
郭鹏飞
郑潇洵
黄煜
Current Assignee
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202110857018.3A
Publication of CN113596495A
Application granted
Publication of CN113596495B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a live push-stream processing method, together with a corresponding device, equipment and medium. The method comprises the following steps: acquiring a video frame according to configuration information of a live push-stream page; caching the video frame in a shared buffer to obtain addressing information of the video frame in the shared buffer; transferring the addressing information of the video frame to the live push-stream page, so as to control the live push-stream page to load the corresponding video frame from the shared buffer according to the addressing information; and pushing a live stream to a live room, wherein the live stream comprises the video frames loaded in the live push-stream page. The application constructs a novel video frame transfer mode: a shared buffer is built for accessing video frames, and frames are passed by address identifier rather than by the traditional copy-based transfer. This reduces device performance consumption, improves the overall efficiency of video drawing, and improves the synchronization between the video picture played in the live room and the video picture captured at the anchor side.

Description

Live broadcast push stream processing method and device, equipment and medium thereof
Technical Field
The application relates to the field of network live broadcasting, and in particular to a live push-stream processing method, together with the device, equipment and non-volatile storage medium corresponding to the method.
Background
Current network live broadcast platforms provide corresponding live applications, through which an anchor user can capture the graphical user interface of an application output on the device, or the video picture shot by a camera, render the captured data into video frames, and broadcast the video frames to a live room as a live stream for output and playback, so that viewer users can watch, through the live room, the live content captured by the anchor user.
The video rendering module that captures and renders video frames at the anchor side is generally built on WebGL: it copies each captured and rendered video frame and passes the copy to the live room page for output and playback. Nowadays, however, as networks and devices are upgraded, users pursue live pictures with higher bit rates and resolutions, so the file size of the video frames captured and rendered by the video rendering module keeps growing. Processing these frames takes the modules longer, so the video picture output in the live room can no longer be output in synchronization with the picture captured at the anchor side; copying frames of ever larger file size also consumes more device performance, so the anchor-side device may fail to run the corresponding live application smoothly. In particular, if the anchor-side device cannot smoothly run the game application being live-streamed, the live broadcast effect is greatly reduced.
In view of the video frame delivery problems of the various existing live applications, the present inventors have investigated this problem accordingly.
Disclosure of Invention
The application aims to meet the needs of users, and provides a live push-stream processing method, together with a corresponding device, electronic equipment and a non-volatile storage medium.
In order to achieve the purpose of the application, the following technical scheme is adopted:
the application provides a live broadcast plug flow processing method which is suitable for one of the purposes of the application, and comprises the following steps:
acquiring a video frame according to configuration information of a live broadcast plug-flow page;
caching the video frames into a shared buffer area to obtain addressing information of the video frames in the shared buffer area;
transmitting the addressing information of the video frames to the live broadcast plug flow page so as to control the live broadcast plug flow page to load the corresponding video frames from the shared buffer zone according to the addressing information;
pushing a live stream to a live broadcasting room, wherein the live stream comprises the video frames loaded in the live broadcasting push stream page.
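Read together, the four steps form a producer/consumer pair around the shared buffer: the renderer writes a frame once, and only its address crosses the application boundary. The following minimal C++ sketch illustrates this control flow only; all identifiers (VideoFrame, SharedBuffer, AddressingInfo and so on) are illustrative assumptions, not names used by the patent.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// All names below are illustrative assumptions, not the patent's own.
struct VideoFrame { int width = 0, height = 0; std::vector<uint8_t> rgba; };
struct AddressingInfo { std::size_t offset = 0, byteSize = 0; };

// Stand-in for the shared buffer (shared memory or a shared texture).
class SharedBuffer {
    std::vector<uint8_t> storage_;
public:
    AddressingInfo store(const VideoFrame& f) {          // step 2: cache + address
        AddressingInfo a{storage_.size(), f.rgba.size()};
        storage_.insert(storage_.end(), f.rgba.begin(), f.rgba.end());
        return a;                                        // only this travels on
    }
    std::vector<uint8_t> load(const AddressingInfo& a) const {  // page-side fetch
        return {storage_.begin() + a.offset,
                storage_.begin() + a.offset + a.byteSize};
    }
};

int main() {
    SharedBuffer shared;
    VideoFrame frame{2, 1, {255, 0, 0, 255, 0, 255, 0, 255}};  // step 1: acquired frame
    AddressingInfo addr = shared.store(frame);                 // step 2
    // step 3: pass `addr` (not the pixels) to the push-stream page, which loads:
    std::vector<uint8_t> pixels = shared.load(addr);
    // step 4: the loaded frame joins the live stream pushed to the live room.
    return pixels.empty() ? 1 : 0;
}
```

In the claimed method the store and load sides live in different modules (the WebGL renderer and the browser kernel), which is what makes passing only the address worthwhile.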
In a further embodiment, the step of acquiring a video frame according to the configuration information of the live push-stream page comprises:
displaying an application list in the live push-stream page, wherein the application list comprises at least one application program adapted to provide the video frames;
responding to an acquisition instruction acting on any application program in the application list, so as to invoke a video rendering module to capture and render the graphical user interface of that application program, and acquiring the video frames captured and rendered by the video rendering module.
In a further embodiment, the step of acquiring a video frame according to the configuration information of the live push-stream page comprises:
responding to a video acquisition instruction, and determining the local cache identifier pointed to by the video acquisition instruction;
acquiring the video data corresponding to the local cache identifier from the storage space of the device, so as to acquire the video frame from the video data.
In a further embodiment, the step of acquiring a video frame according to the configuration information of the live push-stream page comprises:
responding to a composition instruction of the live push-stream page, and determining the video elements pointed to by the composition instruction;
invoking a video rendering module to composite the video elements at the composition positions in the video picture specified by the composition instruction;
acquiring the video frames for which composition has been completed.
In a further embodiment, in the step of acquiring a video frame according to the configuration information of the live push-stream page, a video rendering module is invoked to perform video rendering according to the configuration information, so as to acquire the fully rendered video frame.
In a preferred embodiment, the step of caching the video frame in a shared buffer to obtain its addressing information in the shared buffer comprises:
the video rendering module caches the rendered video frame in the shared buffer;
the video rendering module determines the buffer address of the video frame in the shared buffer;
the video rendering module encapsulates the buffer address into the addressing information of the video frame.
In a preferred embodiment, the step of transferring the addressing information of the video frame to the live push-stream page, so as to control the live push-stream page to load the corresponding video frame from the shared buffer according to the addressing information, comprises:
a browser kernel in the live push-stream page receives the addressing information pushed by the video rendering module;
the browser kernel acquires, from the shared buffer, the video frame pointed to by the buffer address contained in the addressing information;
the browser kernel loads the video frame into a video playing window of the live push-stream page for output and display.
In a further embodiment, the step of pushing a live stream to a live room, wherein the live stream comprises the video frames loaded in the live push-stream page, comprises:
responding to a live broadcast instruction in the live push-stream page, and determining the live room corresponding to the live push-stream page;
pushing the live stream containing the video frames loaded and output in the live push-stream page to a server associated with the live room for broadcasting;
pushing the live stream containing the video frames to the server frame by frame in this way, and driving the server to broadcast the live stream to the live room for playback.
The application provides a live push-stream processing device, comprising:
a video frame acquisition module, configured to acquire video frames according to the configuration information of a live push-stream page;
an addressing information acquisition module, configured to cache the video frames in a shared buffer and acquire the addressing information of the video frames in the shared buffer;
an addressing information transfer module, configured to transfer the addressing information of the video frames to the live push-stream page, so as to control the live push-stream page to load the corresponding video frames from the shared buffer according to the addressing information;
a live stream pushing module, configured to push a live stream to a live room, wherein the live stream comprises the video frames loaded in the live push-stream page.
In a further embodiment, the video frame acquisition module comprises:
an application list display sub-module, configured to display an application list in the live push-stream page, wherein the application list comprises at least one application program adapted to provide the video frames;
an acquisition instruction response sub-module, configured to respond to an acquisition instruction acting on any application program in the application list, so as to invoke the video rendering module to capture and render the graphical user interface of that application program and acquire the video frames captured and rendered by the video rendering module.
In a preferred embodiment, the video frame acquisition module further comprises:
a video acquisition instruction response sub-module, configured to respond to the video acquisition instruction and determine the local cache identifier pointed to by the video acquisition instruction;
a video data acquisition sub-module, configured to acquire the video data corresponding to the local cache identifier from the storage space of the device, so as to acquire the video frame from the video data.
In a preferred embodiment, the video frame acquisition module further comprises:
a composition instruction response sub-module, configured to respond to the composition instruction of the live push-stream page and determine the video elements pointed to by the composition instruction;
a video element composition sub-module, configured to invoke the video rendering module to composite the video elements at the composition positions in the video picture specified by the composition instruction;
a video frame determination sub-module, configured to acquire the video frames for which composition has been completed.
In a further embodiment, the addressing information acquisition module comprises:
a video frame caching sub-module, configured for the video rendering module to cache the rendered video frames in the shared buffer;
a buffer address determination sub-module, configured for the video rendering module to determine the buffer address of the video frame in the shared buffer;
an addressing information encapsulation sub-module, configured for the video rendering module to encapsulate the buffer address into the addressing information of the video frame.
In a further embodiment, the addressing information transfer module comprises:
an addressing information pushing sub-module, configured for a browser kernel in the live push-stream page to receive the addressing information pushed by the video rendering module;
a video frame acquisition sub-module, configured for the browser kernel to acquire, from the shared buffer, the video frame pointed to by the buffer address contained in the addressing information;
a video frame loading sub-module, configured for the browser kernel to load the video frame into a video playing window of the live push-stream page for output and display.
In a further embodiment, the live stream pushing module comprises:
a live broadcast instruction response sub-module, configured to respond to the live broadcast instruction in the live push-stream page and determine the live room corresponding to the live push-stream page;
a live stream pushing sub-module, configured to push the live stream containing the video frames loaded and output in the live push-stream page to a server associated with the live room for broadcasting;
a live stream broadcasting sub-module, configured to push the live stream containing the video frames to the server frame by frame in this way, and to drive the server to broadcast the live stream to the live room for playback.
An electronic device according to an object of the application comprises a central processor and a memory, wherein the central processor is configured to invoke and run a computer program stored in the memory, so as to perform the steps of the live push-stream processing method.
A non-volatile storage medium adapted to an object of the application stores a computer program implemented according to the live push-stream processing method; when invoked by a computer, the program performs the steps comprised by the method.
Compared with the prior art, the application has the following advantages:
the application improves video frame transmission logic, and by constructing the shared buffer area for buffering video frames, the video frames among the cross applications only need to transmit the identification of the video frames pre-stored in the buffer area, and the receiver application can acquire the video frames from the shared buffer area for output and play, thereby replacing the traditional video frame multi-level copying and transmission, reducing the performance consumption of equipment, saving the operation and calculation power and improving the overall drawing efficiency.
First, the application needs neither to copy video frames nor to deliver them by copy-based transfer, which reduces the performance consumed by multi-level copying and data delivery. Applied to a live broadcast scene, this improves the efficiency of video frame delivery and thus the drawing efficiency of video frames in the live push-stream page, keeps the video picture output at the anchor side and the video picture output at the viewer side as synchronized as possible, makes interaction between the anchor and the viewers more immediate, and improves the overall live broadcast effect.
In addition, the method greatly reduces the device performance consumed by video frame delivery, so that video frames are output and played more smoothly. It can meet the 24 frames-per-second output and playback required by general live broadcast scenarios, as well as high-frame-rate output at 60 or even 144 frames per second. Owing to the improved delivery efficiency, high-bit-rate video frames of larger file size can be delivered for output and playback at low performance cost, so that live pictures, especially game live pictures, are finer and smoother, comprehensively improving the viewing experience of the live room audience.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a typical network deployment architecture according to an embodiment of the present application;
fig. 2 is a flow chart of an exemplary embodiment of a live push processing method according to the present application;
FIG. 3 is a schematic diagram of a graphical user interface of a live push page according to the present application;
FIG. 4 is a flowchart illustrating steps performed in one embodiment of the step S11 in FIG. 2;
FIG. 5 is a schematic diagram of a graphical user interface for an application list in accordance with the present application;
FIG. 6 is a flowchart illustrating steps performed in another embodiment of step S11 in FIG. 2;
FIG. 7 is a flowchart illustrating steps performed in yet another embodiment of step S11 in FIG. 2;
FIG. 8 is a flowchart illustrating steps performed in one embodiment of the step S12 of FIG. 2;
FIG. 9 is a flowchart illustrating steps performed in one embodiment of the step S13 in FIG. 2;
FIG. 10 is a flowchart illustrating steps performed in one embodiment of the step S14 in FIG. 2;
FIG. 11 is a functional block diagram of an exemplary embodiment of a live push processing device of the present application;
Fig. 12 is a basic structural block diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, "client," "terminal device," and "terminal device" are understood by those skilled in the art to include both devices that include only wireless signal receivers without transmitting capabilities and devices that include receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device such as a personal computer, tablet, or the like, having a single-line display or a multi-line display or a cellular or other communication device without a multi-line display; a PCS (Personal Communications Service, personal communication system) that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant ) that can include a radio frequency receiver, pager, internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System ) receiver; a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, "client," "terminal device" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion, at any other location(s) on earth and/or in space. As used herein, a "client," "terminal device," or "terminal device" may also be a communication terminal, an internet terminal, or a music/video playing terminal, for example, a PDA, a MID (Mobile Internet Device ), and/or a mobile phone with music/video playing function, or may also be a device such as a smart tv, a set top box, or the like.
The application refers to hardware such as servers, clients and service nodes, which are essentially electronic devices with the functions of a personal computer: hardware devices having the necessary components disclosed by the von Neumann principles, such as a central processing unit (including an arithmetic unit and a controller), memory, input devices and output devices. A computer program is stored in the memory, and the central processing unit calls the program stored in the memory to run, executes the instructions in the program, and interacts with the input and output devices, thereby completing specific functions.
It should be noted that the concept of "server" in the present application applies equally to server clusters. According to network deployment principles understood by those skilled in the art, the servers should be logically partitioned: physically separate from each other yet callable through interfaces, or integrated into one physical computer or computer cluster. Those skilled in the art will appreciate this variation, which should not be construed as limiting the network deployment of the present application.
Referring to fig. 1, the hardware base required for implementing the related technical solution of the present application may be deployed according to the architecture shown in the figure. The server 80 of the present application is deployed at the cloud as a service server, and may be responsible for further connecting to related data servers and other servers providing related support, so as to form a logically related service cluster, to provide services for related terminal devices, such as a smart phone 81 and a personal computer 82 shown in the figure, or a third party server (not shown). The smart phone and the personal computer can access the internet through a well-known network access mode, and establish a data communication link with the cloud server 80 so as to run a terminal application program related to the service provided by the server.
For the server, the application program is usually constructed as a service process, and a corresponding program interface is opened for remote call of the application program running on various terminal devices.
An application program here refers to a program running on a server or terminal device that implements the related technical scheme of the application by programming. Its program code may be stored in a computer-recognizable non-volatile storage medium in the form of computer-executable instructions and called by a central processing unit to run in memory; the related device of the application is constructed by the running of this program on the computer.
The technical solution of the application suitable for implementation in a terminal device may also be programmed into an application providing network live broadcasting, as a part of which its functionality is extended. Network live broadcasting here refers to the live-room network service realized on the network deployment architecture described above.
The live room of the application is a video chat room realized by means of Internet technology, generally having audio and video broadcast control functions, and comprising an anchor user and viewer users; the viewer users may include registered users of the platform, among them registered users who follow the anchor user, as well as unregistered guest users. Interaction between the anchor user and the viewer users can be realized through well-known online interaction modes such as voice, video and text; generally, the anchor user performs programs for the viewer users in the form of an audio-video stream, and economic transactions may occur during the interaction. Of course, the application form of the live room is not limited to online entertainment and can be extended to other related scenes, such as education and training, video conferencing, product recommendation and sales, and any other scene requiring similar interaction.
Those skilled in the art will appreciate that although the various methods of the application are described based on the same concept, so that they are common to each other, the methods may be performed independently of one another unless specifically indicated otherwise. Similarly, the embodiments disclosed herein are all presented based on the same inventive concept, so that concepts expressed in the same terms, as well as concepts that differ only by convenient and appropriate modification, should be interpreted as equivalents.
Referring to fig. 2, in an exemplary embodiment of the present application, a live push processing method includes the following steps:
Step S11, acquiring video frames according to the configuration information of the live push-stream page:
The current application parses the configuration information of the live push-stream page and determines the video source pointed to by the configuration information, so as to acquire the video frames from that video source.
The configuration information is generated by the live push-stream page in response to a video source determination event, which is generally triggered when the user determines a corresponding video source through a corresponding control in the live push-stream page. For example, the live push-stream page provides a video element composition control; when the user performs a composition operation on video elements in the video picture of the live stream through that control, the event triggered thereby, characterizing the video picture into which the video elements are composited, is the video source determination event. The live push-stream page responds to this event to generate the configuration information, whose video source is the video picture with element composition completed, and the current application invokes the video rendering module according to the configuration information to composite and render the video elements into the live stream, thereby acquiring video frames of the composited video picture.
Further, besides the video picture composited from video elements, the video source pointed to by the configuration information may be a video picture generated by capturing the graphical user interface of a corresponding application program. For example, the live push-stream page provides an application capture control; after the user selects a corresponding application through this control, the event triggered, characterizing the capture of that application's graphical user interface, is the video source determination event. The live push-stream page responds to it to generate the configuration information, whose video source is the captured video picture, and the current application invokes the video rendering module according to the configuration information to capture the application's graphical user interface and acquire video frames of the captured picture.
The configuration information includes target information pointing to the video source, so that the current application acquires the corresponding video frames according to the target information. For example, when the video source is the graphical user interface of an application program, the target information is generally the application ID or application name of that program; when the video source is video data in the storage space, the target information is the local cache identifier of the video data.
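Modeled as data, the configuration information is essentially a tagged choice among the video sources just described, plus the target information identifying the concrete source. A hypothetical C++ sketch; every field name is an assumption, not the patent's:

```cpp
#include <string>
#include <variant>

// Hypothetical model of the configuration information; all names assumed.
struct AppSource   { std::string appIdOrName; };   // capture an application's GUI
struct LocalSource { std::string localCacheId; };  // video data in device storage
struct ComposeSource {                             // picture with elements composited
    std::string elementId;
    int x = 0, y = 0;                              // composition position
};

struct PushPageConfig {
    std::variant<AppSource, LocalSource, ComposeSource> target;
    int frameRate = 60;                            // e.g. 24, 60 or 144 fps
};
```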
The video rendering module is generally built on the WebGL drawing standard and is configured to capture and render the video source to obtain its video frames. For example, when the video source is the graphical user interface of an application program, the video rendering module captures the application's current graphical user interface frame by frame and renders it into video frames, so that the live push-stream page loads each video frame in turn and outputs, frame by frame, the video picture of the application's current graphical user interface.
The live push-stream page is typically a page written in hypertext markup language, i.e. a language conforming to the Web core language HTML, especially HTML5. It comprises a series of tags through which document formats on the network can be unified, connecting distributed Internet resources into a logical whole.
Specifically, referring to fig. 3, the graphical user interface shown in fig. 3 is the live push-stream page. The user performs a video element composition operation by selecting a corresponding video element in the video element display calling area 301; the event thus triggered, characterizing the video picture into which the element is composited, is the video source determination event. The live push-stream page responds to it to generate the configuration information, whose video source is the picture with element composition completed, and triggers the video rendering module in the current application to composite and render the video elements into the live stream according to the configuration information, thereby acquiring video frames of the composited picture.
The live push-stream page is provided with a browser kernel, which cooperates with the video rendering module to complete the on-screen display of the video frames: according to the addressing information corresponding to a video frame captured and rendered by the video rendering module, the browser kernel fetches the frame from the shared buffer and loads it into the live push-stream page for output and playback. For the specific implementation of the shared buffer and the browser kernel, refer to the related embodiments in the subsequent steps, which are not repeated here.
In one embodiment, referring to fig. 3 to 5, when the video source pointed to by the configuration information is the graphical user interface of an application program, the current application performs the following specific steps:
step S111, displaying an application list in the live push page, where the application list includes at least one application program adapted to provide the video frame:
and displaying the application list in the live broadcast plug-flow page by the current application program, wherein the application list comprises one or more application programs for providing the video frames, so that a user can select a corresponding application program through the application list to acquire the video frames.
Referring to fig. 3 and fig. 5, fig. 3 is the graphical user interface of the live push-stream page. Through a video frame acquisition source determination control 307, the user causes the current application to display the application list shown in fig. 5 in the live push-stream page, in which several application programs adapted to provide the capture of the video frames are displayed.
In one embodiment, referring to fig. 5, the applications shown in the application list of fig. 5 as adapted to provide the video frames are those currently running on the device. A control for browsing native applications 502 is provided in the list, so that the user can select an application that is not running on the current device but is adapted to provide the video frames; after the user selects a corresponding application through this control, that application is started or woken up to provide the video frames.
Step S112, responding to an acquisition instruction acting on any application program in the application list, so as to invoke the video rendering module to capture and render the graphical user interface of that application, and acquiring the video frames captured and rendered by the video rendering module:
The current application responds to the acquisition instruction for any application program in the application list, determines the application pointed to by the instruction, and invokes the video rendering module to capture and render that application's graphical user interface, the captured frames being rendered frame by frame into the video frames.
After responding to the acquisition instruction, the current application invokes the video rendering module to capture and render the application pointed to by the instruction, rendering that application's current graphical user interface into the video frames. For example, if the application pointed to by the acquisition instruction is a game application, the video rendering module captures, frame by frame, the application's current game picture rendered by the GPU at a frame-rate specification of 24 or 60 frames per second, and renders the game picture into the video frames.
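A fixed-cadence capture loop of the kind described (24 or 60 frames per second, one GUI frame at a time) could be paced as below. This is a sketch under stated assumptions: captureWindow() is a hypothetical stand-in for the platform's window capture call, stubbed here, not a real API.

```cpp
#include <chrono>
#include <cstdint>
#include <thread>
#include <vector>

struct VideoFrame { int width = 0, height = 0; std::vector<uint8_t> rgba; };

// Hypothetical platform hook, stubbed for the sketch; a real module would
// bind this to the OS window-capture or GPU readback path.
VideoFrame captureWindow(std::uintptr_t /*windowHandle*/) { return {}; }

void captureLoop(std::uintptr_t windowHandle, int fps, const bool& running) {
    using clock = std::chrono::steady_clock;
    const auto interval = std::chrono::nanoseconds(1'000'000'000 / fps);
    auto next = clock::now();
    while (running) {
        VideoFrame frame = captureWindow(windowHandle);  // one GUI frame
        (void)frame;  // hand off to the shared buffer (step S12) in a real module
        next += interval;                                // 24 or 60 fps cadence
        std::this_thread::sleep_until(next);
    }
}
```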
Referring to fig. 5, when the user selects application B 501 in the application list shown in fig. 5, the current application responds to the acquisition instruction characterized by that selection and invokes the video rendering module to capture and render the graphical user interface of application B, thereby acquiring, frame by frame, the video frames captured by the video rendering module.
In one embodiment, referring to fig. 6, when the video source pointed to by the configuration information is video data in the storage space of the local device, the current application program will execute the following specific steps:
Step S111', responding to the video acquisition instruction, and determining the local cache identifier pointed to by the video acquisition instruction:
The current application responds to the video acquisition instruction to determine the local cache identifier pointed to by the instruction.
The video acquisition instruction is triggered by the user through a local video data selection control: the user selects video data stored in the storage space of the local device through the control, generating a video acquisition instruction that points to the local cache identifier of the video data in the storage space, so that the current application determines, from the instruction, the local cache identifier it points to.
Step S112', acquiring the video data corresponding to the local cache identifier from the storage space of the device, so as to acquire the video frame from the video data:
According to the local cache identifier pointed to by the acquisition instruction, the current application acquires the corresponding video data from the storage space of the local device and obtains the video frame from it, for example by invoking the video rendering module to load the video data and take a certain frame of it as the video frame.
In one embodiment, referring to fig. 3 and fig. 7, when the video source pointed to by the configuration information is a video picture for video element composition, the current application performs the following specific steps:
Step S111', responding to a synthesis instruction of the live broadcast plug-flow page, and determining the video element pointed by the synthesis instruction:
the current application responds to the composition instruction of the live push page to determine one or more video elements pointed by the composition instruction, and determines the composition positions of the video elements in a video picture according to the composition instruction.
Referring to fig. 3, the graphical user interface shown in fig. 3 is the live-broadcast push page, the user selects the video elements synthesized in the video frame through the video element display calling area 301, the newly added video element control 303 is used for the user to add the video elements synthesized in the video frame, the video editing area 304 is used for the user to define the synthesis positions of the video elements in the video frame, and the user performs the video element synthesis operation through any one or more controls in the video element display calling area 301, the newly added video element control 303 or the video editing area 304 to generate the synthesis instruction representing the video element synthesis operation, so that the current application program determines the video elements synthesized in the video frame and the synthesis positions of the video elements in the video frame by responding to the synthesis instruction.
The video elements generally refer to visual graphic information synthesized into a video picture, which is used for beautifying the playing effect of the video picture, and the types of the video elements include: moving pictures, still pictures, video filters, text information, etc.
The video element synthesis operation generally comprises operations such as a plane movement operation, a hierarchy adjustment operation, a new addition operation or a deletion operation, etc., which are executed on the video element in the video editing area or the video element display calling area in the live push page.
The plane moving operation refers to an operation of adjusting the synthesis position of a certain video element in the video stream by a user through the video editing area.
The level adjustment operation refers to that a user adjusts a level of a certain video element displayed in a video picture through the video editing area, for example, adjusts the video element to be a top level or a bottom level of each element in the video picture.
The new adding operation refers to an operation that a user adds a new video element to a video stream for display through the video element display calling area, and the user selects a corresponding video element from a plurality of video elements provided by the live broadcast push page or selects corresponding image-text content from a storage space of the device as the video element through the video element display calling area, and adds the video element to a video picture for display.
The deleting operation refers to an operation that a user deletes a certain video element added to a video picture through the video element display calling area or the video editing area.
Step S112', calling a video rendering module to synthesize the video elements to synthesis positions specified by the synthesis instructions in the video pictures:
and the current application program responds to the synthesis instruction, determines the video elements to be synthesized and the synthesis positions of the video elements, and then calls a video rendering module to synthesize the video elements into corresponding synthesis positions in a video picture.
Step S113", acquire the video frame that has completed the synthesis:
the video rendering module performs the synthesis operation of the video elements in the video frames frame by frame so that an application program takes each frame of video frames which are synthesized by the video rendering module and are used as the video frames.
Step S12, caching the video frame in the shared buffer, and obtaining the addressing information of the video frame in the shared buffer:
The current application caches the video frame in the shared buffer and obtains the addressing information characterizing the buffer address of the video frame in the shared buffer.
The addressing information characterizes the buffer address of the video frame in the shared buffer: once the video frame is stored in the shared buffer, the shared buffer feeds back the addressing information characterizing the frame's buffer address, so that the live push-stream page can load the corresponding video frame from the shared buffer according to the addressing information. For the specific embodiment in which the live push-stream page obtains the video frame, refer to the related embodiment in step S13, which is not repeated here.
The shared buffer generally refers to shared memory: the current application caches the video frame in the shared memory and triggers the shared memory to return, to the current application, the addressing information characterizing the frame's memory address in the shared memory, so that the application obtains the addressing information of the video frame.
In one embodiment, the shared buffer refers to a shared texture, created in advance according to the video parameter specification of the video frames; the current application stores the video frame into the shared texture according to the parameter specification characterized by the shared texture, and obtains the identifier of the video frame in the shared texture as the frame's addressing information.
The organization of the shared buffer is generally determined by the system environment of the device running the current application: the application selects a supported or better-adapted organization according to the system environment of the running device, and constructs the shared buffer to cache the video frames for data transfer.
Specifically, the video frames acquired by the current application are generally captured and rendered by the built-in video rendering module, which is generally built on the WebGL drawing standard. In the traditional drawing process, a WebGL-based video rendering module must copy each captured and rendered video frame in order to pass the copy to the live push-stream page for output and playback. The present method instead stores the captured and rendered video frame in the shared buffer, obtains the addressing information characterizing its buffer address, and passes the addressing information to the live push-stream page, which fetches the frame directly from the shared buffer for output and playback. The video rendering module therefore need not copy video frames, which reduces the performance consumed by multi-level copying and data transfer, improves the drawing efficiency of video frames in the live push-stream page, and keeps the received live picture maximally synchronized with the video source at the anchor side.
Referring to fig. 8, an embodiment in which the video rendering module stores the video frame in the shared buffer and acquires the addressing information of the video frame proceeds by the following specific steps:
step S121, the video rendering module caches the video frames obtained by rendering into the shared buffer:
and the video rendering module acquires and renders the video source pointed by the configuration information, and caches the video frame into the shared buffer after acquiring and rendering the video frame.
Step S122, the video rendering module determines a buffer address of the video frame in the shared buffer:
After caching the video frame into the shared buffer, the video rendering module determines the buffer address of the video frame in the shared buffer so as to create the addressing information of the video frame. The buffer address characterizes the cache position of the video frame in the shared buffer, and the video rendering module creates the addressing information of the video frame from this buffer address.
Step S123, the video rendering module encapsulates the buffer address into the addressing information of the video frame:
After determining the buffer address of the video frame in the shared buffer, the video rendering module encapsulates the buffer address into the addressing information of the video frame, so that the live push page can obtain the video frame from the shared buffer for output and playing according to the buffer address contained in the addressing information.
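A compact C++ sketch of steps S121 to S123, assuming a ring of fixed-size slots; the class, its layout, and the use of a plain vector in place of a genuinely mapped shared region are illustrative assumptions, not the disclosed implementation:

    #include <cstdint>
    #include <cstring>
    #include <vector>

    struct AddressingInfo { uint64_t bufferId; uint64_t slot; };

    class SharedFrameBuffer {  // stand-in for the shared buffer region
    public:
        SharedFrameBuffer(uint64_t id, size_t frameBytes, size_t slots)
            : id_(id), frameBytes_(frameBytes), slots_(slots),
              storage_(frameBytes * slots) {}

        // S121: cache the rendered frame into the shared buffer.
        // S122: the slot index it lands in is its buffer address.
        // S123: the buffer address is encapsulated as AddressingInfo.
        AddressingInfo store(const uint8_t* pixels) {
            uint64_t slot = next_++ % slots_;
            std::memcpy(storage_.data() + slot * frameBytes_, pixels, frameBytes_);
            return AddressingInfo{id_, slot};
        }

        // Consumer-side lookup used later by the browser kernel (step S132).
        const uint8_t* at(uint64_t slot) const {
            return storage_.data() + slot * frameBytes_;
        }

    private:
        uint64_t id_;
        uint64_t next_ = 0;
        size_t frameBytes_;
        size_t slots_;
        std::vector<uint8_t> storage_;
    };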
Step S13, transferring the addressing information of the video frame to the live push page, so as to control the live push page to load the corresponding video frame from the shared buffer according to the addressing information:
After caching the video frames into the shared buffer and acquiring their addressing information, the current application program pushes the addressing information to the live push page, so that the live push page calls the browser kernel, which obtains the video frames corresponding to the addressing information from the shared buffer and loads them for playing.
The browser kernel is generally a kernel built on a browser embedded framework, such as CEF (Chromium Embedded Framework), that exposes C/C++ programming interfaces. According to the addressing information, it can query the shared buffer for the corresponding video frame and put the frame on screen in the corresponding video playing window of the live push page for output and playing, for example the video playing window 305 in fig. 3.
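If the browser kernel is CEF-based, the addressing information might travel as a process message. The following is only a hedged sketch: the message name, argument layout, and helper function are invented here, and the exact send entry point differs across CEF versions:

    #include "include/cef_frame.h"
    #include "include/cef_process_message.h"

    // Hypothetical helper: package the addressing information as a CEF
    // process message and send it toward the live push page's renderer side.
    void sendAddressingInfo(CefRefPtr<CefFrame> frame, int bufferId, int slot) {
        CefRefPtr<CefProcessMessage> msg =
            CefProcessMessage::Create("live/addressing-info");  // invented name
        CefRefPtr<CefListValue> args = msg->GetArgumentList();
        args->SetInt(0, bufferId);
        args->SetInt(1, slot);
        frame->SendProcessMessage(PID_RENDERER, msg);  // newer-CEF style call
    }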
By contrast, in the prior art the WebGL-based video rendering module generally copies each collected and rendered video frame and transfers the copy to the browser kernel built on a browser embedded framework such as CEF, so that the browser kernel outputs the frame to the live push page for output and playing; copying and transferring the video frames in this way consumes a large amount of the device's performance, which the present method avoids.
Referring to fig. 9, in one embodiment the browser kernel obtains the video frame according to the addressing information and outputs it for playing in the live push page through the following specific steps:
step S131, a browser kernel in the live push page receives the addressing information pushed by the video rendering module:
The browser kernel in the live push page receives the addressing information pushed by the video rendering module, parses it, and obtains the buffer address of the video frame in the shared buffer.
Step S132, the browser kernel obtains, from the shared buffer, a video frame pointed to by the buffer address according to the buffer address included in the addressing information:
According to the buffer address contained in the addressing information, the browser kernel queries the shared buffer for the video frame pointed to by that buffer address, so as to obtain the video frame for output and playing.
Step S133, the browser kernel loads the video frame into a video playing window of the live broadcast push page for output display:
After obtaining the video frame corresponding to the buffer address from the shared buffer, the browser kernel loads the video frame into the video playing window of the live push page for output and display.
Referring to fig. 3, after the browser kernel acquires the video frame, it performs an on-screen operation on the frame and loads it into the video playing window 305 shown in fig. 3 for output and playing.
In another embodiment, referring to fig. 3, after the browser kernel obtains the video frame it loads the frame into the video playing window 308 shown in fig. 3 for output and playing; this case generally arises when the video source of the frame is the graphical user interface of an application program currently running on the device, or video data in the device's storage space.
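Steps S131 to S133 then reduce to a lookup and a present call; in this sketch the resolver and the window-present callback are hypothetical stand-ins for the kernel's internals:

    #include <cstdint>

    struct AddressingInfo { uint64_t bufferId; uint64_t slot; };

    // S131: the kernel has received the addressing information.
    // S132: resolve it to the frame's bytes inside the shared buffer.
    // S133: hand the pixels to the page's video playing window.
    void onAddressingInfo(const AddressingInfo& info,
                          const uint8_t* (*resolve)(const AddressingInfo&),
                          void (*presentToWindow)(const uint8_t*)) {
        const uint8_t* pixels = resolve(info);  // lookup only; no frame copy
        presentToWindow(pixels);                // e.g. on-screen into window 305
    }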
Step S14, pushing a live stream to a live broadcast room, where the live stream includes the video frames loaded in the live broadcast push page:
The current application program pushes the live stream containing the video frames loaded in the live push page to the associated live room: by pushing the live stream to the corresponding media server, the current application program drives the server to broadcast the live stream to the live room associated with the current application program for output and playing.
The current application program is a live broadcast service application program, and the user identity logged into it is generally an anchor user, so the terminal device running the current application program acts as an anchor client. By pushing the live stream to the corresponding media server, the current application program drives the media server to broadcast the live stream to the audience clients of the live room for output and playing.
As described above, because the method transfers video frames through the shared buffer, the performance the video frames consume on the device is greatly reduced, so that they can be output and played more smoothly, and frame rates higher than the common 24 frames per second, such as 60 or even 144 frames per second, can be output and played.
Referring to figs. 3 and 10, in one embodiment the current application program pushes the live stream containing the video frames into the live room through the following steps:
step S141, determining a live room corresponding to the live push page in response to the live instruction in the live push page:
The current application program responds to the live instruction in the live push page and determines the live room corresponding to the live push page.
Referring to fig. 3, the play control 306 shown in fig. 3 is used to trigger generation of the live instruction: the user generates the live instruction by touching the play control 306 in the live push page, and the current application program responds to the instruction and determines the live room of the anchor user logged into the current application program.
Step S142, pushing the live stream including the video frames loaded and output in the live pushing page to the server associated with the live broadcasting room for broadcasting:
The current application program pushes the live stream containing the video frames currently loaded and output in the live push page to the server that provides the live stream broadcasting service for the live room, and drives the server to push the live stream to the live room for output and playing.
Step S143, pushing the live stream containing the video frames to the server frame by frame, and driving the server to broadcast the live stream to the live broadcasting room for playing:
As in step S142, the current application program continuously pushes the live stream containing the video frames currently loaded and output in the live push page to the server frame by frame, driving the server to broadcast the live stream frame by frame to the live room for playing, so that pushing of the live stream remains smooth.
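A minimal sketch of this frame-by-frame loop under stated assumptions: the encoder, the server connection, and the frame source below are hypothetical placeholders for whatever codec and streaming protocol (for example RTMP) a deployment actually uses:

    #include <cstdint>
    #include <vector>

    struct Frame { std::vector<uint8_t> pixels; uint64_t ptsMs; };

    // Hypothetical stand-ins for the encoder and the media-server link.
    std::vector<uint8_t> encode(const Frame& f);
    bool sendToMediaServer(uint64_t roomId, const std::vector<uint8_t>& packet);

    // S141 resolved roomId from the live instruction; S142/S143 push the
    // loaded frames one by one so the server can rebroadcast as they arrive.
    void pushLoop(uint64_t roomId, bool (*nextLoadedFrame)(Frame*)) {
        Frame f;
        while (nextLoadedFrame(&f)) {          // frames loaded in the push page
            if (!sendToMediaServer(roomId, encode(f)))
                break;                         // stop pushing if the link drops
        }
    }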
Further, by functionalizing each step of the method disclosed in the foregoing embodiments, a live push stream processing device of the present application may be constructed. Following this concept, and referring to fig. 11, in one exemplary embodiment the device includes: the video frame acquisition module 11, configured to acquire video frames according to the configuration information of the live push page; the addressing information acquisition module 12, configured to cache the video frames in a shared buffer and acquire the addressing information of the video frames in the shared buffer; the addressing information transfer module 13, configured to transfer the addressing information of the video frames to the live push page, so as to control the live push page to load the corresponding video frames from the shared buffer according to the addressing information; and the live stream pushing module 14, configured to push a live stream containing the video frames loaded in the live push page to a live room.
In one embodiment, the video frame acquisition module 11 includes: an application list display sub-module, configured to display an application list in the live push page, where the application list includes at least one application program adapted to provide the video frame; and the acquisition instruction response sub-module is used for responding to an acquisition instruction acting on any application program in the application list so as to call the video rendering module to perform video acquisition and rendering on a graphical user interface of the application program, so as to acquire a video frame acquired and rendered by the video rendering module.
In another embodiment, the video frame acquisition module 11 further includes: the video acquisition instruction response sub-module is used for responding to the video acquisition instruction and determining a local cache identifier pointed by the video acquisition instruction; and the video data acquisition sub-module is used for acquiring video data corresponding to the local cache identifier from the storage space of the equipment so as to acquire the video frame from the video data.
In yet another embodiment, the video frame acquisition module 11 further includes: a synthesis instruction response sub-module, configured to respond to a synthesis instruction of the live push page and determine the video element pointed to by the synthesis instruction; a video element synthesis sub-module, configured to call the video rendering module to synthesize the video element into the video picture at the synthesis position designated by the synthesis instruction; and a video frame determination sub-module, configured to obtain the synthesized video frames.
In one embodiment, the addressing information acquisition module 12 includes: a video frame caching sub-module, configured for the video rendering module to cache the video frames obtained by rendering into the shared buffer; a buffer address determination sub-module, configured for the video rendering module to determine the buffer address of the video frame in the shared buffer; and a buffer address encapsulation sub-module, configured for the video rendering module to encapsulate the buffer address into the addressing information of the video frame.
In one embodiment, the addressing information transfer module 13 includes: an addressing information receiving sub-module, configured for the browser kernel in the live push page to receive the addressing information pushed by the video rendering module; a video frame acquisition sub-module, configured for the browser kernel to acquire, from the shared buffer, the video frame pointed to by the buffer address contained in the addressing information; and a video frame loading sub-module, configured for the browser kernel to load the video frame into the video playing window of the live push page for output and display.
In one embodiment, the live stream pushing module 14 includes: a live instruction response sub-module, configured to respond to the live instruction in the live push page and determine the live room corresponding to the live push page; a live stream pushing sub-module, configured to push the live stream containing the video frames loaded and output in the live push page to the server associated with the live room for broadcasting; and a live stream broadcasting sub-module, configured to push, in this manner, the live stream containing the video frames to the server frame by frame and drive the server to broadcast the live stream to the live room for playing.
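Sketched as a composition, the device of fig. 11 is simply the four modules wired together; the class names below are illustrative only:

    // One struct per module of fig. 11; bodies elided as placeholders.
    struct VideoFrameAcquisitionModule {};     // module 11
    struct AddressingInfoAcquisitionModule {}; // module 12
    struct AddressingInfoTransferModule {};    // module 13
    struct LiveStreamPushingModule {};         // module 14

    // The device composes the four modules, one per step of the method.
    struct LivePushStreamProcessingDevice {
        VideoFrameAcquisitionModule     acquisition;  // 11
        AddressingInfoAcquisitionModule addressing;   // 12
        AddressingInfoTransferModule    transfer;     // 13
        LiveStreamPushingModule         pushing;      // 14
    };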
To solve the above technical problem, an embodiment of the present application further provides a computer device for running a computer program implemented according to the live push stream processing method. Referring to fig. 12, fig. 12 is a basic structural block diagram of the computer device of this embodiment.
As shown in fig. 12, the internal structure of the computer device is illustrated schematically. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium stores an operating system, a database, and computer-readable instructions; the database may store a control information sequence, and the computer-readable instructions, when executed by the processor, cause the processor to implement a live push stream processing method. The processor provides the computing and control capabilities that support the operation of the entire computer device. The memory may store computer-readable instructions that, when executed by the processor, cause the processor to perform the live push stream processing method. The network interface is used to communicate with connected terminals. Those skilled in the art will appreciate that the structure shown in fig. 12 is merely a block diagram of some of the structures associated with the present arrangements and does not limit the computer devices to which the present arrangements may be applied; a particular computer device may include more or fewer components than shown, combine some components, or arrange the components differently.
In this embodiment, the processor is configured to execute the specific functions of each module and sub-module of the live push stream processing device of the present application, and the memory stores the program codes and various data required for executing these modules. The network interface is used for data transmission to and from a user terminal or server. The memory in this embodiment stores the program codes and data required for executing all modules and sub-modules of the live push stream processing device, and the server can invoke these program codes and data to execute the functions of all the sub-modules.
The present application also provides a non-volatile storage medium in which the live push stream processing method is written as a computer program and stored in the form of computer-readable instructions; when executed by one or more processors, the instructions cause the one or more processors to perform the steps of the live push stream processing method of any of the embodiments described above.
Those skilled in the art will appreciate that all or part of the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium which, when executed, may include the flows of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
In summary, the present application improves the video frame transfer logic: by constructing a shared buffer for caching video frames, only the identifiers of the video frames pre-stored in the buffer need to be transferred across applications, and the receiving application can obtain the frames from the shared buffer for output and playing. This replaces the traditional multi-level copying and transferring of video frames, reduces the performance consumption of the device, saves computing power, and improves the overall drawing efficiency.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
Those skilled in the art will appreciate that the various operations, methods, steps, measures, and schemes in the flows discussed in the present application may be alternated, altered, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and flows discussed in the present application may also be alternated, altered, rearranged, decomposed, combined, or deleted; and steps, measures, and schemes in the prior art having the various operations, methods, and flows disclosed in the present application may likewise be alternated, altered, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art may make modifications and adaptations without departing from the principles of the present application, and such modifications and adaptations are also to be regarded as falling within the scope of protection of the present application.

Claims (8)

1. A live push stream processing method, characterized by comprising the following steps:
a current application program starts an application program selected in a live push page, takes the graphical user interface of the selected application program as the video source pointed to by configuration information, and calls a video rendering module to collect and render the graphical user interface frame by frame as the corresponding video frames of the video source;
the video rendering module of the current application program caches each video frame into a shared buffer according to a video parameter specification, and obtains addressing information of each video frame in the shared buffer;
the video rendering module of the current application program transfers the addressing information of the video frames to a browser kernel called by the live push page, so as to control the live push page to load the corresponding video frames from the shared buffer through the browser kernel according to the addressing information; and
the current application program pushes a live stream to a live room, the live stream comprising the video frames loaded in the live push page.
2. The method according to claim 1, wherein the step in which the current application program starts the application program selected in the live push page, takes the graphical user interface of the selected application program as the video source pointed to by the configuration information, and calls the video rendering module to collect and render the graphical user interface frame by frame as the corresponding video frames of the video source comprises:
the current application program displays an application list in the live push page, the application list comprising at least one application program adapted to provide the video frames;
the current application program responds to an acquisition instruction acting on any application program in the application list, so as to call the video rendering module to perform video acquisition and rendering on the graphical user interface of that application program, thereby obtaining the video frames acquired and rendered by the video rendering module.
3. The method according to claim 1, wherein the step in which the video rendering module of the current application program caches each video frame into the shared buffer according to the video parameter specification and obtains the addressing information of each video frame in the shared buffer comprises:
the video rendering module caches the video frames obtained by rendering into the shared buffer area;
the video rendering module determines a buffer address of the video frame in the shared buffer;
the video rendering module encapsulates the buffer address into the addressing information of the video frame.
4. The method according to any one of claims 1 to 3, wherein the step in which the video rendering module of the current application program transfers the addressing information of the video frames to the browser kernel called by the live push page, so as to control the live push page to load the corresponding video frames from the shared buffer through the browser kernel according to the addressing information, comprises:
the browser kernel in the live push page receives the addressing information pushed by the video rendering module;
the browser kernel acquires, from the shared buffer, the video frame pointed to by the buffer address contained in the addressing information;
the browser kernel loads the video frame into a video playing window of the live push page for output and display.
5. The method of claim 1, wherein the step of the current application pushing a live stream to a live room, the live stream containing the video frames loaded in the live push page, comprises:
the current application program responds to the live instruction in the live push page and determines the live room corresponding to the live push page;
the current application program pushes the live stream containing the video frames loaded and output in the live push page to a server associated with the live room for broadcasting;
in this manner, the current application program pushes the live stream containing the video frames to the server frame by frame, and drives the server to broadcast the live stream to the live room for playing.
6. A live push stream processing device, characterized by comprising:
a video frame acquisition module, configured for a current application program to start an application program selected in a live push page, take the graphical user interface of the selected application program as the video source pointed to by configuration information, and call a video rendering module to collect and render the graphical user interface frame by frame as the corresponding video frames of the video source;
an addressing information acquisition module, configured for the video rendering module of the current application program to cache each video frame into a shared buffer according to a video parameter specification and obtain addressing information of each video frame in the shared buffer;
an addressing information transfer module, configured for the video rendering module of the current application program to transfer the addressing information of the video frames to a browser kernel called by the live push page, so as to control the live push page to load the corresponding video frames from the shared buffer through the browser kernel according to the addressing information; and
a live stream pushing module, configured for the current application program to push a live stream to a live room, the live stream comprising the video frames loaded in the live push page.
7. An electronic device, comprising a central processor and a memory, characterized in that the central processor is configured to invoke a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 5.
8. A non-volatile storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implemented according to the method of any one of claims 1 to 5, the computer program, when invoked by a computer, performing the steps comprised in the method.
CN202110857018.3A 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof Active CN113596495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857018.3A CN113596495B (en) 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof


Publications (2)

Publication Number Publication Date
CN113596495A CN113596495A (en) 2021-11-02
CN113596495B (en) 2023-11-24




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant