CN113596495A - Live broadcast stream pushing processing method and device, equipment and medium thereof

Info

Publication number
CN113596495A
CN113596495A (application CN202110857018.3A)
Authority
CN
China
Prior art keywords
video
live
video frame
page
addressing information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110857018.3A
Other languages
Chinese (zh)
Other versions
CN113596495B (en)
Inventor
廖国光
黄志义
杨力群
郭鹏飞
郑潇洵
黄煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN202110857018.3A priority Critical patent/CN113596495B/en
Publication of CN113596495A publication Critical patent/CN113596495A/en
Application granted granted Critical
Publication of CN113596495B publication Critical patent/CN113596495B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440281Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping

Abstract

The application discloses a live broadcast stream pushing processing method, together with a corresponding device, equipment and medium, wherein the method comprises the following steps: acquiring a video frame according to configuration information of a live streaming page; caching the video frame in a shared buffer area and obtaining the addressing information of the video frame in the shared buffer area; transmitting the addressing information of the video frame to the live streaming page so as to control the page to load the corresponding video frame from the shared buffer area according to the addressing information; and pushing a live stream, which contains the video frames loaded in the live stream pushing page, to a live broadcast room. The application constructs a new mode of video frame transfer: a shared buffer area is built for storing and reading video frames, and frames are handed over by passing an address identifier rather than by the traditional copy-based transfer. This reduces the performance consumption of the device, raises the overall efficiency of video drawing, and improves the synchronization between the video picture played in the live broadcast room and the video picture captured at the anchor terminal.

Description

Live broadcast stream pushing processing method and device, equipment and medium thereof
Technical Field
The application relates to the field of network live broadcast, in particular to a live broadcast stream pushing processing method, and further to a device, electronic equipment and nonvolatile storage medium corresponding to the method.
Background
Current network live broadcast platforms provide corresponding live broadcast applications through which an anchor user captures, from his or her own device, the graphical user interface output by an application or the video picture shot by a camera, renders the captured data into video frames, and broadcasts those frames to a live broadcast room as a live stream for output and playback, so that audience users can watch, through the live broadcast room, the live content captured by the anchor user.
The video rendering module that captures and renders video frames at the anchor terminal is generally built on WebGL: it copies each captured and rendered video frame and transfers the copy to the live broadcast room page for output and playback. With the upgrading of networks and devices, however, users seek live pictures of higher bit rate and higher resolution, so the video frames the module captures and renders grow ever larger in file size. The module therefore spends more and more time on frame processing, with the result that the video picture output in the live broadcast room cannot stay synchronized with the picture captured at the anchor terminal; at the same time, copying files of such size consumes so much device performance that the anchor terminal cannot smoothly run the application being broadcast. This is especially acute for game applications, which are themselves performance-hungry: if the anchor terminal cannot run the game smoothly while broadcasting, the live broadcast effect is greatly reduced.
In view of this video frame transmission problem in existing live broadcast applications, the applicant has developed the solution presented here.
Disclosure of Invention
The application aims to meet these user requirements, and provides a live broadcast stream pushing processing method together with a corresponding device, electronic equipment and nonvolatile storage medium.
In order to realize the purpose of the application, the following technical solutions are adopted:
one of the objectives of the present application is to provide a live streaming processing method, which includes the following steps:
acquiring a video frame according to configuration information of a live streaming page;
caching the video frame to a shared buffer area to obtain the addressing information of the video frame in the shared buffer area;
transmitting addressing information of the video frames to the live streaming page so as to control the live streaming page to load corresponding video frames from the shared buffer area according to the addressing information;
and pushing a live stream to a live broadcasting room, wherein the live stream comprises the video frames loaded in the live stream pushing page.
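For orientation, the four steps form a single data path in which only the addressing information, never the pixel data, crosses the boundary between the rendering side and the page. The following C++ sketch is purely illustrative; every type and function name in it (VideoFrame, SharedBuffer, AddressingInfo) is an assumption introduced here, not a term of the disclosure, and a growable in-process vector stands in for the real shared buffer area only to keep the example self-contained:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Illustrative types only; names are assumptions, not part of the disclosure.
    struct VideoFrame { int width = 0, height = 0; std::vector<uint8_t> pixels; };
    struct AddressingInfo { size_t offset = 0, length = 0; };  // the "buffer address"

    // Minimal stand-in for the shared buffer area used by the caching and loading steps.
    class SharedBuffer {
        std::vector<uint8_t> memory_;
    public:
        // Cache a frame; only the returned addressing information is passed onward.
        AddressingInfo store(const VideoFrame& f) {
            AddressingInfo a{memory_.size(), f.pixels.size()};
            memory_.insert(memory_.end(), f.pixels.begin(), f.pixels.end());
            return a;
        }
        // The page later resolves the addressing information back to pixel data.
        std::vector<uint8_t> load(const AddressingInfo& a) const {
            return {memory_.begin() + static_cast<std::ptrdiff_t>(a.offset),
                    memory_.begin() + static_cast<std::ptrdiff_t>(a.offset + a.length)};
        }
    };

A real shared buffer area would live in memory visible to both the rendering module and the page's browser kernel; one concrete way to arrange that is shown in the shared-memory sketch in the detailed description.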
In a further embodiment, the step of obtaining the video frame according to the configuration information of the live streaming page includes:
displaying an application list in the live push streaming page, wherein the application list comprises at least one application program suitable for providing the video frames;
and responding to a capture instruction acting on any application program in the application list, so as to invoke a video rendering module to perform video capture and rendering on the graphical user interface of that application program, and to acquire the video frames captured and rendered by the video rendering module.
In a further embodiment, the step of obtaining the video frame according to the configuration information of the live streaming page includes:
responding to a video acquisition instruction, and determining a local cache identifier pointed by the video acquisition instruction;
and acquiring the video data corresponding to the local cache identification from the storage space of the equipment so as to acquire the video frame from the video data.
In a further embodiment, the step of obtaining the video frame according to the configuration information of the live streaming page includes:
responding to a synthesis instruction of a live streaming page, and determining a video element pointed by the synthesis instruction;
calling a video rendering module to synthesize the video elements to a synthesis position specified by the synthesis instruction in a video picture;
and acquiring the video frame of which the composition is finished.
In a further embodiment, in the step of acquiring the video frame according to the configuration information of the live streaming page, a video rendering module is called, and video rendering is performed according to the configuration information to acquire the rendered video frame.
In a preferred embodiment, the step of buffering the video frames into a shared buffer and obtaining the addressing information in the shared buffer comprises:
the video rendering module caches the video frames obtained by rendering into the shared buffer area;
the video rendering module determines the buffer address of the video frame in the shared buffer area;
the video rendering module encapsulates the buffer address as the addressing information for the video frame.
In a preferred embodiment, the step of transferring the addressing information of the video frame to the live streaming page to control the live streaming page to load the corresponding video frame from the shared buffer according to the addressing information includes:
a browser kernel in the live streaming page receives the addressing information pushed by the video rendering module;
the browser kernel acquires a video frame pointed by a buffer address from the shared buffer area according to the buffer address contained in the addressing information;
and the browser kernel loads the video frame into a video playing window of the live push streaming page for output and display.
In a further embodiment, the step of pushing a live stream to a live broadcast room, the live stream containing the video frames loaded in the live push stream page, includes:
responding to a live broadcast instruction in the live broadcast push flow page, and determining a live broadcast room corresponding to the live broadcast push flow page;
pushing the live broadcast stream containing the video frames loaded and output in the live broadcast stream pushing page to a server associated with the live broadcast room for broadcasting;
and repeating the above frame by frame, so that the live stream containing successive video frames is pushed to the server, driving the server to broadcast the live stream to the live broadcast room for playing.
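A hedged sketch of this frame-by-frame push loop follows, reusing the illustrative SharedBuffer and AddressingInfo types from the overview sketch above; the pending queue and sendFrame() are assumptions standing in for the application's real transport (a real system would speak an RTMP-like protocol to the room's server):

    #include <queue>
    #include <vector>

    // Hypothetical server handle; a real one would wrap a streaming session.
    struct ServerConnection {
        void sendFrame(const std::vector<uint8_t>& encoded) { /* network I/O */ }
    };

    // Push-step sketch: each frame already loaded in the push page is also
    // wrapped into the live stream and sent, frame by frame, to the server.
    void pushLoop(const SharedBuffer& shared,
                  std::queue<AddressingInfo>& pending,
                  ServerConnection& server) {
        while (!pending.empty()) {
            AddressingInfo a = pending.front();
            pending.pop();
            server.sendFrame(shared.load(a));   // one live-stream frame per message
        }
    }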
A live broadcast stream pushing apparatus adapted to the purpose of the present application includes:
the video frame acquisition module is used for acquiring video frames according to the configuration information of the live streaming page;
the addressing information acquisition module is used for caching the video frame to a shared buffer area and acquiring the addressing information of the video frame in the shared buffer area;
the addressing information transmission module is used for transmitting the addressing information of the video frames to the live streaming page so as to control the live streaming page to load the corresponding video frames from the shared buffer area according to the addressing information;
and the live stream pushing module is used for pushing a live stream to a live broadcasting room, and the live stream contains the video frames loaded in the live stream pushing page.
In a further embodiment, the video frame acquisition module includes:
the application list display sub-module is used for displaying an application list in the live streaming page, and the application list comprises at least one application program suitable for providing the video frame;
and the acquisition instruction response submodule is used for responding to an acquisition instruction acting on any application program in the application list so as to call the video rendering module to perform video acquisition and rendering on the graphical user interface of the application program, and acquire a video frame acquired and rendered by the video rendering module.
In a preferred embodiment, the video frame acquiring module further includes:
the video acquisition instruction response submodule is used for responding to a video acquisition instruction and determining a local cache identifier pointed by the video acquisition instruction;
and the video data acquisition submodule is used for acquiring the video data corresponding to the local cache identifier from the storage space of the equipment so as to acquire the video frame from the video data.
In a preferred embodiment, the video frame acquiring module further includes:
the synthesis instruction response submodule is used for responding to a synthesis instruction of the live streaming page and determining a video element pointed by the synthesis instruction;
the video element synthesis submodule is used for calling a video rendering module and synthesizing the video elements to a synthesis position appointed by the synthesis instruction in a video picture;
and the video frame determination submodule is used for acquiring the video frame which is synthesized.
In a further embodiment, the addressing information obtaining module includes:
the video frame caching submodule is used for caching the video frame obtained by rendering into the shared buffer area by the video rendering module;
a buffer address determining submodule, configured to determine, by the video rendering module, a buffer address of the video frame in the shared buffer;
and the addressing information encapsulation submodule is used for encapsulating, by the video rendering module, the buffer address into the addressing information of the video frame.
In a further embodiment, the live stream pushing module comprises:
the live broadcasting instruction response submodule is used for responding to a live broadcasting instruction in the live broadcasting push flow page and determining a live broadcasting room corresponding to the live broadcasting push flow page;
the live stream pushing sub-module is used for pushing the live stream containing the video frames loaded and output in the live stream pushing page to a server associated with the live broadcast room for broadcasting;
and the live stream broadcasting submodule is used for repeating the pushing frame by frame, so that the live stream containing the video frames is pushed to the server, driving the server to broadcast the live stream to the live broadcast room for playing.
In a further embodiment, the addressing information transmission module includes:
the addressing information pushing sub-module is used for receiving the addressing information pushed by the video rendering module by a browser kernel in the live streaming page;
a video frame obtaining submodule, configured to obtain, by the browser kernel, a video frame pointed to by a buffer address from the shared buffer according to the buffer address included in the addressing information;
and the video frame loading submodule is used for loading the video frame into a video playing window of the live push streaming page by the browser kernel to output and display.
The electronic device comprises a central processing unit and a memory, wherein the central processing unit is used for calling and running a computer program stored in the memory to execute the steps of the live push stream processing method.
The non-volatile storage medium stores a computer program implemented according to the live streaming processing method, and when the computer program is called by a computer, the computer program executes the steps included in the corresponding method.
Compared with the prior art, the application has the following advantages:
the application improves video frame transmission logic, is used for caching video frames by constructing the shared buffer area, so that the video frames between the cross-application only need to transmit the identification of the video frames prestored in the buffer area, and the receiver application can acquire the video frames from the shared buffer area to output and play, replaces the traditional multi-stage copy and transmission of the video frames, reduces the performance consumption of equipment, saves the operation and storage and computational power, and improves the overall drawing efficiency.
Firstly, the video frame is not required to be copied and copied, the video frame is not required to be transmitted in a copy transmission mode, the performance consumed by multi-stage copying and data transmission of the video frame is reduced, when the method is applied to a live broadcast scene, the efficiency of transmitting the video frame can be improved, the drawing efficiency of the video frame in the live broadcast push streaming page is further improved, a video picture output by a main broadcast end and a video picture output by a spectator end are kept synchronous as much as possible, the interaction between the main broadcast and the spectator is more instant, and the whole live broadcast effect is further improved.
Secondly, the method stores the video frames into the shared buffer area to transmit the video frames, which not only can greatly reduce the performance consumption of the video frame transmission on the equipment, but also can make the video frame output and play more smoothly, not only can meet the video output and play at 24 frames per second required by the common live broadcast service scene, but also can meet the high frame number output and play requirement of 60 frames per second or 144 frames per second, and because of the improvement of the transmission efficiency, the method can transmit the high code rate video frames with larger file size to output and play under the condition of lower performance occupation, so that the live broadcast pictures, especially the game live broadcast pictures, are more exquisite and smooth, and the watching experience of audiences in the live broadcast room is comprehensively improved.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic diagram of a typical network deployment architecture related to implementing the technical solution of the present application;
fig. 2 is a schematic flowchart of an exemplary embodiment of a live streaming processing method according to the present application;
fig. 3 is a schematic diagram of a graphical user interface of a live streaming page according to the present application;
FIG. 4 is a flowchart illustrating specific steps of one embodiment of step S11 in FIG. 2;
FIG. 5 is a schematic diagram of a graphical user interface for an application list according to the present application;
FIG. 6 is a schematic flowchart illustrating specific steps of another embodiment of step S11 in FIG. 2;
FIG. 7 is a flowchart illustrating specific steps of another embodiment of step S11 in FIG. 2;
FIG. 8 is a flowchart illustrating specific steps of one embodiment of step S12 in FIG. 2;
FIG. 9 is a flowchart illustrating specific steps of one embodiment of step S13 in FIG. 2;
FIG. 10 is a flowchart illustrating specific steps of one embodiment of step S14 in FIG. 2;
fig. 11 is a functional block diagram of an exemplary embodiment of a live push stream processing apparatus of the present application;
fig. 12 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, "client," "terminal," and "terminal device" as used herein include both devices that are wireless signal receivers, which are devices having only wireless signal receivers without transmit capability, and devices that are receive and transmit hardware, which have receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: cellular or other communication devices such as personal computers, tablets, etc. having single or multi-line displays or cellular or other communication devices without multi-line displays; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client," "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client", "terminal Device" used herein may also be a communication terminal, a web terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, and the like.
The hardware referred to by names such as "server", "client" and "service node" is essentially an electronic device with the capabilities of a personal computer: a hardware device having the necessary components disclosed by the von Neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device and an output device. A computer program is stored in the memory, and the central processing unit calls the program stored in external memory into internal memory to run, executes the instructions in the program, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.
Referring to fig. 1, the hardware basis required for implementing the related art embodiments of the present application may be deployed according to the architecture shown in the figure. The server 80 is deployed at the cloud end, and serves as a business server, and is responsible for further connecting to a related data server and other servers providing related support, so as to form a logically associated server cluster to provide services for related terminal devices, such as a smart phone 81 and a personal computer 82 shown in the figure, or a third-party server (not shown in the figure). Both the smart phone and the personal computer can access the internet through a known network access mode, and establish a data communication link with the cloud server 80 so as to run a terminal application program related to the service provided by the server.
For the server, the application program is usually constructed as a service process, and a corresponding program interface is opened for remote call of the application program running on various terminal devices.
An application program here refers to a program running on a server or a terminal device that implements the related technical solution of this application in a programmed manner. Its program code can be saved, in the form of computer-executable instructions, in a nonvolatile storage medium recognizable by a computer, and is called into memory by the central processing unit to run; the related device of this application is constructed by running the application program on the computer.
The technical scheme suitable for being implemented in the terminal device in the application can also be programmed and built in an application program providing live webcasting, and the technical scheme is used as a part of extended functions. The live webcast refers to a live webcast room network service realized based on the network deployment architecture.
The live broadcast room is a video chat room realized by means of an internet technology, generally has an audio and video broadcast control function and comprises a main broadcast user and audience users, wherein the audience users can comprise registered users registered in a platform or unregistered tourist users; either registered users who are interested in the anchor user or registered or unregistered users who are not interested in the anchor user. The interaction between the anchor user and the audience user can be realized through known online interaction modes such as voice, video, characters and the like, generally, the anchor user performs programs for the audience user in the form of audio and video streams, and economic transaction behaviors can also be generated in the interaction process. Of course, the application form of the live broadcast room is not limited to online entertainment, and can be popularized to other relevant scenes, such as an educational training scene, a video conference scene, a product recommendation and sale scene, and any other scene needing similar interaction.
Those skilled in the art will appreciate that, although the various methods of the present application are described based on the same concept so that they share content with one another, they may be performed independently unless otherwise specified. Likewise, each embodiment disclosed in this application is proposed on the same inventive concept; therefore, concepts expressed identically, and concepts whose expressions differ but have been adapted merely for convenience, should be understood equivalently.
Referring to fig. 2, in an exemplary embodiment of a live streaming processing method of the present application, the method includes the following steps:
step S11, acquiring video frames according to the configuration information of the live streaming page:
and the current application program analyzes the configuration information of the live streaming page, and determines a video source pointed by the configuration information so as to acquire the video frame in the video source.
The configuration information is generated by the live streaming page in response to a video source determination event. Such an event is generally triggered when the user determines a corresponding video source through a control in the page. For example, the page provides a video element composition control through which the user composes a video element into the video picture of the live stream; determining the picture with the composed element as the video source constitutes the video source determination event, in response to which the page generates the configuration information. The video source pointed to by the configuration information is then the video picture whose element composition is complete, and the current application program invokes the video rendering module according to the configuration information to perform composition rendering of the video element on the live stream and to acquire the video frames of the composed picture.
Furthermore, besides a picture composed with video elements, the video source pointed to by the configuration information can also be a video picture generated by capturing the graphical user interface of a corresponding application program. For example, the live push page provides an application capture control; after the user selects an application program through it, determining the picture captured from that program's graphical user interface as the video source constitutes the video source determination event. The live push streaming page generates the configuration information in response, the video source pointed to is the captured picture, and the current application program invokes the video rendering module according to the configuration information to capture the program's graphical user interface and obtain the video frames of the captured picture.
The configuration information includes target information pointing to the video source, so that the current application program can obtain the corresponding video frames according to it. For example, when the video source is the graphical user interface of an application program, the target information is generally the application ID or application name of that program; when the video source is video data in the storage space, the target information is the local cache identifier of that video data.
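As a concrete illustration, such target information could be carried in a small tagged structure; the type and field names below are assumptions for illustration only, not part of the disclosure:

    #include <string>

    // Hypothetical shape of the configuration information of the acquisition step.
    enum class VideoSourceKind { AppInterface, LocalVideoData, ComposedPicture };

    struct StreamPageConfig {
        VideoSourceKind source;
        std::string app_id;          // target info when capturing an application GUI
        std::string local_cache_id;  // target info when reading cached video data
    };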
The video rendering module is generally built on the WebGL drawing standard and is configured to capture and render the video source and obtain the resulting video frames. For example, when the video source is the graphical user interface of an application program, the module captures the program's current interface frame by frame and renders it into video frames, so that the live streaming page loads each frame in turn and outputs, frame by frame, the picture of the program's current graphical user interface.
The live push pages are typically written in hypertext markup language, that is, a language conforming to the specifications of the Web's core language HTML, in particular HTML5. Such pages consist of a series of tags through which documents on the network can be formatted uniformly, connecting dispersed internet resources into a logical whole.
Specifically, referring to fig. 3, the graphical user interface shown there is the live broadcast push page. The user selects a corresponding video element through the video element display calling area 301 and performs the video element composition operation; determining the picture with the composed element as the video source triggers the video source determination event, in response to which the page generates the configuration information. The video source pointed to is the picture whose element composition is complete, and the current application program triggers the video rendering module according to the configuration information to perform composition rendering of the video element on the live stream and to acquire the video frames of the composed picture.
The live streaming page is provided with a browser kernel, which works with the video rendering module to complete the on-screen loading of video frames: the kernel receives the addressing information corresponding to the video frames captured and rendered by the module, obtains those frames from the shared buffer area according to the addressing information, and loads them into the live streaming page for output and playback. For the specific implementation of the shared buffer area and the browser kernel, refer to the related embodiments in the subsequent steps; this is not repeated here.
In an embodiment, referring to fig. 3 to 5, when the video source pointed to by the configuration information is a graphical user interface of an application program, the current application program performs the following specific steps:
step S111, displaying an application list in the live streaming page, where the application list includes at least one application program suitable for providing the video frame:
and the current application program displays the application list in the live streaming page, wherein the application list comprises one or more application programs for providing the video frames, so that a user can select a corresponding application program through the application list to acquire the video frames.
Referring to fig. 3 and fig. 5, fig. 3 is a graphical user interface of the live streaming page, and a user displays an application list shown in fig. 5 in the live streaming page through a video frame acquisition source determining control 307, where the application list displays a plurality of application programs suitable for providing acquisition of the video frame.
In an embodiment, referring to fig. 5, the application programs displayed in the application list shown in fig. 5 as suitable for providing the video frames are those currently running on the device. A browse-native-applications control 502 is also provided in the list, through which the user can select an application program that is not running but is suitable for providing the video frames; after the user selects such a program through the control 502, it is started or woken up to provide the video frames.
Step S112, responding to a capture instruction acting on any application program in the application list, so as to invoke a video rendering module to perform video capture rendering on a graphical user interface of the application program, so as to obtain a video frame obtained by capture rendering by the video rendering module:
and the current application program responds to the acquisition instruction of any one application program in the application list, determines the application program pointed by the acquisition instruction, calls the video rendering module to perform video acquisition and rendering on the graphical user interface of the application program, and takes the video frame acquired by the video rendering module through frame-by-frame rendering as the video frame.
After the current application program responds to the acquisition instruction, the video rendering module is called to perform video acquisition rendering on the application program pointed by the instruction, the video is rendered into the video frame through the current graphical user interface of the acquisition application program, for example, if the application program pointed by the acquisition instruction is a game application program, the video rendering module acquires a game picture currently rendered by the GPU of the application program according to the video frame number specification of 24 frames per second or 60 frames per second, and renders the game page into the video frame.
Referring to fig. 5, when the user selects the B application 501 in the application list shown in fig. 5, the current application calls the capture instruction of the corresponding B application to perform the video capture and rendering on the graphical user interface of the B application by using the video rendering module, and obtains a video frame obtained by the video rendering module by performing frame-by-frame rendering and capturing.
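A minimal sketch of such a frame-by-frame capture loop is given below, with VideoFrame as in the earlier overview sketch; captureInterface() and renderAndCache() are hypothetical stand-ins for the video rendering module's capture of the selected program's graphical user interface:

    #include <atomic>
    #include <chrono>
    #include <thread>

    VideoFrame captureInterface();             // grab the program's current GUI picture
    void renderAndCache(const VideoFrame& f);  // hand the frame to the pipeline

    // Capture at a fixed frame-rate specification, e.g. 24 or 60 frames per second.
    void captureLoop(int fps, std::atomic<bool>& running) {
        const auto interval = std::chrono::microseconds(1'000'000 / fps);
        while (running) {
            const auto next = std::chrono::steady_clock::now() + interval;
            renderAndCache(captureInterface());
            std::this_thread::sleep_until(next);  // pace to the target frame rate
        }
    }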
In an embodiment, referring to fig. 6, when the video source pointed by the configuration information is video data in the storage space of the local device, the current application program will perform the following specific steps:
step S111', in response to the video acquisition instruction, determining a local cache identifier pointed by the video acquisition instruction:
and the current application program responds to the video acquisition instruction to determine the local cache identifier pointed by the video acquisition instruction.
The video acquisition instruction is triggered and generated by a user through a local video data selection control, the user selects video data stored in a storage space of the local device through the control, and the video acquisition instruction pointing to a local cache identifier of the video data in the storage control is generated, so that the current application program determines the local cache identifier pointed by the video acquisition instruction through the video acquisition instruction.
Step S112', obtaining video data corresponding to the local cache identifier from a storage space of the device, so as to obtain the video frame from the video data:
the current application program obtains the video data corresponding to the local cache identifier from the storage space of the local device according to the local cache identifier pointed by the obtaining instruction, and obtains the video frame from the video data, for example, loads the video data by calling the video rendering module, and takes a certain frame of the video frames of the video data as the video frame.
In an embodiment, referring to fig. 3 and fig. 7, when the video source pointed by the configuration information is video data for performing video element synthesis, the current application program performs the following specific steps:
step S111 ″, in response to a synthesis instruction of a live streaming page, determining a video element pointed by the synthesis instruction:
and the current application responds to the synthesis instruction of the live push streaming page to determine one or more video elements pointed by the synthesis instruction, and determines the synthesis positions of the video elements in the video picture according to the synthesis instruction.
Referring to fig. 3, the graphical user interface illustrated in fig. 3 is the live streaming page, a user selects the video elements synthesized in the video frame through the video element display calling area 301, the newly added video element control 303 provides the user with the video elements for adding synthesis to the video frame, the video editing area 304 provides the user with a customized synthesis position of each video element in the video frame, and the user performs a video element synthesis operation through any one or more of the video element display calling area 301, the newly added video element control 303, or the video editing area 304 to generate the synthesis instruction representing the video element synthesis operation, so that the current application program determines the video elements synthesized in the video frame and the synthesis positions of each video element in the video frame by responding to the synthesis instruction.
The video element generally refers to visual graphics and text information synthesized into a video picture, and is used for beautifying the playing effect of the video picture, and the types of the video element include: dynamic pictures, static pictures, video filters, text information, and the like.
The video element synthesizing operation generally includes operations such as a plane moving operation, a hierarchy adjusting operation, an adding operation or a deleting operation, and the like, which are performed on the video element in the video editing area or the video element display calling area, in the live streaming page.
The plane movement operation refers to an operation of adjusting the composite position of a certain video element in the video stream through the video editing area by a user.
The hierarchy adjustment operation refers to a user adjusting a hierarchy of a certain video element displayed in a video picture through the video editing area, for example, adjusting the video element to a topmost hierarchy or a bottommost hierarchy of each element in the video picture.
The adding operation refers to a user adding a new video element to the video stream for display through the video element display calling area: the user selects a corresponding video element from the several video elements provided by the live streaming page, or selects corresponding graphic or text content from the storage space of the device as the video element, and adds it to the video picture for display.
The deleting operation refers to an operation of deleting a certain video element added to the video picture through the video element display calling area or the video editing area by the user.
Step S112 ″, invoking a video rendering module to compose the video element to a composition position specified by the composition instruction in the video frame:
and the current application program responds to the synthesis instruction, determines the video elements to be synthesized and the synthesis positions of the video elements, and then calls a video rendering module to synthesize the video elements into the corresponding synthesis positions in the video picture.
Step S113 ″ acquires the video frame whose composition has been completed:
and the video rendering module performs the composition operation of the video elements in the video pictures frame by frame so that each frame of video picture which is synthesized by the video rendering module is taken as the video frame by the application program.
Step S12, buffering the video frame into a shared buffer, and obtaining the addressing information in the shared buffer:
and the current application program caches the video frame in the shared buffer area and acquires the addressing information representing the buffer address of the video frame in the shared buffer area.
The addressing information is used for representing the buffer address of the video frame in the shared buffer area, and currently, the video frame is stored in the shared buffer area, and the shared buffer area feeds back the addressing information representing the buffer address of the video frame in the buffer area, so that the live streaming page loads the corresponding video frame from the shared buffer area according to the addressing information. For the specific implementation manner of acquiring the video frame by the live streaming page, please refer to the related implementation manner in step S13, which is not repeated in this step.
The shared buffer area generally refers to a shared memory, the current application program caches the video frame in the shared memory, and triggers the shared memory to return the addressing information representing the memory address of the video frame in the shared memory to the current application program, so that the application program can obtain the addressing information of the video frame.
In one embodiment, the shared buffer refers to a shared texture, the shared texture is created in advance according to a video parameter specification of the video frame, the current application stores the video frame into the shared texture according to the video parameter specification represented by the shared texture, and acquires an identifier of the video frame in the shared texture as the addressing information of the video frame.
The organization mode of the shared buffer area is generally determined by the system environment of the device running the current application program: the program selects the organization mode that the environment supports, or fits best, and constructs the shared buffer area to cache video frames for data transfer.
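On a POSIX-style system, for example, the shared-memory embodiment could be realized with a named shared-memory region that both the video rendering module and the page's browser kernel map into their address spaces; the region name and size below are assumptions for illustration (on Windows, a shared GPU texture could play the same role, matching the shared-texture embodiment):

    #include <fcntl.h>     // O_CREAT, O_RDWR
    #include <sys/mman.h>  // shm_open, mmap
    #include <unistd.h>    // ftruncate, close

    // Create (or open) a named shared-memory region usable as the shared buffer area.
    void* mapSharedBuffer(const char* name, size_t size) {
        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  // named region, e.g. "/frames"
        if (fd < 0) return nullptr;
        if (ftruncate(fd, (off_t)size) != 0) { close(fd); return nullptr; }
        void* mem = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                       // the mapping outlives the descriptor
        return mem == MAP_FAILED ? nullptr : mem;
    }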
Specifically, the video frames obtained by the current application program generally come from capture and rendering by its built-in video rendering module, which is usually constructed on the WebGL drawing standard. In the traditional drawing flow, such a module has to duplicate each rendered video frame and transfer the copy to the live streaming page for output and playback. The present method instead stores the captured and rendered frame into the shared buffer area, obtains the addressing information representing its buffer address there, passes that information to the live streaming page, and lets the page fetch the frame directly from the shared buffer area for output and playback. The video rendering module therefore no longer needs to duplicate frames; the performance consumed by multi-stage copying and data transmission is removed, the drawing efficiency of video frames in the live streaming page is improved, and the live picture received at the audience terminal plays as synchronously as possible with the video source at the anchor terminal.
Referring to fig. 8, regarding an implementation of the video rendering module storing the video frame in the shared buffer and acquiring the addressing information of the video frame, the specific steps are as follows:
step S121, the video rendering module caches the rendered video frame in the shared buffer area:
and the video rendering module acquires and renders the video source pointed by the configuration information, and caches the video frame to the shared buffer area after acquiring the video frame acquired and rendered.
Step S122, the video rendering module determines the buffer address of the video frame in the shared buffer:
after the video rendering module buffers the video frame to the shared buffer area, the video rendering module determines the buffer address of the video frame in the shared buffer area so as to create the addressing information of the video frame.
The buffer address is used for representing the buffer position of the video frame in the shared buffer area, so that the video rendering module creates the addressing information of the video frame according to the buffer address.
Step S123, the video rendering module encapsulates the buffer address as the addressing information of the video frame:
and after determining the buffer address of the video frame in a shared buffer area, the video rendering module encapsulates the buffer address into the addressing information of the video frame, so that the live push streaming page acquires the video frame from the shared buffer area according to the buffer address contained in the addressing information and outputs and plays the video frame.
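Continuing the hypothetical sketch above, steps S121 to S123 reduce to caching the frame and serializing the returned buffer address into a compact addressing message; the '|'-separated field layout is an assumption made purely for illustration.

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Step S123 in miniature: wrap the buffer address into addressing information
// that can be pushed as a short string to the live push streaming page.
std::string EncapsulateAddressing(const AddressingInfo& info, uint64_t frame_id) {
    std::ostringstream msg;
    msg << frame_id << '|' << info.shm_name << '|'
        << info.offset << '|' << info.size;
    return msg.str();  // e.g. "42|/live_frames|1048576|8294400"
}
```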
Step S13, transferring the addressing information of the video frame to the live push streaming page, so as to control the page to load the corresponding video frame from the shared buffer according to the addressing information:
After the current application program caches the video frame into the shared buffer and acquires its addressing information, it pushes the addressing information into the live push streaming page. By invoking the browser kernel, the page triggers the kernel to fetch the video frame corresponding to the addressing information from the shared buffer for loading and playing.
The browser kernel is generally built on a browser-embedded framework, such as CEF, that exposes a C/C++ programming interface. According to the addressing information, the kernel queries the shared buffer for the addressed video frame and uploads it to the corresponding video playing window in the live push streaming page for output, for example the video playing window 305 in fig. 3.
By comparison, a traditional WebGL-based video rendering module must copy each captured and rendered video frame and transfer the copy to a browser kernel built on an embedded framework such as CEF, which then outputs it to the live push streaming page for playing; because the frames must be copied and transferred, these operations consume a great deal of device performance. By storing the video frames in the shared buffer and passing only the addressing information characterizing each frame's buffer address to the browser kernel, the frame data no longer travels by copy, device performance consumption falls, and the picture presented by the video source and the picture output by the live push streaming page play in the closest possible synchronization.
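If the browser kernel really is CEF-based, the addressing message can cross into the page's renderer process as an ordinary CEF process message; the sketch below assumes a recent CEF build (where SendProcessMessage is exposed on CefFrame), and the message name "frame-addressing" is an arbitrary placeholder.

```cpp
#include "include/cef_browser.h"
#include "include/cef_frame.h"
#include "include/cef_process_message.h"

// Only the short addressing string crosses the process boundary; the frame
// pixels stay put in the shared buffer.
void PushAddressingToPage(CefRefPtr<CefBrowser> browser,
                          const std::string& encoded) {
    CefRefPtr<CefProcessMessage> msg =
        CefProcessMessage::Create("frame-addressing");
    msg->GetArgumentList()->SetString(0, encoded);
    browser->GetMainFrame()->SendProcessMessage(PID_RENDERER, msg);
}
```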
Referring to fig. 9, an implementation in which the browser kernel in the live push streaming page obtains the video frame according to the addressing information and outputs it proceeds through the following steps:
Step S131, the browser kernel in the live push streaming page receives the addressing information pushed by the video rendering module:
The browser kernel in the live push streaming page receives the addressing information pushed by the video rendering module, parses it, and obtains the buffer address of the video frame in the shared buffer.
Step S132, the browser kernel obtains, according to the buffer address contained in the addressing information, the video frame pointed to by the buffer address from the shared buffer:
According to the buffer address contained in the addressing information, the browser kernel queries the shared buffer for the video frame the address points to, and acquires it for output and playing.
Step S133, the browser kernel loads the video frame into a video playing window of the live push streaming page for output and display:
After acquiring the video frame corresponding to the buffer address from the shared buffer, the browser kernel loads it into the video playing window of the live push streaming page for output and display.
Referring to fig. 3, after acquiring the video frame, the browser kernel performs the on-screen operation and loads the frame into the video playing window 305 shown in fig. 3 for output and playing.
In one embodiment, referring to fig. 3, after acquiring the video frame, the browser kernel loads it into the video playing window 308 shown in fig. 3 for output and playing. This embodiment generally applies when the video source of the frame is the graphical user interface of an application program currently running on the device, or video data held in the device's storage space.
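On the receiving side, steps S131 to S133 amount to parsing the addressing message, mapping the shared region read-only, and handing the mapped pixels to the playing window. The sketch below mirrors the hypothetical '|'-separated layout used earlier; the actual on-screen upload into window 305 or 308 is left out as a stub concern.

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct ParsedAddressing {
    uint64_t frame_id;
    std::string shm_name;
    size_t offset, size;
};

// Step S131: parse the addressing information pushed by the rendering module.
ParsedAddressing ParseAddressing(const std::string& encoded) {
    ParsedAddressing p{};
    std::istringstream in(encoded);
    std::string field;
    std::getline(in, field, '|');  p.frame_id = std::stoull(field);
    std::getline(in, p.shm_name, '|');
    std::getline(in, field, '|');  p.offset = std::stoull(field);
    std::getline(in, field, '|');  p.size = std::stoull(field);
    return p;
}

// Step S132: map the shared region read-only and return a zero-copy view of
// the addressed frame. The caller munmap()s the region after step S133.
const uint8_t* LoadFrame(const ParsedAddressing& p, void** region_out) {
    int fd = shm_open(p.shm_name.c_str(), O_RDONLY, 0);
    void* region = mmap(nullptr, p.offset + p.size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);  // the mapping remains valid after the descriptor is closed
    *region_out = region;
    return static_cast<const uint8_t*>(region) + p.offset;
}
```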
Step S14, pushing a live stream to a live broadcast room, the live stream containing the video frames loaded in the live push streaming page:
The current application program pushes the live stream containing the video frames loaded in the live push streaming page to the associated live broadcast room: it pushes the live stream to the corresponding media server, driving the server to broadcast the stream to the live broadcast room associated with the application for output and playing.
The current application program is a live broadcast service application, and the identity logged into it is generally an anchor user, so the terminal device running the application is characterized as the anchor client. The application pushes the live stream to the corresponding media server, driving the server to broadcast the stream to the audience terminals of the live broadcast room for output and playing.
As described above, because the method transfers video frames through the shared buffer, the performance cost of frame transfer on the device drops sharply and the frames are output and played more smoothly, so that streams at 60 or even 144 frames per second, well above the common 24, can be output and played. Applied to a live broadcast scene, this smooth frame transfer keeps the frames output at the anchor side and at the audience side as synchronized as possible, makes interaction between anchor and audience more immediate, and thus improves the overall live broadcast effect.
Referring to figs. 3 and 10, the current application program pushes the live stream containing the video frames to the live broadcast room through the following steps:
Step S141, responding to the live broadcast instruction in the live push streaming page, and determining the live broadcast room corresponding to the page:
The current application program responds to the live broadcast instruction in the live push streaming page and determines the live broadcast room corresponding to the page.
Referring to fig. 3, the playing control 306 is used to trigger generation of the live broadcast instruction: when the user touches the playing control 306 in the live push streaming page shown in fig. 3, the instruction is generated, and the current application program responds by determining the live broadcast room of the anchor user logged into the application.
Step S142, pushing the live stream containing the video frame loaded and output in the live push streaming page to the server associated with the live broadcast room for broadcasting:
The current application program pushes the live stream containing the video frame currently loaded and output in the live push streaming page to the server that provides the live-stream broadcasting service for the live broadcast room, driving the server to push the stream to the room for output and playing.
Step S143, by analogy, pushing the live stream containing the video frames frame by frame to the server, and driving the server to broadcast the live stream to the live broadcast room for playing:
Following step S142, the current application program continuously pushes, frame by frame, the live stream containing the video frame currently loaded and output in the live push streaming page to the server, driving the server to broadcast the stream frame by frame to the live broadcast room for playing and thereby keeping the push of the live stream smooth.
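A minimal sketch of this frame-by-frame loop follows; MediaServerClient, EncodeFrame, and CurrentPageFrame are hypothetical stand-ins for the encoder, the push client (e.g. an RTMP pusher), and the page's frame accessor, none of which the method prescribes.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <string>
#include <thread>
#include <vector>

struct MediaServerClient {                                    // hypothetical
    bool IsLive(const std::string& room) const { return true; }
    void Send(const std::vector<uint8_t>& packet) { /* network I/O */ }
};

std::vector<uint8_t> EncodeFrame(const uint8_t* pixels, size_t size) {
    return std::vector<uint8_t>(pixels, pixels + size);       // stand-in encoder
}

// Steps S141–S143 as a loop: each frame currently loaded in the page is
// encoded and pushed in order, paced to the target frame rate.
void PushLoop(MediaServerClient& server, const std::string& room_id,
              const uint8_t* (*CurrentPageFrame)(size_t* size), int fps) {
    const auto interval = std::chrono::microseconds(1000000 / fps);
    while (server.IsLive(room_id)) {
        const auto deadline = std::chrono::steady_clock::now() + interval;
        size_t size = 0;
        const uint8_t* pixels = CurrentPageFrame(&size);
        server.Send(EncodeFrame(pixels, size));               // push frame by frame
        std::this_thread::sleep_until(deadline);              // hold the frame rate
    }
}
```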
Further, a live push stream processing apparatus of the present application can be constructed by functionalizing the steps of the methods disclosed in the embodiments above. Following this idea, please refer to fig. 11; in an exemplary embodiment, the apparatus includes: a video frame acquisition module 11, configured to acquire a video frame according to configuration information of a live push streaming page; an addressing information acquisition module 12, configured to cache the video frame into a shared buffer and acquire the addressing information of the video frame in the shared buffer; an addressing information transfer module 13, configured to transfer the addressing information of the video frame to the live push streaming page, so as to control the page to load the corresponding video frame from the shared buffer according to the addressing information; and a live stream pushing module 14, configured to push a live stream containing the video frames loaded in the live push streaming page to a live broadcast room.
In one embodiment, the video frame acquiring module 11 includes: the application list display sub-module is used for displaying an application list in the live streaming page, and the application list comprises at least one application program suitable for providing the video frame; and the acquisition instruction response submodule is used for responding to an acquisition instruction acting on any application program in the application list so as to call the video rendering module to perform video acquisition and rendering on the graphical user interface of the application program, and acquire a video frame acquired and rendered by the video rendering module.
In another embodiment, the video frame acquiring module 11 further includes: the video acquisition instruction response submodule is used for responding to a video acquisition instruction and determining a local cache identifier pointed by the video acquisition instruction; and the video data acquisition submodule is used for acquiring the video data corresponding to the local cache identifier from the storage space of the equipment so as to acquire the video frame from the video data.
In another embodiment, the video frame acquiring module 11 further includes: the synthesis instruction response submodule, used for responding to a synthesis instruction of the live push streaming page and determining the video element pointed to by the synthesis instruction; the video element synthesis submodule, used for calling the video rendering module to synthesize the video element to the synthesis position specified by the synthesis instruction in the video picture; and the video frame determination submodule, used for acquiring the synthesized video frame.
In one embodiment, the addressing information obtaining module 12 includes: the video frame caching submodule is used for caching the video frame obtained by rendering into the shared buffer area by the video rendering module; a buffer address determining submodule, configured to determine, by the video rendering module, a buffer address of the video frame in the shared buffer; and the addressing address packaging submodule is used for packaging the buffering address into the addressing information of the video frame by the video rendering module.
In one embodiment, the addressing information transfer module 13 includes: an addressing information pushing submodule, used for the browser kernel in the live push streaming page to receive the addressing information pushed by the video rendering module; a video frame obtaining submodule, used for the browser kernel to obtain, according to the buffer address contained in the addressing information, the video frame pointed to by the buffer address from the shared buffer; and a video frame loading submodule, used for the browser kernel to load the video frame into a video playing window of the live push streaming page for output and display.
In one embodiment, the live stream pushing module 14 includes: a live broadcast instruction response submodule, used for responding to the live broadcast instruction in the live push streaming page and determining the live broadcast room corresponding to the page; a live stream pushing submodule, used for pushing the live stream containing the video frames loaded and output in the live push streaming page to the server associated with the live broadcast room for broadcasting; and a live stream broadcasting submodule, used for pushing, by analogy, the live stream containing the video frames frame by frame to the server and driving the server to broadcast the live stream to the live broadcast room for playing.
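For illustration, the module breakdown of fig. 11 maps onto a simple composition of objects; every type below is a hypothetical stub that only mirrors the structure described above.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

struct Frame    { const uint8_t* pixels = nullptr; size_t size = 0; };
struct AddrInfo { std::string shm_name; size_t offset = 0, size = 0; };

struct VideoFrameAcquisitionModule { Frame Acquire() { return {}; } };           // module 11
struct AddressingInfoAcquisitionModule {
    AddrInfo Cache(const Frame& f) { return {"/live_frames", 0, f.size}; }       // module 12
};
struct AddressingInfoTransferModule { void Deliver(const AddrInfo&) {} };        // module 13
struct LiveStreamPushModule { void PushToRoom(const std::string&) {} };          // module 14

struct LivePushApparatus {
    void Tick(const std::string& room_id) {
        Frame f = acquire.Acquire();
        AddrInfo info = addressing.Cache(f);
        transfer.Deliver(info);      // the page loads the frame via the buffer
        pusher.PushToRoom(room_id);  // stream the loaded frame to the room
    }
    VideoFrameAcquisitionModule     acquire;
    AddressingInfoAcquisitionModule addressing;
    AddressingInfoTransferModule    transfer;
    LiveStreamPushModule            pusher;
};
```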
In order to solve the foregoing technical problem, an embodiment of the present application further provides a computer device configured to run a computer program implementing the live push stream processing method. Referring to fig. 12, fig. 12 is a block diagram of the basic structure of the computer device of this embodiment.
As shown in fig. 12, the internal structure of the computer device is illustrated schematically. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium stores an operating system, a database and computer-readable instructions; the database can store sequences of control information, and the computer-readable instructions, when executed by the processor, cause the processor to implement a live push stream processing method. The processor provides the computing and control capability supporting the operation of the whole device. The memory may store computer-readable instructions that, when executed by the processor, cause the processor to perform the live push stream processing method. The network interface is used for connecting and communicating with terminals. Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed solution and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange its components differently.
In this embodiment, the processor is configured to execute the specific functions of each module and submodule of the live push stream processing apparatus of the present application, and the memory stores the program codes and the various data required to execute these modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores the program codes and data required to execute all the modules and submodules of the apparatus, and the server can call them to perform the functions of every submodule.
The present application further provides a non-volatile storage medium in which the live push stream processing method, written as a computer program, is stored in the form of computer-readable instructions; when executed by one or more processors, the instructions cause the program to run in a computer and the one or more processors to perform the steps of the live push stream processing method of any of the embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk or a read-only memory (ROM), or a random access memory (RAM).
To sum up, the present application improves the video frame transfer logic. By constructing a shared buffer to cache video frames, transferring a video frame across applications requires passing only the identifier of the frame pre-stored in the buffer, and the receiving application can fetch the frame from the shared buffer for output and playing. This replaces the traditional multi-level copying and transfer of video frames, reduces the performance consumption of the device, saves memory and computing power, and improves the overall drawing efficiency.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, their execution is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be executed at different times, and need not proceed sequentially but may be performed in turns or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Those of skill in the art will appreciate that the various operations, methods, and steps in the flows, acts, or solutions discussed in this application can be interchanged, modified, combined, or deleted. Further, other steps, measures, or schemes within the various operations, methods, or flows discussed in this application can also be alternated, modified, rearranged, decomposed, combined, or deleted, as can steps, measures, and schemes in the prior art that include operations, methods, or flows disclosed in the present application.
The foregoing is only a partial embodiment of the present application. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (11)

1. A live broadcast push stream processing method is characterized by comprising the following steps:
acquiring a video frame according to configuration information of a live streaming page;
caching the video frame to a shared buffer area to obtain the addressing information of the video frame in the shared buffer area;
transmitting addressing information of the video frames to the live streaming page so as to control the live streaming page to load corresponding video frames from the shared buffer area according to the addressing information;
and pushing a live stream to a live broadcasting room, wherein the live stream comprises the video frames loaded in the live stream pushing page.
2. The method of claim 1, wherein the step of obtaining video frames according to configuration information of a live push streaming page comprises:
displaying an application list in the live push streaming page, wherein the application list comprises at least one application program suitable for providing the video frames;
responding to a collection instruction acting on any application program in the application list, and calling a video rendering module to perform video collection and rendering on a graphical user interface of the application program so as to obtain the video frame obtained by collection and rendering of the video rendering module.
3. The method of claim 1, wherein the step of obtaining video frames according to configuration information of a live push streaming page comprises:
responding to a video acquisition instruction, and determining a local cache identifier pointed by the video acquisition instruction;
and acquiring the video data corresponding to the local cache identifier from the storage space of the equipment so as to acquire the video frame from the video data.
4. The method of claim 1, wherein the step of obtaining video frames according to configuration information of a live push streaming page comprises:
responding to a synthesis instruction of the live streaming page, and determining a video element pointed by the synthesis instruction;
calling a video rendering module to synthesize the video elements to a synthesis position specified by the synthesis instruction in a video picture;
and acquiring the video frame of which the composition is finished.
5. The method according to claim 1, wherein in the step of obtaining the video frame according to the configuration information of the live streaming page, a video rendering module is invoked to perform video rendering according to the configuration information to obtain the rendered video frame.
6. The method of claim 5, wherein the step of buffering the video frames into a shared buffer and obtaining their addressing information in the shared buffer comprises:
the video rendering module caches the video frames obtained by rendering into the shared buffer area;
the video rendering module determines the buffer address of the video frame in the shared buffer area;
the video rendering module encapsulates the buffer address as the addressing information for the video frame.
7. The method as claimed in claim 5 or 6, wherein the step of transferring addressing information of the video frames to the live streaming page to control the live streaming page to load the corresponding video frames from the shared buffer according to the addressing information comprises:
a browser kernel in the live streaming page receives the addressing information pushed by the video rendering module;
the browser kernel acquires a video frame pointed by a buffer address from the shared buffer area according to the buffer address contained in the addressing information;
and the browser kernel loads the video frame into a video playing window of the live push streaming page for output and display.
8. The method of claim 1, wherein the step of pushing a live stream to a live room, the live stream containing the video frames loaded in the live push page, comprises:
responding to a live broadcast instruction in the live broadcast push flow page, and determining a live broadcast room corresponding to the live broadcast push flow page;
pushing the live broadcast stream containing the video frames loaded and output in the live broadcast stream pushing page to a server associated with the live broadcast room for broadcasting;
and by analogy, pushing the live stream containing the video frames to the server frame by frame, and driving the server to broadcast the live stream to the live broadcasting room for playing.
9. A live streaming apparatus, comprising:
the video frame acquisition module is used for acquiring video frames according to the configuration information of the live streaming page;
the addressing information acquisition module is used for caching the video frame to a shared buffer area and acquiring the addressing information of the video frame in the shared buffer area;
the addressing information transmission module is used for transmitting the addressing information of the video frames to the live streaming page so as to control the live streaming page to load the corresponding video frames from the shared buffer area according to the addressing information;
and the live stream pushing module is used for pushing a live stream to a live broadcasting room, and the live stream contains the video frames loaded in the live stream pushing page.
10. An electronic device comprising a central processor and a memory, wherein the central processor is configured to invoke execution of a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 8.
11. A non-volatile storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implemented according to the method of any one of claims 1 to 8, which, when invoked by a computer, performs the steps comprised by the method.
CN202110857018.3A 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof Active CN113596495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110857018.3A CN113596495B (en) 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110857018.3A CN113596495B (en) 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof

Publications (2)

Publication Number Publication Date
CN113596495A true CN113596495A (en) 2021-11-02
CN113596495B CN113596495B (en) 2023-11-24

Family

ID=78251002

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110857018.3A Active CN113596495B (en) 2021-07-28 2021-07-28 Live broadcast push stream processing method and device, equipment and medium thereof

Country Status (1)

Country Link
CN (1) CN113596495B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100254574A1 (en) * 2009-04-07 2010-10-07 Shao-Yi Chien Method for decomposition and rendering of video content and user interface for operating the method thereof
CN105657540A (en) * 2016-01-05 2016-06-08 珠海全志科技股份有限公司 Video decoding method adapted to Android system and device thereof
CN107690622A (en) * 2016-08-26 2018-02-13 华为技术有限公司 Realize the method, apparatus and system of hardware-accelerated processing
CN107920258A (en) * 2016-10-11 2018-04-17 中国移动通信有限公司研究院 A kind of data processing method and device
CN108449634A (en) * 2018-03-27 2018-08-24 武汉斗鱼网络科技有限公司 A kind of decoded playback method of multi-process, computer equipment and storage medium
CN108509272A (en) * 2018-03-22 2018-09-07 武汉斗鱼网络科技有限公司 GPU video memory textures are copied to the method, apparatus and electronic equipment of Installed System Memory
CN108600826A (en) * 2018-05-22 2018-09-28 深圳市茁壮网络股份有限公司 A kind of method and device playing TS streams
CN109168021A (en) * 2018-10-25 2019-01-08 京信通信系统(中国)有限公司 A kind of method and device of plug-flow
CN109889875A (en) * 2019-01-23 2019-06-14 北京奇艺世纪科技有限公司 Communication means, device, terminal device and computer-readable medium
CN110891178A (en) * 2019-10-29 2020-03-17 福州瑞芯微电子股份有限公司 Method and device for real-time rendering of video
CN112995753A (en) * 2019-12-16 2021-06-18 中兴通讯股份有限公司 Media stream distribution method, CDN node server, CDN system and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
鱼摆摆 (Yubaibai), https://www.yubaibai.com/cn/article/5611658, 13 July 2020 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114866792A (en) * 2022-03-31 2022-08-05 广州方硅信息技术有限公司 Live video quality detection method and device, equipment and medium thereof

Also Published As

Publication number Publication date
CN113596495B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN111901674B (en) Video playing control method and device
CN108174272B (en) Method and device for displaying interactive information in live broadcast, storage medium and electronic equipment
CN106713937A (en) Video playing control method and device as well as terminal equipment
US20100268694A1 (en) System and method for sharing web applications
US8856212B1 (en) Web-based configurable pipeline for media processing
CN113727178B (en) Screen-throwing resource control method and device, equipment and medium thereof
WO2023147758A1 (en) Method and apparatus for processing cloud game resource data, and computer device and storage medium
CN114422821A (en) Live broadcast home page interaction method, device, medium and equipment based on virtual gift
CN113613027A (en) Live broadcast room recommendation method and device and computer equipment
CN114205680A (en) Video cover display method and device, equipment, medium and product thereof
CN113596495B (en) Live broadcast push stream processing method and device, equipment and medium thereof
JP2021534606A (en) Synchronization of digital content consumption
CN113824979A (en) Live broadcast room recommendation method and device and computer equipment
JP5624056B2 (en) Method, apparatus and computer program for generating a query
CN113556610B (en) Video synthesis control method and device, equipment and medium thereof
CN113727177B (en) Screen-throwing resource playing method and device, equipment and medium thereof
CN114302163B (en) Live broadcasting room advertisement processing method and device, equipment and medium thereof
CN113727125B (en) Live broadcast room screenshot method, device, system, medium and computer equipment
CN114205366A (en) Cross-platform data synchronization method and device, equipment, medium and product thereof
CN116012404A (en) Video image segmentation method, device, equipment and medium thereof
CN114501065A (en) Virtual gift interaction method and system based on face jigsaw and computer equipment
EP3229478B1 (en) Cloud streaming service system, image cloud streaming service method using application code, and device therefor
CN113727180A (en) Screen projection playing control method and device, equipment and medium thereof
CN113569089A (en) Information processing method, device, server, equipment, system and storage medium
CN113590063A (en) Method for controlling multimedia presentation by third party

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant