CN111010588B - Live broadcast processing method and device, storage medium and equipment - Google Patents


Info

Publication number: CN111010588B
Authority: CN (China)
Prior art keywords: target area, anchor, video picture, display, area
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201911354791.7A
Other languages: Chinese (zh)
Other versions: CN111010588A
Inventor: 梁衍鹏
Current assignee: Chengdu Kugou Business Incubator Management Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Original assignee: Chengdu Kugou Business Incubator Management Co., Ltd.
Application filed by Chengdu Kugou Business Incubator Management Co., Ltd.
Priority to CN201911354791.7A
Published as CN111010588A; granted and published as CN111010588B

Classifications

    • H04N 21/2187 — Live feed (source of audio or video content)
    • H04N 21/234309 — Reformatting of video signals by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • H04N 21/234363 — Reformatting of video signals by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • H04N 21/234381 — Reformatting of video signals by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout
    • H04N 21/4788 — Supplemental services communicating with other users, e.g. chatting

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a live broadcast processing method, apparatus, storage medium, and device, belonging to the field of internet technology. The method comprises: determining a target area and a non-target area in the display area of a live broadcast room, wherein the target area is used for displaying the video picture of a first anchor captured by an anchor terminal and the non-target area does not display that video picture; filling the non-target area with pure color pixels, and encoding the video picture data displayed in the target area together with the pure color pixel data filled into the non-target area; and sending the live streaming data obtained after the encoding processing to at least one viewer terminal. When the first anchor's video picture does not need to occupy the entire live display area, the non-target area is filled with pure color pixels before the stream is pushed to viewer terminals; since the pure color pixel data occupies very few bytes after encoding, the bit rate is reduced.
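The byte-saving claim can be illustrated with ordinary data compression. The sketch below uses zlib as a stand-in for a video encoder (both exploit redundancy, though real video codecs work very differently) and a pseudo-random byte buffer as a stand-in for camera pixels; the frame size and all names are illustrative, not details from the patent.

```python
import os
import zlib

# A hypothetical 8-bit grayscale frame: 640x360 "video" pixels.
WIDTH, HEIGHT = 640, 360

# Full frame of pseudo-random camera-like data (hard to compress).
full_frame = bytearray(os.urandom(WIDTH * HEIGHT))

# Same frame, but the right half (the "non-target area") filled
# with a solid color (black = 0x00), as the method proposes.
half_filled = bytearray(full_frame)
for row in range(HEIGHT):
    start = row * WIDTH + WIDTH // 2
    half_filled[start:start + WIDTH // 2] = bytes(WIDTH // 2)

# A uniform region costs almost nothing after compression, so the
# half-filled frame encodes to roughly half the size.
size_full = len(zlib.compress(bytes(full_frame)))
size_half = len(zlib.compress(bytes(half_filled)))
print(size_full, size_half)
```

The pseudo-random half still compresses poorly, but the solid half collapses to a handful of bytes, which is the bit-rate saving the abstract describes.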

Description

Live broadcast processing method and device, storage medium and equipment
Technical Field
The present application relates to the field of internet technologies, and in particular, to a live broadcast processing method and apparatus, a storage medium, and a device.
Background
With the rapid development of internet technology and streaming media technology, many live broadcast applications have appeared in application markets, and broadcasting live from a terminal has gradually become an important way for people to present themselves and to entertain. A user can broadcast live as an anchor through a live broadcast application, and can also watch the live broadcasts of other anchors.
In the live broadcast process, the higher the bit rate, the higher the requirements on data transmission, such as more network bandwidth and better network quality. Therefore, how to process a live broadcast so as to save bit rate has become one of the issues of greatest concern to those skilled in the art.
Disclosure of Invention
The embodiments of the present application provide a live broadcast processing method, apparatus, storage medium, and device, which can achieve the effect of saving bit rate. The technical solution is as follows:
in one aspect, a live broadcast processing method is provided, and the method includes:
determining a target area and a non-target area in a display area of a live broadcast room, wherein the target area is used for displaying a video picture of a first anchor collected by an anchor terminal, and the non-target area does not display the video picture;
filling pure color pixels in the non-target area, and encoding the video picture data displayed by the target area and the pure color pixel data filled in the non-target area;
and sending the live streaming data obtained after the coding processing to at least one audience terminal.
In one possible implementation, the determining a target area and a non-target area in a live broadcast room presentation area includes:
receiving display information issued by a server;
when the display information indicates that the video picture is displayed through a partial area, the target area and the non-target area are determined in the display area of the live broadcast room according to the display information.
In one possible implementation, the performing pure color pixel filling on the non-target area includes: filling black pixels into the non-target area.
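As a concrete sketch of black-pixel filling: in the planar I420 (YUV 4:2:0) layout commonly used before encoding, limited-range black is luma 16 with neutral chroma 128. The layout choice, the value choices, and the function name below are assumptions for illustration, not details from the patent.

```python
def fill_black_i420(frame: bytearray, width: int, height: int,
                    x: int, y: int, w: int, h: int) -> None:
    """Fill a rectangle (the non-target area) with black in an I420 frame.

    Assumes planar I420 layout: a full-size Y plane followed by
    quarter-size U and V planes, with x/y/w/h aligned to even values.
    Black in limited-range YUV is Y=16 with neutral chroma U=V=128.
    """
    u_plane = width * height
    v_plane = u_plane + (width // 2) * (height // 2)

    for row in range(y, y + h):                      # luma plane
        start = row * width + x
        frame[start:start + w] = bytes([16]) * w

    for row in range(y // 2, (y + h) // 2):          # chroma planes
        start = row * (width // 2) + x // 2
        frame[u_plane + start:u_plane + start + w // 2] = bytes([128]) * (w // 2)
        frame[v_plane + start:v_plane + start + w // 2] = bytes([128]) * (w // 2)

# Example: a 16x8 frame whose right half is the non-target area.
W, H = 16, 8
frame = bytearray([200]) * (W * H * 3 // 2)  # arbitrary non-black content
fill_black_i420(frame, W, H, x=8, y=0, w=8, h=8)
```

After the fill, every encoded macroblock in the right half is uniform, which is what lets the encoder spend almost no bytes on it.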
In a possible implementation manner, the receiving the display information sent by the server includes:
receiving first display information issued by the server, wherein the first display information indicates the area position for displaying the video picture;
the determining the target area and the non-target area in the display area of the live broadcast room according to the display information includes: and determining the target area in the display area of the live broadcast room according to the first display information, and determining other areas except the target area as the non-target areas.
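The step above — taking the target area from the first display information and treating everything else as non-target — can be sketched as simple rectangle geometry, assuming the display area and target area are axis-aligned (x, y, width, height) rectangles with the target fully inside the display:

```python
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height)

def non_target_rects(display: Rect, target: Rect) -> List[Rect]:
    """Split the display area into rectangles not covered by the target.

    Returns up to four strips (above, below, left of, right of the
    target). Assumes the target lies fully inside the display area.
    """
    dx, dy, dw, dh = display
    tx, ty, tw, th = target
    rects = []
    if ty > dy:                              # strip above the target
        rects.append((dx, dy, dw, ty - dy))
    if ty + th < dy + dh:                    # strip below the target
        rects.append((dx, ty + th, dw, dy + dh - (ty + th)))
    if tx > dx:                              # strip to the left
        rects.append((dx, ty, tx - dx, th))
    if tx + tw < dx + dw:                    # strip to the right
        rects.append((tx + tw, ty, dx + dw - (tx + tw), th))
    return rects

# Example: a 720x1280 display whose left half shows the anchor's picture.
print(non_target_rects((0, 0, 720, 1280), (0, 0, 360, 1280)))
# → [(360, 0, 360, 1280)]
```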
In a possible implementation manner, the receiving the display information sent by the server includes:
receiving second display information issued by the server, wherein the second display information indicates the position of an area where the video picture is not displayed;
the determining the target area and the non-target area in the display area of the live broadcast room according to the display information includes: and determining the non-target area in a display area of a live broadcast room according to the second display information, and determining other areas except the non-target area as the target area.
In one possible implementation, the method further includes:
and when the display information indicates that the video pictures are displayed through all the areas, encoding the video picture data displayed through the display area of the live broadcast room.
In a possible implementation manner, the sending, to at least one viewer terminal, live streaming data obtained after the encoding process includes:
and sending the live streaming data obtained after the encoding processing to a content delivery network, the content delivery network sending the live streaming data to at least one viewer terminal.
In another aspect, a live broadcast processing method is provided, where the method includes:
receiving display information issued by a server;
determining a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying a video picture of a first main broadcast, and the non-target area does not display the video picture;
receiving live streaming data of the first anchor, wherein the live streaming data is obtained after encoding processing and comprises video picture data of the first anchor and pure color pixel data;
and displaying the video picture of the first anchor in the target area based on the live streaming data.
In a possible implementation manner, when the number of anchors is two, the non-target area is used for displaying the video picture of a second anchor, and the target area and the non-target area are arranged in any one of a large-and-small-window mode, a vertical split-screen mode, or a horizontal split-screen mode.
In another aspect, a live broadcast processing apparatus is provided, the apparatus including:
the device comprises a determining module, a display module and a display module, wherein the determining module is used for determining a target area and a non-target area in a display area of a live broadcast room, the target area is used for displaying a video picture of a first anchor collected by an anchor terminal, and the non-target area does not display the video picture;
the processing module is used for filling pure color pixels in the non-target area and coding the video picture data displayed by the target area and the pure color pixel data filled in the non-target area;
and the sending module is used for sending the live streaming data obtained after the coding processing to at least one audience terminal.
In one possible implementation, the apparatus further includes:
the receiving module is used for receiving the display information issued by the server;
the determining module is further configured to determine the target area and the non-target area in the display area of the live broadcast room according to the display information when the display information indicates that the video picture is displayed through a partial area.
In a possible implementation manner, the processing module is further configured to perform black pixel filling on the non-target area.
In a possible implementation manner, the receiving module is further configured to receive first display information sent by the server, where the first display information indicates a location of an area where the video frame is displayed;
the determining module is further configured to determine the target area in the live broadcast room display area according to the first display information, and determine other areas except the target area as the non-target areas.
In a possible implementation manner, the receiving module is further configured to receive second display information sent by the server, where the second display information indicates a location of an area where the video frame is not displayed;
the determining module is further configured to determine the non-target area in a display area of a live broadcast room according to the second display information, and determine other areas except the non-target area as the target area.
In a possible implementation manner, the processing module is further configured to perform encoding processing on the video picture data displayed in the display area of the live broadcast room when the display information indicates that the video picture is displayed in all areas.
In a possible implementation manner, the sending module is further configured to send the live streaming data obtained after the encoding processing to a content distribution network, and the content distribution network sends the live streaming data obtained after the encoding processing to at least one viewer terminal.
In another aspect, a live broadcast processing apparatus is provided, the apparatus including:
the first receiving module is used for receiving the display information issued by the server;
the determining module is used for determining a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying a video picture of a first anchor, and the non-target area does not display the video picture;
a second receiving module, configured to receive live streaming data of the first anchor, where the live streaming data is obtained after encoding processing and includes video picture data of the first anchor and pure color pixel data;
and the display module is used for displaying the video picture of the first anchor in the target area based on the live streaming data.
In a possible implementation manner, when the number of anchors is two, the non-target area is used for displaying the video picture of a second anchor, and the target area and the non-target area are arranged in any one of a large-and-small-window mode, a vertical split-screen mode, or a horizontal split-screen mode.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the above-mentioned live broadcast processing method.
In another aspect, a live broadcast processing device is provided, where the device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the live broadcast processing method.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
For any anchor user, if a service change alters the area used to display that anchor user's video picture, the embodiments of the present application can process the live streaming data pushed by the corresponding anchor terminal. For example, when only a partial area needs to be used to display the anchor user's video picture, a target area for displaying the video picture and a non-target area that does not display it are determined in the live broadcast room display area; the non-target area is then filled with pure color pixels, and the video picture data displayed in the target area is encoded together with the pure color pixel data filled into the non-target area; finally, the live streaming data obtained after the encoding processing is pushed to viewer terminals.
Based on the above description, when the anchor user's video picture does not need to occupy the entire live display area, the embodiments of the present application fill the area that does not display the video picture with pure color pixels before pushing the stream to viewer terminals. Because that pure color pixel data occupies very few bytes after encoding, the bit rate is reduced, which in turn lowers the demands on data transmission, for example by reducing network bandwidth occupation and the required network quality.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an implementation environment related to a live broadcast processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a microphone connecting scene provided in an embodiment of the present application;
fig. 3 is a flowchart of a live broadcast processing method provided in an embodiment of the present application;
fig. 4 is a flowchart of a live broadcast processing method provided in an embodiment of the present application;
fig. 5 is a flowchart of a live broadcast processing method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of a display area and a non-display area provided by an embodiment of the present application;
fig. 7 is a schematic diagram of a live broadcast processing effect provided in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a live broadcast processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic device structure diagram of a live broadcast processing device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a live broadcast processing device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a live broadcast processing device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the embodiments of the present application in detail, terms and abbreviations used in the embodiments of the present application will be introduced.
Live broadcast stream data: in the embodiment of the present application, the anchor terminal of the anchor user is also generally referred to as a live streaming end, and the live streaming data refers to multimedia data encoded at the anchor terminal.
Wherein the multimedia data typically comprises a picture part and a speech part.
Illustratively, the multimedia data is usually pushed from the anchor terminal to the live background in streaming form, and the live streaming data is then forwarded to viewer terminals via a Content Delivery Network (CDN) on the live background side.
Transcoding: a way of processing a live stream. Transcoding converts a live stream from its original resolution and original bit rate into multiple different resolutions and multiple different bit rates.
Code rate (bit rate): measures the amount of data per second in the live stream, in kbps. It is divided into a video bit rate and an audio bit rate. In general, the higher the bit rate, the better the clarity of the corresponding video picture and the better the audio quality. The bit rate also affects data uploading: generally, the higher the bit rate, the higher the corresponding requirements when uploading data; for example, the required bandwidth and network quality rise with it.
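The unit relationship in this definition can be made concrete: a bit rate in kbps is bytes per second times 8 (bits per byte), divided by 1000.

```python
def bitrate_kbps(bytes_per_second: float) -> float:
    """Convert a measured payload of bytes per second into kilobits per second."""
    return bytes_per_second * 8 / 1000

# Example: a stream delivering 250,000 bytes of encoded data per second
# runs at 2000 kbps (2 Mbps), and uploading it without stalling needs
# at least that much sustained bandwidth, plus headroom.
print(bitrate_kbps(250_000))  # → 2000.0
```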
CDN: redirects a user-initiated request in real time to the service node closest to the user, based on network traffic and comprehensive information about each service node, such as its connections, load, distance to the user, and response time. The aim is to let users obtain the required content nearby, relieve network congestion, and improve the response speed of website access.
The following describes an implementation environment related to a live broadcast processing method provided by an embodiment of the present application.
Referring to fig. 1, the implementation environment includes: an anchor terminal 101 used by an anchor user, a live background 102, and a viewer terminal 103 used by a viewer user. The live background 102 is also referred to herein as a server.
In the embodiment of the present application, the types of anchor terminals 101 used by anchor users and viewer terminals 103 used by viewer users include, but are not limited to: mobile terminals and fixed terminals.
As an example, the mobile terminals include, but are not limited to: smart phones, tablet computers, notebook computers, electronic readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), and the like; the fixed terminals include, but are not limited to, desktop computers. This is not particularly limited in the embodiments of the present application.
Illustratively, fig. 1 is merely illustrative of a situation in which the anchor terminal 101 and the viewer terminal 103 are both smart phones. While the live background 102 is used to provide background services for the anchor terminal 101 and the viewer terminals 103. In a possible implementation manner, the live background 102 includes, but is not limited to, a streaming server, a transcoding server, and a CDN, which are not specifically limited in this embodiment.
In addition, live applications are typically installed on the anchor terminal 101 and the viewer terminal 103, so that the anchor user can enter the live platform to broadcast and the viewer user can enter the live broadcast room to watch.
In a possible implementation manner, the data communication between the anchor terminal 101 and the viewer terminal 103 and the live broadcast background 102 may be based on a Real Time Messaging Protocol (RTMP), which is not limited in this embodiment of the present application.
As an example, based on the implementation environment described above, the embodiments of the present application may be applied to a live scene with a single anchor user, or to a mic-linking (co-anchoring) scene with at least two anchor users.
The mic-linking scene is an online interactive scene, and fig. 2 shows an example of it. As shown in fig. 2, two anchor users appear in the same live broadcast room, their video pictures are displayed simultaneously in the live broadcast room display area of the viewer terminal, and they share that display area; that is, each anchor user's video picture occupies a part of the live broadcast room display area.
In a possible implementation manner, in a mic-linking scene, in addition to the left-right split-screen mode shown in fig. 2, a top-bottom split-screen mode or a large-and-small-window mode may also be used, which is not specifically limited in this embodiment of the application.
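Each of the layouts mentioned can be described as a pair of rectangles within the display area. A sketch, assuming a (width, height) display and (x, y, w, h) rectangles; the mode names and the one-third-size small window are illustrative choices, not values from the patent:

```python
def layout(mode: str, width: int, height: int):
    """Return (first_anchor_rect, second_anchor_rect) for a layout mode.

    The small window size (one third of the display) is an
    illustrative assumption, not a value specified by the patent.
    """
    if mode == "left_right":        # left-right split screen
        return (0, 0, width // 2, height), (width // 2, 0, width // 2, height)
    if mode == "top_bottom":        # top-bottom split screen
        return (0, 0, width, height // 2), (0, height // 2, width, height // 2)
    if mode == "large_small":       # large window with a small corner window
        sw, sh = width // 3, height // 3
        return (0, 0, width, height), (width - sw, height - sh, sw, sh)
    raise ValueError(f"unknown mode: {mode}")

print(layout("left_right", 720, 1280))
# → ((0, 0, 360, 1280), (360, 0, 360, 1280))
```

From the first anchor's point of view, the second rectangle of whichever mode is active is exactly the non-target area that gets the pure color fill.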
For example, taking anchor user A and anchor user B as an example, anchor user B may temporarily enter the live broadcast room of anchor user A for a live PK (head-to-head competition). For instance, anchor user A and anchor user B may hold a singing PK in the same live broadcast room, with anchor user B exiting the room after the PK ends.
It should be noted that the live broadcast processing method provided in the embodiments of the present application may be executed on the anchor terminal side or on the transcoding server side, which is not specifically limited in the embodiments of the present application. The bit-rate saving effect can be achieved in either case.
The live broadcast processing method provided in the embodiments of the present application is explained in detail by the following embodiments.
Fig. 3 is a flowchart of a live broadcast processing method according to an embodiment of the present application. Referring to fig. 3, a method flow provided by the embodiment of the present application includes:
301. and determining a target area and a non-target area in a display area of the live broadcast room, wherein the target area is used for displaying the video picture of the first anchor collected by the anchor terminal, and the non-target area does not display the video picture of the first anchor.
302. And filling pure color pixels in the non-target area, and encoding the video picture data displayed by the target area and the pure color pixel data filled in the non-target area.
303. And sending the live streaming data obtained after the coding processing to at least one audience terminal.
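Steps 301 to 303 can be sketched end to end for a single 8-bit grayscale frame. zlib stands in for the real video encoder, `send` is a stub for the push to viewer terminals, and all names are illustrative, not from the patent:

```python
import zlib

def process_and_push(frame: bytearray, width: int, height: int,
                     target: tuple, send) -> None:
    """301: the target rect is given; everything else is non-target.
    302: fill the non-target area with black, then encode the frame.
    303: push the encoded data toward viewer terminals via `send`."""
    tx, ty, tw, th = target
    for row in range(height):
        base = row * width
        if ty <= row < ty + th:
            # Black out only the columns left and right of the target.
            frame[base:base + tx] = bytes(tx)
            frame[base + tx + tw:base + width] = bytes(width - tx - tw)
        else:
            # Rows entirely outside the target are fully blacked out.
            frame[base:base + width] = bytes(width)
    send(zlib.compress(bytes(frame)))

# Example: only the top-left quadrant of a 64x64 frame is kept.
pushed = []
process_and_push(bytearray(b"\xc8" * (64 * 64)), 64, 64,
                 (0, 0, 32, 32), pushed.append)
```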
According to the method provided by the embodiments of the present application, for any anchor user, if a service change alters the area used to display that anchor user's video picture, the live streaming data pushed by the corresponding anchor terminal is processed. For example, when only a partial area needs to be used to display the anchor user's video picture, a target area for displaying the video picture and a non-target area that does not display it are determined in the live broadcast room display area; the non-target area is then filled with pure color pixels, and the video picture data displayed in the target area is encoded together with the pure color pixel data filled into the non-target area; finally, the live streaming data obtained after the encoding processing is pushed to viewer terminals.
Based on the above description, when the anchor user's video picture does not need to occupy the entire live display area, the area that does not display the video picture is filled with pure color pixels before the stream is pushed to viewer terminals. Because that pure color pixel data occupies very few bytes after encoding, the bit rate is reduced, which in turn lowers the demands on data transmission, for example by reducing network bandwidth occupation and the required network quality.
In one possible implementation, the determining a target area and a non-target area in a live broadcast room presentation area includes:
receiving display information issued by a server;
when the display information indicates that the video picture is displayed through a partial area, the target area and the non-target area are determined in the display area of the live broadcast room according to the display information.
In one possible implementation, performing pure color pixel filling on the non-target area includes: filling the non-target area with black pixels.
In a possible implementation manner, the receiving the display information sent by the server includes:
receiving first display information issued by the server, where the first display information indicates the position of the area in which the video picture is displayed;
the determining the target area and the non-target area in the display area of the live broadcast room according to the display information includes: determining the target area in the display area of the live broadcast room according to the first display information, and determining the remaining area outside the target area as the non-target area.
In a possible implementation manner, the receiving the display information sent by the server includes:
receiving second display information issued by the server, where the second display information indicates the position of the area in which the video picture is not displayed;
the determining the target area and the non-target area in the display area of the live broadcast room according to the display information includes: determining the non-target area in the display area of the live broadcast room according to the second display information, and determining the remaining area outside the non-target area as the target area.
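The two symmetric derivations above (the target area is given and its complement is the non-target area, or the non-target area is given and its complement is the target area) can be sketched with pixel sets. This is a minimal illustration only; the `(x, y, w, h)` rectangle encoding of the display information is an assumption for the sketch, not something the application specifies:

```python
def region_from_shown(h, w, shown):
    """First display information: 'shown' = (x, y, rw, rh) of the target area.
    The non-target area is simply its complement inside the h x w display area."""
    x, y, rw, rh = shown
    target = {(r, c) for r in range(y, y + rh) for c in range(x, x + rw)}
    non_target = {(r, c) for r in range(h) for c in range(w)} - target
    return target, non_target


def region_from_hidden(h, w, hidden):
    """Second display information: 'hidden' = rect of the non-target area;
    the target area is derived by taking the same complement the other way."""
    non_target, target = region_from_shown(h, w, hidden)
    return target, non_target
```

Either way, the two areas always partition the live broadcast room display area, which is why a single rectangle in the display information is enough to describe both.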
In one possible implementation, the method further includes:
when the display information indicates that the video picture is displayed through the whole area, encoding the video picture data displayed through the display area of the live broadcast room.
In a possible implementation manner, the sending, to at least one viewer terminal, live streaming data obtained after the encoding process includes:
sending the live streaming data obtained after the encoding processing to a content distribution network, and sending, by the content distribution network, the live streaming data to the at least one audience terminal.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 4 is a flowchart of a live broadcast processing method according to an embodiment of the present application. The method is applied to a viewer terminal, and referring to fig. 4, a method flow provided by an embodiment of the present application includes:
401. Receive the display information issued by the server.
402. Determine a target area and a non-target area in the display area of the live broadcast room according to the display information, where the target area is used to display the video picture of the first anchor and the non-target area does not display it.
403. Receive live streaming data of the first anchor, where the live streaming data is obtained after encoding processing and includes the video picture data of the first anchor and the pure color pixel data.
404. Display the video picture of the first anchor in the target area based on the live streaming data.
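The four steps above reduce, on the audience terminal, to laying out the regions from the display information and handing only the target region of each decoded frame to the UI. The sketch below illustrates this; the dict shape of `display_info` and the list-of-rows frame are hypothetical stand-ins for whatever the server and the decoder actually deliver:

```python
def viewer_display(display_info, decoded_frame):
    """Sketch of steps 401-404 on the audience terminal. 'decoded_frame' is
    the full frame restored by the video decoder (a list of pixel rows);
    only the target region is handed to the UI for display, so the pure
    color filler never has to be rendered as the first anchor's picture."""
    x, y, rw, rh = display_info["target"]                       # 402: region layout
    return [row[x:x + rw] for row in decoded_frame[y:y + rh]]   # 404: crop and display
```

The audience terminal never needs to know which fill color the anchor terminal chose; the layout alone tells it which pixels belong to the first anchor.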
According to the method provided by the embodiment of the application, for any anchor user, if the area used to display that anchor user's video picture changes because of a service change, the live streaming data pushed by the corresponding anchor terminal is processed accordingly. For example, when the video picture of the anchor user needs to be displayed through only a partial area, a target area for displaying the video picture of the anchor user and a non-target area that does not display it are determined in the live broadcast room display area; the non-target area is then filled with pure color pixels, and the video picture data displayed in the target area is encoded together with the pure color pixel data filled into the non-target area; finally, the live streaming data obtained after the encoding processing is pushed to the viewer terminals.
Based on the above, when the video picture of the anchor user does not need to be displayed through the whole live display area, the part of the area that does not display the video picture is filled with pure color pixels before the stream is pushed to the audience terminals. The pure color pixel data occupies very few bytes after encoding, which saves code rate and lowers the demands of the data transmission process, such as network bandwidth occupation and network quality requirements.
In a possible implementation manner, when the number of the anchor is two, the non-target area is used for displaying a video picture of a second anchor, and the target area and the non-target area are in any one of a large-small window mode, a vertical split screen mode, or a horizontal split screen mode.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 5 is a flowchart of a live broadcast processing method according to an embodiment of the present application. Taking the example that the anchor terminal executes the live processing process, referring to fig. 5, the method provided by the embodiment of the present application includes:
501. The live background issues display information to anchor terminal A and to the audience terminals, respectively.
In the embodiment of the present application, the anchor terminal a is configured to perform video capture on the anchor user a at the local side, and push live streaming data related to the anchor user a to the viewer terminal side.
As an example, the live background may issue the presentation information according to a service change requirement. Illustratively, the service change may be switching to a co-hosting mode, for example when another anchor user temporarily enters the live room of anchor user A; alternatively, in a scene where a single anchor user is broadcasting, an audience user may need the video picture of anchor user A to be presented through only a partial area, for example so that other resource content can be presented through the remaining area.
The first point to note is that, for both the anchor terminal and the audience terminal, the live broadcast room display area in the embodiment of the present application refers to the entire screen area on the terminal device used to display the video picture during the live broadcast.
The second point to note is that when the display information issued by the live background indicates that the video picture of anchor user A is displayed through a partial area, the anchor terminal is triggered to execute the following steps 502 to 505.
502. When the display information issued by the live background indicates that the video picture of anchor user A is displayed through a partial area, anchor terminal A determines a target area and a non-target area in the display area of the live broadcast room.
During the live broadcast, a camera on the anchor terminal A side captures video of anchor user A; that is, the video picture of anchor user A is collected through this camera. The camera may be one built into anchor terminal A or a camera independent of anchor terminal A, which is not specifically limited in the embodiment of the present application.
In the embodiment of the application, the target area is used to display the video picture of anchor user A collected by anchor terminal A and is also referred to herein as the display area; the non-target area does not show the video picture of anchor user A and is also referred to herein as the non-display area.
As shown in fig. 6, the positional relationship between the target area and the non-target area includes, but is not limited to: a left-right split-screen mode, a top-bottom split-screen mode, a mode in which the target area is located in the middle of the live broadcast room display area with the non-target area at the edge positions, and a large-small window (picture-in-picture) mode; the embodiment of the present application does not specifically limit the layout.
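The layouts of fig. 6 can be sketched as simple rectangle arithmetic over the display area. The exact proportions below (half-screen splits, a centered half-size window, a corner third-size window) are illustrative assumptions; the application does not fix them:

```python
def target_rect(mode, w, h):
    """Return (x, y, rw, rh) of the target (display) area for a few of the
    layouts sketched in fig. 6, given a w x h live broadcast room display
    area. Everything outside this rect is the non-target area."""
    if mode == "split_left":      # left-right split: target on the left half
        return (0, 0, w // 2, h)
    if mode == "split_top":       # top-bottom split: target on the top half
        return (0, 0, w, h // 2)
    if mode == "centered":        # target in the middle, non-target at the edges
        return (w // 4, h // 4, w // 2, h // 2)
    if mode == "pip":             # large-small window: small target in a corner
        return (w - w // 3, h - h // 3, w // 3, h // 3)
    raise ValueError(f"unknown layout mode: {mode}")
```

Because the non-target area is always the complement of this one rect, a single mode identifier plus the display size is enough for both ends to agree on the layout.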
The indication information issued by the live background may indicate either the display area or the non-display area. Accordingly, anchor terminal A may determine the non-display area in either of two ways:
5021. Anchor terminal A receives first display information issued by the live background, where the first display information indicates the position of the area in which the video picture of anchor user A is displayed; it then determines the target area in the display area of the live broadcast room according to the first display information and, correspondingly, takes the remaining area outside the target area as the non-target area, i.e., derives the non-display area by complement.
5022. Anchor terminal A receives second display information issued by the live background, where the second display information indicates the position of the area in which the video picture of anchor user A is not displayed; it then determines the non-target area in the display area of the live broadcast room according to the second display information and takes the remaining area outside the non-target area as the target area for displaying the video picture of anchor user A.
503. Anchor terminal A performs pure color pixel filling on the non-target area, which does not need to show the video picture of anchor user A.
In the embodiment of the present application, anchor terminal A may fill the non-target area with pure color pixels of any single color. Illustratively, anchor terminal A may fill the non-target area with black pixels, which is not specifically limited in the embodiment of the present application.
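Step 503 amounts to replacing every pixel outside the target rect with the chosen fill color. A minimal sketch, using black fill and a list-of-rows frame as a stand-in for the raw frame buffer a real pipeline would operate on:

```python
def fill_non_target(frame, target):
    """Keep only the pixels inside the target rect (x, y, rw, rh) and
    replace everything else with black (0, 0, 0), as in step 503.
    'frame' is a list of rows of (r, g, b) tuples; a real implementation
    would do the same over a raw RGB/YUV buffer before encoding."""
    x, y, rw, rh = target
    black = (0, 0, 0)
    return [
        [px if (y <= r < y + rh and x <= c < x + rw) else black
         for c, px in enumerate(row)]
        for r, row in enumerate(frame)
    ]
```

Any single color works equally well for the code-rate argument; black is simply the conventional choice because an all-zero fill is cheap to generate and unobtrusive if it is ever shown.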
504. Anchor terminal A performs encoding processing on the video picture data displayed through the target area and the pure color pixel data filled into the non-target area.
In a possible implementation manner, anchor terminal A may continue to reuse the original video encoding module and video stream transmission channel to perform the encoding processing and push the live streaming data obtained after the encoding processing.
The video encoding module encodes the video picture data displayed in the target area and the pure color pixel data filled into the non-target area to obtain compressed live streaming data; the compressed live streaming data is then decoded and restored by the video decoding module on the audience terminal side.
In the embodiment of the present application, the pure color pixel data occupies very few bytes after encoding, so the effect of saving code rate can be achieved.
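The intuition behind the code-rate saving is that any reasonable compressor spends almost no bits on a large uniform region. The demonstration below uses `zlib` purely as a stand-in for that behavior (a video codec is not zlib, and the frame data here is synthetic noise, not real camera video), but the effect is the same in kind: blacking out half of each row roughly halves the compressed size.

```python
import random
import zlib

random.seed(0)
h, w = 120, 160  # a small synthetic "frame" keeps the demo fast

# Each row: camera-like incompressible noise.
rows = [bytes(random.randrange(256) for _ in range(w * 3)) for _ in range(h)]
noisy_frame = b"".join(rows)

# Same frame, but the right half of every row is filled with black (zeros),
# mimicking the non-target area after step 503.
half = w * 3 // 2
half_black = b"".join(row[:half] + bytes(w * 3 - half) for row in rows)

full = len(zlib.compress(noisy_frame))
saved = len(zlib.compress(half_black))
print(f"full frame: {full} bytes, half-black frame: {saved} bytes")
```

Real video encoders go further still: a static solid-color region also costs almost nothing across frames, since inter prediction encodes it as unchanged.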
In another case, if the display information issued by the live background indicates that the video picture of anchor user A is displayed through the whole area, anchor terminal A encodes the video picture data of anchor user A displayed through the entire live broadcast room display area; the code rate in this case is higher than in the case above.
505. Anchor terminal A sends the live streaming data obtained after the encoding processing to the CDN, and the CDN forwards it to the audience terminals.
In the embodiment of the application, anchor terminal A pushes the encoded live streaming data to the live background; after the live background finishes receiving the stream, it performs processing such as transcoding and re-encapsulation on the received live streaming data in turn, and then distributes the processed live streaming data through the CDN. That is, the live streaming data pushed by anchor terminal A is forwarded to the audience terminals through the CDN.
Here, the audience terminals refer to the terminals of all audience users who enter the live broadcast room of anchor user A to watch the live broadcast.
The above steps 502 to 505 describe the live processing procedure on anchor terminal A. On the audience terminal side, the method includes at least the following steps:
506. When the display information issued by the live background indicates that the video picture of anchor user A is displayed through a partial area, the audience terminal performs region layout in the display area of the live broadcast room according to the display information.
Here, the audience terminal refers to the terminal of any audience user who enters the live broadcast room of anchor user A to watch.
In this step, the audience terminal determines, in its live broadcast room display area, the display area that needs to show the video picture of anchor user A and the non-display area that does not.
507. After receiving live streaming data of the anchor user a forwarded by the CDN, the audience terminal displays a video frame of the anchor user a in a target area of a display area in the live broadcast room based on the received live streaming data.
Fig. 7 shows the effect of the live processing in a scene where a single anchor user is broadcasting. Taking black pixel filling as an example, the non-target area outside the target area is shown as black rather than showing the video picture of anchor user A, as shown in fig. 7.
In addition, if there are two anchor users, the non-target area on the audience terminal and on anchor terminal A is used to present the video picture of the other anchor user; the anchor terminal of the other anchor user can also perform live processing in the manner shown in steps 502 to 505 above, likewise saving code rate.
The method provided by the embodiment of the application has at least the following beneficial effects:
For any anchor user, when the display area used to show the video picture of the anchor user needs to change because of a service change, the corresponding anchor terminal performs processing before pushing the live streaming data. For example, when a partial area needs to be used to display the video picture of the anchor user, the corresponding anchor terminal determines a display area and a non-display area in the live broadcast room display area, where the display area shows the video picture of the anchor user and the non-display area does not; the corresponding anchor terminal then fills the non-display area with pure color pixels and encodes the video picture data of the display area together with the pure color pixel data of the non-display area; finally, it pushes the live streaming data obtained after the encoding processing to the audience terminals.
Based on the above, when the video picture of the anchor user does not need to be displayed through the whole live display area, the corresponding anchor terminal fills the non-display area with pure color pixels before pushing the stream to the audience terminals; the pure color pixel data occupies very few bytes after encoding, which saves code rate and reduces the demands of the data transmission process, such as network bandwidth occupation and network quality requirements.
In another embodiment, in addition to the live processing being performed by the anchor terminal, the live processing may be performed by a transcoding server in the live background. In this mode, anchor terminal A directly encodes the video picture data of anchor user A and pushes the live streaming data obtained after the encoding processing to the live background; the transcoding server in the live background then completes the live processing in a manner similar to the above steps 502 to 505, which is not described again here. In addition, when the transcoding server performs the live processing, the result may cover only the part of the audience users in the live broadcast room of anchor user A who pull the transcoded live stream for viewing. In this case, the live background may push two paths of live streaming data to the audience users: one path is the original live streaming data, and the other is the live streaming data processed by the transcoding server.
Fig. 8 is a schematic structural diagram of a live broadcast processing apparatus according to an embodiment of the present application. Referring to fig. 8, the apparatus includes:
a determining module 801, configured to determine a target area and a non-target area in a display area of a live broadcast room, where the target area is used to display a video picture of a first anchor collected by an anchor terminal, and the non-target area does not display the video picture;
a processing module 802, configured to perform pure color pixel filling on the non-target area, and perform encoding processing on the video picture data displayed in the target area and the pure color pixel data filled in the non-target area;
a sending module 803, configured to send the live streaming data obtained after the encoding processing to at least one viewer terminal.
According to the device provided by the embodiment of the application, for any anchor user, if the area used to display that anchor user's video picture changes because of a service change, the live streaming data pushed by the corresponding anchor terminal is processed accordingly. For example, when the video picture of the anchor user needs to be displayed through only a partial area, a target area for displaying the video picture of the anchor user and a non-target area that does not display it are determined in the live broadcast room display area; the non-target area is then filled with pure color pixels, and the video picture data displayed in the target area is encoded together with the pure color pixel data filled into the non-target area; finally, the live streaming data obtained after the encoding processing is pushed to the viewer terminals.
Based on the above, when the video picture of the anchor user does not need to be displayed through the whole live display area, the part of the area that does not display the video picture is filled with pure color pixels before the stream is pushed to the audience terminals. The pure color pixel data occupies very few bytes after encoding, which saves code rate and lowers the demands of the data transmission process, such as network bandwidth occupation and network quality requirements.
In one possible implementation, the apparatus further includes:
the receiving module is used for receiving the display information issued by the server;
the determining module is further configured to determine the target area and the non-target area in the display area of the live broadcast room according to the display information when the display information indicates that the video picture is displayed through a partial area.
In a possible implementation manner, the processing module is further configured to perform black pixel filling on the non-target area.
In a possible implementation manner, the receiving module is further configured to receive first display information sent by the server, where the first display information indicates a location of an area where the video frame is displayed;
the determining module is further configured to determine the target area in the live broadcast room display area according to the first display information, and determine other areas except the target area as the non-target areas.
In a possible implementation manner, the receiving module is further configured to receive second display information sent by the server, where the second display information indicates a location of an area where the video frame is not displayed;
the determining module is further configured to determine the non-target area in a display area of a live broadcast room according to the second display information, and determine other areas except the non-target area as the target area.
In a possible implementation manner, the processing module is further configured to perform encoding processing on the video picture data displayed in the display area of the live broadcast room when the display information indicates that the video picture is displayed in all areas.
In a possible implementation manner, the sending module is further configured to send the live streaming data obtained after the encoding processing to a content distribution network, and the content distribution network sends the live streaming data obtained after the encoding processing to at least one viewer terminal.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 9 is a schematic structural diagram of a live broadcast processing apparatus according to an embodiment of the present application. Referring to fig. 9, the apparatus includes:
a first receiving module 901, configured to receive display information sent by a server;
a determining module 902, configured to determine, according to the display information, a target area and a non-target area in a display area of a live broadcast, where the target area is used to display a video picture of a first anchor, and the non-target area does not display the video picture;
a second receiving module 903, configured to receive live streaming data of the first anchor, where the live streaming data is obtained after encoding processing and includes the video picture data of the first anchor and the pure color pixel data;
a displaying module 904, configured to display, based on the live streaming data, a video frame of the first anchor in the target area.
According to the device provided by the embodiment of the application, for any anchor user, if the area used to display that anchor user's video picture changes because of a service change, the live streaming data pushed by the corresponding anchor terminal is processed accordingly. For example, when the video picture of the anchor user needs to be displayed through only a partial area, a target area for displaying the video picture of the anchor user and a non-target area that does not display it are determined in the live broadcast room display area; the non-target area is then filled with pure color pixels, and the video picture data displayed in the target area is encoded together with the pure color pixel data filled into the non-target area; finally, the live streaming data obtained after the encoding processing is pushed to the viewer terminals.
Based on the above, when the video picture of the anchor user does not need to be displayed through the whole live display area, the part of the area that does not display the video picture is filled with pure color pixels before the stream is pushed to the audience terminals. The pure color pixel data occupies very few bytes after encoding, which saves code rate and lowers the demands of the data transmission process, such as network bandwidth occupation and network quality requirements.
In a possible implementation manner, when the number of the anchor is two, the non-target area is used for displaying a video picture of a second anchor, and the target area and the non-target area are in any one of a large-small window mode, a vertical split screen mode, or a horizontal split screen mode.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that: in the live broadcast processing apparatus provided in the foregoing embodiment, only the division of the functional modules is exemplified when performing live broadcast processing, and in practical applications, the function distribution may be completed by different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the live broadcast processing apparatus and the live broadcast processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 10 shows a block diagram of a live broadcast processing device 1000 according to an exemplary embodiment of the present application. The device 1000 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. Device 1000 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
Illustratively, the device 1000 may be the anchor terminal or an audience terminal of the foregoing embodiments.
In general, the apparatus 1000 includes: a processor 1001 and a memory 1002.
Processor 1001 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so forth. The processor 1001 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1001 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also referred to as a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1001 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 1001 may further include an AI (Artificial Intelligence) processor for processing a computing operation related to machine learning.
Memory 1002 may include one or more computer-readable storage media, which may be non-transitory. The memory 1002 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1002 is used to store at least one instruction for execution by processor 1001 to implement a live processing method provided by method embodiments herein.
In some embodiments, the apparatus 1000 may further optionally include: a peripheral interface 1003 and at least one peripheral. The processor 1001, memory 1002 and peripheral interface 1003 may be connected by a bus or signal line. Various peripheral devices may be connected to peripheral interface 1003 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1004, touch screen display 1005, camera 1006, audio circuitry 1007, positioning components 1008, and power supply 1009.
The peripheral interface 1003 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 1001 and the memory 1002. In some embodiments, processor 1001, memory 1002, and peripheral interface 1003 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1001, the memory 1002, and the peripheral interface 1003 may be implemented on separate chips or circuit boards, which are not limited by this embodiment.
The Radio Frequency circuit 1004 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 1004 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 1004 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 1004 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1004 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1004 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1005 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1005 is a touch display screen, the display screen 1005 also has the ability to capture touch signals on or over the surface of the display screen 1005. The touch signal may be input to the processor 1001 as a control signal for processing. At this point, the display screen 1005 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display screen 1005 may be one, providing the front panel of the device 1000; in other embodiments, the display screens 1005 may be at least two, respectively disposed on different surfaces of the device 1000 or in a folded design; in still other embodiments, the display 1005 may be a flexible display disposed on a curved surface or on a folded surface of the device 1000. Even more, the display screen 1005 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The Display screen 1005 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 1006 is used to capture images or video. Optionally, the camera assembly 1006 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of a terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1006 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 1007 may include a microphone and a speaker. The microphone is used to collect sound waves from the user and the environment, convert them into electrical signals, and input the electrical signals to the processor 1001 for processing or to the radio frequency circuit 1004 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided at different locations on the device 1000. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 1001 or the radio frequency circuit 1004 into sound waves. The speaker may be a traditional diaphragm speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 1007 may also include a headphone jack.
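As a worked example of the ultrasonic distance measurement mentioned above: the distance follows directly from the round-trip time of an inaudible ping. The function name and the 343 m/s speed of sound are illustrative assumptions, not values from the patent:

```python
def echo_distance_m(round_trip_s: float, speed_of_sound_m_s: float = 343.0) -> float:
    """Distance to a reflector from an ultrasonic ping's round-trip time,
    as a piezoelectric speaker/microphone pair might measure it.
    The sound travels to the target and back, hence the division by 2."""
    return speed_of_sound_m_s * round_trip_s / 2.0

# A 10 ms round trip at ~343 m/s puts the reflector about 1.7 m away.
print(echo_distance_m(0.010))
```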
The positioning component 1008 is used to determine the current geographic location of the device 1000 for navigation or LBS (Location Based Service). The positioning component 1008 may be based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1009 is used to power the various components of the device 1000. The power supply 1009 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 1009 includes a rechargeable battery, the battery may be charged by wire or wirelessly: a wired rechargeable battery is charged through a cable, while a wireless rechargeable battery is charged through a wireless coil. The rechargeable battery may also support fast-charging technology.
In some embodiments, the device 1000 also includes one or more sensors 1010. The one or more sensors 1010 include, but are not limited to: acceleration sensor 1011, gyro sensor 1012, pressure sensor 1013, fingerprint sensor 1014, optical sensor 1015, and proximity sensor 1016.
The acceleration sensor 1011 can detect the magnitude of acceleration along the three axes of a coordinate system established with respect to the device 1000. For example, the acceleration sensor 1011 may be used to detect the components of gravitational acceleration along the three axes. The processor 1001 may control the touch display screen 1005 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1011. The acceleration sensor 1011 may also be used to collect game or user motion data.
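The landscape/portrait decision described above can be sketched from the gravity components alone: whichever device axis carries more of gravity is the one pointing "down". The axis conventions and function name here are assumptions for illustration, not taken from the patent:

```python
def choose_orientation(gx: float, gy: float, gz: float) -> str:
    """Pick a UI orientation from the gravity components reported along
    the device's x (short side) and y (long side) axes, as an
    accelerometer like sensor 1011 might report them."""
    if abs(gy) >= abs(gx):
        return "portrait" if gy > 0 else "portrait_reversed"
    return "landscape" if gx > 0 else "landscape_reversed"

# Held upright, gravity lies mostly along the y axis:
print(choose_orientation(0.5, 9.3, 1.0))    # portrait
# Turned on its side, gravity shifts onto the x axis:
print(choose_orientation(9.4, 0.3, 1.0))    # landscape
```

A real implementation would also debounce the transition (e.g. require the dominant axis to persist for a few hundred milliseconds) so the UI does not flip while the device is being moved.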
The gyro sensor 1012 can detect the body orientation and rotation angle of the device 1000 and may cooperate with the acceleration sensor 1011 to capture the user's 3D motion of the device 1000. Based on the data collected by the gyro sensor 1012, the processor 1001 may implement functions such as motion sensing (for example, changing the UI in response to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1013 may be disposed on a side bezel of the device 1000 and/or beneath the touch display screen 1005. When disposed on a side bezel, it can detect the user's grip on the device 1000, allowing the processor 1001 to perform left-/right-hand recognition or trigger shortcut operations based on the grip signal. When disposed beneath the touch display screen 1005, it allows the processor 1001 to control operable controls on the UI according to the pressure the user applies to the screen. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1014 is used to collect the user's fingerprint, and either the processor 1001 or the fingerprint sensor 1014 itself identifies the user from the collected fingerprint. Upon identifying the user as trusted, the processor 1001 authorizes the user to perform sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1014 may be disposed on the front, back, or side of the device 1000. When a physical button or vendor logo is provided on the device 1000, the fingerprint sensor 1014 may be integrated with it.
The optical sensor 1015 is used to collect the ambient light intensity. In one embodiment, the processor 1001 may control the display brightness of the touch display screen 1005 according to the ambient light intensity collected by the optical sensor 1015: when the ambient light is strong, the display brightness is increased; when it is weak, the display brightness is decreased. In another embodiment, the processor 1001 may also dynamically adjust the shooting parameters of the camera assembly 1006 according to the ambient light intensity.
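A minimal sketch of the brightness control driven by the ambient light reading. The logarithmic curve and the 1–10000 lux range are assumptions chosen for illustration (perceived brightness grows roughly logarithmically with illuminance), not values from the patent:

```python
import math

def brightness_from_lux(lux: float, lo: float = 0.1, hi: float = 1.0) -> float:
    """Map ambient illuminance (lux) to a display brightness in [lo, hi],
    increasing the brightness as the environment gets brighter."""
    lux = max(1.0, min(lux, 10000.0))   # clamp to the assumed working range
    t = math.log10(lux) / 4.0           # 0.0 at 1 lux .. 1.0 at 10000 lux
    return lo + (hi - lo) * t

print(brightness_from_lux(1))       # dimmest setting
print(brightness_from_lux(10000))   # brightest setting
```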
The proximity sensor 1016, also known as a distance sensor, is typically provided on the front panel of the device 1000 and is used to measure the distance between the user and the front of the device. In one embodiment, when the proximity sensor 1016 detects that this distance is gradually decreasing, the processor 1001 controls the touch display screen 1005 to switch from the bright-screen state to the off-screen state; when the distance is gradually increasing, the processor 1001 controls the touch display screen 1005 to switch back from the off-screen state to the bright-screen state.
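The bright/off switching above is naturally implemented with hysteresis, so that jitter around a single threshold does not flap the screen. The class below is an illustrative sketch; the centimeter thresholds are assumptions, not values from the patent:

```python
class ProximityScreenController:
    """Toggle the screen state from proximity readings, using two
    thresholds (near/far) so small fluctuations do not cause flicker."""

    def __init__(self, near_cm: float = 3.0, far_cm: float = 6.0):
        self.near_cm = near_cm
        self.far_cm = far_cm
        self.screen_on = True

    def update(self, distance_cm: float) -> bool:
        if self.screen_on and distance_cm <= self.near_cm:
            self.screen_on = False      # user close (e.g. in a call): turn off
        elif not self.screen_on and distance_cm >= self.far_cm:
            self.screen_on = True       # user moved away: turn back on
        return self.screen_on
```

Between 3 cm and 6 cm the state simply holds, which is the point of the two-threshold design.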
Those skilled in the art will appreciate that the configuration shown in Fig. 10 does not limit the device 1000, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Fig. 11 is a schematic structural diagram of a live broadcast processing device provided in an embodiment of the present application; the live broadcast processing device may be the server in the foregoing embodiments. The device 1110 may vary considerably in configuration or performance and may include one or more processors (CPUs) 1101 and one or more memories 1102, where the memory 1102 stores at least one instruction that is loaded and executed by the processor 1101 to implement the live broadcast processing method provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, and may include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory comprising instructions executable by a processor in a terminal to perform the live broadcast processing method in the above embodiments. For example, the computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
Those skilled in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description presents only exemplary embodiments of the present application and is not intended to be limiting; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its protection scope.
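The method recited in the claims below — keep the anchor's video only in a target area and fill the rest of the frame with a solid color before encoding — can be sketched as follows. This is a minimal illustration on a raw RGB frame; the function name, the (x, y, w, h) rectangle convention, and the choice of black as the fill color are assumptions for illustration:

```python
def mask_non_target(frame, target):
    """Return a copy of `frame` (a list of rows of RGB tuples) in which
    every pixel outside the `target` rectangle (x, y, w, h) is replaced
    by solid black, mirroring the claimed fill-then-encode step."""
    x, y, w, h = target
    black = (0, 0, 0)
    return [
        [px if (x <= cx < x + w and y <= cy < y + h) else black
         for cx, px in enumerate(row)]
        for cy, row in enumerate(frame)
    ]

# A 4x4 all-white frame with a 2x2 target area in the top-left corner:
frame = [[(255, 255, 255)] * 4 for _ in range(4)]
masked = mask_non_target(frame, (0, 0, 2, 2))
```

A frame masked this way keeps its full resolution, but the large uniform region compresses to almost nothing under block-based codecs such as H.264, which is presumably where the bandwidth saving of encoding "target video plus solid-color fill" comes from.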

Claims (12)

1. A live broadcast processing method, applied to a terminal of a first anchor, the method comprising:
receiving display information delivered by a server;
when the display information indicates that a video picture is to be displayed in a partial area, determining a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying the video picture of the first anchor captured by the anchor terminal, and the non-target area does not display the video picture;
filling the non-target area with solid-color pixels, and encoding the video picture data displayed in the target area together with the solid-color pixel data filled into the non-target area;
and sending the live stream data obtained after the encoding to at least one viewer terminal.
2. The method of claim 1, wherein filling the non-target area with solid-color pixels comprises: filling the non-target area with black pixels.
3. The method of claim 1, wherein receiving the display information delivered by the server comprises:
receiving first display information delivered by the server, wherein the first display information indicates the position of the area in which the video picture is displayed;
and determining the target area and the non-target area in the display area of the live broadcast room according to the display information comprises: determining the target area in the display area of the live broadcast room according to the first display information, and determining the remaining area other than the target area as the non-target area.
4. The method of claim 1, wherein receiving the display information delivered by the server comprises:
receiving second display information delivered by the server, wherein the second display information indicates the position of the area in which the video picture is not displayed;
and determining the target area and the non-target area in the display area of the live broadcast room according to the display information comprises: determining the non-target area in the display area of the live broadcast room according to the second display information, and determining the remaining area other than the non-target area as the target area.
5. The method of claim 1, further comprising:
when the display information indicates that the video picture is to be displayed in the entire area, encoding the video picture data displayed in the display area of the live broadcast room.
6. The method of claim 1, wherein sending the live stream data obtained after the encoding to at least one viewer terminal comprises:
sending the live stream data obtained after the encoding to a content delivery network, which delivers the live stream data to the at least one viewer terminal.
7. A live broadcast processing method, the method comprising:
receiving display information delivered by a server;
determining a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying a video picture of a first anchor, and the non-target area does not display the video picture;
receiving live stream data of the first anchor, the live stream data being obtained by the terminal of the first anchor filling the non-target area with solid-color pixels according to display information delivered to it by the server, and encoding the video picture data displayed in the target area together with the solid-color pixel data filled into the non-target area;
and displaying the video picture of the first anchor in the target area based on the live stream data.
8. The method of claim 7, wherein, when there are two anchors, the non-target area is used for displaying a video picture of a second anchor, and the target area and the non-target area are arranged in any one of a large-window mode, a top-bottom split-screen mode, or a left-right split-screen mode.
9. A live broadcast processing apparatus, applied to a terminal of a first anchor, the apparatus comprising:
a determining module, configured to receive display information delivered by a server and, when the display information indicates that a video picture is to be displayed in a partial area, determine a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying the video picture of the first anchor captured by the anchor terminal, and the non-target area does not display the video picture;
a processing module, configured to fill the non-target area with solid-color pixels and encode the video picture data displayed in the target area together with the solid-color pixel data filled into the non-target area;
and a sending module, configured to send the live stream data obtained after the encoding to at least one viewer terminal.
10. A live broadcast processing apparatus, applied to a viewer terminal, the apparatus comprising:
a first receiving module, configured to receive display information delivered by a server;
a determining module, configured to determine a target area and a non-target area in a display area of a live broadcast room according to the display information, wherein the target area is used for displaying a video picture of a first anchor, and the non-target area does not display the video picture;
a second receiving module, configured to receive live stream data of the first anchor, the live stream data being obtained by the terminal of the first anchor filling the non-target area with solid-color pixels according to display information delivered to it by the server, and encoding the video picture data displayed in the target area together with the solid-color pixel data filled into the non-target area;
and a display module, configured to display the video picture of the first anchor in the target area based on the live stream data.
11. A computer-readable storage medium having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to implement the live broadcast processing method of any one of claims 1 to 6, or the live broadcast processing method of any one of claims 7 to 8.
12. A live broadcast processing device, comprising a processor and a memory, the memory having stored therein at least one instruction, the at least one instruction being loaded and executed by the processor to implement the live broadcast processing method of any one of claims 1 to 6, or the live broadcast processing method of any one of claims 7 to 8.
CN201911354791.7A 2019-12-25 2019-12-25 Live broadcast processing method and device, storage medium and equipment Active CN111010588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911354791.7A CN111010588B (en) 2019-12-25 2019-12-25 Live broadcast processing method and device, storage medium and equipment

Publications (2)

Publication Number Publication Date
CN111010588A CN111010588A (en) 2020-04-14
CN111010588B true CN111010588B (en) 2022-05-17

Family

ID=70117822

Country Status (1)

Country Link
CN (1) CN111010588B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114205658A (en) * 2020-08-27 2022-03-18 西安诺瓦星云科技股份有限公司 Image display method, apparatus, system, and computer-readable storage medium
CN114500822B (en) * 2020-11-11 2024-03-05 华为技术有限公司 Method for controlling camera and electronic equipment
CN114268823A (en) * 2021-12-01 2022-04-01 北京达佳互联信息技术有限公司 Video playing method and device, electronic equipment and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2009239528A (en) * 2008-03-26 2009-10-15 Sharp Corp Digital broadcast receiver
WO2018086417A1 (en) * 2016-11-10 2018-05-17 广州华多网络科技有限公司 Microphone alternating live broadcast method, apparatus and system
CN110324654A (en) * 2019-08-02 2019-10-11 广州虎牙科技有限公司 Main broadcaster end live video frame processing method, device, equipment, system and medium
CN110456998A (en) * 2019-07-31 2019-11-15 广州视源电子科技股份有限公司 Display methods and device, storage medium and processor

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
CN104202646A (en) * 2014-08-07 2014-12-10 天津三星电子有限公司 Television picture display method and device, and television
CN106105246B (en) * 2016-06-24 2019-10-15 北京小米移动软件有限公司 Display methods, apparatus and system is broadcast live
CN106484349A (en) * 2016-09-26 2017-03-08 腾讯科技(深圳)有限公司 The treating method and apparatus of live information
US10380923B2 (en) * 2016-11-28 2019-08-13 Breakfast, Llc Modular flip-disc display system and method
CN107436658B (en) * 2017-06-23 2021-03-19 Oppo广东移动通信有限公司 Method for reducing temperature rise, computer readable storage medium and mobile terminal
CN109348276B (en) * 2018-11-08 2019-12-17 北京微播视界科技有限公司 video picture adjusting method and device, computer equipment and storage medium

Non-Patent Citations (1)

Title
Design and Application of H.264 Real-Time Encoding Rate Control for Mixed-Scene Screen Image Sequences; Lin Yi; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 20130715; full text *

Also Published As

Publication number Publication date
CN111010588A (en) 2020-04-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220318

Address after: 4119, 41st floor, building 1, No. 500, middle section of Tianfu Avenue, Chengdu Hi-tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu, Sichuan 610000
Applicant after: Chengdu kugou business incubator management Co.,Ltd.

Address before: No. 315, Huangpu Avenue Middle, Tianhe District, Guangzhou City, Guangdong Province
Applicant before: GUANGZHOU KUGOU COMPUTER TECHNOLOGY Co.,Ltd.

GR01 Patent grant