WO2024087197A1 - Live stream switching method, apparatus, server, terminal and program product - Google Patents

Live stream switching method, apparatus, server, terminal and program product

Info

Publication number
WO2024087197A1
Authority
WO
WIPO (PCT)
Prior art keywords
stream
live stream
live
terminal
cloud server
Prior art date
Application number
PCT/CN2022/128356
Other languages
English (en)
French (fr)
Inventor
何思远
谢导
Original Assignee
广州酷狗计算机科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州酷狗计算机科技有限公司 filed Critical 广州酷狗计算机科技有限公司
Priority to CN202280003882.XA priority Critical patent/CN115997384B/zh
Priority to PCT/CN2022/128356 priority patent/WO2024087197A1/zh
Publication of WO2024087197A1 publication Critical patent/WO2024087197A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21: Server components or server architectures
    • H04N 21/218: Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187: Live feed
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/238: Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/47: End-user applications
    • H04N 21/478: Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788: Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Definitions

  • the embodiments of the present application relate to the field of live broadcast technology, and in particular to a live broadcast stream switching method, device, server, terminal and program product.
  • Microphone connection (co-streaming) is a mode in which at least two anchors conduct live broadcasts simultaneously in the same live broadcast room.
  • An anchor can initiate a microphone connection request to the audience or to other anchors to achieve live broadcast interaction.
  • When switching from ordinary live broadcast to microphone-connected live broadcast, or ending the microphone connection and switching back to ordinary live broadcast, the live stream is switched. Under poor network conditions, the audience client will fail to pull the stream and the screen will go black.
  • The current solution to the black screen caused by stream interruption is to add a buffer at the live stream pulling end so that the stream is pulled in advance.
  • However, this solution places high requirements on terminal hardware. Since the audience's device hardware and networks are diverse, once the stream is interrupted and re-pulled, most devices will show black screens or video freezes to varying degrees, and the black screen phenomenon is even more serious on pulling ends that do not support the buffering function.
  • the embodiments of the present application provide a live stream switching method, device, server, terminal and program product.
  • the technical solution is as follows:
  • In one aspect, the present application provides a live stream switching method, the method is performed by a cloud server, and the method includes: receiving a first live stream sent by a first terminal and a second live stream sent by a stream mixing server, the second live stream being a live mixed stream of the first terminal and at least one microphone-connected terminal, and the first terminal being used to continue sending the first live stream to the cloud server after the microphone connection;
  • based on the stream interruption status of the first terminal or the stream pulling status of the cloud server, the downlink live stream is switched from the first live stream to the second live stream.
  • the downlink live stream is the live stream pulled by the second terminal from the cloud server.
  • the second terminal is used to display the live broadcast content based on the second live stream.
  • In another aspect, an embodiment of the present application provides a live stream switching method, which is performed by a first terminal and includes: sending a first live stream to a cloud server, the cloud server being used to forward the first live stream to a second terminal requesting to pull the live stream;
  • sending a third live stream to the stream mixing server in response to a microphone connection instruction, the stream mixing server being used to mix the live streams sent by the microphone-connected terminals to generate a second live stream, and the cloud server being used to, when receiving the first live stream and the second live stream, switch the downlink live stream from the first live stream to the second live stream based on the stream interruption status of the first terminal or the stream pulling status of the cloud server;
  • in response to the microphone connection duration reaching the stream overlap duration, stopping sending the first live stream to the cloud server.
  • the present application provides a live stream switching device, the device comprising:
  • a stream pulling module configured to receive a first live stream sent by a first terminal and a second live stream sent by a stream mixing server, wherein the second live stream is a live mixed stream of the first terminal and at least one microphone-connected terminal, and the first terminal is configured to continue to send the first live stream to the cloud server after microphone connection;
  • the streaming module is used to switch the downlink live stream from the first live stream to the second live stream based on the interruption of the first terminal or the pulling of the stream by the cloud server.
  • the downlink live stream is the live stream pulled by the second terminal from the cloud server.
  • the second terminal is used to display the live broadcast content based on the second live stream.
  • the present application provides a live stream switching device, the device comprising:
  • a streaming module used for sending a first live streaming stream to a cloud server, and the cloud server is used for forwarding the first live streaming stream to a second terminal requesting live streaming;
  • the microphone connection module is used to send a third live stream to a stream mixing server in response to a microphone connection instruction, wherein the stream mixing server is used to mix the live stream sent by the microphone connection terminal to generate a second live stream, and the cloud server is used to switch the downlink live stream from the first live stream to the second live stream based on the disconnection of the first terminal or the stream pulling of the cloud server when receiving the first live stream and the second live stream;
  • the control module is used to stop sending the first live stream to the cloud server in response to the microphone connection duration reaching the stream overlap duration.
  • the present application provides a server, which includes a processor and a memory; at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the live stream switching method performed by the cloud server as described in the above aspects.
  • the present application provides a terminal, which includes a processor and a memory; the memory stores at least one program, and the at least one program is loaded and executed by the processor to implement the live stream switching method performed by the first terminal as described in the above aspect.
  • the present application provides a computer-readable storage medium, in which at least one computer program is stored, and the computer program is loaded and executed by a processor to implement the live stream switching method as described in the above aspects.
  • a computer program product or computer program including computer instructions, the computer instructions being stored in a computer-readable storage medium.
  • a processor of a server reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the server executes the live stream switching method performed by the cloud server provided in various optional implementations of the above aspects;
  • a processor of a terminal reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the terminal executes the live stream switching method performed by the first terminal provided in various optional implementations of the above aspects.
  • the host end pushes streams to the cloud server and the mixed stream server in parallel.
  • the cloud server receives the first live stream and the second live stream at the same time within a period of time, and switches the streams based on the interruption of the first terminal or the pulling of the stream by the cloud server.
  • The first live stream keeps flowing before the cloud server receives the second live stream, thereby achieving seamless switching between ordinary live broadcast and microphone-connected live broadcast and solving the problem that directly switching the live stream at the host end causes mixing delay or stream interruption, which in turn causes freezes or black screens at the viewer end.
  • the solution of the embodiment of the present application has low hardware requirements on the viewer end, and there is no need to set up an additional buffer to pull the stream in advance.
  • FIG1 shows an implementation environment provided by an exemplary embodiment of the present application
  • FIG2 shows a flow chart of a live stream switching method provided by an exemplary embodiment of the present application
  • FIG3 is a schematic diagram showing a process of switching from ordinary live broadcast to live broadcast with microphones provided by an exemplary embodiment of the present application;
  • FIG4 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application.
  • FIG5 is a schematic diagram showing a live broadcast interface switching process provided by an exemplary embodiment of the present application.
  • FIG6 shows a schematic diagram of live stream switching provided by an exemplary embodiment of the present application.
  • FIG7 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application.
  • FIG8 is a schematic diagram showing a process of switching from live broadcast with microphones connected to ordinary live broadcast provided by an exemplary embodiment of the present application;
  • FIG9 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application.
  • FIG10 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application.
  • FIG11 shows a structural block diagram of a live stream switching device provided by an exemplary embodiment of the present application.
  • FIG12 shows a structural block diagram of a live stream switching device provided by another exemplary embodiment of the present application.
  • FIG13 shows a structural block diagram of a server provided by an exemplary embodiment of the present application.
  • FIG. 14 shows a structural block diagram of a terminal provided by an exemplary embodiment of the present application.
  • "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone.
  • The character "/" generally indicates an "or" relationship between the associated objects.
  • Figure 1 shows an implementation environment provided by an embodiment of the present application.
  • the implementation environment includes: a first terminal, a mixed stream server, a cloud server, and a second terminal.
  • an application with a live broadcast function runs in the first terminal and the second terminal.
  • The applications installed on the first terminal and the second terminal are the same, or they are the same type of application on different operating system platforms.
  • The first terminal is the live stream pushing end, which generates a live stream based on the collected live data (such as audio data and video data) and sends it to the cloud server.
  • The second terminal is the live stream pulling end, which sends a live stream pulling request to the cloud server based on the received live viewing operation.
  • The cloud server is responsible for receiving the live stream sent by the pushing end of each live broadcast room and pushing the live stream to each pulling end corresponding to that live broadcast room.
  • the mixing server is responsible for mixing the live streams corresponding to the live broadcast rooms that are connected to each other, and sending the mixed streams to the cloud server, which is responsible for forwarding the mixed streams to the second terminals of each live broadcast room involved in the connection.
  • FIG. 2 shows a flow chart of a live stream switching method provided by an exemplary embodiment of the present application. This embodiment is described by taking the method executed by a cloud server as an example. The method includes the following steps:
  • Step 201 receiving a first live stream sent by a first terminal and a second live stream sent by a stream mixing server.
  • the first terminal is a live streaming push terminal, which generates and sends live streaming to the cloud server based on the collected live data (such as audio data and video data).
  • the second terminal is a live streaming pull terminal, which sends a live streaming pull request to the cloud server based on the received live viewing operation.
  • the cloud server is responsible for receiving the live streaming sent by the push terminal of each live broadcast room, and pushes the live streaming to each pull terminal corresponding to the live broadcast room.
  • the cloud server implements live streaming and push streaming through the Content Delivery Network (CDN).
  • the first live stream in the embodiment of the present application refers to the live stream in the normal live broadcast mode, that is, the live stream corresponding to a live broadcast room.
  • the second live stream refers to the live mixed stream in the live broadcast mode with microphones connected, that is, the live mixed stream of the first terminal and at least one terminal connected with microphones.
  • the terminals corresponding to the live broadcast rooms connected to each other send their own live streams to the mixed stream server.
  • the mixed stream server is responsible for mixing the live streams corresponding to the live broadcast rooms connected to each other, and sending the live mixed stream to the cloud server, which is responsible for forwarding the live mixed stream to the audience terminals of each live broadcast room involved in the microphone connection.
  • If, when the microphone connection starts, the first live stream sent to the cloud server is directly switched to the third live stream sent to the mixing server, then because the live streams of the microphone-connected live broadcast rooms are not synchronized, mixing is delayed, or the mixing server experiences network jitter, the cloud server will be unable to pull the live stream for a period of time, which causes a black screen or freeze on the viewer side.
  • In the embodiment of the present application, after the first terminal enters the microphone connection mode, it continues to send the first live stream to the cloud server while sending the third live stream to the mixing server, and does not immediately cut off the first live stream sent to the cloud server.
  • the mixing server mixes the third live stream sent by the first terminal and the live stream of the corresponding microphone connection terminal, generates a second live stream and sends it to the cloud server.
  • In this way, the cloud server can receive the first live stream and the second live stream at the same time for a period of time. Therefore, even if the mixed live stream has not yet reached the cloud server, the cloud server can continue to push the stream based on the first live stream, preventing a black screen or freeze at the audience end.
  • Step 202: based on the stream interruption status of the first terminal or the stream pulling status of the cloud server, the downlink live stream is switched from the first live stream to the second live stream.
  • the downlink live stream is the live stream pulled by the second terminal from the cloud server, and the second terminal is used to display the live broadcast content based on the second live stream.
  • the uplink live stream refers to the live stream sent from the push end (anchor end, such as the first terminal) to the cloud server, and the downlink live stream refers to the live stream sent from the cloud server to the pull end (viewer end, such as the second terminal).
  • the first terminal synchronously sends the first live stream and the third live stream for a period of time (e.g., 5 seconds)
  • the cloud server can receive the first live stream and the second live stream at the same time for a period of time when the microphone connection is turned on, thereby switching the live streams based on the downlink uninterrupted flow logic.
  • the first live stream and the third live stream are sent synchronously by the first terminal, so that the mixed stream server and the cloud server can buffer the stream switching, avoiding the situation of interruption in the process of switching from ordinary live broadcast to microphone connection.
  • the first live stream sent by the first terminal to the cloud server is the same as the third live stream sent by the first terminal to the mixed stream server.
  • the first live stream sent by the first terminal to the cloud server is different from the third live stream sent by the first terminal to the mixed stream server, for example, in different resolutions, different encoding formats, etc. This embodiment of the application is not limited to this.
  • the host end pushes streams to the cloud server and the mixed stream server in parallel.
  • the cloud server receives the first live stream and the second live stream at the same time within a period of time, and switches the stream based on the interruption of the first terminal or the pulling of the stream by the cloud server.
  • the first live stream continues to flow before the cloud server receives the second live stream, thereby achieving seamless switching between ordinary live broadcast and live broadcast with microphones connected, and solving the problem of mixed stream delay or interruption caused by direct switching of the live stream by the host end, causing freezes or black screens on the viewer end.
  • the solution of the embodiment of the present application has low hardware requirements on the viewer end, and there is no need to set up an additional buffer to pull the stream in advance.
  • the cloud server may perform two stream switching schemes, that is, the above step 202 specifically includes the following steps:
  • Step one when receiving the first live stream and the second live stream, continue to send the first live stream to the second terminal; step two, when the first terminal cuts off the first live stream, switch the downlink live stream from the first live stream to the second live stream, and the first terminal stops sending the first live stream after the microphone connection time reaches the stream overlap time.
  • Step three when the second live stream is successfully received, the downlink live stream is switched from the first live stream to the second live stream.
  • In the first scheme, the timing of the stream switch depends on the stream interruption at the host end: within the stream overlap duration after the microphone connection starts, the host end pushes streams to the cloud server and the mixed stream server in parallel; when the cloud server receives the first live stream and the second live stream at the same time, it does not switch immediately and continues to send the first live stream downstream; when the host end cuts off the first live stream, the cloud server sends the second live stream downstream.
  • In the second scheme, the timing of the stream switch depends on the stream pulling status of the cloud server: within the stream overlap duration after the microphone connection starts, the host end pushes streams to the cloud server and the mixed stream server in parallel; when the cloud server receives the first live stream and the second live stream at the same time, it immediately switches the downlink live stream to the second live stream.
  • Both schemes use a stream overlap duration, that is, the host end does not cut off the live stream sent to the cloud server within the stream overlap duration, ensuring that the cloud server can seamlessly switch the downlink live stream; both schemes can therefore prevent stream interruption and black screens.
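  • As a non-authoritative illustration of the two schemes above, the cloud server's downlink uninterrupted-flow logic might be sketched roughly as follows. This is a minimal Python sketch; the helper names pull_first, pull_second, first_stream_cut and forward are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch only: pull_first()/pull_second() return the next media
# segment of the corresponding uplink stream, or None if nothing is available;
# first_stream_cut() reports whether the anchor end has stopped pushing the
# first live stream; forward() sends a segment down to the second terminal.

def switch_downlink_on_mic_start(pull_first, pull_second, first_stream_cut,
                                 forward, wait_for_host_cutoff=True):
    current = "first"                    # keep the downlink on the first live stream
    while True:
        if current == "first":
            if wait_for_host_cutoff:
                # First scheme: switch only when the anchor end cuts the first
                # stream (it does so after the stream overlap duration).
                if first_stream_cut():
                    current = "second"
            else:
                # Second scheme: switch as soon as the second (mixed) stream is pulled.
                segment = pull_second()
                if segment is not None:
                    forward(segment)
                    current = "second"
                    continue
        segment = pull_first() if current == "first" else pull_second()
        if segment is not None:
            forward(segment)             # downlink live stream for the second terminal
```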
  • Figure 3 shows the process of the first terminal, cloud server, mixed flow server, corresponding connected microphone terminal and second terminal cooperating to complete the switching from ordinary live broadcast to connected microphone live broadcast.
  • The first terminal (anchor end) sends the first live stream A1 to the cloud server, and the cloud server forwards the first live stream A1 to the second terminal (audience end) corresponding to the live broadcast room.
  • The switching process from ordinary live broadcast to microphone-connected live broadcast includes two stages. The first stage corresponds to the stream overlap duration: in this stage, the first terminal sends the first live stream A1 to the cloud server while sending the third live stream A2 to the mixed flow server.
  • the mixed flow server mixes the third live stream A2 sent by the first terminal and the connected microphone stream B sent by the connected microphone terminal to obtain the second live stream A2+B and push it to the cloud server.
  • the cloud server receives the first live stream A1 and the second live stream A2+B, and continues to push the first live stream A1 to the second terminal.
  • the second stage is the connected microphone stage after the stream overlap duration ends.
  • the first terminal stops sending the first live stream A1 to the cloud server.
  • the cloud server receives the second live stream A2+B and seamlessly switches the downlink live stream from the first live stream A1 to the second live stream A2+B, completing the live stream switching process from ordinary live broadcast to live broadcast with microphones connected.
  • FIG. 4 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application. This embodiment is described by taking the method executed by a cloud server as an example. The method comprises the following steps:
  • Step 401 receiving a first live stream sent by a first terminal and a second live stream sent by a stream mixing server.
  • step 401 can refer to the above step 201, and the embodiment of the present application will not be repeated here.
  • Step 402 upon receiving the first live stream and the second live stream, obtain a first identifier of the first live stream and a second identifier of the second live stream.
  • the live stream corresponds to a live stream identifier.
  • the identifier is used to indicate the live broadcast room to which the live stream belongs.
  • the live stream push end (first terminal) generates a live stream identifier based on the current live broadcast account. After encoding the live stream, the live stream push end encapsulates the live stream identifier together with the live encoding data and sends them to the cloud server or the mixing stream server.
  • Step 403 In response to the first identifier and the second identifier indicating that the first live stream and the second live stream belong to different live streams of the same live broadcast room, continue to send the first live stream to the second terminal.
  • the live stream identifier of the live stream corresponding to the same live broadcast room contains the same field.
  • the identifier of the co-hosted stream and the identifier of the common stream of the corresponding live broadcast room contain the same field.
  • the first identifier of the first live stream corresponding to live broadcast room A is STREAM_A_NORMAL
  • the second identifier corresponding to the second live stream when live broadcast room A and live broadcast room B are connected is STREAM_A_STREAM_B_PK. Since the first identifier and the second identifier contain the same field STREAM_A for indicating the live broadcast room, the cloud server determines that the first live stream and the second live stream belong to the normal stream and the mixed stream corresponding to live broadcast room A respectively.
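  • Purely as an illustration of this identifier check (the identifier layout STREAM_A_NORMAL / STREAM_A_STREAM_B_PK follows the example above, and the function name is hypothetical), the same-room test could look like the following sketch:

```python
def same_live_room(first_id: str, second_id: str) -> bool:
    """True if the ordinary stream and the mixed stream share the room field,
    e.g. "STREAM_A_NORMAL" and "STREAM_A_STREAM_B_PK" both contain "STREAM_A"."""
    room_field = first_id.rsplit("_", 1)[0]   # "STREAM_A" from "STREAM_A_NORMAL"
    return room_field in second_id

# same_live_room("STREAM_A_NORMAL", "STREAM_A_STREAM_B_PK")  -> True
# same_live_room("STREAM_C_NORMAL", "STREAM_A_STREAM_B_PK")  -> False
```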
  • Step 403 includes the following steps 403a to 403c:
  • Step 403a in response to the first identifier and the second identifier indicating that the first live stream and the second live stream belong to different live streams of the same live broadcast room, the audio encoding format of the first live stream and the audio encoding format of the second live stream are obtained.
  • After determining that the first live stream and the second live stream are respectively the ordinary stream and the mixed stream of the same live broadcast room, the cloud server obtains the audio encoding formats of the two.
  • The audio encoding format can be Moving Picture Experts Group Audio Layer III (MP3), Advanced Audio Coding (AAC), Windows Media Audio (WMA), etc.
  • Step 403b In response to the audio encoding format of the first live stream being consistent with the audio encoding format of the second live stream, continue to forward the first live stream to the second terminal.
  • the cloud server continues to directly forward the first live stream.
  • Step 403c in response to the inconsistency between the audio encoding formats of the first live stream and the second live stream, re-encode the audio of the first live stream according to the audio encoding format of the second live stream, and send the first live stream with audio re-encoding to the second terminal.
  • the cloud server re-encodes the audio of the first live stream according to the audio encoding format of the second live stream, so that the audio encoding format of the first live stream in the downstream is consistent with the audio encoding format of the second live stream.
  • While the first live stream continues to be pushed, the second terminal can display the effect that the microphone-connected live broadcast has started.
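  • A minimal sketch of steps 403a to 403c, assuming hypothetical helpers get_audio_codec, reencode_audio and forward_to_viewers (the patent does not prescribe any particular API):

```python
def forward_first_stream_during_overlap(first_stream, second_stream,
                                        get_audio_codec, reencode_audio,
                                        forward_to_viewers):
    """While both streams are received, keep sending the first live stream and
    align its audio codec with the codec of the mixed (second) stream."""
    first_codec = get_audio_codec(first_stream)     # e.g. "AAC", "MP3", "WMA"
    second_codec = get_audio_codec(second_stream)
    if first_codec == second_codec:
        forward_to_viewers(first_stream)            # step 403b: forward as-is
    else:
        # Step 403c: re-encode the audio of the first live stream so the
        # downlink audio format matches the microphone-connected stream.
        forward_to_viewers(reencode_audio(first_stream, target_codec=second_codec))
```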
  • Step 404 in response to the first identifier and the second identifier indicating that the first live stream and the second live stream belong to different live streams of the same live broadcast room, obtain the second video header information of the second live stream.
  • the video header information includes the video resolution.
  • Through the above steps, the cloud server converts the encoding format of the ordinary stream into the encoding format of the microphone-connected stream, so that the audio effect of the downlink live stream is consistent with the audio effect of the microphone-connected live broadcast.
  • When the live broadcast is a video live broadcast, the difference between the ordinary live broadcast picture and the microphone-connected live broadcast picture also needs to be considered.
  • For example, the ordinary live broadcast picture is a horizontal picture with a resolution of 1280*720, while the microphone-connected live broadcast picture is a vertical picture with a resolution of 1080*2400; alternatively, the ordinary live broadcast picture is a vertical picture and the microphone-connected live broadcast picture is a horizontal picture.
  • Video header information is data used to describe video picture information, and is usually added to the header of each segment (or each frame) of the video stream, such as the sequence parameter set (SPS) and the picture parameter set (PPS).
  • SPS sequence parameter set
  • PPS picture parameter set
  • the cloud server sends the video header information of the second live stream to the second terminal, so that the second terminal renders and displays the content of the first live stream according to the resolution of the connected microphone screen.
  • Step 405 Send the second video header information to the second terminal, and the second terminal is used to adjust the video picture of the first live stream based on the second video header information.
  • the second terminal may adjust the resolution of the first live stream by stretching, cropping, splicing, expanding the margins, etc.
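  • As an illustrative sketch of how the second terminal might use the second video header information (the resolutions and helper names below are assumptions), the first stream's picture can be scaled into the microphone-connected canvas:

```python
def render_first_stream_in_mic_canvas(frame, first_res, mixed_res, scale, pad):
    """Fit a frame of the first live stream into the resolution announced by the
    second video header information. scale() and pad() are assumed rendering
    helpers; stretching, cropping or expanding margins are equally possible."""
    fw, fh = first_res            # e.g. (1280, 720), ordinary horizontal picture
    mw, mh = mixed_res            # e.g. (1080, 2400), microphone-connected canvas
    ratio = min(mw / fw, mh / fh)
    scaled = scale(frame, int(fw * ratio), int(fh * ratio))
    # The remaining area (e.g. the display area reserved for the other live
    # room) can show a default background until the mixed stream is pulled.
    return pad(scaled, mw, mh)
```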
  • a microphone connection notification is sent to the second terminal through the cloud server.
  • the second terminal switches the style of the live broadcast interface from the ordinary live broadcast style to the microphone connection live broadcast style.
  • the second terminal directly displays the video screen 501 of the first video stream.
  • the second terminal displays the spliced screen of the two microphone connection live broadcast rooms, which includes the first display area 502 and the second display area 503.
  • the second terminal still receives the first video stream.
  • the second terminal adjusts the video screen according to the video header information of the second video stream to achieve the effect of entering the microphone connection.
  • A default background and the words "The other party is coming" are displayed in the display area of the microphone-connected live broadcast room (i.e., the second display area 503).
  • the final microphone connection screen is displayed.
  • the cloud server may also directly adjust the resolution of the first video stream and re-encode it to obtain the first live stream with updated resolution and send it down.
  • Step 406 When the first terminal disconnects the first live stream, the downlink live stream is switched from the first live stream to the second live stream.
  • For the details of step 406, refer to step two above; they are not repeated here.
  • On the one hand, the cloud server re-encodes the audio of the first live stream based on the audio encoding format of the second live stream; on the other hand, it sends the video header information of the second live stream to the second terminal, so that the second terminal can display the live picture according to the audio encoding format and picture resolution of the microphone-connected live broadcast while only the first live stream is being pulled.
  • In this way, the microphone-connected live content is streamed with a delay but displayed synchronously.
  • the cloud server can also identify the live stream based on its identifier. If it is identified that the first identifier of the first live stream and the second identifier of the second live stream indicate the same live room, the downlink live stream is directly switched from the first live stream to the second live stream without sending the video header information in advance or re-encoding the first live stream.
  • the above embodiment shows the process of switching from ordinary live broadcast to microphone connection.
  • the solution of stream overlap is also adopted to end the microphone connection.
  • the stream overlap duration is set, and the two streams are sent in parallel within the stream overlap duration. After the stream overlap duration is reached, the new stream is switched to send.
  • the stream overlap solution is also adopted when switching in the reverse direction.
  • FIG. 7 shows a flow chart of a live stream switching method provided by another exemplary embodiment of the present application.
  • This embodiment is described by taking the method executed by a cloud server as an example. The method comprises the following steps:
  • Step 701 Receive a first live stream sent by a first terminal and a second live stream sent by a stream mixing server.
  • Step 702 based on the interruption of the first terminal or the pulling of the stream by the cloud server, the downlink live stream is switched from the first live stream to the second live stream.
  • step 701 to step 702 can refer to the above-mentioned step 201 to step 202, and the embodiment of the present application will not be repeated here.
  • The process of switching from microphone-connected live broadcast back to ordinary live broadcast also includes two implementation methods of downlink uninterrupted streaming: one is shown in steps 703 to 704 below, and the other is shown in step 705 below.
  • Step 703 When the first live stream sent by the first terminal is received again, continue to send the second live stream to the second terminal.
  • When the microphone connection is about to end, the first terminal resumes sending the first live stream to the cloud server within the stream overlap duration before the microphone connection ends.
  • the cloud server receives the first live stream and the second live stream, but continues to send the second live stream to the second terminal.
  • Step 704 In response to the second live stream being disconnected, the downlink live stream is switched to the first live stream.
  • the first terminal stops sending the third live stream to the stream mixing server and continues to send the first live stream to the cloud server.
  • the cloud server switches to sending the first live stream.
  • Step 705 When the first live stream sent by the first terminal is received again, the downlink live stream is switched to the first live stream.
  • the cloud server immediately switches the stream after receiving the first live stream, switching the downlink live stream from the second live stream to the first live stream, and the audience end returns to the normal live mode.
  • the stream overlap duration is set.
  • the host end pushes the streams to the cloud server and the mixed stream server in parallel within the stream overlap duration.
  • the cloud server receives the first live stream and the second live stream within the stream overlap duration, and continues to push the second live stream to the audience end.
  • the host end cuts off the third live stream.
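  • For completeness, the reverse switch (steps 703 to 705) could be sketched in the same hedged style as the earlier cloud-server sketch; second_stream_ended and the other helper names are assumptions for illustration:

```python
def switch_downlink_on_mic_end(pull_first, pull_second, second_stream_ended,
                               forward, switch_immediately=False):
    """Keep the downlink on the mixed stream, then return to the ordinary stream.
    pull_first()/pull_second() return the next segment or None; second_stream_ended()
    reports that the mixed stream has been disconnected (the host end cut the
    third live stream after the stream overlap duration)."""
    current = "second"
    while True:
        if current == "second":
            if switch_immediately:
                # Step 705: switch as soon as the first live stream is received again.
                segment = pull_first()
                if segment is not None:
                    forward(segment)
                    current = "first"
                    continue
            elif second_stream_ended():
                # Steps 703-704: keep the mixed stream until it is disconnected.
                current = "first"
        segment = pull_second() if current == "second" else pull_first()
        if segment is not None:
            forward(segment)
```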
  • Figure 8 shows the process in which the first terminal, the cloud server, the mixed stream server, the corresponding microphone-connected terminal and the second terminal cooperate to complete the switch from microphone-connected live broadcast to ordinary live broadcast, which also includes two stages.
  • the first stage corresponds to the stream overlap duration.
  • the cloud server receives the first live stream A1 and the second live stream A2+B, and continues to push the second live stream A2+B to the second terminal.
  • In the second stage, the microphone connection officially ends and the live broadcast returns to the ordinary live broadcast mode.
  • the first terminal stops sending the third live stream A2 to the mixed stream server, and the cloud server switches the downlink live stream from the second live stream A2+B to the first live stream A1.
  • the above embodiment shows the steps performed by the cloud server during the live streaming switching process.
  • The first terminal also triggers the downlink uninterrupted-flow logic during the stream switching process to control the pushing of the two live streams.
  • Figure 9 shows a flowchart of the live streaming switching method provided by another exemplary embodiment of the present application. This embodiment is described by taking the method executed by the first terminal as an example. The method includes the following steps:
  • Step 901 Send a first live stream to a cloud server, and the cloud server is used to forward the first live stream to a second terminal that requests live streaming.
  • The first terminal is the live stream pushing end, which generates a live stream based on the collected live data (such as audio data and video data) and sends it to the cloud server. In the ordinary live broadcast mode, the first terminal directly sends the first live stream to the cloud server.
  • the cloud server is responsible for receiving the live stream sent by the push stream end of each live room, and pushes the live stream to each pull stream end corresponding to the live room.
  • the second terminal is the live pull stream end, which sends a live pull stream request to the cloud server based on the received live viewing operation.
  • Step 902 In response to the microphone connection instruction, send a third live stream to the stream mixing server.
  • the mixing stream server is used to mix the live stream sent by the microphone-connected terminal to generate a second live stream
  • the cloud server is used to switch the downlink live stream from the first live stream to the second live stream based on the downlink uninterrupted flow logic when receiving the first live stream and the second live stream.
  • the downlink uninterrupted flow logic is determined by the interruption of the first terminal or the pulling of the stream by the cloud server.
  • the terminals corresponding to the live broadcast rooms connected to each other send their live streams to the mixing server.
  • the mixing server is responsible for mixing the live streams corresponding to the live broadcast rooms connected to each other, and sending the live mixed stream to the cloud server, which is responsible for forwarding the live mixed stream to the audience terminals of each live broadcast room involved in the live broadcast mode.
  • the first terminal determines that a microphone connection instruction is received.
  • If the first live stream sent to the cloud server is directly switched to the third live stream sent to the mixing server, then because the live streams of the microphone-connected live broadcast rooms are not synchronized, mixing is delayed, and the mixing server takes a certain amount of time to mix the streams, the cloud server may fail to pull the live stream for a period of time, which in turn interrupts the downlink live stream and causes the audience end to freeze or show a black screen.
  • In the embodiment of the present application, after the first terminal enters the microphone connection mode, it continues to send the first live stream to the cloud server while sending the third live stream to the mixing server, and does not immediately cut off the first live stream sent to the cloud server.
  • the mixing server mixes the third live stream sent by the first terminal and the live stream of the corresponding microphone connection terminal, generates a second live stream and sends it to the cloud server.
  • When the cloud server receives the first live stream and the second live stream at the same time, it switches the live stream sent to the second terminal based on the downlink uninterrupted-flow logic.
  • Step 903 In response to the microphone connection duration reaching the stream overlap duration, stop sending the first live stream to the cloud server.
  • After the first terminal has sent the first live stream and the third live stream synchronously until the stream overlap duration (for example, 5 seconds) is reached, it stops sending the first live stream to the cloud server and only sends the third live stream to the mixed stream server.
  • the cloud server switches the stream based on the logic of downlink uninterrupted flow. By setting the stream overlap time, the mixed stream server and the cloud server can buffer the stream switching, avoiding the situation of interruption in the process of switching from ordinary live broadcast to microphone connection.
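  • A rough sketch of steps 901 to 903 on the first terminal, assuming hypothetical push-control callbacks and the 5-second overlap value used as an example above:

```python
import threading

STREAM_OVERLAP_SECONDS = 5   # example value from the embodiment, not mandated

def on_mic_connection_start(push_third_to_mixer, stop_first_to_cloud):
    """Start pushing the third live stream to the mixing server while the first
    live stream keeps flowing to the cloud server, then cut the first live
    stream once the microphone connection duration reaches the overlap duration."""
    push_third_to_mixer()                                                 # step 902
    timer = threading.Timer(STREAM_OVERLAP_SECONDS, stop_first_to_cloud)  # step 903
    timer.start()
    return timer
```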
  • FIG. 10 shows a flowchart of a live stream switching method provided by another exemplary embodiment of the present application. This embodiment is described by taking the method executed by the first terminal as an example. The method includes the following steps:
  • Step 1001 Send a first live stream to a cloud server, and the cloud server is used to forward the first live stream to a second terminal that requests live streaming.
  • Step 1002 In response to the microphone connection instruction, send a third live stream to the stream mixing server.
  • Step 1003 In response to the microphone connection duration reaching the stream overlap duration, stop sending the first live stream to the cloud server.
  • steps 1001 to 1003 can refer to the above steps 901 to 903, and the embodiments of the present application will not be repeated here.
  • Step 1004 Send the first live stream to the cloud server within the stream overlap time before the microphone connection ends.
  • the cloud server is used to continue sending the second live stream to the second terminal when the first live stream is received again.
  • The first terminal determines the end time of the microphone connection in advance and sends the first live stream to the cloud server within the stream overlap duration before the microphone connection ends, so that the cloud server receives the live data in advance and prepares for the stream switch, preventing the stream from being interrupted.
  • The first terminal may determine the end of the microphone connection in advance in the following two ways:
  • In the first way, a timer is set based on the stream overlap duration, and the first live stream is sent to the cloud server.
  • When the host or the other host manually terminates the microphone connection, the first terminal immediately sets a timer based on the stream overlap duration, and the microphone connection ends when the timer reaches the stream overlap duration.
  • In the second way, the microphone connection has a fixed duration limit (i.e., a target microphone connection duration); if the first terminal determines that the target duration will be reached after one more stream overlap duration (e.g., after another 5 seconds), it starts sending the first live stream to the cloud server.
  • Step 1005 in response to the microphone connection being ended, stop sending the third live stream to the stream mixing server.
  • the cloud server is used to switch the downlink live stream from the second live stream to the first live stream when the second live stream is disconnected. Since the first terminal sends the first live stream to the cloud server in advance, the cloud server can achieve seamless switching from microphone connection to ordinary live broadcast, avoiding freezes or black screens.
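  • The two advance-determination ways above, together with step 1005, could be sketched as follows (helper names and the 5-second value are assumptions for illustration only):

```python
import threading

STREAM_OVERLAP_SECONDS = 5   # example overlap duration

def on_manual_mic_end(resume_first_to_cloud, stop_third_to_mixer):
    """Way one: the anchor manually ends the connection. Resume the first live
    stream at once and cut the third live stream after the overlap duration."""
    resume_first_to_cloud()                                              # step 1004
    threading.Timer(STREAM_OVERLAP_SECONDS, stop_third_to_mixer).start() # step 1005

def maybe_resume_before_fixed_end(elapsed_seconds, target_mic_seconds,
                                  resume_first_to_cloud):
    """Way two: the connection has a fixed target duration. Resume the first
    live stream one overlap duration before that target is reached."""
    if elapsed_seconds >= target_mic_seconds - STREAM_OVERLAP_SECONDS:
        resume_first_to_cloud()
```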
  • the stream overlap duration is set.
  • the host end pushes the stream to the cloud server and the mixed stream server in parallel within the stream overlap duration.
  • the cloud server receives the first live stream and the second live stream within the stream overlap duration, and continues to push the second live stream to the audience end.
  • the host end cuts off the third live stream, and the cloud server starts to push the first live stream to the audience end.
  • FIG. 11 is a structural block diagram of a live stream switching device provided by an exemplary embodiment of the present application, the device comprising the following structure:
  • the stream pulling module 1101 is used to receive a first live stream sent by a first terminal and a second live stream sent by a stream mixing server, where the second live stream is a live mixed stream of the first terminal and at least one microphone-connected terminal, and the first terminal is used to continue to send the first live stream to the cloud server after microphone connection;
  • the streaming module 1102 is used to switch the downlink live stream from the first live stream to the second live stream based on the interruption of the first terminal or the pulling of the stream by the cloud server.
  • the downlink live stream is the live stream pulled by the second terminal from the cloud server.
  • the second terminal is used to display the live broadcast content based on the second live stream.
  • the streaming module 1102 is further used for:
  • When the first live stream and the second live stream are received, continue to send the first live stream to the second terminal; when the first terminal cuts off the first live stream, switch the downlink live stream from the first live stream to the second live stream, where the first terminal stops sending the first live stream after the microphone connection duration reaches the stream overlap duration;
  • the downlink live stream is switched from the first live stream to the second live stream.
  • the streaming module 1102 is further used for:
  • the streaming module 1102 is further used for:
  • the audio of the first live stream is re-encoded according to the audio encoding format of the second live stream, and the first live stream after audio re-encoding is sent to the second terminal.
  • the device further comprises:
  • an information acquisition module configured to acquire second video header information of the second live stream in response to the first identifier and the second identifier indicating that the first live stream and the second live stream belong to different live streams of the same live broadcast room, wherein the video header information includes video resolution;
  • the information sending module is used to send the second video header information to the second terminal, and the second terminal is used to adjust the video picture of the first live stream based on the second video header information.
  • the streaming module 1102 is further used for:
  • the downlink live stream is switched to the first live stream.
  • FIG. 12 is a structural block diagram of a live stream switching device provided by another exemplary embodiment of the present application, the device comprising the following structure:
  • the streaming module 1201 is used to send a first live streaming stream to a cloud server, and the cloud server is used to forward the first live streaming stream to a second terminal requesting live streaming;
  • the microphone connection module 1202 is used to send a third live stream to the stream mixing server in response to the microphone connection instruction, the stream mixing server is used to mix the live stream sent by the microphone connection terminal to generate a second live stream, and the cloud server is used to switch the downlink live stream from the first live stream to the second live stream based on the disconnection of the first terminal or the stream pulling condition of the cloud server when receiving the first live stream and the second live stream;
  • The control module 1203 is used to stop sending the first live stream to the cloud server in response to the microphone connection duration reaching the stream overlap duration.
  • the streaming module 1201 is further used for:
  • the control module 1203 is further used for:
  • the third live stream is stopped from being sent to the mixing stream server, and the cloud server is used to switch the downlink live stream to the first live stream when the second live stream is disconnected.
  • the streaming module 1201 is further used for:
  • the host end pushes streams to the cloud server and the mixed stream server in parallel.
  • the cloud server receives the first live stream and the second live stream at the same time within a period of time, and switches the stream based on the interruption of the first terminal or the pulling of the stream by the cloud server.
  • the first live stream continues to flow before the cloud server receives the second live stream, thereby achieving seamless switching between ordinary live broadcast and live broadcast with microphones connected, and solving the problem of mixed stream delay or interruption caused by direct switching of the live stream by the host end, causing freezes or black screens on the viewer end.
  • the solution of the embodiment of the present application has low hardware requirements on the viewer end, and there is no need to set up an additional buffer to pull the stream in advance.
  • FIG. 13 shows a schematic diagram of the structure of a server provided in one embodiment of the present application.
  • the server 1300 includes a central processing unit (CPU) 1301, a system memory 1304 including a random access memory (RAM) 1302 and a read-only memory (ROM) 1303, and a system bus 1305 connecting the system memory 1304 and the central processing unit 1301.
  • The server 1300 also includes a basic input/output system (I/O system) 1306 for facilitating information transmission between various components in the computer, and a large-capacity storage device 1307 for storing an operating system 1313, application programs 1314, and other program modules 1315.
  • The basic input/output system 1306 includes a display 1308 for displaying information and an input device 1309, such as a mouse or a keyboard, for a user to input information.
  • the display 1308 and the input device 1309 are connected to the central processing unit 1301 through an input/output controller 1310 connected to the system bus 1305.
  • the basic input/output system 1306 may also include an input/output controller 1310 for receiving and processing inputs from a plurality of other devices such as a keyboard, a mouse, or an electronic stylus.
  • the input/output controller 1310 also provides output to a display screen, a printer, or other types of output devices.
  • the mass storage device 1307 is connected to the central processing unit 1301 through a mass storage controller (not shown) connected to the system bus 1305.
  • the mass storage device 1307 and its associated computer readable medium provide non-volatile storage for the server 1300. That is, the mass storage device 1307 may include a computer readable medium (not shown) such as a hard disk or a Compact Disc Read-Only Memory (CD-ROM) drive.
  • the computer-readable medium may include computer storage media and communication media.
  • Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules or other data.
  • Computer storage media include RAM, ROM, Erasable Programmable Read Only Memory (EPROM), flash memory or other solid-state storage technology, CD-ROM, Digital Video Disc (DVD) or other optical storage, cassettes, magnetic tapes, disk storage or other magnetic storage devices.
  • the server 1300 may also be connected to a remote computer on a network through a network such as the Internet. That is, the server 1300 may be connected to a network 1312 through a network interface unit 1311 connected to the system bus 1305, or the network interface unit 1311 may be used to connect to other types of networks or remote computer systems (not shown).
  • FIG. 14 shows a block diagram of a terminal 1400 provided by an exemplary embodiment of the present application.
  • the terminal 1400 may be a portable mobile terminal, such as a smart phone, a tablet computer, a Moving Picture Experts Group Audio Layer III (MP3) player, or a Moving Picture Experts Group Audio Layer IV (MP4) player.
  • the terminal 1400 may also be referred to as a user device, a portable terminal, or other names.
  • the terminal 1400 includes a processor 1401 and a memory 1402 .
  • the processor 1401 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc.
  • the processor 1401 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 1401 may also include a main processor and a coprocessor.
  • the main processor is a processor for processing data in the awake state, also known as a central processing unit (CPU);
  • the coprocessor is a low-power processor for processing data in the standby state.
  • the processor 1401 may be integrated with a graphics processing unit (GPU), and the GPU is responsible for rendering and drawing the content to be displayed on the display screen.
  • the processor 1401 may also include an artificial intelligence (AI) processor, which is used to process computing operations related to machine learning.
  • the memory 1402 may include one or more computer-readable storage media, which may be tangible and non-transitory.
  • the memory 1402 may also include a high-speed random access memory, and a non-volatile memory, such as one or more disk storage devices, flash memory storage devices.
  • the non-transitory computer-readable storage medium in the memory 1402 is used to store at least one instruction, which is used to be executed by the processor 1401 to implement the method provided in the embodiment of the present application.
  • the terminal 1400 may optionally further include: a peripheral device interface 1403 .
  • the peripheral device interface 1403 may be used to connect at least one peripheral device related to input/output (I/O) to the processor 1401 and the memory 1402.
  • the processor 1401, the memory 1402, and the peripheral device interface 1403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1401, the memory 1402, and the peripheral device interface 1403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
  • An embodiment of the present application also provides a computer-readable storage medium, which stores at least one instruction, and the at least one instruction is loaded and executed by a processor to implement the live stream switching method described in the above embodiments.
  • a computer program product or a computer program including computer instructions, the computer instructions being stored in a computer-readable storage medium.
  • a processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the live stream switching method provided in various optional implementations of the above aspects.
  • Computer-readable storage media include computer storage media and communication media, wherein the communication media include any media that facilitates the transmission of a computer program from one place to another.
  • the storage medium can be any available medium that a general or special-purpose computer can access.
  • The information (including but not limited to user device information, user personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.) and signals involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data comply with the relevant laws, regulations and standards of the relevant countries and regions.
  • the live stream, video header information, user account, etc. involved in this application are all obtained with full authorization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Telephonic Communication Services (AREA)

Abstract

Embodiments of this application disclose a live stream switching method, apparatus, server, terminal and program product, belonging to the technical field of live streaming. The method includes: receiving a first live stream sent by a first terminal and a second live stream sent by a stream-mixing server, where the second live stream is a mixed live stream of the first terminal and at least one co-streaming terminal, and the first terminal is configured to continue sending the first live stream to the cloud server after co-streaming starts; and switching the downlink live stream from the first live stream to the second live stream based on the stream-interruption status of the first terminal or the stream-pulling status of the cloud server. In the embodiments of this application, the anchor side pushes streams to the cloud server and the stream-mixing server in parallel, achieving seamless switching between normal live streaming and co-streaming, and solving the problem that directly switching the live stream on the anchor side causes mixing delay or stream interruption, which makes the viewer side stutter or show a black screen.

Description

直播流切换方法、装置、服务器、终端及程序产品 技术领域
本申请实施例涉及直播技术领域,特别涉及一种直播流切换方法、装置、服务器、终端及程序产品。
背景技术
连麦是一种由至少两位主播在同一直播间内进行同步直播的模式,主播可以向观众或其它主播发起连麦请求,从而实现直播互动。在由普通直播切换为连麦或者结束连麦切换回普通直播的过程中,存在直播流的切换。网络较差的情况下会导致观众客户端拉流失败而黑屏的情况。
目前解决断流黑屏方法通常是在直播拉流端增加缓冲区进行提前拉流,然而该方案对终端硬件要求较高。由于观众的设备硬件和网络多种多样,一旦断流重拉,大多会存在不同程度的黑屏或视频卡顿等现象,且不支持缓冲功能的拉流端的黑屏现象更为严重。
发明内容
本申请实施例提供了一种直播流切换方法、装置、服务器、终端及程序产品。所述技术方案如下:
一方面,本申请提供了一种直播流切换方法,所述方法由云服务器执行,所述方法包括:
接收第一终端发送的第一直播流以及混流服务器发送的第二直播流,所述第二直播流为所述第一终端以及至少一个连麦终端的直播混流,所述第一终端用于在连麦后继续向所述云服务器发送所述第一直播流;
基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,所述下行直播流为第二终端从云服务器处拉取的直播流,所述第二终端用于基于所述第二直播流展示连麦直播内容。
另一方面,本申请实施例提供了一种直播流切换方法,所述方法由第一终端执行,所述方法包括:
向云服务器发送第一直播流,所述云服务器用于向请求直播拉流的第二终端转发所述第一直播流;
响应于连麦指令,向混流服务器发送第三直播流,所述混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,所述云服务器用于在接收到所述第一直播流以及所述第二直播流的情况下,基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流;
响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流。
另一方面,本申请提供了一种直播流切换装置,所述装置包括:
拉流模块,用于接收第一终端发送的第一直播流以及混流服务器发送的第二直播流,所述第二直播流为所述第一终端以及至少一个连麦终端的直播混流,所述第一终端用于在连麦后继续向所述云服务器发送所述第一直播流;
推流模块,用于基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,所述下行直播流为第二终端从云服务器处拉取的直播流,所述第二终端用于基于所述第二直播流展示连麦直播内容。
另一方面,本申请提供了一种直播流切换装置,所述装置包括:
推流模块,用于向云服务器发送第一直播流,所述云服务器用于向请求直播拉流的第二终端转发所述第一直播流;
连麦模块,用于响应于连麦指令,向混流服务器发送第三直播流,所述混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,所述云服务器用于在接收到所述第一直播流以及所述第二直播流的情况下,基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流;
控制模块,用于响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流。
另一方面,本申请提供了一种服务器,所述服务器包括处理器和存储器;所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如上述方面所述的由云服务器执行的直播流切换方法。
另一方面,本申请提供了一种终端,所述终端包括处理器和存储器;所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如上述方面所述的由第一终端执行的直播流切换方法。
另一方面,本申请提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有至少一条计算机程序,所述计算机程序由处理器加载并执行以实现如上述方面所述的直播流切换方法。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。服务器的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该服务器执行上述方面的各种可选实现方式中提供的由云服务器执行的直播流切换方法;终端的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该终端执行上述方面的各种可选实现方式中提供的由第一终端执行的直播流切换方法。
本申请实施例提供的技术方案至少包括以下有益效果:
本申请实施例中,在由普通直播向连麦直播切换的过程中,主播端并行向云服务器和混流服务器推流,云服务器在一段时间内同时接收第一直播流和第二直播流,并基于第一终端的断流情况或云服务器的拉流情况进行切流。在云服务器接收到第二直播流之前第一直播流不断流,从而实现普通直播与连麦直播的无缝切换,解决主播端直接切换直播流导致混流延迟或断流,使观众端卡顿或黑屏的问题。并且,本申请实施例的方案对观众端的硬件要求较低,无需设置额外的缓冲区提前拉流。
附图说明
图1示出了本申请一个示例性实施例提供的实施环境;
图2示出了本申请一个示例性实施例提供的直播流切换方法的流程图;
图3示出了本申请一个示例性实施例提供的由普通直播切换至连麦直播过程的示意图;
图4示出了本申请另一个示例性实施例提供的直播流切换方法的流程图;
图5示出了本申请一个示例性实施例提供的直播界面切换过程的示意图;
图6示出了本申请一个示例性实施例提供的直播流切换的示意图;
图7示出了本申请另一个示例性实施例提供的直播流切换方法的流程图;
图8示出了本申请一个示例性实施例提供的由连麦直播切换至普通直播过程的示意图;
图9示出了本申请另一个示例性实施例提供的直播流切换方法的流程图;
图10示出了本申请另一个示例性实施例提供的直播流切换方法的流程图;
图11示出了本申请一个示例性实施例提供的直播流切换装置的结构框图;
图12示出了本申请另一个示例性实施例提供的直播流切换装置的结构框图;
图13示出了本申请一个示例性实施例提供的服务器的结构框图;
图14示出了本申请一个示例性实施例提供的终端的结构框图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
在本文中提及的“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。字符“/”一般表示前后关联对象是一种“或”的关系。
请参考图1,其示出了本申请实施例提供的一种实施环境。该实施环境中包括:第一终端、混流服务器、云服务器和第二终端。其中,第一终端和第二终端内运行有具有直播功能的应用程序。可选的,第一终端和第二终端上安装的应用程序是相同的,或两个终端上安装的应用程序是不同控制系统平台的同一类型应用程序。
第一终端为直播推流端,基于采集到的直播数据(例如音频数据、视频数据)生成并发送至云服务器进行直播推流。第二终端为直播拉流端,基于接收到的直播观看操作向云服务器发送直播拉流请求。云服务器负责接收各个直播间推流端发送的直播流,并向直播间对应的各个拉流端进行直播推流。混流服务器负责对相互连麦的直播间对应的直播流进行混流,并向云服务器发送连麦混流,由云服务器负责向连麦涉及到的各个直播间的第二终端转发连麦混流。
请参考图2,其示出了本申请一个示例性实施例提供的直播流切换方法的流程图。本实施例以该方法由云服务器执行为例进行说明。该方法包括如下步骤:
步骤201,接收第一终端发送的第一直播流以及混流服务器发送的第二直播流。
本申请实施例中,第一终端为直播推流端,基于采集到的直播数据(例如音频数据、视频数据)生成并发送至云服务器进行直播推流。第二终端为直播拉流端,基于接收到的直播观看操作向云服务器发送直播拉流请求。云服务器负责接收各个直播间推流端发送的直播流,并向直播间对应的各个拉流端进行直播推流。
示意性的,云服务器通过内容分发网络(Content Delivery Network,CDN)实现直播拉流和推流。
本申请实施例中的第一直播流是指普通直播模式下的直播流,即一个直播间所对应的直播流。
第二直播流是指连麦直播模式下的直播混流,即第一终端以及至少一个连麦终端的直播混流。连麦直播模式下,相互连麦的直播间对应的终端将己方的直播流发送至混流服务器。混流服务器负责对相互连麦的直播间对应的直播流进行混流,并向云服务器发送直播混流,由云服务器负责向连麦涉及到的各个直播间的观众端转发直播混流。
相关技术中,主播端在进入连麦后直接将向云服务器发送的第一直播流切换为向混流服务器发送的第三直播流。由于连麦的各直播间直播流不同步导致混流延迟,或者混流服务器网络抖动等因素,会造成云服务器一段时间内无法拉取到直播流的情况,进而导致观众端黑屏或画面卡顿。
因此在一种可能的实施方式中,第一终端在进入连麦模式后,在向混流服务器发送第一直播流的同时,仍然继续向云服务器发送第一直播流,并不立即切断向云服务器发送的第一直播流。混流服务器对第一终端发送的第三直播流以及对应的连麦终端的直播流进行混流,生成第二直播流并发送至云服务器。云服务器在一段时间内可同时接收第一直播流和第二直播流。因此即便直播混流未到达云服务器,云服务器也能够基于第一直播流继续推流,防止观众端黑屏或卡顿的情况。
步骤202,基于第一终端的断流情况或云服务器的拉流情况,将下行直播流由第一直播流切换为第二直播流。
其中,下行直播流为第二终端从云服务器处拉取的直播流,第二终端用于基于第二直播流展示连麦直播内容。上行直播流是指推流端(主播端,如第一终端)发送至云服务器的直播流,下行直播流是指云服务器发送至拉流端(观众端,如第二终端)的直播流。
在一种可能的实施方式中,第一终端在同步发送第一直播流和第三直播流一段时间(例如5秒)后,停止向云服务器发送第一直播流,仅向混流服务器发送第三直播流,以减少第一终端以及云服务器的数据处理量。云服务器在连麦开始后的一段时长内可同时接收到第一直播流与第二直播流,从而基于下行不断流逻辑进行直播流切换。通过第一终端同步发送第一直播流和第三直播流,使得混流服务器与云服务器能够进行切流的缓冲,避免在由普通直播进入连麦的过程中出现断流的情况。
可选的,第一终端向云服务器发送的第一直播流与第一终端向混流服务器发送的第三直播流相同。或者,第一终端向云服务器发送的第一直播流与第一终端向混流服务器发送的第三直播流不同,例如分辨率不同、编码格式不同等。本申请实施例对此不作限定。
综上所述,本申请实施例中,在由普通直播向连麦直播切换的过程中,主播端并行向云服务器和混流服务器推流,云服务器在一段时间内同时接收第一直播流和第二直播流,并基于第一终端的断流情况或云服务器的拉流情况进行切流。在云服务器接收到第二直播流之前第一直播流不断流,从而实现普通直播与连麦直播的无缝切换,解决主播端直接切换直播流导致混流延迟或断流,使观众端卡顿或黑屏的问题。并且,本申请实施例的方案对观众端的硬件要求较低,无需设置额外的缓冲区提前拉流。
在一种可能的实施方式中,云服务器进行切流的方案包括两种,即上述步骤202具体包括如下步骤:
步骤一,在接收到第一直播流以及第二直播流的情况下,继续向第二终端发送第一直播流;步骤二,在第一终端切断第一直播流的情况下,将下行直播流由第一直播流切换为第二直播流,第一终端用于在连麦时长达到流重叠时长后停止发送第一直播流。
或,
步骤三,在成功接收到第二直播流的情况下,将下行直播流由第一直播流切换为第二直播流。
即本申请实施例提供的直播流切换方法存在两种实现方式。第一种方案的切流时机取决于主播端的断流情况:主播端在开始连麦后的流重叠时长内,并行向云服务器和混流服务器发送直播流;云端在同时接收到第一直播流和第二直播流的情况下,并不立即切流,继续下行第一直播流;当主播端切断第一直播流后,云端下行第二直播流。第二种方案的切流时机取决于云服务器的拉流情况:主播端开始连麦后的流重叠时长内,并行向云服务器和混流服务器发送直播流;云端在同时接收到第一直播流和第二直播流的情况下,立即切换下行的直播流至第二直播流。
由于两种方案中均设置有流重叠时长,即主播端在流重叠时长内并不切断向云服务器发送的直播流,确保云服务器能够对下行直播流进行无缝切换,因此均能够防止出现断流黑屏的情况。
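To make the two schemes concrete, the following is a minimal sketch assuming a hypothetical cloud-server component with callback-style notifications; the class and method names (`DownlinkSwitcher`, `on_mixed_stream_received`, etc.) and the stream labels A1 and A2+B are illustrative assumptions, not an interface defined by this application. Scheme 1 keeps forwarding the first stream until the first terminal cuts it, while scheme 2 switches as soon as the mixed stream is successfully pulled.

```python
# Minimal sketch of the cloud server's downlink switching logic.
# Scheme 1: keep forwarding stream A1 until the first terminal cuts it.
# Scheme 2: switch to the mixed stream A2+B as soon as it is received.
# All names are illustrative assumptions, not the patented interface.

class DownlinkSwitcher:
    def __init__(self, scheme: int):
        self.scheme = scheme        # 1 or 2, matching the two schemes above
        self.downlink = "A1"        # stream currently forwarded to viewers
        self.has_mixed = False      # whether A2+B has been pulled successfully

    def on_mixed_stream_received(self):
        self.has_mixed = True
        if self.scheme == 2:
            self.downlink = "A2+B"  # scheme 2: switch immediately

    def on_first_stream_cut(self):
        if self.scheme == 1 and self.has_mixed:
            self.downlink = "A2+B"  # scheme 1: switch only after A1 is cut


switcher = DownlinkSwitcher(scheme=1)
switcher.on_mixed_stream_received()   # overlap window: A1 is still forwarded
switcher.on_first_stream_cut()        # anchor cuts A1 after the overlap
print(switcher.downlink)              # -> "A2+B"
```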
针对上述第一种下行不断流方案,图3示出了第一终端、云服务器、混流服务器、对应的连麦终端以及第二终端配合完成由普通直播切换为连麦直播的过程。普通直播阶段,第一终端(主播端)向云服务器推送第一直播流A1,云服务器向直播间对应的第二终端(观众端)转发第一直播流A1。由普通直播到连麦直播的切换过程包括两阶段。其中一阶段为流重叠时长对应的阶段。一阶段内第一终端向云服务器发送第一直播流A1的同时,向混流服务器发送第三直播流A2。混流服务器对第一终端发送的第三直播流A2和连麦终端发送的连麦流B进行混流,得到第二直播流A2+B并向云服务器推流。云服务器此时接收到第一直播流A1 和第二直播流A2+B,并继续向第二终端推送第一直播流A1。二阶段为流重叠时长结束后的连麦阶段。二阶段开始时第一终端停止向云服务器发送第一直播流A1,此时云服务器接收第二直播流A2+B,并将下行直播流由第一直播流A1无缝切换为第二直播流A2+B。完成由普通直播到连麦直播的直播流切换过程。
针对上述第一种下行不断流方案,请参考图4,其示出了本申请另一个示例性实施例提供的直播流切换方法的流程图。本实施例以该方法由云服务器执行为例进行说明。该方法包括如下步骤:
步骤401,接收第一终端发送的第一直播流以及混流服务器发送的第二直播流。
步骤401的具体实施方式可以参考上述步骤201,本申请实施例在此不再赘述。
步骤402,在接收到第一直播流以及第二直播流的情况下,获取第一直播流的第一标识以及第二直播流的第二标识。
在一种可能的实施方式中,直播流对应有直播流标识。该标识用于指示直播流所属的直播间。例如,直播推流端(第一终端)基于当前直播帐号生成直播流标识。直播推流端在进行直播流编码后,将直播流标识与直播编码数据一同封装后发送至云服务器或混流服务器。
步骤403,响应于第一标识与第二标识指示第一直播流和第二直播流属于同一直播间的不同直播流,继续向第二终端发送第一直播流。
其中,同一直播间对应的直播流的直播流标识中包含相同字段。
在一种可能的实施方式中,为了使云服务器能够从大量直播流中识别出属于同一直播间的普通流和混合流,连麦流的标识与对应直播间普通流的标识中包含相同字段。
示意性的,直播间A对应的第一直播流的第一标识为STREAM_A_NORMAL,直播间A与直播间B连麦时的第二直播流对应的第二标识为STREAM_A_STREAM_B_PK。由于第一标识和第二标识中包含用于指示直播间的相同字段STREAM_A,因此云服务器确定第一直播流与第二直播流分别属于直播间A对应的普通流和混合流。
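As a hedged illustration of how the shared-field convention above could be checked on the cloud server, the sketch below simply follows the STREAM_A_NORMAL / STREAM_A_STREAM_B_PK naming from the example; a real deployment may encode room information differently.

```python
# Illustrative only: assumes identifiers follow the example pattern
# STREAM_A_NORMAL (normal stream of room A) and STREAM_A_STREAM_B_PK
# (mixed stream of rooms A and B); real identifier schemes may differ.

def room_fields(stream_id: str) -> set:
    """Return the room fields (e.g. {'STREAM_A'}) encoded in a stream id."""
    if stream_id.endswith("_NORMAL"):
        return {stream_id[: -len("_NORMAL")]}
    if stream_id.endswith("_PK"):
        body = stream_id[: -len("_PK")]          # "STREAM_A_STREAM_B"
        parts = body.split("_STREAM_")           # ["STREAM_A", "B"]
        return {parts[0]} | {"STREAM_" + p for p in parts[1:]}
    return {stream_id}

def same_room(normal_id: str, mixed_id: str) -> bool:
    """True if a normal stream and a mixed stream share a room field."""
    return bool(room_fields(normal_id) & room_fields(mixed_id))

assert same_room("STREAM_A_NORMAL", "STREAM_A_STREAM_B_PK")
assert not same_room("STREAM_C_NORMAL", "STREAM_A_STREAM_B_PK")
```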
普通直播模式下的直播流与连麦直播模式下的直播流,可能分别采用不同的音频编码格式。在一种可能的实施方式中,为了实现在流重叠时长内推送普通流但观众端所展示的直播效果为连麦效果,云服务器对第一直播流的音频进行重新编码。步骤403包括如下步骤403a至步骤403c:
步骤403a,响应于第一标识与第二标识指示第一直播流和第二直播流属于同一直播间的不同直播流,获取第一直播流的音频编码格式以及第二直播流的音频编码格式。
当确定第一直播流与第二直播流分别属于同一直播间的普通流和混合流后,云服务器获取二者的音频编码格式。其中,音频编码格式可以为动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer III,MP3)、高级音频编码(Advanced Audio Coding,AAC)、微软音频格式(Windows Media Audio,WMA)等。
步骤403b,响应于第一直播流与第二直播流的音频编码格式一致,继续向所述第二终端转发第一直播流。
若第一直播流与第二直播流的音频编码格式一致,则无需对第一直播流的音频编码格式进行调整。云服务器继续直接转发第一直播流。
步骤403c,响应于第一直播流与第二直播流的音频编码格式不一致,按照第二直播流的音频编码格式对第一直播流进行音频重编码,并向第二终端发送音频重编码后的第一直播流。
若第一直播流与第二直播流的音频编码格式不一致,则云服务器按照第二直播流的音频编码格式对第一直播流进行音频重编码,使下行的第一直播流的音频编码格式与第二直播流的音频编码格式一致。从而达到在继续推送第一直播流的情况下,第二终端能够展示直播开始连麦的效果。
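A minimal sketch of the codec decision in steps 403a–403c follows; `probe_audio_codec` and `transcode_audio` are hypothetical helpers standing in for whatever media pipeline (for example an FFmpeg-based one) the cloud server actually uses.

```python
# Sketch of steps 403a-403c. probe_audio_codec and transcode_audio are
# hypothetical helpers standing in for a real media pipeline.

def prepare_downlink_audio(first_stream, second_stream,
                           probe_audio_codec, transcode_audio):
    src_codec = probe_audio_codec(first_stream)     # step 403a
    dst_codec = probe_audio_codec(second_stream)
    if src_codec == dst_codec:
        return first_stream                         # step 403b: forward as-is
    # step 403c: re-encode the normal stream to the mixed stream's codec
    return transcode_audio(first_stream, dst_codec)

# Toy usage with mocked helpers:
downlink = prepare_downlink_audio(
    {"codec": "MP3"}, {"codec": "AAC"},
    probe_audio_codec=lambda s: s["codec"],
    transcode_audio=lambda s, c: {**s, "codec": c},
)
print(downlink)  # -> {'codec': 'AAC'}
```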
步骤404,响应于第一标识与第二标识指示第一直播流和第二直播流属于同一直播间的不同直播流,获取第二直播流的第二视频头信息。
其中,视频头信息中包含视频分辨率。
上述步骤通过云服务器进行编码格式转换,将普通流的编码格式转换为连麦流的编码格式,使下行直播流的音频效果与连麦直播音频效果一致。当直播为视频直播时,还需要考虑普通直播画面与连麦直播画面的差异。
例如普通直播画面为横屏画面,分辨率为1280*720,连麦直播画面为竖屏画面,分辨率为1080*2400。或者普通直播画面为竖屏画面,连麦直播画面为横屏画面。
视频头信息是用于描述视频画面信息的数据,通常添加在每一段(或每一帧)视频流的头部,比如序列参数集(Sequence Paramater Set,SPS)和图像参数集(Picture Paramater Set,PPS)。在一种可能的实施方式中,云服务器通过向第二终端下发第二直播流的视频头信息,使第二终端按照连麦画面的分辨率渲染并显示第一直播流的内容。
步骤405,向第二终端下发第二视频头信息,第二终端用于基于第二视频头信息调整第一直播流的视频画面。
可选的,第二终端可以通过拉伸、裁剪、拼接、扩边等方式调整第一直播流的分辨率。
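For illustration only, one way the second terminal might map the normal stream's picture into the co-streaming layout is an aspect-preserving scale plus centred padding; the geometry below uses the 1280×720 and 1080×2400 example resolutions mentioned earlier, and a real client may instead crop, stretch or composite differently.

```python
# Sketch: fit a 1280x720 landscape picture into a 1080x2400 portrait canvas
# by aspect-preserving scaling plus centred padding. Illustrative only.

def fit_into(src_w, src_h, dst_w, dst_h):
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    off_x, off_y = (dst_w - out_w) // 2, (dst_h - out_h) // 2
    return out_w, out_h, off_x, off_y   # scaled size and top-left offset

print(fit_into(1280, 720, 1080, 2400))  # -> (1080, 608, 0, 896)
```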
如图5所示,第一终端开启连麦后,通过云服务器向第二终端发送连麦通知。第二终端接收到连麦通知后,将直播界面的样式从普通直播样式切换为连麦直播样式。普通直播模式下第二终端直接显示第一视频流的视频画面501。连麦直播样式下第二终端显示两个连麦直播间的拼接画面,该画面中包含第一显示区域502以及第二显示区域503。其中流重叠时长内,第二终端接收到的仍然是第一视频流。此时第二终端按照第二视频流的视频头信息调整视频画面,达到进入连麦的效果。由于此时未拉取到混合流,因此连麦直播间的显示区域(即第二显示区域503)内显示默认背景以及“对方正在赶来~”字样。当达到流重叠时长,第二终端拉取到混合流后,则显示最终的连麦画面。
在另一种可能的实施方式中,与音频编码相对应,云服务器还可以直接对第一视频流进行分辨率调整并重新编码,得到分辨率更新后的第一直播流并下发。
步骤406,在第一终端切断第一直播流的情况下,将下行直播流由第一直播流切换为第二直播流。
步骤406的具体实施方式可以参考上述步骤二,本申请实施例在此不再赘述。
本申请实施例中,在流重叠时长内,云服务器基于第二视频流的音频编码格式对第一视频流进行重编码,另一方面向第二终端下发第二视频流的视频头信息,从而使第二终端能够在仅拉取到第一视频流的情况下,按照连麦的音频编码格式和画面分辨率显示直播画面。实现了延迟推流但同步显示连麦直播内容的效果。
在一种可能的实施方式中,当采用上述第二种方案进行直播流切换时,云服务器同样可以依据直播流的标识进行识别。若识别到第一直播流的第一标识与第二直播流的第二标识指示同一直播间,则直接将下行直播流由第一直播流切换为第二直播流,无需提前下发视频头信息或者对第一直播流重编码。
上述实施例示出了由普通直播切换至连麦的过程。在一种可能的实施方式中,结束连麦同样采用流重叠的方案。如图6所示,在由一种流向另一种流切换的过程中,设置流重叠时长,在流重叠时长内并行发送两种流,达到流重叠时长后切换发送新的流。对应的,反向切换时也采用流重叠的方案。
请参考图7,其示出了本申请另一个示例性实施例提供的直播流切换方法的流程图。本实施例以该方法由云服务器执行为例进行说明。该方法包括如下步骤:
步骤701,接收第一终端发送的第一直播流以及混流服务器发送的第二直播流。
步骤702,基于第一终端的断流情况或云服务器的拉流情况,将下行直播流由第一直播流切换为第二直播流。
步骤701至步骤702的具体实施方式可以参考上述步骤201至步骤202,本申请实施例在此不再赘述。
对应的,由连麦切换为普通直播的过程同样包含两种下行不断流的实现方式。一种如下步骤703至步骤704所示,另一种如下步骤705所示。
步骤703,在重新接收到第一终端发送的第一直播流的情况下,继续向第二终端发送第二直播流。
在一种可能的实施方式中,当结束连麦时,第一终端在连麦结束前的流重叠时长内重新向云服务器发送第一直播流。流重叠时长内(连麦进入倒计时),云服务器接收第一直播流和第二直播流,但继续向第二终端下发第二直播流。
步骤704,响应于第二直播流断流,将下行直播流切换为第一直播流。
第一终端在达到流重叠时长(即连麦正式结束)时,停止向混流服务器发送第三直播流,继续向云服务器发送第一直播流。云服务器在第二直播流断流的情况下,切换发送第一直播流。
步骤705,在重新接收到第一终端发送的第一直播流的情况下,将下行直播流切换为第一直播流。
在第二种可能的实施方式中,云服务器在接收到第一直播流后立即进行切流,将下行直播流由第二直播流切换为第一直播流,观众端返回普通直播模式。
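The reverse direction (steps 703–705) mirrors the earlier state machine; a compact sketch, again with illustrative names only:

```python
# Sketch of the reverse switch (co-streaming back to normal live).
# Scheme 1 (steps 703-704): keep forwarding A2+B until it breaks, then
# fall back to A1. Scheme 2 (step 705): switch as soon as A1 returns.

def reverse_downlink(scheme, a1_received, mixed_broken):
    """Return which stream the cloud server should forward to viewers."""
    if scheme == 1:
        return "A1" if mixed_broken else "A2+B"
    return "A1" if a1_received else "A2+B"

print(reverse_downlink(1, a1_received=True, mixed_broken=False))  # A2+B
print(reverse_downlink(1, a1_received=True, mixed_broken=True))   # A1
print(reverse_downlink(2, a1_received=True, mixed_broken=False))  # A1
```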
本申请实施例中,在由连麦切换为普通直播的过程中,设置流重叠时长。主播端在流重叠时长内并行向云服务器和混流服务器推流,云服务器在流重叠时长内则接收第一直播流和第二直播流,并继续向观众端推送第二直播流。达到流重叠时长后,主播端切断第三直播流。从而实现连麦直播向普通直播的无缝切换,解决主播端直接切换直播流导致混流延迟或断流,使观众端卡顿或黑屏的问题。并且,本申请实施例的方案对观众端的硬件要求较低,无需设置额外的缓冲区提前拉流。
针对上述连麦切换至普通直播的第一种方案,图8示出了第一终端、云服务器、混流服务器、对应的连麦终端以及第二终端配合完成由连麦直播切换为普通直播的过程,同样包括两阶段。一阶段为流重叠时长对应的阶段。一阶段内,第一终端在确定流重叠时长后结束连麦时,开始向云服务器发送第一直播流A1。此时云服务器接收到第一直播流A1和第二直播流A2+B,并继续向第二终端推送第二直播流A2+B。流重叠时长结束后连麦正式结束,进入普通直播。二阶段开始时第一终端停止向混流服务器发送第三直播流A2,云服务器将下行直播流由第二直播流A2+B切换为第一直播流A1。
上述实施例示出了云服务器在直播切流过程中执行的步骤。对应的,第一终端在切流过程中也会触发下行不断流逻辑,控制两种直播流的推送。请参考图9,其示出了本申请另一个示例性实施例提供的直播流切换方法的流程图。本实施例以该方法由第一终端执行为例进行说明。该方法包括如下步骤:
步骤901,向云服务器发送第一直播流,云服务器用于向请求直播拉流的第二终端转发第一直播流。
第一终端为直播推流端,基于采集到的直播数据(例如音频数据、视频数据)生成并发送至云服务器进行直播推流。普通直播模式下,第一终端直接向云服务器发送第一直播流。云服务器负责接收各个直播间推流端发送的直播流,并向直播间对应的各个拉流端进行直播推流。第二终端为直播拉流端,基于接收到的直播观看操作向云服务器发送直播拉流请求。
步骤902,响应于连麦指令,向混流服务器发送第三直播流。
其中,混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,云服务器用于在接收到第一直播流以及第二直播流的情况下,基于下行不断流逻辑将下行直播流由第一直播流切换为第二直播流。下行不断流逻辑由第一终端的断流情况或云服务器的拉流情况决 定。
连麦直播模式下,相互连麦的直播间对应的终端将己方的直播流发送至混流服务器。混流服务器负责对相互连麦的直播间对应的直播流进行混流,并向云服务器发送直播混流,由云服务器负责向连麦涉及到的各个直播间的观众端转发直播混流。
可选的,当接收到连麦操作,或者连麦应答操作时,第一终端确定接收到连麦指令。
相关技术中,主播端在进入连麦后直接将向云服务器发送的第一直播流切换为向混流服务器发送的第三直播流。由于连麦的各直播间直播流不同步导致混流延迟,以及混流服务器混流需要一定时长等因素,可能导致云服务器在一段时间内拉不到直播流,进而导致下行直播流中断,使得观众端画面卡顿或黑屏。
因此在一种可能的实施方式中,第一终端在进入连麦模式后,在向混流服务器发送第一直播流的同时,仍然继续向云服务器发送第一直播流,并不立即切断向云服务器发送的第一直播流。混流服务器对第一终端发送的第三直播流以及对应的连麦终端的直播流进行混流,生成第二直播流并发送至云服务器。云服务器在同时接收到第一直播流和第二直播流的情况下,基于下行不断流逻辑切换发送至第二终端的直播流。
步骤903,响应于连麦时长达到流重叠时长,停止向云服务器发送第一直播流。
第一终端在同步发送第一直播流和第三直播流至达到流重叠时长(例如5秒)后,停止向云服务器发送第一直播流,仅向混流服务器发送第三直播流。云服务器基于下行不断流逻辑进行切流。通过设置流重叠时长,使得混流服务器与云服务器能够进行切流的缓冲,避免在由普通直播进入连麦的过程中出现断流的情况。
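Steps 901–903 on the first terminal can be sketched as a parallel push plus an overlap timer. The 5-second value and the `push`/`stop_push` client methods below are assumptions drawn from the example above, not a prescribed interface.

```python
import threading

OVERLAP_SECONDS = 5  # example value from the embodiment above; configurable

class AnchorPusher:
    """Illustrative first-terminal push logic when entering co-streaming."""

    def __init__(self, cloud, mixer):
        self.cloud, self.mixer = cloud, mixer   # hypothetical push clients

    def start_normal_live(self):
        self.cloud.push("A1")                   # step 901: normal live

    def on_co_stream_start(self):
        self.mixer.push("A2")                   # step 902: also push to mixer
        # Keep pushing A1 during the overlap window, then cut it so the
        # cloud server can switch the downlink to the mixed stream (step 903).
        threading.Timer(OVERLAP_SECONDS,
                        self.cloud.stop_push, args=("A1",)).start()

class _MockClient:                              # toy client for demonstration
    def push(self, s): print("push", s)
    def stop_push(self, s): print("stop", s)

anchor = AnchorPusher(_MockClient(), _MockClient())
anchor.start_normal_live()
anchor.on_co_stream_start()   # "stop A1" is printed OVERLAP_SECONDS later
```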
请参考图10,其示出了本申请另一个示例性实施例提供的直播流切换方法的流程图。本实施例以该方法由第一终端执行为例进行说明。该方法包括如下步骤:
步骤1001,向云服务器发送第一直播流,云服务器用于向请求直播拉流的第二终端转发第一直播流。
步骤1002,响应于连麦指令,向混流服务器发送第三直播流。
步骤1003,响应于连麦时长达到流重叠时长,停止向云服务器发送第一直播流。
步骤1001至步骤1003的具体实施方式可以参考上述步骤901至步骤903,本申请实施例在此不再赘述。
步骤1004,在连麦结束前的流重叠时长内,向云服务器发送第一直播流,云服务器用于在重新接收到第一直播流的情况下,继续向第二终端发送第二直播流。
在一种可能的实施方式中,第一终端提前判断连麦结束时间,在连麦结束前的流重叠时长内,向云服务器发送第一直播流,使云服务器提前接收直播数据,做好切流准备,以防断流。
具体的,第一终端提前判断连麦的方式包括如下两种:
响应于接收到连麦终止操作,基于流重叠时长设置定时器,并向云服务器发送第一直播流。
或,
响应于在流重叠时长后达到目标连麦时长,向云服务器发送第一直播流。
当主播或对方主播手动进行连麦终止操作时,第一终端立即基于流重叠时长设置定时器,定时器达到流重叠时长时连麦结束。或者,连麦对应有固定的时长上限(即目标连麦时长),若第一终端确定在流重叠时长后(例如5s后)达到目标连麦时长,则向云服务器发送第一直播流。
步骤1005,响应于连麦结束,停止向混流服务器发送第三直播流。
云服务器用于在第二直播流断流的情况下,将下行直播流从第二直播流切换为第一直播流。由于第一终端已提前向云服务器发送第一直播流,因此云服务器可以实现由连麦到普通直播的无缝切换,避免卡顿或黑屏的情况。
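The exit path in steps 1004–1005 could, under the same assumptions as the previous sketch, look like the following: a timer-based early re-push of the first stream, then cutting the third stream when co-streaming actually ends.

```python
import threading

OVERLAP_SECONDS = 5  # illustrative, matching the example above

def on_co_stream_terminate(cloud, mixer):
    """Manual termination: re-push A1 now, cut A2 after the overlap window."""
    cloud.push("A1")   # cloud server now receives A1 and A2+B in parallel
    threading.Timer(OVERLAP_SECONDS, mixer.stop_push, args=("A2",)).start()

def maybe_prepush_before_limit(cloud, elapsed, target_duration):
    """Fixed-length co-stream: re-push A1 OVERLAP_SECONDS before the limit."""
    if elapsed >= target_duration - OVERLAP_SECONDS:
        cloud.push("A1")
```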
本申请实施例中,在由连麦切换为普通直播的过程中,设置流重叠时长。主播端在流重叠时长内并行向云服务器和混流服务器推流,云服务器在流重叠时长内则接收第一直播流和第二直播流,并继续向观众端推送第二直播流。达到流重叠时长后,主播端切断第三直播流,云服务器开始向观众端推送第一直播流。从而实现连麦直播向普通直播的无缝切换,解决主播端直接切换直播流导致混流延迟或断流,使观众端卡顿或黑屏的问题。并且,本申请实施例的方案对观众端的硬件要求较低,无需设置额外的缓冲区提前拉流。
图11是本申请一个示例性实施例提供的直播流切换装置的结构框图,该装置包括如下结构:
拉流模块1101,用于接收第一终端发送的第一直播流以及混流服务器发送的第二直播流,所述第二直播流为所述第一终端以及至少一个连麦终端的直播混流,所述第一终端用于在连麦后继续向所述云服务器发送所述第一直播流;
推流模块1102,用于基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,所述下行直播流为第二终端从云服务器处拉取的直播流,所述第二终端用于基于所述第二直播流展示连麦直播内容。
可选的,所述推流模块1102,还用于:
在接收到所述第一直播流以及所述第二直播流的情况下,继续向所述第二终端发送所述第一直播流;在所述第一终端切断所述第一直播流的情况下,将所述下行直播流由所述第一直播流切换为所述第二直播流,所述第一终端用于在连麦时长达到流重叠时长后停止发送所述第一直播流;
或,
在成功接收到所述第二直播流的情况下,将所述下行直播流由所述第一直播流切换为所述第二直播流。
可选的,所述推流模块1102,还用于:
在接收到所述第一直播流以及所述第二直播流的情况下,获取所述第一直播流的第一标识以及所述第二直播流的第二标识;
响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,继续向所述第二终端发送所述第一直播流,其中,同一直播间对应的直播流的直播流标识中包含相同字段。
可选的,所述推流模块1102,还用于:
响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,获取所述第一直播流的音频编码格式以及所述第二直播流的音频编码格式;
响应于所述第一直播流与所述第二直播流的音频编码格式一致,继续向所述第二终端转发所述第一直播流;
响应于所述第一直播流与所述第二直播流的音频编码格式不一致,按照所述第二直播流的音频编码格式对所述第一直播流进行音频重编码,并向所述第二终端发送音频重编码后的所述第一直播流。
可选的,所述装置还包括:
信息获取模块,用于响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,获取所述第二直播流的第二视频头信息,其中,视频头信息中包含视频分辨率;
信息发送模块,用于向所述第二终端下发所述第二视频头信息,所述第二终端用于基于所述第二视频头信息调整所述第一直播流的视频画面。
可选的,所述推流模块1102还用于:
在重新接收到所述第一终端发送的所述第一直播流的情况下,继续向所述第二终端发送所述第二直播流;响应于所述第二直播流断流,将所述下行直播流切换为所述第一直播流;
或,
在重新接收到所述第一终端发送的所述第一直播流的情况下,将所述下行直播流切换为所述第一直播流。
图12是本申请另一个示例性实施例提供的直播流切换装置的结构框图,该装置包括如下结构:
推流模块1201,用于向云服务器发送第一直播流,所述云服务器用于向请求直播拉流的第二终端转发所述第一直播流;
连麦模块1202,用于响应于连麦指令,向混流服务器发送第三直播流,所述混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,所述云服务器用于在接收到所述第一直播流以及所述第二直播流的情况下,基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流;
控制模块1203,用于响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流。
可选的,所述推流模块1201,还用于:
在连麦结束前的流重叠时长内,向所述云服务器发送所述第一直播流,所述云服务器用于在重新接收到所述第一直播流的情况下,继续向所述第二终端发送所述第二直播流;
所述控制模块1203,还用于:
响应于连麦结束,停止向所述混流服务器发送所述第三直播流,所述云服务器用于在所述第二直播流断流的情况下,将所述下行直播流切换为所述第一直播流。
可选的,所述推流模块1201,还用于:
响应于接收到连麦终止操作,基于所述流重叠时长设置定时器,并向所述云服务器发送所述第一直播流;
或,
响应于在所述流重叠时长后达到目标连麦时长,向所述云服务器发送所述第一直播流。
综上所述,本申请实施例中,在由普通直播向连麦直播切换的过程中,主播端并行向云服务器和混流服务器推流,云服务器在一段时间内同时接收第一直播流和第二直播流,并基于第一终端的断流情况或云服务器的拉流情况进行切流。在云服务器接收到第二直播流之前第一直播流不断流,从而实现普通直播与连麦直播的无缝切换,解决主播端直接切换直播流导致混流延迟或断流,使观众端卡顿或黑屏的问题。并且,本申请实施例的方案对观众端的硬件要求较低,无需设置额外的缓冲区提前拉流。
请参考图13,其示出了本申请一个实施例提供的服务器的结构示意图。
所述服务器1300包括中央处理单元(Central Processing Unit,CPU)1301、包括随机存取存储器(Random Access Memory,RAM)1302和只读存储器(Read Only Memory,ROM)1303的系统存储器1304,以及连接系统存储器1304和中央处理单元1301的系统总线1305。所述服务器1300还包括帮助计算机内的各个器件之间传输信息的基本输入/输出(Input/Output,I/O)控制器1306,和用于存储操作系统1313、应用程序1314和其他程序模块1315的大容量存储设备1307。
所述基本输入/输出系统1306包括有用于显示信息的显示器1308和用于用户输入信息的诸如鼠标、键盘之类的输入设备1309。其中所述显示器1308和输入设备1309都通过连接到系统总线1305的输入输出控制器1310连接到中央处理单元1301。所述基本输入/输出系统1306还可以包括输入输出控制器1310以用于接收和处理来自键盘、鼠标、或电子触控笔等多个其他设备的输入。类似地,输入/输出控制器1310还提供输出到显示屏、打印机或其他 类型的输出设备。
所述大容量存储设备1307通过连接到系统总线1305的大容量存储控制器(未示出)连接到中央处理单元1301。所述大容量存储设备1307及其相关联的计算机可读介质为服务器1300提供非易失性存储。也就是说,所述大容量存储设备1307可以包括诸如硬盘或者只读光盘(Compact Disc Read-Only Memory,CD-ROM)驱动器之类的计算机可读介质(未示出)。
不失一般性,所述计算机可读介质可以包括计算机存储介质和通信介质。计算机存储介质包括以用于存储诸如计算机可读指令、数据结构、程序模块或其他数据等信息的任何方法或技术实现的易失性和非易失性、可移动和不可移动介质。计算机存储介质包括RAM、ROM、可擦除可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存或其他固态存储技术,CD-ROM、数字视频光盘(Digital Video Disc,DVD)或其他光学存储、磁带盒、磁带、磁盘存储或其他磁性存储设备。当然,本领域技术人员可知所述计算机存储介质不局限于上述几种。上述的系统存储器1304和大容量存储设备1307可以统称为存储器。
根据本申请的各种实施例,所述服务器1300还可以通过诸如因特网等网络连接到网络上的远程计算机运行。也即服务器1300可以通过连接在所述系统总线1305上的网络接口单元1311连接到网络1312,或者说,也可以使用网络接口单元1311来连接到其他类型的网络或远程计算机系统(未示出)。
请参考图14,其示出了本申请一个示例性实施例提供的终端1400的结构框图。该终端1400可以是便携式移动终端,比如:智能手机、平板电脑、动态影像专家压缩标准音频层面3(Moving Picture Experts Group Audio Layer III,MP3)播放器、动态影像专家压缩标准音频层面4(Moving Picture Experts Group Audio Layer IV,MP4)播放器。终端1400还可能被称为用户设备、便携式终端等其他名称。
通常,终端1400包括有:处理器1401和存储器1402。
处理器1401可以包括一个或多个处理核心,比如4核心处理器、8核心处理器等。处理器1401可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器1401也可以包括主处理器和协处理器,主处理器是用于对在唤醒状态下的数据进行处理的处理器,也称中央处理器(Central Processing Unit,CPU);协处理器是用于对在待机状态下的数据进行处理的低功耗处理器。在一些实施例中,处理器1401可以在集成有图像处理器(Graphics Processing Unit,GPU),GPU用于负责显示屏所需要显示的内容的渲染和绘制。一些实施例中,处理器1401还可以包括人工智能(Artificial Intelligence,AI)处理器,该AI处理器用于处理有关机器学习的计算操作。
存储器1402可以包括一个或多个计算机可读存储介质,该计算机可读存储介质可以是有形的和非暂态的。存储器1402还可包括高速随机存取存储器,以及非易失性存储器,比如一个或多个磁盘存储设备、闪存存储设备。在一些实施例中,存储器1402中的非暂态的计算机可读存储介质用于存储至少一个指令,该至少一个指令用于被处理器1401所执行以实现本申请实施例提供的方法。
在一些实施例中,终端1400还可选包括有:外围设备接口1403。
外围设备接口1403可被用于将输入/输出(Input/Output,I/O)相关的至少一个外围设备连接到处理器1401和存储器1402。在一些实施例中,处理器1401、存储器1402和外围设备接口1403被集成在同一芯片或电路板上;在一些其他实施例中,处理器1401、存储器1402和外围设备接口1403中的任意一个或两个可以在单独的芯片或电路板上实现,本实施例对此不加以限定。
本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质存储有至少一条指令,所述至少一条指令由处理器加载并执行以实现如上各个实施例所述的直播流切换方法。
根据本申请的一个方面,提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述方面的各种可选实现方式中提供的直播流切换方法。
本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请实施例所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读存储介质中或者作为计算机可读存储介质上的一个或多个指令或代码进行传输。计算机可读存储介质包括计算机存储介质和通信介质,其中通信介质包括便于从一个地方向另一个地方传送计算机程序的任何介质。存储介质可以是通用或专用计算机能够存取的任何可用介质。
需要说明的是,本申请所涉及的信息(包括但不限于用户设备信息、用户个人信息等)、数据(包括但不限于用于分析的数据、存储的数据、展示的数据等)以及信号,均为经用户授权或者经过各方充分授权的,且相关数据的收集、使用和处理需要遵守相关国家和地区的相关法律法规和标准。例如,本申请中涉及到的直播流、视频头信息、用户帐号等都是在充分授权的情况下获取的。
以上所述仅为本申请的可选实施例,并不用以限制本申请,凡在本申请的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本申请的保护范围之内。

Claims (15)

  1. 一种直播流切换方法,其特征在于,所述方法由云服务器执行,所述方法包括:
    接收第一终端发送的第一直播流以及混流服务器发送的第二直播流,所述第一直播流由所述第一终端基于采集到的直播数据生成,所述第二直播流为所述第一终端以及至少一个连麦终端的直播混流,所述第一终端用于在连麦后继续向所述云服务器发送所述第一直播流;
    基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,所述下行直播流为第二终端从云服务器处拉取的直播流,所述第二终端用于基于所述第二直播流展示连麦直播内容。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,包括:
    在接收到所述第一直播流以及所述第二直播流的情况下,继续向所述第二终端发送所述第一直播流;在所述第一终端切断所述第一直播流的情况下,将所述下行直播流由所述第一直播流切换为所述第二直播流,所述第一终端用于在连麦时长达到流重叠时长后停止发送所述第一直播流;
    或,
    在成功接收到所述第二直播流的情况下,将所述下行直播流由所述第一直播流切换为所述第二直播流。
  3. 根据权利要求2所述的方法,其特征在于,所述在接收到所述第一直播流以及所述第二直播流的情况下,继续向所述第二终端发送所述第一直播流,包括:
    在接收到所述第一直播流以及所述第二直播流的情况下,获取所述第一直播流的第一标识以及所述第二直播流的第二标识;
    响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,继续向所述第二终端发送所述第一直播流,其中,同一直播间对应的直播流的直播流标识中包含相同字段。
  4. 根据权利要求3所述的方法,其特征在于,所述响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,继续向所述第二终端发送所述第一直播流,包括:
    响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,获取所述第一直播流的音频编码格式以及所述第二直播流的音频编码格式;
    响应于所述第一直播流与所述第二直播流的音频编码格式一致,继续向所述第二终端转发所述第一直播流;
    响应于所述第一直播流与所述第二直播流的音频编码格式不一致,按照所述第二直播流的音频编码格式对所述第一直播流进行音频重编码,并向所述第二终端发送音频重编码后的所述第一直播流。
  5. 根据权利要求3所述的方法,其特征在于,所述在接收到所述第一直播流以及所述第二直播流的情况下,获取所述第一直播流的第一标识以及所述第二直播流的第二标识之后,所述方法包括:
    响应于所述第一标识与所述第二标识指示所述第一直播流和所述第二直播流属于同一直播间的不同直播流,获取所述第二直播流的第二视频头信息,其中,视频头信息中包含视频分辨率;
    向所述第二终端下发所述第二视频头信息,所述第二终端用于基于所述第二视频头信息调整所述第一直播流的视频画面。
  6. 根据权利要求1至5任一所述的方法,其特征在于,所述基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流之后,所述方法还包括:
    在重新接收到所述第一终端发送的所述第一直播流的情况下,继续向所述第二终端发送所述第二直播流;响应于所述第二直播流断流,将所述下行直播流切换为所述第一直播流;
    或,
    在重新接收到所述第一终端发送的所述第一直播流的情况下,将所述下行直播流切换为所述第一直播流。
  7. 一种直播流切换方法,其特征在于,所述方法应用于第一终端,所述方法包括:
    向云服务器发送第一直播流,所述云服务器用于向请求直播拉流的第二终端转发所述第一直播流;
    响应于连麦指令,向混流服务器发送第三直播流,所述混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,所述云服务器用于在接收到所述第一直播流以及所述第二直播流的情况下,基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流;
    响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流。
  8. 根据权利要求7所述的方法,其特征在于,所述响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流之后,所述方法还包括:
    在连麦结束前的流重叠时长内,向所述云服务器发送所述第一直播流,所述云服务器用于在重新接收到所述第一直播流的情况下,继续向所述第二终端发送所述第二直播流;
    响应于连麦结束,停止向所述混流服务器发送所述第三直播流,所述云服务器用于在所述第二直播流断流的情况下,将所述下行直播流切换为所述第一直播流。
  9. 根据权利要求8所述的方法,其特征在于,所述在连麦结束前的流重叠时长内,向所述云服务器发送所述第一直播流,包括:
    响应于接收到连麦终止操作,基于所述流重叠时长设置定时器,并向所述云服务器发送所述第一直播流;
    或,
    响应于在所述流重叠时长后达到目标连麦时长,向所述云服务器发送所述第一直播流。
  10. 一种直播流切换装置,其特征在于,所述装置包括:
    拉流模块,用于接收第一终端发送的第一直播流以及混流服务器发送的第二直播流,所述第二直播流为所述第一终端以及至少一个连麦终端的直播混流,所述第一终端用于在连麦后继续向所述云服务器发送所述第一直播流;
    推流模块,用于基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流,所述下行直播流为第二终端从云服务器处拉取的直播流,所述第二终端用于基于所述第二直播流展示连麦直播内容。
  11. 一种直播流切换装置,其特征在于,所述装置包括:
    推流模块,用于向云服务器发送第一直播流,所述云服务器用于向请求直播拉流的第二终端转发所述第一直播流;
    连麦模块,用于响应于连麦指令,向混流服务器发送第三直播流,所述混流服务器用于对连麦终端发送的直播流进行混流生成第二直播流,所述云服务器用于在接收到所述第一直播流以及所述第二直播流的情况下,基于所述第一终端的断流情况或所述云服务器的拉流情况,将下行直播流由所述第一直播流切换为所述第二直播流;
    控制模块,用于响应于连麦时长达到流重叠时长,停止向所述云服务器发送所述第一直播流。
  12. 一种服务器,其特征在于,所述服务器包括处理器和存储器;所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如权利要求1至6任一所述的直播流切换方法。
  13. 一种终端,其特征在于,所述终端包括处理器和存储器;所述存储器中存储有至少一段程序,所述至少一段程序由所述处理器加载并执行以实现如权利要求7至9任一所述的直播流切换方法。
  14. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有至少一条计算机程序,所述计算机程序由处理器加载并执行以实现如权利要求1至6任一所述的直播流切换方法,或,权利要求7至9任一所述的直播流切换方法。
  15. 一种计算机程序产品,其特征在于,所述计算机程序产品包括计算机指令,所述计算机指令存储在计算机可读存储介质中;服务器的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述服务器执行如权利要求1至6任一所述的直播流切换方法;终端的处理器从所述计算机可读存储介质读取所述计算机指令,所述处理器执行所述计算机指令,使得所述终端执行如权利要求7至9任一所述的直播流切换方法。
PCT/CN2022/128356 2022-10-28 2022-10-28 直播流切换方法、装置、服务器、终端及程序产品 WO2024087197A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280003882.XA CN115997384B (zh) 2022-10-28 2022-10-28 直播流切换方法、装置、服务器、终端及程序产品
PCT/CN2022/128356 WO2024087197A1 (zh) 2022-10-28 2022-10-28 直播流切换方法、装置、服务器、终端及程序产品

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/128356 WO2024087197A1 (zh) 2022-10-28 2022-10-28 直播流切换方法、装置、服务器、终端及程序产品

Publications (1)

Publication Number Publication Date
WO2024087197A1 true WO2024087197A1 (zh) 2024-05-02

Family

ID=85992513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/128356 WO2024087197A1 (zh) 2022-10-28 2022-10-28 直播流切换方法、装置、服务器、终端及程序产品

Country Status (2)

Country Link
CN (1) CN115997384B (zh)
WO (1) WO2024087197A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107071584A (zh) * 2017-03-14 2017-08-18 北京潘达互娱科技有限公司 直播连麦方法及装置
CN109688419A (zh) * 2018-12-27 2019-04-26 北京潘达互娱科技有限公司 一种直播中的连麦方法、装置及服务器
US20190158889A1 (en) * 2016-09-18 2019-05-23 Tencent Technology (Shenzhen) Company Limited Live streaming method and system, server, and storage medium
CN113766251A (zh) * 2020-06-22 2021-12-07 北京沃东天骏信息技术有限公司 直播连麦的处理方法、系统、服务器及存储介质
CN115065829A (zh) * 2022-04-25 2022-09-16 武汉斗鱼鱼乐网络科技有限公司 一种多人连麦方法及相关设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4615958B2 (ja) * 2004-10-15 2011-01-19 クラリオン株式会社 デジタル放送の送出装置、受信装置およびデジタル放送システム
CN103533437A (zh) * 2013-10-30 2014-01-22 乐视致新电子科技(天津)有限公司 一种智能电视的频道切换方法及装置
US10237581B2 (en) * 2016-12-30 2019-03-19 Facebook, Inc. Presentation of composite streams to users
WO2018213481A1 (en) * 2017-05-16 2018-11-22 Sportscastr.Live Llc Systems, apparatus, and methods for scalable low-latency viewing of integrated broadcast commentary and event video streams of live events, and synchronization of event information with viewed streams via multiple internet channels
CN109963188A (zh) * 2017-12-22 2019-07-02 杭州海康威视数字技术股份有限公司 视频画面的切换方法、装置、电子设备及存储介质
CN109068157A (zh) * 2018-08-21 2018-12-21 北京潘达互娱科技有限公司 一种直播中推流参数的调整方法、装置及服务器
CN109168018A (zh) * 2018-10-17 2019-01-08 北京潘达互娱科技有限公司 一种直播中的连麦合流系统、方法、装置及自有服务器
CN111083507B (zh) * 2019-12-09 2021-11-23 广州酷狗计算机科技有限公司 连麦方法及系统、第一主播端、观众端及计算机存储介质
EP3890329B1 (en) * 2020-03-31 2023-01-11 Nokia Solutions and Networks Oy Video encoding method and apparatus
CN112019927B (zh) * 2020-09-23 2023-01-06 Oppo广东移动通信有限公司 视频直播方法、连麦设备、直播系统及存储介质
CN114449344B (zh) * 2022-02-03 2024-02-09 百果园技术(新加坡)有限公司 视频流传输方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190158889A1 (en) * 2016-09-18 2019-05-23 Tencent Technology (Shenzhen) Company Limited Live streaming method and system, server, and storage medium
CN107071584A (zh) * 2017-03-14 2017-08-18 北京潘达互娱科技有限公司 直播连麦方法及装置
CN109688419A (zh) * 2018-12-27 2019-04-26 北京潘达互娱科技有限公司 一种直播中的连麦方法、装置及服务器
CN113766251A (zh) * 2020-06-22 2021-12-07 北京沃东天骏信息技术有限公司 直播连麦的处理方法、系统、服务器及存储介质
CN115065829A (zh) * 2022-04-25 2022-09-16 武汉斗鱼鱼乐网络科技有限公司 一种多人连麦方法及相关设备

Also Published As

Publication number Publication date
CN115997384B (zh) 2024-09-20
CN115997384A (zh) 2023-04-21

Similar Documents

Publication Publication Date Title
EP3562163B1 (en) Audio-video synthesis method and system
CN108347622B (zh) 多媒体数据推送方法、装置、存储介质及设备
US8904293B2 (en) Minimizing delays in web conference switches between presenters and applications
US11259063B2 (en) Method and system for setting video cover
EP2863642B1 (en) Method, device and system for video conference recording and playing
CN110708564B (zh) 一种动态切换视频流的直播转码方法及系统
US11863841B2 (en) Video playing control method and system
WO2024027768A1 (zh) 多屏视频显示方法、系统、播放端及存储介质
CN202759552U (zh) 一种基于ip网络的多终端视频同步播放的系统
EP3748978A1 (en) Screen recording method, client, and terminal device
WO2013167054A2 (zh) 一种本地通信网络业务切换方法、装置和系统
CN115243074A (zh) 视频流的处理方法及装置、存储介质、电子设备
CN101729755B (zh) 一种多媒体终端
WO2015035934A1 (en) Methods and systems for facilitating video preview sessions
WO2017016266A1 (zh) 一种实现同步播放的方法和装置
WO2024087197A1 (zh) 直播流切换方法、装置、服务器、终端及程序产品
CN105763941A (zh) 一种频道切换方法和系统
US11777871B2 (en) Delivery of multimedia components according to user activity
WO2018171567A1 (zh) 播放媒体流的方法、服务器及终端
CN112532719B (zh) 信息流的推送方法、装置、设备及计算机可读存储介质
EP3089459A1 (en) Apparatus and method for implementing video-on-demand quick switching among multiple screens
WO2024032189A1 (zh) 数据传输方法和装置
WO2024131383A1 (zh) 一种数据处理方法和相关装置
US20220303596A1 (en) System and method for dynamic bitrate switching of media streams in a media broadcast production
CN118694989A (zh) 直播数据处理方法、装置、设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22963179

Country of ref document: EP

Kind code of ref document: A1