CN108900859A - Live broadcasting method and system - Google Patents
Live broadcasting method and system
- Publication number
- CN108900859A (application CN201810943503.0A)
- Authority
- CN
- China
- Prior art keywords
- terminal
- video
- audio
- frame
- video frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/2187—Live feed
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4122—Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44227—Monitoring of local network, e.g. connection or bandwidth variations; Detecting new devices in the local network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The present application discloses a live broadcasting method and system, belonging to the field of information processing. In the method, when a first terminal is in dual-stream live broadcast mode and in a co-streaming state, it composites the first video frame it captures with the second video frame captured by the third terminal co-streaming with it to obtain a composite video, mixes the first audio frame it captures with the second audio frame captured by the third terminal to obtain mixed audio, and sends a first data packet containing the composite video and the mixed audio to a second terminal and to a streaming media server. The second terminal can then generate, from the first data packet, a processed video that matches the aspect ratio of its own display screen, and send a second data packet containing the processed video to the streaming media server. This ensures that, in dual-stream live broadcast mode, viewers holding portrait-screen terminals and viewers holding landscape-screen terminals can all watch a composite video of the streamer and the co-streamer.
Description
Technical field
The present application relates to the field of information processing, and in particular to a live broadcasting method and system.
Background art
In current Internet live streaming, a streamer may broadcast in dual-stream mode: the streamer uses two terminals, each of which captures one video stream. For ease of description, the two terminals are referred to as the first terminal and the second terminal. The first terminal is a landscape-screen terminal, i.e. the aspect ratio of its display screen is greater than 1, so the video it captures is at a landscape resolution; the second terminal is a portrait-screen terminal, i.e. the aspect ratio of its display screen is less than 1, so the video it captures is at a portrait resolution. Each terminal then sends the video it captures to a streaming media server. After receiving the two videos, the streaming media server pushes the video captured by the first terminal to viewers holding landscape-screen terminals, and the video captured by the second terminal to viewers holding portrait-screen terminals.
In the above live broadcasting method, when a viewer acts as a co-streamer and uses their own third terminal to co-stream with the streamer's first terminal, the first terminal receives the video captured by the third terminal, composites it with the video it captures itself, and sends the result to the streaming media server. Because the first terminal captures landscape-resolution video, the composite video is also at a landscape resolution, so after receiving it the streaming media server can only push it to other viewers holding landscape-screen terminals. Similarly, if the viewer's third terminal co-streams with the streamer's second terminal, the composite of the second terminal's own video and the third terminal's video is a portrait-resolution video, so the streaming media server can only push it to other viewers holding portrait-screen terminals. In other words, in the related art the viewer's third terminal can only co-stream with one of the streamer's two terminals, so either the viewers holding portrait-screen terminals or the viewers holding landscape-screen terminals will be unable to watch the composite video. The live broadcasting method of the related art therefore cannot guarantee that viewers holding portrait-screen terminals and viewers holding landscape-screen terminals can all watch the composite video of the streamer and the co-streamer. A live broadcasting method is accordingly needed that guarantees that, during co-streaming, other viewers holding portrait-screen and landscape-screen terminals can all watch the composite video of the streamer and the co-streamer.
Summary of the invention
The embodiments of the present application provide a live broadcasting method and system that, during dual-stream live broadcasting, can simultaneously provide viewers holding landscape-screen terminals and viewers holding portrait-screen terminals with a co-streaming video matching the aspect ratio of each viewer's own display screen. The technical solution is as follows:
In a first aspect, a live broadcasting method is provided, the method comprising:
a first terminal detecting whether it is currently in dual-stream live broadcast mode, and detecting whether it is currently in a co-streaming state;
if it is currently in dual-stream live broadcast mode and currently in a co-streaming state, the first terminal obtaining the first video frame and first audio frame it is currently capturing, and obtaining the second video frame and second audio frame currently captured by the third terminal co-streaming with the first terminal;
compositing the first video frame and the second video frame to obtain a composite video, mixing the first audio frame and the second audio frame to obtain mixed audio, and sending a first data packet containing the composite video and the mixed audio to a second terminal and to a streaming media server;
when the second terminal receives the first data packet, processing the first data packet to obtain a processed video matching the aspect ratio of the display screen of the second terminal;
the second terminal displaying the processed video, and sending a second data packet containing the processed video and the mixed audio to the streaming media server.
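As a rough illustration of the mixing step above (the patent does not specify a mixing algorithm; the function name and the simple sample-averaging strategy are assumptions for illustration), two 16-bit PCM audio frames covering the same interval can be mixed like this:

```python
import numpy as np

def mix_audio(a1: np.ndarray, a2: np.ndarray) -> np.ndarray:
    """Mix two int16 PCM frames of equal length by sample-wise averaging.

    Widening to int32 before summing avoids overflow; halving the sum
    keeps the mixed frame within the int16 range.
    """
    assert a1.shape == a2.shape, "frames must cover the same interval"
    return ((a1.astype(np.int32) + a2.astype(np.int32)) // 2).astype(np.int16)
```

A production implementation would more likely use a weighted mix with clipping or soft limiting, but the averaging form keeps the sketch self-contained.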
Optionally, detecting whether the terminal is currently in dual-stream live broadcast mode and whether it is currently in a co-streaming state comprises:
detecting whether a current dual-stream live variable equals a first value, and detecting whether a current co-streaming variable equals a second value;
if the current dual-stream live variable equals the first value and the current co-streaming variable equals the second value, determining that the terminal is currently in dual-stream live broadcast mode and currently in a co-streaming state.
Optionally, the first terminal is a landscape-screen terminal, and compositing the first video frame and the second video frame comprises:
determining a first boundary line and a second boundary line parallel to the height direction of the first video frame, where the distance from the first boundary line to a first edge of the first video frame equals the distance from the second boundary line to a second edge of the first video frame, the first edge and the second edge both being parallel to the height direction of the first video frame;
cropping, from the first video frame, a first video picture located between the first boundary line and the second boundary line, where the width of the first video picture is less than the width of the first video frame and the height of the first video picture equals the height of the first video frame;
if the height of the second video frame equals the height of the first video frame, splicing the second video frame onto the first-edge side or the second-edge side of the first video picture.
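The crop-and-splice described above can be sketched with NumPy, treating frames as height × width × 3 arrays (the function name and the choice of cropping the middle half are illustrative assumptions; the patent only requires the two boundary lines to be equidistant from the two edges):

```python
import numpy as np

def composite_frames(first: np.ndarray, second: np.ndarray,
                     crop_frac: float = 0.5) -> np.ndarray:
    """Crop a centred strip from the landscape frame and splice the
    co-streamer's frame onto its second-edge (right) side."""
    h1, w1, _ = first.shape
    h2, _, _ = second.shape
    assert h2 == h1, "heights must be equal before splicing"
    crop_w = int(w1 * crop_frac)            # width of the first video picture
    left = (w1 - crop_w) // 2               # boundary lines equidistant
    picture = first[:, left:left + crop_w]  # from the first and second edges
    return np.concatenate([picture, second], axis=1)
```

For a 1280 × 720 landscape frame and a 640 × 720 co-streamer frame, the result is again a 1280 × 720 landscape composite, which is why the second terminal later needs its own processing step for portrait screens.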
Optionally, processing the first data packet comprises:
obtaining the composite video in the first data packet, and scaling the composite video down so that its width equals the width of the display screen of the second terminal;
according to the height of the display screen of the second terminal, splicing a first blank picture onto the first-edge side of the scaled video and a second blank picture onto the second-edge side of the scaled video;
where the first edge and the second edge are parallel to the width direction of the scaled video, the height of the first blank picture equals the height of the second blank picture, and the sum of the height of the first blank picture, the height of the second blank picture, and the height of the scaled video equals the height of the display screen of the second terminal;
filling a background colour into the first blank picture and the second blank picture of the spliced video, and taking the filled video as the processed video.
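The steps above amount to letterboxing a landscape composite for a portrait screen. A minimal sketch, assuming nearest-neighbour scaling and a fixed background colour (both assumptions; the patent leaves the scaling method and fill colour open):

```python
import numpy as np

def letterbox_for_portrait(synth: np.ndarray, screen_w: int,
                           screen_h: int, fill: int = 16) -> np.ndarray:
    """Scale the composite to the screen width, then pad blank pictures
    above and below, filled with a background colour."""
    h, w, _ = synth.shape
    scale = screen_w / w
    new_h = int(round(h * scale))
    # nearest-neighbour resize, enough for a sketch (no OpenCV dependency)
    ys = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(screen_w) / scale).astype(int).clip(0, w - 1)
    scaled = synth[ys][:, xs]
    pad_total = screen_h - new_h
    top, bottom = pad_total // 2, pad_total - pad_total // 2
    out = np.full((screen_h, screen_w, 3), fill, dtype=synth.dtype)
    out[top:screen_h - bottom] = scaled     # blank pictures remain above/below
    return out
```

A 1280 × 720 composite shown on a 720 × 1280 portrait screen scales to 720 × 405 and gets roughly 437-pixel blank bars top and bottom.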
Optionally, the composite video and the mixed audio carry the same timestamp, and the processed video carries the timestamp of the composite video.
Optionally, displaying the processed video comprises:
extracting the timestamp from the processed video;
recording the current system time, and determining the display time of the processed video based on the inter-frame interval of the video frames and the current system time;
if the display time is later than the time indicated by the timestamp, displaying the processed video at the current moment;
if the display time is earlier than the time indicated by the timestamp, delaying the display of the processed video.
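The timing rule above can be sketched as follows (a minimal illustration assuming timestamps share the local monotonic clock; real players compare against a stream clock recovered from the timestamps):

```python
import time

def display_when_due(frame_ts: float, frame_interval: float, show) -> None:
    """Show a frame no earlier than its timestamp allows."""
    now = time.monotonic()
    display_time = now + frame_interval      # scheduled display instant
    if display_time >= frame_ts:
        show()                               # already due: show immediately
    else:
        time.sleep(frame_ts - display_time)  # delay until the timestamp
        show()
```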
Optionally, the method further comprises:
if it is detected that the terminal is currently in dual-stream live broadcast mode but not currently in a co-streaming state, the first terminal obtaining the first video frame and first audio frame it is currently capturing;
the first terminal sending an audio data packet containing the first audio frame to the second terminal, and sending a third data packet containing the first video frame and the first audio frame to the streaming media server, the first audio frame carrying a first timestamp;
when the second terminal receives the audio data packet, the second terminal obtaining the first audio frame from the audio data packet, obtaining the third video frame it is currently capturing, and recording the capture time of the third video frame;
determining a second timestamp for the third video frame based on the capture time of the third video frame and the time deviation between the system time of the first terminal and the system time of the second terminal;
displaying the third video frame based on the second timestamp;
and sending a fourth data packet containing the first audio frame and the third video frame to the streaming media server, the third video frame carrying the second timestamp.
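The second-timestamp step maps the second terminal's local capture time onto the first terminal's clock so that audio and video in the fourth data packet share one timeline. A sketch, with the packet layout and names assumed for illustration:

```python
def second_timestamp(capture_time: float, offset: float) -> float:
    """offset = first-terminal clock minus second-terminal clock."""
    return capture_time + offset

def build_fourth_packet(audio_frame: bytes, first_ts: float,
                        video_frame: bytes, capture_time: float,
                        offset: float) -> dict:
    """Fourth data packet: audio keeps its first timestamp, video is
    re-stamped onto the first terminal's clock."""
    return {
        "audio": {"frame": audio_frame, "ts": first_ts},
        "video": {"frame": video_frame,
                  "ts": second_timestamp(capture_time, offset)},
    }
```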
Optionally, the method further comprises:
the second terminal sending at least one time-calibration request packet to the first terminal, and storing, for each time-calibration request packet, its request packet sequence number in correspondence with its sending time, where the sending time is the system time of the second terminal when the corresponding time-calibration request packet was sent, and the request packet sequence number identifies the corresponding time-calibration request packet;
when the first terminal receives a target time-calibration request packet, sending to the second terminal a target time-calibration response packet for the received packet, where the target time-calibration request packet is any one of the at least one time-calibration request packet, and the target time-calibration response packet carries the sequence number of the target time-calibration request packet and the current first system time of the first terminal;
when the second terminal receives the target time-calibration response packet, the second terminal recording its own system time on receipt as the second system time;
the second terminal obtaining, from the stored correspondence and based on the sequence number carried in the target time-calibration response packet, the sending time corresponding to that sequence number as the third system time;
and determining the time deviation between the system time of the first terminal and the system time of the second terminal based on the first system time, the second system time, and the third system time.
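This handshake is essentially an NTP-style offset measurement. One round can be sketched as below; the symmetric-delay assumption and the class/method names are our own (the patent does not give the offset formula):

```python
class ClockSync:
    """Sequence-numbered time-calibration requests let the second terminal
    match each response to the send time it recorded."""

    def __init__(self) -> None:
        self.sent: dict[int, float] = {}   # seq -> sending time (3rd system time)
        self.seq = 0

    def make_request(self, now: float) -> int:
        """Record (sequence number, sending time) and return the number."""
        self.seq += 1
        self.sent[self.seq] = now
        return self.seq

    def on_response(self, seq: int, first_time: float,
                    recv_time: float) -> float:
        """first_time: first system time carried in the response packet.
        recv_time: second system time recorded on receipt.
        Assuming symmetric network delay, the first terminal stamped its
        clock roughly halfway through the round trip."""
        send_time = self.sent.pop(seq)
        return first_time - (send_time + recv_time) / 2.0
```

Averaging the offsets from several request/response rounds would further smooth out delay jitter.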
Optionally, the method further comprises:
if it is detected that the terminal is currently neither in dual-stream live broadcast mode nor in a co-streaming state, obtaining the first video frame and first audio frame currently captured by the first terminal;
playing the first video frame, and sending a single-stream live data packet containing the first video frame and the first audio frame to the streaming media server.
Optionally, the header of the first data packet carries a co-streaming flag, and the co-streaming flag is used to notify the second terminal that the first terminal is currently in a co-streaming state.
In a second aspect, a live broadcast system is provided, the system comprising a first terminal, a second terminal, a third terminal, a streaming media server, and a co-streaming server;
the first terminal is configured to detect whether it is currently in dual-stream live broadcast mode and whether it is currently in a co-streaming state; if it is currently in dual-stream live broadcast mode and currently in a co-streaming state, the first terminal obtains the first video frame and first audio frame it is currently capturing, obtains the second video frame and second audio frame currently captured by the third terminal co-streaming with it, composites the first video frame and the second video frame, mixes the first audio frame and the second audio frame, and sends a first data packet containing the composite video and the mixed audio to the second terminal and the streaming media server;
the second terminal is configured to receive the first data packet, obtain the composite video and the mixed audio contained in it, and process the composite video to obtain a processed video matching the aspect ratio of the display screen of the second terminal; the second terminal displays the processed video, and sends a second data packet containing the processed video and the mixed audio to the streaming media server;
the third terminal is configured to send the second video frame and second audio frame it is currently capturing to the co-streaming server;
the co-streaming server is configured to receive the second video frame and the second audio frame, and to send the second video frame and the second audio frame to the first terminal;
the streaming media server is configured to receive the first data packet and the second data packet, and to send the first data packet or the second data packet to terminals other than the first terminal and the second terminal.
Optionally, the first terminal is further configured to: if it detects that it is currently in dual-stream live broadcast mode but not currently in a co-streaming state, obtain the first video frame and first audio frame it is currently capturing; send an audio data packet containing the first audio frame to the second terminal, and send a third data packet containing the first video frame and the first audio frame to the streaming media server, the first audio frame carrying a first timestamp;
the second terminal is further configured to receive the audio data packet, obtain the first audio frame from the audio data packet, obtain the third video frame it is currently capturing, and record the capture time of the third video frame; determine a second timestamp for the third video frame based on the capture time of the third video frame and the time deviation between the system time of the first terminal and the system time of the second terminal; display the third video frame based on the second timestamp; and send a fourth data packet containing the first audio frame and the third video frame to the streaming media server, the third video frame carrying the second timestamp;
the streaming media server is further configured to receive the third data packet and the fourth data packet, and to send the third data packet or the fourth data packet to terminals other than the first terminal and the second terminal.
In a third aspect, a terminal is provided, the terminal comprising:
a processor; and
a memory for storing instructions executable by the processor;
where, when the terminal is the first terminal, the processor is configured to perform the steps performed by the first terminal in the first aspect above, and when the terminal is the second terminal, the processor is configured to perform the steps performed by the second terminal in the first aspect above.
In a fourth aspect, a computer-readable storage medium is provided, on which instructions are stored; when the instructions are executed by a processor, the steps of any one of the methods of the first aspect above are implemented.
The technical solutions provided by the embodiments of the present application bring at least the following beneficial effects: when in dual-stream live broadcast mode and in a co-streaming state, the first terminal composites the first video frame it captures with the second video frame captured by the third terminal co-streaming with it, mixes the first audio frame it captures with the second audio frame captured by the third terminal, and sends a co-streaming data packet containing the resulting composite video and mixed audio to the second terminal and the streaming media server. Based on this co-streaming data packet, the second terminal can generate a processed video that both contains the third terminal's video picture and matches the aspect ratio of the second terminal's display screen, and then push the processed video and the mixed audio to the streaming media server. The streaming media server can therefore simultaneously push video containing the streamer and the co-streamer to viewers holding portrait-screen terminals and to viewers holding landscape-screen terminals, so that in dual-stream live broadcast mode, viewers holding either landscape-screen or portrait-screen terminals can all watch the composite video of the streamer and the co-streamer.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is system architecture diagram involved in a kind of live broadcasting method provided by the embodiments of the present application;
Fig. 2 is a kind of flow chart of live broadcasting method provided by the embodiments of the present application;
Fig. 3 is a kind of flow chart of live broadcasting method provided by the embodiments of the present application;
Fig. 4 is a kind of structural schematic diagram of terminal for live streaming provided by the embodiments of the present application.
Detailed description of the embodiments
To make the purposes, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the drawings.
Before explaining the embodiments of the present application in detail, the application scenarios of the embodiments are first introduced.
Currently, in Internet live streaming, a viewer may watch a streamer's video on a landscape-screen terminal whose display aspect ratio is greater than 1, such as a notebook or desktop computer, or on a portrait-screen terminal whose display aspect ratio is less than 1, such as a smartphone or tablet. To balance the viewing needs of viewers using different terminals, a streamer may broadcast in dual-stream mode. That is, the streamer uses one landscape-screen terminal and one portrait-screen terminal during the live broadcast: the landscape-screen terminal captures landscape-resolution video and sends it to the streaming media server, and the portrait-screen terminal captures portrait-resolution video and sends it to the streaming media server, so that the streaming media server can push portrait-resolution video to viewers holding portrait-screen terminals and landscape-resolution video to viewers holding landscape-screen terminals. During dual-stream live broadcasting, considering that the portrait-screen terminal is usually a mobile terminal, whose device performance is generally weaker than that of the landscape-screen terminal, audio capture is performed by the landscape-screen terminal and the portrait-screen terminal performs no audio capture; considering further the higher network traffic cost of mobile terminals, during co-streaming it is the streamer's landscape-screen terminal that co-streams with the terminal of any one of the multiple viewers. The live broadcasting method provided by the embodiments of the present application can be applied to the scenario in which, during dual-stream live broadcasting, the landscape-screen terminal co-streams with a viewer's terminal, so as to guarantee that viewers holding terminals of different aspect ratios can all watch video containing the streamer and the co-streamer.
Next, the live broadcast system provided by the embodiments of the present application is introduced. As shown in Fig. 1, the system may include a first terminal 101, a second terminal 102, a third terminal 103, a fourth terminal 104, a streaming media server 105, and a co-streaming server 106.
The first terminal 101 may be a terminal whose display aspect ratio is greater than 1, i.e. a landscape-screen terminal, or a terminal whose display aspect ratio is less than 1, i.e. a portrait-screen terminal. When the first terminal 101 is a landscape-screen terminal, it can capture landscape-resolution video containing the streamer and audio containing the streamer's voice; when the first terminal 101 is a portrait-screen terminal, it can capture portrait-resolution video containing the streamer and audio containing the streamer's voice. When not co-streaming, the first terminal 101 can send the captured video and audio to the streaming media server 105, and send the captured audio to the second terminal 102. When co-streaming, the first terminal 101 can send the captured video and audio to the co-streaming server 106, receive the video and audio captured by the third terminal 103 and forwarded by the co-streaming server, composite the third terminal's video with its own video to obtain a composite video, mix the third terminal's audio with its own audio to obtain mixed audio, and send the composite video and the mixed audio to the streaming media server 105 and the second terminal 102.
When the first terminal 101 is a landscape-screen terminal, the second terminal 102 may be a portrait-screen terminal, in which case it captures portrait-resolution video containing the streamer; when the first terminal 101 is a portrait-screen terminal, the second terminal 102 is a landscape-screen terminal, in which case it captures landscape-resolution video containing the streamer. In addition, the second terminal 102 can receive the audio sent by the first terminal 101, or receive the composite video and mixed audio sent by the first terminal 101. When it receives audio sent by the first terminal 101, the second terminal 102 processes the received audio and sends the processed audio together with the captured portrait-resolution video to the streaming media server 105. When it receives the composite video and mixed audio sent by the first terminal 101, it processes the composite video and sends the mixed audio and the processed composite video to the streaming media server 105.
Third terminal 103 can be the terminal for carrying out even wheat in the terminal of multiple spectators users with first terminal 101.Third
Terminal 103 can acquire include the video of Lian Maizhe and include Lian Maizhe sound audio, to even wheat server 106
Send the video and audio of acquisition, and the video and audio of the main broadcaster of the transmission of the company's of reception wheat server 106.
There may be multiple fourth terminals 104; the fourth terminals 104 are the terminals of the audience users other than the co-streamer, and receive the video and audio pushed by the streaming media server.
The streaming media server 105 receives the audio and video sent by the first terminal 101 and the audio and video sent by the second terminal 102, and pushes to each fourth terminal 104 among the multiple fourth terminals 104 the video and audio that suit the aspect ratio of that terminal's display screen.
The co-streaming server 106 receives the audio and the landscape-resolution video captured and sent by the first terminal 101, and receives the audio and video captured and sent by the third terminal 103; it sends to the first terminal 101 the received audio and video captured by the third terminal, and sends to the third terminal 103 the audio and video captured by the first terminal 101.
The live broadcasting method provided by the embodiments of the present application is described in detail below.
Fig. 2 is a flowchart of a live broadcasting method provided by an embodiment of the present application. The method may be applied to the first terminal in the foregoing system architecture. Referring to Fig. 2, the method includes the following steps:
Step 201: The first terminal detects whether it is currently in dual-stream live mode, and whether it is currently in the co-streaming state.
Dual-stream live mode means that the first terminal and the second terminal each push one video stream to the streaming media server, where the aspect ratio of the stream pushed by the first terminal is greater than 1 and the aspect ratio of the stream pushed by the second terminal is less than 1.

The second terminal is the terminal that performs dual-stream live broadcasting together with the first terminal.
Step 202: If the first terminal detects that it is currently in dual-stream live mode and currently in the co-streaming state, it obtains the first video frame and the first audio frame it is currently capturing, and obtains the second video frame and the second audio frame currently captured by the third terminal that co-streams with the first terminal.
Step 203: The first terminal composites the first video frame and the second video frame to obtain a composite video, mixes the first audio frame and the second audio frame to obtain mixed audio, and sends a first data packet containing the composite video and the mixed audio to the second terminal and the streaming media server.
Step 204: When the second terminal receives the first data packet sent by the first terminal, it processes the first data packet to obtain the mixed audio and a processed video that matches the aspect ratio of the second terminal's display screen.
Step 205: The second terminal displays the processed video and sends a second data packet containing the processed video and the mixed audio to the streaming media server.
In the embodiments of the present application, when the first terminal is in dual-stream live mode and in the co-streaming state, it can composite the first video frame it captures with the second video frame captured by the third terminal co-streaming with it, mix the first audio frame it captures with the second audio frame captured by the third terminal, and send a co-streaming data packet containing the resulting composite video and mixed audio to the second terminal. The second terminal can then use this co-streaming data packet to generate a processed video that includes the third terminal's picture and matches the aspect ratio of the second terminal's display screen, and push the processed video and the mixed audio to the streaming media server. The streaming media server can thus simultaneously push, to audience users holding portrait terminals and to those holding landscape terminals, a co-streaming video that matches the aspect ratio of each terminal's display screen and includes both the anchor and the co-streamer. In dual-stream live mode, therefore, audience users holding landscape terminals and audience users holding portrait terminals can both watch the co-streaming video of the anchor and the co-streamer.
As the foregoing system architecture indicates, the first terminal may be either a landscape terminal or a portrait terminal. Considering that landscape terminals usually have higher device performance than portrait terminals, and that portrait terminals incur higher traffic costs, the embodiments of the present application are explained with the co-streaming first terminal being a landscape terminal and the second terminal that performs dual-stream live broadcasting with it being a portrait terminal. This does not, however, limit the first terminal or the second terminal.
Fig. 3 is a flowchart of a live broadcasting method provided by an embodiment of the present application. As shown in Fig. 3, the method includes the following steps:
Step 301: The first terminal detects whether it is currently in dual-stream live mode.
In the embodiments of the present application, the first terminal may store a dual-stream live variable indicating whether it is currently in dual-stream live mode. When the value of the dual-stream live variable is a first value, the first terminal determines that it is currently in dual-stream live mode; otherwise, it determines that it is not.

Note that the initial value of the dual-stream live variable may be a third value; that is, when the live broadcast starts, the first terminal is not in dual-stream live mode. After the broadcast starts, the first terminal can detect in real time whether it receives a dual-stream pairing request, sent by the second terminal, that asks to start dual-stream live broadcasting. When the first terminal receives the dual-stream pairing request from the second terminal, it pairs with the second terminal, and after pairing succeeds it sets the stored dual-stream live variable to the first value.
Optionally, in the embodiments of the present application, after the first terminal and the second terminal pair successfully, the second terminal may also determine the time deviation between its own system time and the first terminal's system time by sending the first terminal a time-calibration request packet, so that time synchronization can be achieved during subsequent dual-stream broadcasting.
Illustratively, after pairing succeeds, the second terminal may send a time-calibration request packet to the first terminal; the request packet may carry the second terminal's system time at the moment the packet was sent. After receiving the calibration request packet, the first terminal may return a calibration response packet for it, which may carry both the second terminal's system time copied from the calibration request packet and the first terminal's current system time, namely the first system time. On receiving the calibration response packet, the second terminal can record the second system time, i.e., its system time at the moment of receipt, and determine the time deviation between the first terminal's system time and the second terminal's system time based on the first system time, the second system time, and the system time at which the second terminal sent the calibration request packet.
Specifically, the second terminal can compute a first time deviation between the second system time and the second terminal's system time carried in the calibration request packet; this first time deviation is effectively the round-trip duration between the second terminal and the first terminal. Taking half the round-trip duration as the one-way duration from the second terminal to the first terminal and adding this one-way duration to the first system time yields a third system time; the deviation between the second system time and this third system time is then the time deviation between the first terminal's system time and the second terminal's system time.
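The offset calculation described above can be sketched as follows; this is a minimal illustration, and the variable names are assumptions rather than terms from the patent:

```python
def clock_offset(t_send, t_first, t_recv):
    """Estimate the offset of the second terminal's clock relative to the
    first terminal's clock from one calibration round trip.

    t_send  -- second terminal's system time when the request was sent
    t_first -- first terminal's system time carried in the response
    t_recv  -- second terminal's system time when the response arrived
              (the "second system time")
    """
    rtt = t_recv - t_send        # "first time deviation": full round trip
    one_way = rtt / 2            # assumes a symmetric link
    t_third = t_first + one_way  # first terminal's clock mapped to arrival time
    return t_recv - t_third      # positive: second terminal's clock runs ahead

# Request sent at 100.0s, response carries first-terminal time 250.0s,
# response received at 100.4s -> RTT 0.4s, offset is roughly -149.8s.
offset = clock_offset(100.0, 250.0, 100.4)
```

Halving the round trip is the same symmetric-delay assumption used by NTP-style synchronization; it is only accurate when uplink and downlink delays are similar, which is why the later refinement discards rounds with a large round trip.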
Optionally, in one possible implementation, the calibration request packet sent by the second terminal may be lost, or, for other reasons, the deviation between the system time at which the second terminal issued the calibration request packet and the second system time at which it received the calibration response packet may be too large. In that case, half of this deviation cannot accurately characterize the one-way duration of data transmission between the first terminal and the second terminal, and that calibration request packet cannot be used to determine the deviation between the system times of the first terminal and the second terminal.
To ensure that the deviation between the system times of the first terminal and the second terminal can still be determined accurately, the second terminal may send a calibration request packet to the first terminal at fixed intervals. To distinguish the calibration request packets it sends, each calibration request packet may carry a request packet serial number identifying it, and the second terminal may store the correspondence between each calibration request packet's serial number and the system time at which that packet was issued. When the first terminal receives a calibration request packet, it may treat the received packet as the target calibration request packet and send the second terminal a target calibration response packet for it; the target calibration response packet may carry the target calibration request packet's serial number and the first terminal's current system time, namely the first system time.
When the second terminal receives a target calibration response packet, it can record the second system time at which the packet was received and extract the first system time and the request packet serial number carried in the packet. Using this serial number, it looks up, in the stored correspondence between serial numbers and system times, the system time at which the corresponding target calibration request packet was issued, namely the third system time. The second terminal can then determine the first time deviation between the third system time and the second system time. If the first time deviation is less than or equal to a fourth value, the first time deviation can be used to determine the one-way duration: the second terminal takes half of the first time deviation as the one-way duration, adds it to the first system time to obtain a fourth system time, and takes the deviation between the second system time and the fourth system time as the time deviation between the first terminal's system time and the second terminal's system time.
If the first time deviation is greater than the fourth value, it cannot be used to determine the one-way duration of data transmission between the first terminal and the second terminal; in that case the target calibration response packet cannot be used to determine the time deviation between the system times of the first terminal and the second terminal. The second terminal then waits for the next target calibration response packet and, when it arrives, processes it by the foregoing method.
Optionally, if the second terminal has received as many calibration response packets as the calibration request packets it sent, and the time deviation between the system times of the first terminal and the second terminal still cannot be determined from the last received calibration response packet, it determines that time synchronization between the first terminal and the second terminal has failed; the second terminal may then display a prompt informing the anchor that dual-stream live broadcasting cannot be performed.
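The sequence-numbered variant above can be sketched as follows. The threshold value and the data-structure choices are assumptions for illustration; the patent only fixes the logic, not the representation:

```python
MAX_RTT = 0.2  # the "fourth value": largest accepted round trip, assumed in seconds

class Calibrator:
    """Sketch of the second terminal's calibration logic: remember the send
    time of every numbered request, and trust a response only when its
    round trip stays under the threshold."""

    def __init__(self):
        self.send_times = {}  # request packet serial number -> issue time
        self.offset = None    # last accepted clock offset, if any

    def on_send(self, seq, now):
        # Store the correspondence between serial number and issue time.
        self.send_times[seq] = now

    def on_response(self, seq, t_first, now):
        t_third = self.send_times.get(seq)  # "third system time": issue time
        if t_third is None:
            return False                    # unknown serial number; ignore
        rtt = now - t_third                 # "first time deviation"
        if rtt > MAX_RTT:
            return False                    # too slow; wait for the next response
        one_way = rtt / 2
        t_fourth = t_first + one_way        # "fourth system time"
        self.offset = now - t_fourth
        return True
```

A round lost to the network simply never calls `on_response`, and an overlong round trip returns `False`, matching the "wait for the next target calibration response packet" behaviour.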
If this step determines that the first terminal is currently in dual-stream live mode, the first terminal executes step 302. If it is not currently in dual-stream live mode, it can obtain the first video frame and first audio frame it is currently capturing, display the first video frame, encode the first video frame and the first audio frame, encapsulate them according to a streaming media protocol to obtain a single-stream live data packet, and push the single-stream live data packet to the streaming media server.
Step 302: If currently in dual-stream live mode, detect whether the terminal is currently in the co-streaming state.

If it is currently in dual-stream live mode, the first terminal can further detect whether it is currently in the co-streaming state. The first terminal may store a co-streaming variable indicating whether it is currently in the co-streaming state. If the co-streaming variable's value is a second value, the first terminal determines that it is currently in the co-streaming state; otherwise, it determines that it is not.
Note that when the first terminal starts the live broadcast, the co-streaming variable holds a fifth value. When the anchor wants to co-stream with one of the audience users, the first terminal can trigger, through the co-streaming server, a co-streaming request to the co-streamer's third terminal; if the first terminal then receives a prompt message from the co-streaming server indicating that co-streaming succeeded, it sets the value of the co-streaming variable to the second value. Alternatively, the first terminal may receive a co-streaming request sent by the co-streamer's third terminal through the co-streaming server and return a co-streaming response to the co-streaming server; when it subsequently receives the prompt message from the co-streaming server indicating that co-streaming succeeded, it sets the co-streaming variable to the second value. During co-streaming the variable remains the second value, and after co-streaming ends the first terminal sets it back to the fifth value.
Note that in the embodiments of the present application, the first value and the second value may be the same or different, as may the third value and the fifth value. The first value differs from the third value, and the second value differs from the fifth value.
When it determines that it is currently in the co-streaming state, the first terminal executes steps 303 and 304; if it is not currently in the co-streaming state, it executes steps 305 and 306.
Step 303: If currently in the co-streaming state, the first terminal obtains the first video frame and first audio frame it is currently capturing, and obtains the second video frame and second audio frame currently captured by the third terminal co-streaming with the first terminal.
If the first terminal is currently in the co-streaming state, it can obtain the first video frame and first audio frame it is itself currently capturing, and obtain from the co-streaming server the second video frame and second audio frame that the third terminal sent and is currently capturing.
The first terminal may send a data acquisition request to the co-streaming server; on receiving the request, the co-streaming server can obtain the video frame and audio frame most recently sent by the third terminal co-streaming with the first terminal, and send that video frame and audio frame to the first terminal as the second video frame and the second audio frame.
Alternatively, in one possible implementation, whenever the co-streaming server receives a video frame and an audio frame sent by the third terminal, it may forward them to the first terminal; the first terminal can then take, from the audio frames and video frames the co-streaming server has forwarded, the video frame and audio frame whose carried timestamps indicate times closest to the current system time as the second video frame and the second audio frame.
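Selecting the forwarded frame whose timestamp is nearest the local clock can be sketched as follows; the `(timestamp, frame)` pair representation is an assumption, since the patent does not fix a container format:

```python
def pick_nearest_frame(frames, now):
    """From forwarded (timestamp, frame) pairs, return the frame whose
    timestamp is closest to the current local time `now`."""
    return min(frames, key=lambda tf: abs(tf[0] - now))[1]

buffered = [(1.0, "frame-a"), (2.0, "frame-b"), (3.5, "frame-c")]
pick_nearest_frame(buffered, 2.2)  # -> "frame-b"
```

In practice the timestamps would first be shifted by the clock offset established during calibration, so that "current system time" is comparable across terminals.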
Optionally, in the embodiments of the present application, after obtaining the first video frame and the first audio frame, the first terminal may also encode them, encapsulate them according to the streaming media protocol, and send them to the co-streaming server, so that when the co-streaming server receives the encapsulated first audio frame and first video frame it forwards them to the third terminal co-streaming with the first terminal. The third terminal can then generate a video frame for its own display from the received first video frame and the second video frame it captures, and play the received first audio frame.
Step 304: The first terminal composites the first video frame and the second video frame to obtain a composite video, mixes the first audio frame and the second audio frame to obtain mixed audio, and sends a first data packet containing the composite video and the mixed audio to the second terminal and the streaming media server.
After obtaining the first video frame and the second video frame, the first terminal can composite them, so that the composite video includes both the anchor's video and the co-streamer's video.
Illustratively, the first terminal may determine a first boundary line and a second boundary line parallel to the height direction of the first video frame, where the distance from the first boundary line to the first edge of the first video frame equals the distance from the second boundary line to the second edge of the first video frame, and the first edge and second edge are each parallel to the height direction of the first video frame. It then crops from the first video frame the first video picture lying between the first boundary line and the second boundary line; the width of the first video picture is less than the width of the first video frame, and its height equals the height of the first video frame. If the height of the second video frame is the same as the height of the first video frame, the second video frame is spliced onto the first video picture at the side of its first edge or its second edge.
Here the first video frame is landscape-resolution video; that is, its pixel width is greater than its pixel height. Considering that during a live broadcast the anchor is usually located in the central region of the picture, the first terminal can crop a portion of the picture located in the central region from the first video frame, and splice the cropped picture with the second video frame to obtain the composite video. Illustratively, the first terminal may first determine, in the first video frame, a first boundary line parallel to the first edge, at a distance from the first edge of one quarter of the first video frame's pixel width. It may then determine a second boundary line parallel to the second edge, at a distance from the second edge of one quarter of the first video frame's pixel width. The pixel width of the picture between the first and second boundary lines is thus half the pixel width of the first video frame, and the picture between the two boundary lines is taken as the cropped picture.
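The boundary-line placement reduces to simple arithmetic on the frame's pixel width; a minimal sketch:

```python
def center_crop_bounds(width):
    """First and second boundary lines for a landscape frame: each lies one
    quarter of the pixel width in from its nearest edge, leaving a centred
    strip half the frame wide."""
    first_boundary = width // 4
    second_boundary = width - width // 4
    return first_boundary, second_boundary

center_crop_bounds(1920)  # -> (480, 1440): a 960-pixel-wide centre strip
```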
While cropping the first video picture from the first video frame, the first terminal may also process the second video frame. If the pixel height of the second video frame is the same as the pixel height of the first video frame, the first terminal can directly splice the second video frame onto the side of the first video picture's first edge, or onto the side of its second edge, to obtain the composite video.
Optionally, when the pixel height of the second video frame equals that of the first video frame but the pixel width of the second video frame is greater than the pixel width of the first video frame, the first terminal may also crop the second video frame and splice the cropped picture with the first video picture to obtain the composite video.
Optionally, if the pixel height of the second video frame is less than that of the first video frame, the first terminal may enlarge the second video frame so that its pixel height matches that of the first video frame, and then splice the enlarged second video frame with the first video picture to obtain the composite video.
Optionally, if the pixel height of the second video frame is greater than that of the first video frame, the first terminal may shrink the second video frame so that its pixel height matches that of the first video frame, and then splice the shrunken second video frame with the first video picture to obtain the composite video.
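The height-matching and side-by-side splice can be sketched as geometry only; this illustration reports the resulting composite dimensions under a proportional-scaling assumption, while real code would also resample the pixels:

```python
def splice_widths(crop_w, crop_h, second_w, second_h):
    """Scale the co-streamer frame so its height matches the cropped anchor
    picture, then return the (width, height) of the spliced composite.

    crop_w, crop_h     -- size of the cropped anchor picture
    second_w, second_h -- size of the co-streamer's frame
    """
    scale = crop_h / second_h            # enlarge or shrink to match height
    scaled_second_w = round(second_w * scale)
    return crop_w + scaled_second_w, crop_h

# A 960x1080 centre crop spliced with a 720x1280 portrait frame:
splice_widths(960, 1080, 720, 1280)  # -> (1568, 1080)
```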
Optionally, in one possible implementation, the first terminal may leave the first video frame uncropped and instead, after receiving the second video frame, shrink it and overlay it onto a preset region of the first video frame to obtain the composite video.
While generating the composite video from the first video frame and the second video frame, the first terminal can also mix the first audio frame and the second audio frame to obtain the mixed audio.
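The patent does not specify the mixing algorithm; one common, overflow-safe choice for 16-bit PCM is sample averaging, sketched here as an assumption:

```python
def mix_frames(frame_a, frame_b):
    """Mix two equal-length sequences of 16-bit PCM samples by averaging.
    Averaging can never leave the [-32768, 32767] sample range, at the cost
    of halving each source's loudness."""
    return [(sa + sb) // 2 for sa, sb in zip(frame_a, frame_b)]

mix_frames([1000, -2000, 32767], [3000, -2000, 32767])
# -> [2000, -2000, 32767]
```

Production mixers typically sum and then clip or apply dynamic range compression instead, to avoid the loudness loss of plain averaging.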
After obtaining the composite video and the mixed audio, the first terminal can encode and encapsulate them to obtain the first data packet, and send the first data packet containing the composite video and mixed audio to the second terminal, so that the second terminal can process the composite video into a processed video suitable for display on a portrait terminal and then push the processed video and the mixed audio to the streaming media server. At the same time, the first terminal can also send the first data packet to the streaming media server, so that the streaming media server can push the composite video and mixed audio to the landscape terminals of audience users.
Optionally, in the embodiments of the present application, the first data packet may also include the second video frame; that is, after obtaining the composite video and the mixed audio, the first terminal may encode and encapsulate the composite video, the mixed audio, and the second video frame to obtain the first data packet, and send the first data packet to the second terminal and the streaming media server.
Optionally, in one possible case, the packets the first terminal sends to the second terminal and to the streaming media server may differ. Specifically, the first terminal may encode and encapsulate the composite video and mixed audio and send the result to the streaming media server, while for the second terminal it may encode and encapsulate the mixed audio and the second video frame and send that instead.
Note that, to inform the second terminal that the first terminal is currently in the co-streaming state, the first terminal may also carry, in the header of the first data packet, a co-streaming mark indicating that the first terminal is currently in the co-streaming state.
Step 305: If not currently in the co-streaming state, the first terminal obtains the first video frame and first audio frame it is currently capturing.
If the first terminal is not currently in the co-streaming state, i.e., the first terminal and the second terminal are currently only in dual-stream live mode, the first terminal can obtain the first video frame and first audio frame it is currently capturing, so that they can be pushed to the audience users holding landscape terminals.
The first video frame and the first audio frame carry the same timestamp, which can indicate the capture time at which the first terminal obtained them.
Step 306: The first terminal sends an audio data packet containing the first audio frame to the second terminal, and sends a third data packet containing the first video frame and the first audio frame to the streaming media server.
After obtaining the first video frame and first audio frame it is currently capturing, the first terminal can encode them and encapsulate them with the streaming media protocol to obtain the third data packet containing the first video frame and first audio frame, and send the third data packet to the streaming media server, so that on receiving the third data packet containing the first video frame and first audio frame, the streaming media server pushes them to the audience users holding landscape terminals.
At the same time, considering that the first terminal's device performance is typically better than the second terminal's, and that the second terminal's network traffic costs more, in the embodiments of the present application audio can be captured by the first terminal while the second terminal captures none. On this basis, after the first terminal obtains the first audio frame, it can encode and encapsulate it into an audio data packet containing the first audio frame and send that packet to the second terminal, thereby sharing the first audio frame with the second terminal.
Optionally, since the first terminal is not in the co-streaming state at this point, to inform the second terminal of that fact the first terminal may also carry, in the header of the audio data packet, a non-co-streaming mark indicating that the first terminal is not currently in the co-streaming state.
Step 307: The second terminal determines whether the data packet received from the first terminal is the first data packet or an audio data packet.
When the first terminal and the second terminal are in dual-stream live mode, the first terminal sends data packets to the second terminal regardless of whether it is currently in the co-streaming state. A received packet may therefore be the first data packet, sent while the first terminal is in the co-streaming state and containing the composite video and mixed audio, or the audio data packet, sent while the first terminal is in the non-co-streaming state and containing only the first audio frame captured by the first terminal itself.
On this basis, when the second terminal receives a data packet it can parse it directly: if parsing yields a composite video and mixed audio, it determines that the received packet is the first data packet; if parsing yields the first audio frame without a composite video or mixed audio, it determines that the received packet is an audio data packet.
Optionally, as described above, the first terminal can carry a co-streaming mark in the header of the first data packet to indicate that it is currently in the co-streaming state, and a non-co-streaming mark in the header of the audio data packet to indicate that it is not. On this basis, when the second terminal receives a data packet it can check whether the packet header carries the co-streaming mark: if it does, the second terminal determines that the received packet is the first data packet; if it carries the non-co-streaming mark, the received packet is not the first data packet but the audio data packet containing the first audio frame.
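The header-mark dispatch can be sketched as follows; the flag values and the dictionary packet layout are assumptions, since the patent only says that a co-streaming or non-co-streaming mark is carried in the header:

```python
CO_STREAM_FLAG = 1      # assumed encoding of the co-streaming mark
NON_CO_STREAM_FLAG = 0  # assumed encoding of the non-co-streaming mark

def classify_packet(packet):
    """Second terminal's dispatch: the header flag alone indicates whether
    the payload is the first data packet (composite video + mixed audio)
    or an audio-only data packet (first audio frame)."""
    if packet["header"]["flag"] == CO_STREAM_FLAG:
        return "first_data_packet"
    return "audio_data_packet"

classify_packet({"header": {"flag": 1}, "payload": b""})  # -> "first_data_packet"
```

Checking a header flag avoids having to speculatively decode the payload just to find out which kind of packet arrived.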
Optionally, in the embodiments of the present application, the second terminal may also store a co-streaming variable. When the second terminal detects that the header of a received data packet carries the co-streaming mark, it can set the stored co-streaming variable to the second value; otherwise, it sets the stored co-streaming variable to the fifth value.
If this step determines that the second terminal received the first data packet, steps 308-310 are executed; if it determines that the received packet is an audio data packet, steps 311-312 are executed.
Step 308: If the second terminal receives the first data packet, it processes the first data packet to obtain the mixed audio and a processed video matching the aspect ratio of the second terminal's display screen.
If the second terminal receives the first data packet, it can determine that the first terminal is currently in the co-streaming state. Since the video frames captured by the second terminal do not include the co-streamer's image, the second terminal can obtain the composite video from the first data packet and, by processing it, obtain a processed video that matches the aspect ratio of the second terminal's display screen and includes the co-streamer.
Illustratively, the second terminal can shrink the composite video so that its width equals the width of the second terminal's display screen; then, based on the height of the second terminal's display screen, splice a first blank picture onto the side of the scaled video's first edge and a second blank picture onto the side of its second edge. The first edge and second edge are parallel to the scaled video's width direction, the heights of the first and second blank pictures are the same, and the sum of the heights of the first blank picture, the second blank picture, and the scaled video equals the height of the second terminal's display screen. Finally, the first and second blank pictures of the spliced video are filled with a background color, and the filled video is taken as the processed video.
It should be noted that, since the pixel width of the composite video is greater than the pixel width of the second terminal's display screen, the second terminal can first scale the composite video down proportionally after obtaining it, so that its pixel width equals the pixel width of the display screen. The second terminal can then compute the height difference between the pixel height of the shrunken composite video and the pixel height of its display screen, splice onto the top edge of the shrunken video a first blank picture whose pixel height is half that height difference and whose pixel width matches that of the shrunken video, and splice onto the bottom edge a second blank picture with the same pixel height and pixel width as the first. In this way, the total pixel height of the video spliced from the first blank picture, the shrunken composite video, and the second blank picture equals the pixel height of the second terminal's display screen, and the pixel width of the spliced video equals the pixel width of that display screen. A background color is filled into the first and second blank pictures, and the filled video serves as the processed video.
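The scale-then-pad computation described above can be illustrated with a minimal sketch. The function name and the idea of returning the bar heights are assumptions for illustration; the patent only specifies the geometric constraints (scaled width equals screen width, and the two blank pictures plus the scaled video fill the screen height).

```python
def letterbox(src_w, src_h, screen_w, screen_h):
    """Shrink a wide composite frame to the portrait screen's width, then
    split the leftover height into two blank bars (top and bottom).
    Returns (scaled_w, scaled_h, top_bar_h, bottom_bar_h)."""
    scale = screen_w / src_w          # shrink factor so widths match
    scaled_w = screen_w
    scaled_h = round(src_h * scale)
    gap = screen_h - scaled_h         # total blank height to distribute
    top = gap // 2                    # first blank picture (top edge)
    bottom = gap - top                # second blank picture (bottom edge)
    return scaled_w, scaled_h, top, bottom
```

For example, a 1280x720 landscape composite shown on a 720x1280 portrait screen scales to 720x405, leaving 875 pixels of height to fill with the two blank pictures.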
In addition, as described above, the second terminal does not collect audio itself but shares the audio collected by the first terminal; therefore, while obtaining the composite video from the first data packet, the second terminal can also obtain the mixed audio in that packet.
Optionally, based on the description in step 304, the first data packet may also include the second video frame; alternatively, the first terminal may encapsulate the second video frame together with the mixed audio and send them to the second terminal. In that case, the second terminal can obtain the second video frame from the first data packet as well as the third video frame it is currently capturing itself, and then composite the second video frame with that third video frame to obtain a processed video matching the aspect ratio of the second terminal's display screen. In that case the second terminal can likewise obtain the mixed audio in the first data packet. The process by which the second terminal composites the second video frame with its own third video frame may refer to the process of compositing the first and second video frames in the preceding embodiment, which is not repeated here in the embodiment of the present application.
Step 309: The second terminal displays the processed video and sends to the streaming media server a second data packet that includes the processed video and the mixed audio.
After obtaining the processed video and the mixed audio, the second terminal can display the processed video. Since the first video frame carries a timestamp, the composite video obtained from it carries the same timestamp, and so does the processed video obtained by processing the composite video; alternatively, if the processed video was composited by the second terminal from the second and third video frames, then, since the second video frame carries a timestamp, the processed video carries that timestamp. Based on this, the second terminal can extract the timestamp from the processed video, record the second terminal's current system time, and determine the display time of the processed video based on the frame interval of the video frames and the current system time. If the display time is later than the time indicated by the timestamp, the second terminal can display the processed video at the current moment; if the display time is earlier than the time indicated by the processed video's timestamp, it delays displaying the processed video.
It should be noted that, after extracting the timestamp from the processed video, the second terminal can add the frame interval of the video frames to its current system time to obtain the display time of the processed video; this display time is actually the theoretically computed display time of the processed video, while the time indicated by the processed video's timestamp is its true display time. The second terminal then compares the display time with the time indicated by the processed video's timestamp. If the display time is later than the time indicated by the timestamp, the true display time of the processed video has already passed; at this point the second terminal cannot delay any further, and displays the processed video immediately. If the display time equals the time indicated by the timestamp, the computed display time matches the true display time; in this case the second terminal can display the processed video when that time arrives. If the display time is earlier than the time indicated by the timestamp, displaying the processed video when the display time arrives would be too early; at this point the second terminal can sleep for a certain duration and display the processed video after that duration.
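The display-time decision above can be sketched as follows. This is a minimal illustration under assumed units (seconds) and an injectable sleep function for testability; the patent does not name any of these identifiers.

```python
import time

def display_when_due(frame_ts, frame_interval, now=None, sleep=time.sleep):
    """Compute the theoretical display time (current system time plus the
    frame interval), compare it with the frame's timestamp, and sleep only
    when the frame would otherwise be shown too early.
    Returns the delay actually waited, in seconds."""
    if now is None:
        now = time.monotonic()
    display_time = now + frame_interval
    if display_time >= frame_ts:      # true display time reached or passed:
        return 0.0                    # show the processed video immediately
    wait = frame_ts - display_time    # early: sleep until the true time
    sleep(wait)
    return wait
```

A frame whose timestamp lies in the past is shown at once, while a frame arriving early causes the terminal to sleep for the remaining interval before display.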
After displaying the processed video, the second terminal can also encode the processed video and the mixed audio and encapsulate them with a streaming media protocol to obtain the second data packet, which it sends to the streaming media server so that the server can push the processed video and the mixed audio in that packet to viewer users holding portrait-screen terminals. Since the mixed audio is obtained by mixing the first audio frame collected by the first terminal with the second audio frame collected by the third terminal, the mixed audio can carry the first audio frame's timestamp; and since, as noted above, the timestamps of the first audio frame and the first video frame can be identical, the timestamps of the mixed audio and the processed video are identical. In other words, because the timestamps of the mixed audio and the processed video are identical, the second terminal need not synchronize the mixed audio and the processed video again, which reduces the second terminal's processing complexity.
Step 310: The streaming media server receives the first data packet sent by the first terminal and the second data packet sent by the second terminal, sends the first data packet to landscape-screen terminals, and sends the second data packet to portrait-screen terminals.
Here, a landscape-screen terminal refers to a landscape-screen terminal held by a viewer user, and a portrait-screen terminal refers to a portrait-screen terminal held by a viewer user.
Since the first data packet contains the composite video matching the aspect ratio of landscape-screen terminals, and the second data packet contains the processed video matching the aspect ratio of portrait-screen terminals, after receiving the first and second data packets the streaming media server can push different videos to different viewer users according to the terminals they hold.
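The server-side dispatch in step 310 amounts to routing each packet by the screen orientation it was prepared for. The following sketch uses hypothetical viewer records and packet fields; the patent specifies only the routing behavior, not any data structures.

```python
def route_packet(packet, viewers):
    """Forward a data packet only to viewers whose terminal orientation
    matches the aspect ratio the packet's video was prepared for.
    Returns the number of viewers the packet was delivered to."""
    targets = [v for v in viewers if v["orientation"] == packet["orientation"]]
    for v in targets:
        v["queue"].append(packet)   # stand-in for pushing over the network
    return len(targets)
```

Under this scheme the composite video (first data packet) reaches only landscape viewers and the processed video (second data packet) only portrait viewers.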
Step 311: If the second terminal receives an audio data packet, it obtains the first audio frame in the audio data packet and the third video frame the second terminal is currently capturing, and sends to the streaming media server a fourth data packet that includes the first audio frame and the third video frame.
If the data packet received by the second terminal is not the first data packet but an audio data packet, it can determine that the first terminal is not currently in the co-streaming state. In this case, the second terminal can obtain the first audio frame in the audio data packet and the third video frame the second terminal is currently capturing.
After obtaining the first audio frame and the third video frame, the second terminal can display the third video frame directly, encode and encapsulate the first audio frame and the third video frame to obtain the fourth data packet, and send to the streaming media server the fourth data packet including the first audio frame and the third video frame, so that the streaming media server can push the third video frame and the first audio frame to viewer users holding portrait-screen terminals.
Optionally, since the first audio frame is collected by the first terminal while the third video frame is captured by the second terminal, the second terminal can synchronize the first audio frame and the third video frame. Illustratively, the second terminal can record the capture time of the third video frame, then determine a second timestamp for the third video frame based on that capture time and the time deviation between the system time of the first terminal and the system time of the second terminal, use the second timestamp as the third video frame's timestamp, and display the third video frame based on the second timestamp. Afterwards, the second terminal can send to the streaming media server the fourth data packet including the first audio frame and the third video frame, with the third video frame carrying the second timestamp.
As described in step 301, after the first terminal and the second terminal are successfully matched, the second terminal can determine the time deviation between the system time of the first terminal and the system time of the second terminal. Thus, in this step, the second terminal can obtain that time deviation and subtract it from its current system time to obtain the second timestamp; the time indicated by the second timestamp is the synchronized time after the deviation between the two terminals' system times has been eliminated. The second terminal uses the second timestamp as the third video frame's timestamp and displays the third video frame based on the second timestamp with reference to the method introduced in step 309. Afterwards, the second terminal can encode and encapsulate the third video frame carrying the second timestamp together with the first audio frame carrying the first timestamp to obtain the fourth data packet, and send the fourth data packet to the streaming media server, so that the streaming media server can push the third video frame and the first audio frame to viewer users holding portrait-screen terminals.
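The clock-offset correction in step 311 can be sketched in a few lines. The function and field names are assumptions for illustration; the patent specifies only that the second timestamp equals the second terminal's current system time minus the measured deviation between the two terminals' clocks.

```python
def tag_video_frame(frame, local_now, time_deviation):
    """Attach the second timestamp to a locally captured video frame so it
    can be paired with the first terminal's audio frame: subtract the
    measured clock deviation to land on the first terminal's timeline."""
    tagged = dict(frame)                         # leave the input untouched
    tagged["timestamp"] = local_now - time_deviation
    return tagged
```

Because the audio frame already carries the first terminal's timestamp, both halves of the fourth data packet end up stamped on the same timeline, which is what makes the downstream audio-video pairing trivial.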
Step 312: The streaming media server receives the third data packet sent by the first terminal and the fourth data packet sent by the second terminal, pushes the third data packet to landscape-screen terminals, and pushes the fourth data packet to portrait-screen terminals.
Here, a landscape-screen terminal refers to a landscape-screen terminal held by a viewer user, and a portrait-screen terminal refers to a portrait-screen terminal held by a viewer user.
Since the third data packet contains the first video frame matching the aspect ratio of landscape-screen terminals, and the fourth data packet contains the third video frame matching the aspect ratio of portrait-screen terminals, after receiving the third and fourth data packets the streaming media server can push different videos to different viewer users according to the terminals they hold.
In the embodiment of the present application, when in dual-stream live mode and in the co-streaming state, the first terminal can composite the first video frame it captures with the second video frame captured by the third terminal co-streaming with it, mix the first audio frame it collects with the second audio frame collected by the third terminal, and send the resulting composite video and mixed audio to the second terminal. The second terminal can then process the composite video to obtain a processed video that contains the third terminal's picture and matches the aspect ratio of the second terminal's display screen, and push the processed video and the mixed audio to the streaming media server, which can push the processed video to viewer users holding portrait-screen terminals. Thus, in dual-stream live mode, viewer users holding either landscape-screen terminals or portrait-screen terminals can watch the composite video of the anchor and the co-streamer.
In addition, in the embodiment of the present application, audio collection can be performed by the first terminal, and the second terminal can share the audio it collects. In this case, the second terminal can, when starting the dual-stream live broadcast, determine the time deviation from the first terminal's system time by sending a time-calibration request packet. Thus, in dual-stream live mode while not in the co-streaming state, after obtaining the audio collected by the first terminal, the second terminal can use the determined time deviation between the system times of the first and second terminals to align the timestamps of the audio collected by the first terminal and the video captured by the second terminal, achieving audio-video synchronization.
Fig. 4 shows a structural block diagram of a terminal 400 for live streaming provided by an exemplary embodiment of the present application. When the terminal is a landscape-screen terminal, it may be a laptop, a desktop computer, or the like; when the terminal is a portrait-screen terminal, it may be a smartphone, a tablet computer, or the like.
In general, the terminal 400 includes a processor 401 and a memory 402.
The processor 401 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor, also called the CPU (Central Processing Unit), is the processor that handles data in the awake state, while the coprocessor is a low-power processor that handles data in the standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be shown on the display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
The memory 402 may include one or more computer-readable storage media, which may be non-transient. The memory 402 may also include high-speed random access memory and non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transient computer-readable storage medium in the memory 402 stores at least one instruction. When the terminal is the first terminal in the above embodiments, the at least one instruction is executed by the processor 401 to implement the steps performed by the first terminal in the live broadcasting method provided by the method embodiments of the present application; when the terminal is the second terminal in the above embodiments, the at least one instruction is executed by the processor 401 to implement the steps performed by the second terminal in that live broadcasting method.
In some embodiments, the terminal 400 optionally further includes a peripheral device interface 403 and at least one peripheral device. The processor 401, the memory 402, and the peripheral device interface 403 may be connected by a bus or signal line, and each peripheral device may be connected to the peripheral device interface 403 by a bus, signal line, or circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 404, a touch display screen 405, a camera 406, an audio circuit 407, a positioning component 408, and a power supply 409.
The peripheral device interface 403 may be used to connect at least one I/O (Input/Output) related peripheral device to the processor 401 and the memory 402. In some embodiments, the processor 401, the memory 402, and the peripheral device interface 403 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 401, the memory 402, and the peripheral device interface 403 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 404 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The RF circuit 404 communicates with communication networks and other communication devices through electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the RF circuit 404 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and the like. The RF circuit 404 can communicate with other terminals through at least one wireless communication protocol, including but not limited to the World Wide Web, metropolitan area networks, intranets, mobile communication networks of each generation (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the RF circuit 404 may also include circuitry related to NFC (Near Field Communication), which is not limited in this application.
The display screen 405 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, it also has the ability to collect touch signals on or above its surface; such a touch signal may be input to the processor 401 as a control signal for processing. At this point, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 405, arranged on the front panel of the terminal 400; in other embodiments, there may be at least two display screens 405, arranged on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display screen 405 may be a flexible display screen arranged on a curved or folded surface of the terminal 400. The display screen 405 may even be set to a non-rectangular irregular shape, that is, a shaped screen. The display screen 405 may be made of materials such as LCD (Liquid Crystal Display) or OLED (Organic Light-Emitting Diode). It should be noted that, in the embodiment of the present application, when the terminal 400 is a landscape-screen terminal, the aspect ratio of its display screen is greater than 1, for example 16:9 or 4:3; when the terminal 400 is a portrait-screen terminal, the aspect ratio of its display screen is less than 1, for example 9:18 or 3:4.
The camera assembly 406 is used to capture images or video. Optionally, the camera assembly 406 includes a front camera and a rear camera. Generally, the front camera is arranged on the front panel of the terminal and the rear camera on its back. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, or a telephoto camera, so as to realize background blurring by fusing the main camera with the depth-of-field camera, panoramic and VR (Virtual Reality) shooting by fusing the main camera with the wide-angle camera, or other fused shooting functions. In some embodiments, the camera assembly 406 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 401 for processing, or to the RF circuit 404 to realize voice communication. For stereo capture or noise reduction, there may be multiple microphones arranged at different parts of the terminal 400; the microphone may also be an array microphone or an omnidirectional microphone. The speaker converts electrical signals from the processor 401 or the RF circuit 404 into sound waves. The speaker may be a traditional film speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans for purposes such as ranging. In some embodiments, the audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to determine the current geographic position of the terminal 400 for navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 409 is used to supply power to the components of the terminal 400. The power supply 409 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 409 includes a rechargeable battery, that battery may be a wired rechargeable battery, charged through a wired line, or a wireless rechargeable battery, charged through a wireless coil; the rechargeable battery may also support fast-charging technology.
In some embodiments, the terminal 400 further includes one or more sensors 410, including but not limited to an acceleration sensor 411, a gyroscope sensor 412, a pressure sensor 413, a fingerprint sensor 414, an optical sensor 415, and a proximity sensor 416.
The acceleration sensor 411 can detect the magnitude of acceleration on the three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 can be used to detect the components of gravitational acceleration on the three axes. The processor 401 can, according to the gravitational acceleration signal collected by the acceleration sensor 411, control the touch display screen 405 to display the user interface in landscape view or portrait view. The acceleration sensor 411 can also be used to collect motion data for games or for the user.
The gyroscope sensor 412 can detect the body direction and rotation angle of the terminal 400, and can cooperate with the acceleration sensor 411 to capture the user's 3D actions on the terminal 400. Based on the data collected by the gyroscope sensor 412, the processor 401 can implement functions such as motion sensing (for example, changing the UI according to the user's tilt operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 413 may be arranged on the side frame of the terminal 400 and/or the lower layer of the touch display screen 405. When the pressure sensor 413 is arranged on the side frame of the terminal 400, it can detect the user's grip signal on the terminal 400, and the processor 401 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is arranged on the lower layer of the touch display screen 405, the processor 401 controls the operable controls on the UI according to the user's pressure operation on the touch display screen 405. The operable controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect the user's fingerprint; the processor 401 identifies the user's identity from the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 itself identifies the user's identity from the collected fingerprint. When the identified identity is a trusted identity, the processor 401 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and so on. The fingerprint sensor 414 may be arranged on the front, back, or side of the terminal 400; when a physical button or manufacturer logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical button or manufacturer logo.
The optical sensor 415 is used to collect ambient light intensity. In one embodiment, the processor 401 can control the display brightness of the touch display screen 405 according to the ambient light intensity collected by the optical sensor 415: when the ambient light intensity is high, the display brightness of the touch display screen 405 is turned up; when it is low, the display brightness is turned down. In another embodiment, the processor 401 can also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
The proximity sensor 416, also called a distance sensor, is generally arranged on the front panel of the terminal 400 and is used to collect the distance between the user and the front of the terminal 400. In one embodiment, when the proximity sensor 416 detects that the distance between the user and the front of the terminal 400 gradually decreases, the processor 401 controls the touch display screen 405 to switch from the screen-on state to the screen-off state; when the proximity sensor 416 detects that this distance gradually increases, the processor 401 controls the touch display screen 405 to switch from the screen-off state back to the screen-on state.
That is to say, the embodiment of the present application provides not only a live-streaming terminal, including a processor and a memory for storing processor-executable instructions, where, when the terminal 400 is the first terminal, the processor is configured to execute the steps related to the first terminal in the embodiments shown in Figs. 2 and 3, and when the terminal 400 is the second terminal, the processor is configured to execute the steps related to the second terminal in the embodiments shown in Figs. 2 and 3; the embodiment of the present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, can implement the live broadcasting method in the embodiments shown in Figs. 2-3.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing are merely preferred embodiments of the application and are not intended to limit the application; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the application shall be included within the scope of protection of this application.
Claims (12)
1. A live broadcasting method, characterized in that the method comprises:
detecting, by a first terminal, whether it is currently in dual-stream live mode, and detecting whether it is currently in a co-streaming state;
if the first terminal is currently in dual-stream live mode and currently in the co-streaming state, obtaining, by the first terminal, a first video frame and a first audio frame currently captured by the first terminal, and obtaining a second video frame and a second audio frame currently captured by a third terminal co-streaming with the first terminal;
compositing, by the first terminal, the first video frame and the second video frame to obtain a composite video, mixing the first audio frame and the second audio frame to obtain mixed audio, and sending, to a second terminal and a streaming media server, a first data packet comprising the composite video and the mixed audio;
when the second terminal receives the first data packet, processing the first data packet to obtain the mixed audio and a processed video matching an aspect ratio of a display screen of the second terminal;
displaying, by the second terminal, the processed video, and sending, to the streaming media server, a second data packet comprising the processed video and the mixed audio.
2. The method according to claim 1, wherein detecting, by the first terminal, whether it is currently in dual-stream live mode and whether it is currently in the co-streaming state comprises:
detecting, by the first terminal, whether a current dual-stream live variable is a first value, and detecting whether a current co-streaming variable is a second value;
if the current dual-stream live variable is the first value and the current co-streaming variable is the second value, determining that the first terminal is currently in dual-stream live mode and currently in the co-streaming state.
3. The method according to claim 1, wherein the first terminal is a landscape-screen terminal, and the first terminal synthesizing the first video frame and the second video frame comprises:
the first terminal determining a first boundary line and a second boundary line parallel to the height direction of the first video frame, wherein the distance from the first boundary line to a first edge of the first video frame equals the distance from the second boundary line to a second edge of the first video frame, and the first edge and the second edge are each parallel to the height direction of the first video frame;
cropping, from the first video frame, a first video picture located between the first boundary line and the second boundary line, wherein the width of the first video picture is less than the width of the first video frame, and the height of the first video picture equals the height of the first video frame;
if the height of the second video frame equals the height of the first video frame, splicing the second video frame to the side of the first edge or the second edge of the first video picture.
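The crop-and-splice step of claim 3 can be sketched as follows. This is an illustrative model only, not the patented implementation: frames are represented as row-major lists of pixel rows, and all function and variable names are hypothetical.

```python
# Sketch of claim 3's compositing: crop a centered vertical strip from the
# landscape first frame (between the two boundary lines, equidistant from
# the left and right edges), then splice the co-host's frame to its edge.

def composite_side_by_side(first_frame, second_frame, strip_width):
    """Crop a centered strip of `strip_width` pixels from each row of
    `first_frame`, then splice the matching row of `second_frame` to the
    strip's right edge. Both frames must have the same height."""
    assert len(first_frame) == len(second_frame), "heights must match"
    width = len(first_frame[0])
    assert strip_width < width, "strip must be narrower than the frame"
    left = (width - strip_width) // 2        # first boundary line
    right = left + strip_width               # second boundary line
    return [row_a[left:right] + row_b
            for row_a, row_b in zip(first_frame, second_frame)]

frame_a = [[0] * 1280 for _ in range(720)]   # 1280x720 landscape host frame
frame_b = [[1] * 640 for _ in range(720)]    # 640x720 co-host frame
out = composite_side_by_side(frame_a, frame_b, 640)
print(len(out), len(out[0]))  # 720 1280
```

The composite keeps the first frame's height, so the result is a landscape picture combining both anchors side by side.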
4. The method according to claim 1, wherein processing the first data packet comprises:
obtaining the mixed audio and the composite video in the first data packet, and scaling down the composite video so that the width of the composite video equals the width of the display screen of the second terminal;
according to the height of the display screen of the second terminal, splicing a first blank picture to the side of the first edge of the scaled video, and splicing a second blank picture to the side of the second edge of the scaled video;
wherein the first edge and the second edge are parallel to the width direction of the scaled video, the height of the first blank picture is identical to the height of the second blank picture, and the sum of the heights of the first blank picture, the second blank picture and the scaled video equals the height of the display screen of the second terminal;
filling the first blank picture and the second blank picture in the spliced video with a background color, and using the filled video as the processed video.
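The letterboxing arithmetic of claim 4 can be sketched as below. This is a hedged model under the claim's stated constraints (scale to the screen width, pad top and bottom to the screen height); the helper name and the example dimensions are assumptions, not taken from the patent.

```python
# Sketch of claim 4's letterboxing: scale the composite video so its width
# equals the display width, then compute the heights of the two blank
# pictures spliced above and below so the total equals the display height.

def letterbox_heights(video_w, video_h, screen_w, screen_h):
    """Return (scaled_h, top_blank_h, bottom_blank_h) after scaling the
    video so its width equals the screen width."""
    scaled_h = round(video_h * screen_w / video_w)
    remaining = screen_h - scaled_h
    top = remaining // 2               # first blank picture
    bottom = remaining - top           # second blank picture
    return scaled_h, top, bottom

# A 1280x720 composite shown on a 640x1280 portrait screen:
scaled_h, top, bottom = letterbox_heights(1280, 720, 640, 1280)
print(scaled_h, top, bottom)  # 360 460 460
```

With these dimensions the two blank pictures come out equal in height, as claim 4 requires, and the three heights sum to the screen height.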
5. The method according to claim 1, wherein the composite video and the mixed audio carry the same timestamp, and the processed video carries the timestamp of the composite video.
6. The method according to claim 5, wherein the second terminal displaying the processed video comprises:
extracting the timestamp from the processed video;
recording the current system time, and determining the display time of the processed video based on the frame interval of video frames and the current system time;
if the display time is later than the time indicated by the timestamp, displaying the processed video at the current time;
if the display time is earlier than the time indicated by the timestamp, delaying display of the processed video.
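The timestamp-gated display decision of claim 6 reduces to a small comparison, sketched here under assumed millisecond units; the function name and return convention are illustrative, not from the patent.

```python
# Sketch of claim 6's display logic: the frame's display time is the
# current system time plus one frame interval; show the frame now if that
# slot is at or past the carried timestamp, otherwise delay by the gap.

def display_action(timestamp_ms, now_ms, frame_interval_ms):
    """Return ('show', 0) to display now, or ('delay', wait_ms)."""
    display_time = now_ms + frame_interval_ms   # next display slot
    if display_time >= timestamp_ms:
        return ('show', 0)
    return ('delay', timestamp_ms - display_time)

print(display_action(1000, 990, 40))   # slot 1030 is past 1000 -> show
print(display_action(1000, 900, 40))   # slot 940 precedes 1000 -> delay 60 ms
```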
7. The method according to claim 1, wherein the method further comprises:
if it is detected that the first terminal is currently in dual-stream live mode and not currently in co-hosting state, the first terminal obtaining the first video frame and the first audio frame currently captured by the first terminal;
the first terminal sending, to the second terminal, an audio data packet comprising the first audio frame, and sending, to the streaming media server, a third data packet comprising the first video frame and the first audio frame, the first audio frame carrying a first timestamp;
when the second terminal receives the audio data packet, the second terminal obtaining the first audio frame in the audio data packet, obtaining a third video frame currently captured by the second terminal, and recording the capture time of the third video frame;
determining a second timestamp of the third video frame based on the capture time of the third video frame and the time deviation between the system time of the first terminal and the system time of the second terminal;
displaying the third video frame based on the second timestamp;
sending, to the streaming media server, a fourth data packet comprising the first audio frame and the third video frame, the third video frame carrying the second timestamp.
8. The method according to claim 7, wherein the method further comprises:
the second terminal sending at least one time-calibration request packet to the first terminal, and storing, for each of the at least one time-calibration request packet, the request packet sequence number in correspondence with its sending time, wherein the sending time refers to the system time of the second terminal at which the corresponding time-calibration request packet is sent, and the request packet sequence number is used to identify the corresponding time-calibration request packet;
when the first terminal receives a target time-calibration request packet, sending, to the second terminal, a target time-calibration response packet for the received target time-calibration request packet, wherein the target time-calibration request packet refers to any one of the at least one time-calibration request packet, and the target time-calibration response packet carries the request packet sequence number of the target time-calibration request packet and the current first system time of the first terminal;
when the second terminal receives the target time-calibration response packet, the second terminal recording the second system time of the second terminal at which the target time-calibration response packet is received;
the second terminal obtaining, based on the request packet sequence number carried in the target time-calibration response packet, the sending time corresponding to that sequence number from the stored correspondence between request packet sequence numbers and sending times, as a third system time;
determining the time deviation between the system time of the first terminal and the system time of the second terminal based on the first system time, the second system time and the third system time.
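Claim 8 names the three times used but not an explicit formula; under the usual symmetric-network-delay assumption, the deviation can be estimated NTP-style, as sketched below. The formula and names here are an assumption, not quoted from the patent.

```python
# Sketch of a claim-8-style deviation estimate: the second terminal sends a
# request at T3 (third system time), the first terminal stamps its reply
# with T1 (first system time), and the reply arrives at T2 (second system
# time). Assuming symmetric one-way delay, the first terminal's clock
# offset relative to the second terminal's is T1 minus the midpoint of
# T3 and T2.

def clock_offset(t1_first, t2_second_recv, t3_second_send):
    """Estimated (first terminal time - second terminal time)."""
    round_trip_midpoint = (t3_second_send + t2_second_recv) / 2
    return t1_first - round_trip_midpoint

# First terminal's clock runs 500 ms ahead; one-way delay is 20 ms:
# request sent at 1000, stamped 1000 + 20 + 500 = 1520, reply lands at 1040.
print(clock_offset(1520, 1040, 1000))  # 500.0
```

Sending several request packets and keeping the estimate with the smallest round trip (T2 - T3) is the common way to damp network jitter, which is one plausible reason the claim allows more than one request packet.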
9. The method according to claim 1, wherein the method further comprises:
if it is detected that the first terminal is not currently in dual-stream live mode and not currently in co-hosting state, obtaining the first video frame and the first audio frame currently captured by the first terminal;
playing the first video frame, and sending, to the streaming media server, a single-stream live data packet comprising the first video frame and the first audio frame.
10. The method according to any one of claims 1-9, wherein the header of the first data packet carries a co-hosting flag, and the co-hosting flag is used to notify the second terminal that the first terminal is currently in co-hosting state.
11. A live broadcast system, wherein the live broadcast system comprises: a first terminal, a second terminal, a third terminal, a streaming media server and a co-hosting (Lian Mai) server;
the first terminal is configured to detect whether it is currently in dual-stream live mode and whether it is currently in co-hosting state; if it is currently in dual-stream live mode and currently in co-hosting state, the first terminal obtains a first video frame and a first audio frame currently captured by the first terminal, and obtains a second video frame and a second audio frame currently captured by the third terminal co-hosting with the first terminal; synthesizes the first video frame and the second video frame, mixes the first audio frame and the second audio frame, and sends, to the second terminal and the streaming media server, a first data packet comprising the composite video and the mixed audio;
the second terminal is configured to receive the first data packet, obtain the composite video and the mixed audio comprised in the first data packet, and process the composite video to obtain a processed video whose aspect ratio matches the display screen of the second terminal; the second terminal displays the processed video, and sends, to the streaming media server, a second data packet comprising the processed video and the mixed audio;
the third terminal is configured to send the currently captured second video frame and second audio frame to the co-hosting server;
the co-hosting server is configured to receive the second video frame and the second audio frame, and send the second video frame and the second audio frame to the first terminal;
the streaming media server is configured to receive the first data packet and the second data packet, and send the first data packet or the second data packet to terminals other than the first terminal and the second terminal.
12. The system according to claim 11, wherein:
the first terminal is further configured to, upon detecting that it is currently in dual-stream live mode and not currently in co-hosting state, obtain the first video frame and the first audio frame currently captured by the first terminal; send, to the second terminal, an audio data packet comprising the first audio frame; and send, to the streaming media server, a third data packet comprising the first video frame and the first audio frame, the first audio frame carrying a first timestamp;
the second terminal is further configured to receive the audio data packet, obtain the first audio frame in the audio data packet, obtain a third video frame currently captured by the second terminal, and record the capture time of the third video frame; determine a second timestamp of the third video frame based on the capture time of the third video frame and the time deviation between the system time of the first terminal and the system time of the second terminal; display the third video frame based on the second timestamp; and send, to the streaming media server, a fourth data packet comprising the first audio frame and the third video frame, the third video frame carrying the second timestamp;
the streaming media server is further configured to receive the third data packet and the fourth data packet, and send the third data packet or the fourth data packet to terminals other than the first terminal and the second terminal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810943503.0A CN108900859B (en) | 2018-08-17 | 2018-08-17 | Live broadcasting method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108900859A true CN108900859A (en) | 2018-11-27 |
CN108900859B CN108900859B (en) | 2020-07-10 |
Family
ID=64354523
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810943503.0A Active CN108900859B (en) | 2018-08-17 | 2018-08-17 | Live broadcasting method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108900859B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105812951A (en) * | 2016-03-24 | 2016-07-27 | 广州华多网络科技有限公司 | Stream media data interaction method, terminal, server and system |
US20160295170A1 (en) * | 2015-04-02 | 2016-10-06 | Telepresence Technologies, Llc | Architectural Scale Communications Systems and Methods Therefore |
CN106161955A (en) * | 2016-08-16 | 2016-11-23 | 天脉聚源(北京)传媒科技有限公司 | A kind of live image pickup method and device |
CN106454404A (en) * | 2016-09-29 | 2017-02-22 | 广州华多网络科技有限公司 | Live video playing method, device and system |
CN107027048A (en) * | 2017-05-17 | 2017-08-08 | 广州市千钧网络科技有限公司 | A kind of live even wheat and the method and device of information displaying |
CN108093268A (en) * | 2017-12-29 | 2018-05-29 | 广州酷狗计算机科技有限公司 | The method and apparatus being broadcast live |
CN108401194A (en) * | 2018-04-27 | 2018-08-14 | 广州酷狗计算机科技有限公司 | Timestamp determines method, apparatus and computer readable storage medium |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110062252A (en) * | 2019-04-30 | 2019-07-26 | 广州酷狗计算机科技有限公司 | Live broadcasting method, device, terminal and storage medium |
CN110267064B (en) * | 2019-06-12 | 2021-11-12 | 百度在线网络技术(北京)有限公司 | Audio playing state processing method, device, equipment and storage medium |
CN110267064A (en) * | 2019-06-12 | 2019-09-20 | 百度在线网络技术(北京)有限公司 | Audio broadcast state processing method, device, equipment and storage medium |
CN110505489A (en) * | 2019-08-08 | 2019-11-26 | 咪咕视讯科技有限公司 | Method for processing video frequency, communication equipment and computer readable storage medium |
CN110493610A (en) * | 2019-08-14 | 2019-11-22 | 北京达佳互联信息技术有限公司 | Method, apparatus, electronic equipment and the storage medium of chatroom unlatching video pictures |
CN110602521A (en) * | 2019-10-10 | 2019-12-20 | 广州华多网络科技有限公司 | Method, system, computer readable medium and device for measuring mixed drawing time delay |
CN110740346A (en) * | 2019-10-23 | 2020-01-31 | 北京达佳互联信息技术有限公司 | Video data processing method, device, server, terminal and storage medium |
CN110740346B (en) * | 2019-10-23 | 2022-04-22 | 北京达佳互联信息技术有限公司 | Video data processing method, device, server, terminal and storage medium |
CN111083507B (en) * | 2019-12-09 | 2021-11-23 | 广州酷狗计算机科技有限公司 | Method and system for connecting to wheat, first main broadcasting terminal, audience terminal and computer storage medium |
CN111083507A (en) * | 2019-12-09 | 2020-04-28 | 广州酷狗计算机科技有限公司 | Method and system for connecting to wheat, first main broadcasting terminal, audience terminal and computer storage medium |
CN111654736A (en) * | 2020-06-10 | 2020-09-11 | 北京百度网讯科技有限公司 | Method and device for determining audio and video synchronization error, electronic equipment and storage medium |
CN111726695A (en) * | 2020-07-02 | 2020-09-29 | 聚好看科技股份有限公司 | Display device and audio synthesis method |
CN112291579A (en) * | 2020-10-26 | 2021-01-29 | 北京字节跳动网络技术有限公司 | Data processing method, device, equipment and storage medium |
CN113573117A (en) * | 2021-07-15 | 2021-10-29 | 广州方硅信息技术有限公司 | Video live broadcast method and device and computer equipment |
CN114095772A (en) * | 2021-12-08 | 2022-02-25 | 广州方硅信息技术有限公司 | Virtual object display method and system under live microphone connection and computer equipment |
CN114095772B (en) * | 2021-12-08 | 2024-03-12 | 广州方硅信息技术有限公司 | Virtual object display method, system and computer equipment under continuous wheat direct sowing |
CN117560538A (en) * | 2024-01-12 | 2024-02-13 | 江西微博科技有限公司 | Service method and device of interactive voice video based on cloud platform |
CN117560538B (en) * | 2024-01-12 | 2024-03-22 | 江西微博科技有限公司 | Service method of interactive voice video based on cloud platform |
Also Published As
Publication number | Publication date |
---|---|
CN108900859B (en) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108900859A (en) | Live broadcasting method and system | |
CN109600678B (en) | Information display method, device and system, server, terminal and storage medium | |
CN108401124B (en) | Video recording method and device | |
US11153609B2 (en) | Method and apparatus for live streaming | |
CN109729411B (en) | Live broadcast interaction method and device | |
WO2019105239A1 (en) | Video stream sending method, playing method, device, equipment and storage medium | |
CN109982102A (en) | The interface display method and system and direct broadcast server of direct broadcasting room and main broadcaster end | |
CN111918090B (en) | Live broadcast picture display method and device, terminal and storage medium | |
CN109413453B (en) | Video playing method, device, terminal and storage medium | |
CN109348247A (en) | Determine the method, apparatus and storage medium of audio and video playing timestamp | |
CN109618212A (en) | Information display method, device, terminal and storage medium | |
CN109660855A (en) | Paster display methods, device, terminal and storage medium | |
CN110418152B (en) | Method and device for carrying out live broadcast prompt | |
CN110290392B (en) | Live broadcast information display method, device, equipment and storage medium | |
CN107896337B (en) | Information popularization method and device and storage medium | |
CN110278464A (en) | The method and apparatus for showing list | |
CN108769738B (en) | Video processing method, video processing device, computer equipment and storage medium | |
CN111083516A (en) | Live broadcast processing method and device | |
CN110996117B (en) | Video transcoding method and device, electronic equipment and storage medium | |
CN112118477A (en) | Virtual gift display method, device, equipment and storage medium | |
CN108900921A (en) | Even wheat live broadcasting method, device and storage medium | |
CN113271470B (en) | Live broadcast wheat connecting method, device, terminal, server and storage medium | |
CN110751539A (en) | Article information processing method, article information processing device, article information processing terminal, article information processing server, and storage medium | |
CN109302632A (en) | Obtain method, apparatus, terminal and the storage medium of live video picture | |
CN110958464A (en) | Live broadcast data processing method and device, server, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||