CN110602522B - Multi-path real-time live webRTC stream synthesis method

Info

Publication number
CN110602522B
Authority
CN
China
Prior art keywords
webrtc
data
audio
video
channel
Prior art date
Legal status
Active
Application number
CN201910962940.1A
Other languages
Chinese (zh)
Other versions
CN110602522A (en)
Inventor
唐东明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Minzu University
Original Assignee
Southwest Minzu University
Priority date: 2019-10-11
Filing date: 2019-10-11
Publication date: 2021-08-03
Application filed by Southwest Minzu University
Priority to CN201910962940.1A
Publication of CN110602522A
Application granted
Publication of CN110602522B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/439 Processing of audio elementary streams
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/643 Communication protocols
    • H04N 21/6437 Real-time Transport Protocol [RTP]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Abstract

The invention discloses a method for synthesizing multi-channel real-time live WebRTC streams, which comprises the following steps: 1. preprocessing the received WebRTC data packets; 2. performing real-time multi-channel audio mixing; 3. performing real-time multi-channel video mixing; 4. synchronously synthesizing and forwarding the multiple WebRTC media streams. The invention can synthesize WebRTC protocol streaming media data generated by multiple endpoints in real time, guarantees audio-video synchronization while preserving real-time performance, and solves a practical production problem.

Description

Multi-path real-time live webRTC stream synthesis method
Technical Field
The invention belongs to the technical field of software development, and particularly relates to a method for synthesizing a multi-channel real-time live WebRTC stream.
Background
To enable efficient live streaming media directly in the browser, the World Wide Web Consortium and the Internet Engineering Task Force established a new technology for real-time audio and video communication directly through the Web browser: Web Real-Time Communication (WebRTC). WebRTC is a collection of standards, protocols, and JavaScript APIs that enable peer-to-peer audio, video, and data sharing between browsers or peers. WebRTC does not depend on third-party plug-ins or special software; a Web application can implement real-time streaming media communication directly through standard JavaScript API calls, and the mainstream browsers Chrome, Firefox and Safari all support WebRTC. With simple JavaScript calls, a Web program can perform peer-to-peer data transmission and deliver a rich teleconferencing experience. WebRTC has been widely used in a variety of application scenarios, such as health care, online education, remote collaboration, and online monitoring. When a real-time streaming media live platform is actually implemented with the WebRTC protocol, the audio and video data of multiple anchors or peers must be synchronized, synthesized and forwarded to viewers; for example, when three anchors A, B and C in different places hold an online activity simultaneously, the audio and video data generated by the three anchors must be synthesized efficiently in real time and then sent to the viewers.
Disclosure of Invention
To address this problem, the invention provides a method for synthesizing a multi-channel real-time live WebRTC stream. The aim is to provide a solution for practical application scenarios in which, when live-streaming media with WebRTC, the user audio and video data of multiple anchors or peers must be synchronously synthesized and forwarded to viewers.
The method for synthesizing a multi-channel real-time live webRTC stream disclosed by the invention comprises the following steps:
step 1: preprocessing the received WebRTC data packets: WebRTC adopts the standard RTP format when packaging streaming media data; packet description information is carried in the header of each RTP packet, where seq_number denotes the sequence number of the packet in the stream and timestamp denotes the time at which the data carried in the packet was physically generated; different preprocessing is applied to audio and video data packets.
Step 2: performing real-time multi-channel audio mixing: one WebRTC audio data packet carries 960 samples, and after decoding into PCM data the left- and right-channel samples are stored LR-interleaved in memory. Let the mixed-audio PCM data buffer be buff_out, and let L1R1 and L2R2 be two audio samples stored in LR-interleaved fashion; the mixing formula that synthesizes the two audio samples into one is as follows,
L = β·(L1 + L2), R = β·(R1 + R2)    (1)
where β is the treble suppression coefficient, which can be adjusted as needed to suppress the popping sound produced by the synthesized audio. When performing real-time multi-channel audio mixing, the received audio is first transcoded into PCM data, and then the mixed sample value at each corresponding position of buff_out is computed sequentially with formula (1).
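By way of illustration only, a minimal C sketch of this mixing step follows, assuming 16-bit signed LR-interleaved PCM, 960 samples per packet as stated above, and the summation rule of formula (1); the function name mix_two_channels and the saturation clamp are additions for the sketch, not part of the patented method.

```c
#include <stdint.h>

#define SAMPLES_PER_PACKET 960          /* samples per WebRTC audio packet, as above */

/* Clamp a mixed value back into the 16-bit PCM range to avoid wrap-around. */
static int16_t clamp16(int32_t v)
{
    if (v > INT16_MAX) return INT16_MAX;
    if (v < INT16_MIN) return INT16_MIN;
    return (int16_t)v;
}

/* Mix two LR-interleaved PCM packets into buff_out using the assumed rule of
 * formula (1): out = beta * (in1 + in2), with beta suppressing clipping pops. */
void mix_two_channels(const int16_t *in1, const int16_t *in2,
                      int16_t *buff_out, float beta)
{
    for (int i = 0; i < SAMPLES_PER_PACKET * 2; i++) {   /* *2: L and R interleaved */
        int32_t mixed = (int32_t)(beta * ((int32_t)in1[i] + (int32_t)in2[i]));
        buff_out[i] = clamp16(mixed);
    }
}
```

Choosing β at or just below 0.5 keeps the sum of two full-scale inputs within the 16-bit range, which is one simple way to set the suppression coefficient.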
Step 3: performing real-time multi-channel video mixing: because each live end may select a different video capture resolution (though with the same aspect ratio) during multi-channel live broadcasting, a unified specification must be adopted for the finally output composite video image. Let the height and width of an input video image A be H_in and W_in respectively; on the composite output video image, A is scaled to height H_in_scale and width W_in_scale, with its upper-left corner at position (x, y) on the output image. Video images generally store data in YUV format; to reduce the video encoding and decoding burden, the YUV planar format is used, in which the Y, U and V components are stored in separate matrices.
Step 4: synchronously synthesizing and forwarding multiple WebRTC media streams:
The video WebRTC data packets are drawn as images on the output canvas through steps 1 and 3; the audio WebRTC data packets are preprocessed in step 1, decoded into PCM data, and mixed according to step 2. At this point two sending threads run in the system: the video sending thread encodes the latest image on the output canvas and packages it into WebRTC data packets, and the audio sending thread encodes the mixed data in the audio output buffer and packages it into WebRTC data packets. Coordinated by the global system time, the two sending threads add correct timestamps to the outgoing data to guarantee audio-video synchronization; finally the data are sent to the viewing end.
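To make the timestamp coordination concrete, here is a small C sketch of how both senders might stamp outgoing packets from one shared clock; the 90 kHz video and 48 kHz audio clock rates are the usual RTP conventions for these media, and the helper names are hypothetical.

```c
#include <stdint.h>
#include <time.h>

static uint64_t session_start_ms;   /* origin of the global time axis */

static uint64_t now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
}

void session_start(void) { session_start_ms = now_ms(); }

/* Both send threads stamp outgoing packets from the same clock, so a video
 * frame and the audio mixed for the same instant carry matching times. */
uint32_t video_rtp_timestamp(void)   /* 90 kHz clock: 90 ticks per millisecond */
{
    return (uint32_t)((now_ms() - session_start_ms) * 90u);
}

uint32_t audio_rtp_timestamp(void)   /* 48 kHz clock: 48 ticks per millisecond */
{
    return (uint32_t)((now_ms() - session_start_ms) * 48u);
}
```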
Further, the video data preprocessing in step 1 specifically includes:
step 1.1: parsing the header of WebRTC data packet p, and parsing the data area to obtain video extension information;
step 1.2: comprehensively judging whether the data packet is invalid or not, and discarding an empty data packet;
step 1.3: inserting the data packets into the queue QV in descending order of seq_number;
step 1.4: traversing QV in order, let q be the data packet taken out; if the difference between the timestamp of q and the timestamp of p is greater than θ, discarding all packets of the current timestamp; then checking backwards whether the seq_numbers of the packets sharing q's timestamp are consecutive, and starting packet loss management if they are not;
step 1.5: if all the WebRTC packets of a complete video frame have been received, parsing the video coding header information in the WebRTC packet payload to obtain the height and width of the video image and judging whether they have changed.
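A compact C sketch of the queueing part of steps 1.3 and 1.4 follows; the linked-list queue, the concrete θ value, and the function names are illustrative assumptions, since the patent fixes only the ordering, the discard rule, and the continuity check.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Minimal view of a parsed RTP packet (step 1.1 extracts these fields). */
typedef struct pkt {
    uint16_t seq_number;    /* stream sequence number */
    uint32_t timestamp;     /* generation time of the carried data */
    struct pkt *next;
} pkt_t;

static pkt_t *qv;                       /* queue QV from step 1.3 */
static const uint32_t THETA = 3000;     /* θ: illustrative timestamp threshold */

/* Step 1.3: insert p into QV in descending order of seq_number. */
void qv_insert(pkt_t *p)
{
    pkt_t **cur = &qv;
    while (*cur && (*cur)->seq_number > p->seq_number)
        cur = &(*cur)->next;
    p->next = *cur;
    *cur = p;
}

/* Step 1.4, discard rule: drop every queued packet whose timestamp differs
 * from p's timestamp by more than θ (timestamp wrap-around ignored here). */
void qv_discard_stale(uint32_t p_ts)
{
    pkt_t **cur = &qv;
    while (*cur) {
        uint32_t ts = (*cur)->timestamp;
        uint32_t diff = p_ts > ts ? p_ts - ts : ts - p_ts;
        if (diff > THETA) {
            pkt_t *stale = *cur;
            *cur = stale->next;
            free(stale);
        } else {
            cur = &(*cur)->next;
        }
    }
}

/* Step 1.4, continuity rule: the seq_numbers of the packets sharing timestamp
 * ts must be consecutive; a gap means packet-loss management has to start. */
bool qv_seq_consecutive(uint32_t ts)
{
    const pkt_t *prev = NULL;
    for (const pkt_t *c = qv; c; c = c->next) {
        if (c->timestamp != ts) continue;
        if (prev && (uint16_t)(prev->seq_number - 1) != c->seq_number)
            return false;   /* descending order: seq should drop by exactly 1 */
        prev = c;
    }
    return true;
}
```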
Further, step 3 specifically comprises:
step 3.1: calculating the scaling ratio and scaling the input image to the target size;
step 3.2: for i from line 1 to line H_in_scale, sequentially calculating the destination copy start position Y_pos of the i-th line of the scaled input image's Y component in the target output image, and copying the i-th line of the scaled Y component to the target space starting at Y_pos;
step 3.3: for i from line 1 to line H_in_scale/2, using the halved height and width information, sequentially calculating the destination copy start position U_pos of the i-th line of the scaled input image's U component in the target output image, and copying the i-th line of the scaled U component to the target space starting at U_pos; likewise calculating the destination copy start position V_pos of the i-th line of the scaled V component in the target output image, and copying that line to the target space starting at V_pos.
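A C sketch of this placement follows, assuming YUV 4:2:0 planar frames (hence the halved chroma height and width in step 3.3), input planes already scaled per step 3.1, and even values of x and y; the function name and signature are illustrative.

```c
#include <stdint.h>
#include <string.h>

/* Copy one pre-scaled YUV 4:2:0 planar image into the composite output frame
 * with its upper-left corner at (x, y). Offsets and sizes are assumed even so
 * the halved chroma planes stay aligned. Input planes are already scaled to
 * h_in_scale x w_in_scale (step 3.1). */
void place_scaled_image(uint8_t *out_y, uint8_t *out_u, uint8_t *out_v,
                        int out_w,   /* width of the composite output frame */
                        const uint8_t *in_y, const uint8_t *in_u, const uint8_t *in_v,
                        int h_in_scale, int w_in_scale,
                        int x, int y)
{
    /* Step 3.2: luma plane, full resolution. */
    for (int i = 0; i < h_in_scale; i++) {
        uint8_t *y_pos = out_y + (size_t)(y + i) * out_w + x;  /* dest start of line i */
        memcpy(y_pos, in_y + (size_t)i * w_in_scale, (size_t)w_in_scale);
    }

    /* Step 3.3: chroma planes, height and width halved for 4:2:0. */
    for (int i = 0; i < h_in_scale / 2; i++) {
        uint8_t *u_pos = out_u + (size_t)(y / 2 + i) * (out_w / 2) + x / 2;
        uint8_t *v_pos = out_v + (size_t)(y / 2 + i) * (out_w / 2) + x / 2;
        memcpy(u_pos, in_u + (size_t)i * (w_in_scale / 2), (size_t)(w_in_scale / 2));
        memcpy(v_pos, in_v + (size_t)i * (w_in_scale / 2), (size_t)(w_in_scale / 2));
    }
}
```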
Compared with the prior art, the invention has the following beneficial technical effects:
The invention can synthesize WebRTC protocol streaming media data generated by multiple endpoints in real time, guarantees audio-video synchronization while preserving real-time performance, and solves a practical production problem. Practical use shows that the method can synthesize multiple WebRTC streams in real time and keep audio and video synchronized. In the future the method can be extended to multi-person co-streaming, online chorus and similar applications. Developers of application scenarios that require real-time synthesis of multiple WebRTC media streams can implement it following the steps introduced here without major changes, which saves development cost and eases subsequent upgrading and maintenance.
Drawings
FIG. 1 is a flow diagram of the present invention.
Fig. 2 is a schematic diagram of a multi-path WebRTC streaming media synchronous synthesizing and forwarding method.
Fig. 3 is a flow chart when the present invention is implemented.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
The overall flow of the multi-channel real-time live WebRTC stream synthesis method is shown in fig. 1. The multi-channel WebRTC data packet receiving module in the figure is a multi-threaded module that receives the stream data sent by terminal peers over the WebRTC connections maintained by ICE. Audio and video must be processed separately when WebRTC data packets are received. After a data packet is received it must be preprocessed; during preprocessing, the distinct characteristics of audio data and video data are handled on the basis of statistics about the packets received so far, and under certain conditions packet-loss reordering must be performed. After complete audio and video data packets have been received, the data must be decoded; audio and video data follow their corresponding compression coding specifications (the video coding formats recommended by WebRTC are H.264 and VP8, and the audio coding format is Opus), and the decoded data are written into buffers. Under the constraint of a unified time axis, relative timestamps are recalculated for the decoded data, the multiple streams are combined and given corresponding timestamps, and the combined data are then audio- and video-encoded. Finally, the sending times of audio and video are calculated on the unified time axis and the data are sent at the corresponding moments to achieve audio-video synchronization; the data are then relayed through the WebRTC peer connections maintained by the ICE agent to the receiving terminals, which play the streaming media. The method specifically comprises the following steps:
step 1: preprocessing the received WebRTC data packets: WebRTC adopts the standard RTP format when packaging streaming media data; packet description information is carried in the header of each RTP packet, where seq_number denotes the sequence number of the packet in the stream and timestamp denotes the time at which the data carried in the packet was physically generated; different preprocessing is applied to audio and video data packets.
Step 2: performing real-time multi-channel audio mixing: one WebRTC audio data packet carries 960 samples, and after decoding into PCM data the left- and right-channel samples are stored LR-interleaved in memory. Let the mixed-audio PCM data buffer be buff_out, and let L1R1 and L2R2 be two audio samples stored in LR-interleaved fashion; the mixing formula that synthesizes the two audio samples into one is as follows,
L = β·(L1 + L2), R = β·(R1 + R2)    (1)
where β is the treble suppression coefficient, which can be adjusted as needed to suppress the popping sound produced by the synthesized audio. When performing real-time multi-channel audio mixing, the received audio is first transcoded into PCM data, and then the mixed sample value at each corresponding position of buff_out is computed sequentially with formula (1).
Step 3: performing real-time multi-channel video mixing: because each live end may select a different video capture resolution (though with the same aspect ratio) during multi-channel live broadcasting, a unified specification must be adopted for the finally output composite video image. Let the height and width of an input video image A be H_in and W_in respectively; on the composite output video image, A is scaled to height H_in_scale and width W_in_scale, with its upper-left corner at position (x, y) on the output image. Video images generally store data in YUV format; to reduce the video encoding and decoding burden, the YUV planar format is used, in which the Y, U and V components are stored in separate matrices.
Step 4: synchronously synthesizing and forwarding multiple WebRTC media streams:
The multi-channel WebRTC streaming media synchronous composition and forwarding scheme is shown in fig. 2; each receiving thread is responsible for receiving one audio or video stream. The video WebRTC data packets are drawn as images on the output canvas through steps 1 and 3; the audio WebRTC data packets are preprocessed in step 1, decoded into PCM data, and mixed according to step 2. At this point two sending threads run in the system: the video sending thread encodes the latest image on the output canvas and packages it into WebRTC data packets, and the audio sending thread encodes the mixed data in the audio output buffer and packages it into WebRTC data packets. Coordinated by the global system time, the two sending threads add correct timestamps to the outgoing data to guarantee audio-video synchronization; finally the data are sent to the viewing end.
In order to better implement the present invention, the steps for implementing the present invention are explained in detail with reference to fig. 3 as follows:
1. Build a WebRTC Gateway and connect the live-streaming clients to it, so that the multimedia streams pushed by multiple publishing terminals are received centrally; the WebRTC Gateway can be a secondary development on top of an open-source solution such as Janus.
2. Write a WebRTC stream preprocessing entry function process(), and then implement the preprocessing functions videoprocess() and audioprocess() for the video stream and the audio stream respectively.
in the video processing (), firstly, a header of a received WebRTC packet is analyzed to obtain a sequence number seq _ number and a timestamp, and then whether the received WebRTC packet is a valid data packet is judged according to data load, and if the received WebRTC packet is not a valid data packet, the received WebRTC packet is discarded; putting the received effective packets into a queue according to the sequence of seq _ number; and then judging whether packets are received or not in the same time frame according to the continuity of the seq _ number, if the packets of a certain time frame are completely received, performing video decoding to obtain the basic information of the video and judging whether the basic information of the video and the previous frame are changed or not.
In audioprocess(), first parse the header of the received WebRTC packet to obtain the sequence number seq_number and the timestamp, then judge from the data payload whether it is a valid data packet, discarding it if not; put the received valid packets into a queue ordered by seq_number; take the data packet at the head of the queue, decode it with Opus to obtain the basic information of the audio data, and judge whether it has changed relative to the previous data packet.
3. Implement an audio decoding function opus_decode() that decodes Opus-encoded audio data into PCM data, and implement opus_multi_encode() to synthesize PCM data from multiple sources according to the method described in step 2 of the summary above.
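For the decoding half, a sketch against the libopus API (which an opus_decode() wrapper like the one above would presumably call) might look as follows; the 48 kHz stereo configuration and the wrapper name decode_opus_packet are assumptions matching WebRTC's usual Opus usage.

```c
#include <opus/opus.h>
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_RATE   48000
#define CHANNELS      2
#define FRAME_SAMPLES 960   /* 20 ms at 48 kHz: the 960 samples per packet noted above */

/* Decode one Opus payload into LR-interleaved 16-bit PCM.
 * Returns samples decoded per channel, or a negative libopus error code. */
int decode_opus_packet(OpusDecoder *dec, const uint8_t *payload, int len,
                       opus_int16 pcm[FRAME_SAMPLES * CHANNELS])
{
    return opus_decode(dec, payload, len, pcm, FRAME_SAMPLES, 0 /* no FEC */);
}

/* Typical setup: one decoder per incoming audio stream. */
OpusDecoder *make_decoder(void)
{
    int err = 0;
    OpusDecoder *dec = opus_decoder_create(SAMPLE_RATE, CHANNELS, &err);
    if (err != OPUS_OK) {
        fprintf(stderr, "opus_decoder_create: %s\n", opus_strerror(err));
        return NULL;
    }
    return dec;
}
```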
4. Implement a video decoding function h264_decode() that decodes H.264-encoded video data into YUV data, and implement h264_multi_encode() to synthesize H.264 data from multiple sources according to the method in step 3.
5. Implement a thread function video_send_thread() that, at a fixed frame-rate interval, calls the h264_multi_encode() function from step 4 to compress and encode the composite picture, packages the encoded data into packets according to the WebRTC video specification, and sends them to viewers connected to the WebRTC Gateway. Implement a thread function audio_send_thread() that, at a fixed frame-rate interval, calls opus_multi_encode() from step 3 to compress and encode the mixed audio, packages the encoded data according to the WebRTC audio specification, and sends it to viewers connected to the WebRTC Gateway. Note that audio_send_thread() must coordinate with the total length of time video_send_thread() has been sending: if the lead or lag exceeds a threshold, the thread must sleep for a short time.
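An illustrative C sketch of this pacing rule follows; the pthread-style function shape, the 20 ms Opus frame interval, and the drift threshold are assumptions, and the encode-and-send calls are elided as comments.

```c
#include <stdint.h>
#include <unistd.h>

#define AUDIO_FRAME_MS      20   /* one Opus frame per iteration (typical WebRTC framing) */
#define DRIFT_THRESHOLD_MS  40   /* illustrative lead/lag tolerance */

extern volatile uint64_t video_sent_ms;  /* media time covered by video_send_thread() */
static volatile uint64_t audio_sent_ms;  /* media time covered by this thread */

/* Skeleton of audio_send_thread(): encode and send one mixed audio frame per
 * interval, sleeping briefly whenever audio runs ahead of video beyond the
 * threshold, so the two senders stay coordinated as required in step 5. */
void *audio_send_thread(void *arg)
{
    (void)arg;
    for (;;) {
        /* opus_multi_encode(...); package per the WebRTC audio spec; send. */
        audio_sent_ms += AUDIO_FRAME_MS;

        int64_t lead = (int64_t)audio_sent_ms - (int64_t)video_sent_ms;
        if (lead > DRIFT_THRESHOLD_MS)
            usleep((useconds_t)lead * 1000);      /* audio leads: back off briefly */
        else
            usleep(AUDIO_FRAME_MS * 1000);        /* normal fixed-interval pacing */
    }
    return NULL;
}
```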
6. In the WebRTC Gateway implemented in step 1, start the threads implemented in step 5 and keep them running.

Claims (3)

1. A multi-channel real-time live webRTC stream synthesis method is characterized by comprising the following steps:
step 1: preprocessing the received WebRTC data packets: WebRTC adopts the standard RTP format when packaging streaming media data; packet description information is carried in the header of each RTP packet, where seq_number denotes the sequence number of the packet in the stream and timestamp denotes the time at which the data carried in the packet was physically generated; different preprocessing is applied to audio and video data packets;
step 2: performing real-time multi-channel audio mixing: setting the mixed-audio PCM data buffer as buff_out, and setting two audio samples stored in LR-interleaved fashion as L1R1 and L2R2, the mixing formula that synthesizes the two audio samples into one audio sample is as follows,
L = β·(L1 + L2), R = β·(R1 + R2)    (1)
in the formula, L1, R1 respectively represent the volume values of the left and right channels of path 1, L2, R2 respectively represent the volume values of the left and right channels of path 2, and β is the treble suppression coefficient; when performing real-time multi-channel audio mixing, the received audio is directly transcoded into PCM data, and then the mixed sample value at each corresponding position of buff_out is calculated sequentially with formula (1);
and step 3: performing real-time multi-channel video mixing: setting the height and width of an input video image A as H_in and W_in respectively; on the composite output video image, A becomes height H_in_scale and width W_in_scale, and the position of its upper-left corner on the output image is (x, y); in order to reduce the video encoding and decoding burden, the YUV planar format is adopted, in which the Y, U and V components are stored in separate matrices;
and step 4: synchronously synthesizing and forwarding multiple WebRTC media streams:
the video WebRTC data packets are drawn as images on the output canvas through steps 1 and 3; the audio WebRTC data packets are preprocessed in step 1, decoded into PCM data, and mixed according to step 2; at this point two sending threads exist in the system: the video sending thread is responsible for encoding the latest image on the output canvas and packaging it into WebRTC data packets, and the audio sending thread is responsible for encoding the mixed data in the audio output buffer and packaging it into WebRTC data packets; the two sending threads add correct timestamps to the sent data under the coordination of the global system time to guarantee audio-video synchronization; finally the data are sent to the viewing end.
2. The method for synthesizing the multi-channel real-time live WebRTC stream according to claim 1, wherein the preprocessing of the video data in the step 1 specifically comprises:
step 1.1: parsing the header of WebRTC data packet p and parsing the data area to obtain video extension information;
step 1.2: comprehensively judging whether the data packet is invalid or not, and discarding an empty data packet;
step 1.3: inserting the data packets into the queue QV in descending order of seq_number;
step 1.4: traversing QV in order, let q be the data packet taken out; if the difference between q's timestamp and p's timestamp is greater than θ, where θ denotes a threshold on the timestamp difference, discarding all packets of the current timestamp; checking backwards whether the seq_numbers of the packets sharing q's timestamp are consecutive, and starting packet loss management if they are not;
step 1.5: if all the WebRTC packets of a complete video frame have been received, parsing the video coding header information in the WebRTC packet payload to obtain the height and width of the video image and judging whether they have changed.
3. The method for synthesizing the multi-channel real-time live WebRTC stream according to claim 1, wherein the step 3 specifically comprises:
step 3.1: calculating the scaling ratio and scaling the input image to the target size;
step 3.2: for i from line 1 to line H_in_scale, sequentially calculating the destination copy start position Y_pos of the line where the i-th line of the scaled input image's Y component lies in the target output image, and copying the i-th line of the scaled Y component to the target space starting at Y_pos;
step 3.3: for i from line 1 to line H_in_scale/2, calculated with the halved height and width information, sequentially calculating the destination copy start position U_pos of the line where the i-th line of the scaled input image's U component lies in the target output image, and copying the i-th line of the scaled U component to the target space starting at U_pos; likewise calculating the destination copy start position V_pos of the i-th line of the scaled V component in the target output image, and copying that line to the target space starting at V_pos.
CN201910962940.1A 2019-10-11 2019-10-11 Multi-path real-time live webRTC stream synthesis method Active CN110602522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910962940.1A CN110602522B (en) 2019-10-11 2019-10-11 Multi-path real-time live webRTC stream synthesis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910962940.1A CN110602522B (en) 2019-10-11 2019-10-11 Multi-path real-time live webRTC stream synthesis method

Publications (2)

Publication Number Publication Date
CN110602522A CN110602522A (en) 2019-12-20
CN110602522B true CN110602522B (en) 2021-08-03

Family

ID=68866428

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910962940.1A Active CN110602522B (en) 2019-10-11 2019-10-11 Multi-path real-time live webRTC stream synthesis method

Country Status (1)

Country Link
CN (1) CN110602522B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111277885B (en) * 2020-03-09 2023-01-10 北京世纪好未来教育科技有限公司 Audio and video synchronization method and device, server and computer readable storage medium
CN112533075A (en) * 2020-11-24 2021-03-19 湖南傲英创视信息科技有限公司 Video processing method, device and system
CN113473162B (en) * 2021-04-06 2023-11-03 北京沃东天骏信息技术有限公司 Media stream playing method, device, equipment and computer storage medium
CN113259762B (en) * 2021-04-07 2022-10-04 广州虎牙科技有限公司 Audio processing method and device, electronic equipment and computer readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008083293A2 (en) * 2006-12-29 2008-07-10 Glowpoint, Inc. Video call distrubutor
CN105430537A (en) * 2015-11-27 2016-03-23 刘军 Method and server for synthesis of multiple paths of data, and music teaching system
CN106921866A (en) * 2017-05-03 2017-07-04 广州华多网络科技有限公司 The live many video guide's methods and apparatus of auxiliary
EP3334118A1 (en) * 2016-12-09 2018-06-13 Sap Se Attack protection for webrtc providers
CN108322514A (en) * 2018-01-09 2018-07-24 安徽小马创意科技股份有限公司 The research and development method of the self-defined hybrid technology of multi-path audio-frequency data based on WebRTC

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109547844A (en) * 2018-12-19 2019-03-29 网宿科技股份有限公司 Audio/video pushing method and plug-flow client based on WebRTC agreement


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video interactive live streaming system based on WebRTC and PWA; 彭永超, 费璟昊; Computer Programming Skills & Maintenance (《电脑编程技巧与维护》); 2017-01-18; full text *

Also Published As

Publication number Publication date
CN110602522A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110602522B (en) Multi-path real-time live webRTC stream synthesis method
CN107846633B (en) Live broadcast method and system
US9426423B2 (en) Method and system for synchronizing audio and video streams in media relay conferencing
TWI414183B (en) Information processing apparatus and method and non-temporary computer-readable recording medium
JP6317872B2 (en) Decoder for synchronizing the rendering of content received over different networks and method therefor
CN109361945A (en) The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
CA2737728A1 (en) Low latency video encoder
SG183571A1 (en) Movie file download device and method
CA2795694A1 (en) Video content distribution
CN105900445A (en) Robust live operation of DASH
US20230319371A1 (en) Distribution of Multiple Signals of Video Content Independently over a Network
CN101489091A (en) Audio signal transmission processing method and apparatus
CN108494792A (en) A kind of flash player plays the converting system and its working method of hls video flowings
CN103108186A (en) Method of achieving high-definition transmission of videos
CN111901630A (en) Data transmission method, device, terminal equipment and storage medium
Tang et al. Audio and video mixing method to enhance WebRTC
CN1996813B (en) Self-adapted media transfer management of the continuous media stream used for LAN/WAN environment
CN110300338A (en) A method of it is switched fast and plays group broadcasting video frequency
CN108449650A (en) A kind of RTMP live streaming flows to HTTP FLV live TV streams real time conversion systems and its working method
CN115209163A (en) Data processing method, data processing device, storage medium and electronic equipment
CN114422810A (en) Multipath live broadcast synchronous calibration method based on mobile terminal director station
CN114339316A (en) Video stream coding processing method based on live video
CN116847128B (en) Video superposition processing method based on 5G VoLTE video teleconference
Johanson Designing an environment for distributed real-time collaboration
CN115209230A (en) Method for realizing real-time video transmission based on RTMP protocol

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant