CN114640818A - Video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114640818A
Authority
CN
China
Prior art keywords
stream
sub
target
value
production end
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210209872.3A
Other languages
Chinese (zh)
Inventor
李科阳
李庆波
王孝庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202210209872.3A priority Critical patent/CN114640818A/en
Publication of CN114640818A publication Critical patent/CN114640818A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/14 Systems for two-way working
    • H04N7/15 Conference systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a video processing method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring stream publishing signaling information sent by a production end; acquiring stream subscription signaling information sent by a plurality of consumption ends according to the stream publishing signaling information; determining a target sub-stream value set of the plurality of consumption ends based on the stream publishing signaling information and the stream subscription signaling information; sending the target sub-stream value set to the production end, so that the production end encodes a sub-stream set based on the target sub-stream value set; and sending the sub-stream set uploaded by the production end to the plurality of consumption ends. By collecting the demands of the consumption ends, the method controls the production end to push streams on demand and dynamically adjusts the video streams that the production end needs to publish, thereby saving bandwidth resources of the production end, reducing power consumption, and improving the processing efficiency of the video streams.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
Multicast Simulcast is a technical scheme for video conferencing. Its core flow is that a production end encodes the collected video frames multiple times to generate multiple video streams with different resolutions and qualities and pushes them to a server, and the server selects a suitable video stream to forward to each consumption end according to the consumption end's network bandwidth, user requirements, and the like. In the related art, the production end publishes a fixed plurality of video streams regardless of whether anyone subscribes to them.
Disclosure of Invention
The application provides a video processing method, a video processing device, electronic equipment and a storage medium, so as to improve the processing efficiency of streaming media. The technical scheme of the application is as follows:
In a first aspect, an embodiment of the present application provides a video processing method, including:
acquiring stream publishing signaling information sent by a production end;
acquiring stream subscription signaling information sent by a plurality of consumption ends according to the stream publishing signaling information;
determining a target sub-stream value set of the plurality of consumption ends based on the stream publishing signaling information and the stream subscription signaling information;
sending the target sub-stream value set to the production end, so that the production end encodes a sub-stream set based on the target sub-stream value set;
and sending the sub-stream set uploaded by the production end to the plurality of consumption ends.
In a second aspect, an embodiment of the present application provides a video processing apparatus, including:
the production terminal acquisition module is used for acquiring stream issuing signaling information sent by the production terminal;
a consuming end obtaining module, configured to obtain stream subscription signaling information sent by multiple consuming ends according to the stream publishing signaling information;
a demand aggregation module, configured to determine a target substream value set of the multiple consuming terminals based on the stream publishing signaling information and the stream subscription signaling information;
the production control module is used for sending the target sub-stream value set to the production end, and enabling the production end to encode the sub-stream set based on the target sub-stream value set;
and the stream sending module is used for sending the sub-stream set uploaded by the production end to the plurality of consumption ends.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the video processing method according to the embodiment of the first aspect of the present application.
In a fourth aspect, embodiments of the present application provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the video processing method described in the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, which includes computer instructions, and when executed by a processor, the computer instructions implement the steps of the video processing method described in the first aspect of the present application.
The technical scheme provided by the embodiment of the application at least has the following beneficial effects:
the demand of the consumption end can be collected, and then the production end is controlled to push flow according to the demand.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application and are not to be construed as limiting the application.
Fig. 1 is an architecture diagram of the multicast Simulcast mode under the SFU architecture.
Fig. 2 is a flow diagram illustrating a video processing method according to an example embodiment.
Fig. 3 is a block diagram illustrating an extended RTCP message according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a video processing method according to another exemplary embodiment.
Fig. 5 is an architecture diagram illustrating a system employing a video processing method in accordance with an exemplary embodiment.
Fig. 6 is a flowchart illustrating a video processing method according to yet another exemplary embodiment.
Fig. 7 is a flowchart illustrating a video processing method according to yet another exemplary embodiment.
Fig. 8 is a block diagram illustrating a video processing apparatus according to an example embodiment.
Fig. 9 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment.
Fig. 10 is a block diagram illustrating an electronic device according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present application better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
An SFU (Selective Forwarding Unit) is an audio and video conference system architecture in which the SFU server is mainly responsible for collecting and forwarding video streams and does not encode or decode the multimedia data. The SFU acts like a video stream router: it receives the audio and video streams of each terminal and forwards them to other terminals as required. The SFU is widely used in audio and video conferences; in particular, since WebRTC (Web Real-Time Communication) became popular, media servers supporting WebRTC multiparty communication have basically adopted the SFU structure. A WebRTC client encodes the same video stream multiple times using different resolutions and bit rates and sends the results to the SFU, which then decides which terminal receives which video stream.
RTCP (RTP Control Protocol) is the sister protocol of RTP (Real-time Transport Protocol). RTCP and RTP work together: RTP carries the actual media data, while RTCP carries various control signaling and is mainly used to feed back the transmission quality of RTP.
Multicast Simulcast is a technical scheme for video conferencing. As shown in fig. 1, its core process is that a production end encodes the acquired video frames multiple times to generate multiple video streams with different resolutions and qualities and pushes them to an SFU server, and the SFU server selects a suitable video stream to forward to each consumption end according to the consumption end's network bandwidth, user requirements, and the like.
It is understood that the Simulcast mode means that a sharer of video can send multiple video streams of different resolutions (e.g., 1080P, 720P, 360P) to the SFU server simultaneously. The SFU server may then select a suitable one of the received streams to send to each terminal according to that terminal's specific situation. For example, if a computer's network is particularly good, the 1080P video is sent to the computer; if a mobile phone's network is poor, the 360P video is sent to the mobile phone.
It should be noted that, in the embodiment of the present application, the producing side refers to a publishing side of the video stream, and the consuming side refers to a receiving side of the video stream. As an example, each of the 4 terminals connected to the SFU server and participating in the conference may send the audio/video stream to be shared to the SFU server, and may obtain the video stream shared by the terminals used by the other 3 participants from the SFU server. That is to say, the terminal used by the participant is the production end when being used for sharing the video stream, and the terminal used by the participant is the consumption end when being used for acquiring the video streams of other participants.
A production end in the related art publishes a fixed plurality of video streams regardless of whether anyone subscribes to them. If a video stream has no subscribers, this wastes bandwidth and increases power consumption at the production end.
In view of the above problems, embodiments of the present application provide a video processing method and apparatus, an electronic device, and a storage medium, which optimize an existing Simulcast scheme and dynamically adjust a video stream to be issued by a production end according to requirements of a consumption end, thereby saving bandwidth resources of the production end, reducing power loss, and improving processing efficiency of the video stream.
Obtaining video streams on demand may be understood as, for example, if the consumer only subscribes to a 360p stream, then the producer only sends a 360p stream, and if the consumer does not subscribe to any stream, then the producer does not send any stream.
Here, 360P is a video display format; the letter P stands for progressive scan, and the number 360 is the vertical resolution, that is, there are 360 horizontal scan lines in the vertical direction. A 360P frame may have an aspect ratio of 4:3 (480x360) or 15:9 (600x360).
Fig. 2 is a flow diagram of a video processing method according to one embodiment of the present application. It should be noted that the video processing method according to the embodiment of the present application can be applied to the video processing apparatus according to the embodiment of the present application. The video processing apparatus may be configured on an electronic device. As shown in fig. 2, the video processing method may include the following steps.
In step S201, stream distribution signaling information sent by the production end is acquired.
In this embodiment, an SFU server (hereinafter referred to as a server) obtains stream distribution signaling information sent by a producer, that is, the producer distributes a stream to the server through signaling, but does not send specific stream data.
It can be understood that the production end needs to first send its publishing information to the server end, so that the consumption ends connected to the server end can specify which production end's push stream they subscribe to; therefore the production end first sends information including, for example, its own ID to the server end. At this time, the production end only informs the server end, through signaling, of the video conference room it has joined; no stream data is transmitted yet.
Also, the stream publishing signaling information includes the sub-stream categories provided by the production end, that is, the production end notifies the server end of the sub-stream categories it can provide, such as 1080P, 720P, 360P, and so on. Although the production end does not yet upload specific stream data, it collects audio and video data in real time; data collection at the production end proceeds normally and is not affected by the server end. The production end may collect video through a camera or by recording the screen.
It should be noted that the production end performs stream publishing as required; for example, the stream publishing signaling information is sent as long as the camera is turned on.
Optionally, the signaling connection may use WebSocket or HTTP (HyperText Transfer Protocol). WebSocket is a protocol for full-duplex communication over a single TCP (Transmission Control Protocol) connection.
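As an illustration only, the stream publishing signaling exchanged over such a WebSocket connection might look like the following sketch; the patent does not define a concrete wire format, and the field names (type, producerId, room, substreams) are assumptions made for this example.

```python
# Illustrative sketch only: the patent does not define a concrete wire format.
# Field names (type, producerId, room, substreams) are assumptions.
import json

publish_signaling = {
    "type": "publish",          # the producer announces a stream; no media data yet
    "producerId": "cam-front",  # each device may expose several publishing IDs
    "room": "conference-42",    # the video conference room the producer has joined
    "substreams": ["1080P", "720P", "360P"],  # sub-stream categories it can encode
}

# Serialized and sent to the SFU server over an established WebSocket connection.
message = json.dumps(publish_signaling)
```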
In step S202, according to the stream publishing signaling information, stream subscription signaling information sent by a plurality of consumers is obtained.
In this embodiment, the server acquires, according to the stream publishing signaling information, stream subscription signaling information sent by the multiple consuming terminals, where the stream subscription signaling information includes an expected sub-stream value, and the expected sub-stream value is a parameter of a sub-stream that the consuming terminal desires to acquire, for example, 720P.
The consumption end subscribes the stream from the server end through signaling, namely the consumption end informs the server end to send the audio and video stream to the consumption end.
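A corresponding sketch of the stream subscription signaling a consumption end might send; the field names are again hypothetical, and the expected sub-stream value is assumed to be carried as a plain label.

```python
# Hypothetical subscription message from a consumption end; field names are assumptions.
import json

subscribe_signaling = {
    "type": "subscribe",
    "producerId": "cam-front",    # which publishing end's stream to subscribe to
    "expectedSubstream": "720P",  # the sub-stream value the consumption end expects
}
message = json.dumps(subscribe_signaling)  # sent to the SFU server over the signaling channel
```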
In step S203, a set of target substream values of the multiple consumers is determined based on the stream publishing signaling information and the stream subscribing signaling information.
In this embodiment, the server determines a set of target sub-stream values of the multiple consumers based on the stream publishing signaling information and the stream subscription signaling information.
It can be understood that, because the core of the video processing method of the present application is on-demand production, the server needs to first obtain the sub-stream value required by the consumer, send the sub-stream value required to the producer for processing, and then send the streaming data uploaded by the producer to the consumer.
In this embodiment, the substream values of the demands of all consumers are collected into a target substream value set, and the demands are sent to the production end for processing in the form of the target substream value set.
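A minimal sketch of this aggregation, assuming each consumption end's demand has already been reduced to a single target sub-stream value; the consumer-to-value mapping below is purely illustrative.

```python
# Sketch: collect the per-consumer target sub-stream values into one set.
# The concrete consumers and values are illustrative only.
consumer_targets = {
    "consumer-1": "360P",
    "consumer-2": "720P",
    "consumer-3": "720P",
}

target_substream_value_set = set(consumer_targets.values())
# -> {"360P", "720P"}: only these sub-streams need to be produced and encoded
```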
In step S204, the target substream value set is sent to the production end, and the production end is made to encode the substream set based on the target substream value set.
In this embodiment, the server end sends the target sub-stream value set to the production end, and the production end configures encoders according to the target sub-stream value set and encodes the corresponding sub-streams to obtain the sub-stream set.
It should be noted that the sub-stream set in the embodiment of the present application includes one or more sub-streams.
Optionally, the server end sends the target sub-stream value set to the production end through an extended RTCP message. The message structure of the extended RTCP message is shown in fig. 3, and the meaning of each field is as follows:
V (version): fixed as 2;
P (padding): whether padding bytes are appended at the end; fixed as 0;
RC (report count): the number of report blocks; for example, RC in fig. 3 is 2;
PT (payload type): the RTCP packet type, fixed as 220;
Length: the total length of the RTCP packet;
Producer ID: the ID of the publishing end; each device may have several published streams, for example the front camera and the rear camera have different IDs;
Sub-stream levels to be pushed: divided into 3 levels, LOW, MIDDLE, and HIGH, where LOW is 1, MIDDLE is 2, and HIGH is 4. If multiple levels are to be pushed, a bitwise OR operation is performed, for example: pushing LOW and MIDDLE at the same time gives 3, while pushing LOW and HIGH gives 5. The 3 levels correspond to 3 sub-streams. The levels are only given as an example; 4 or 5 levels are also possible, which is not limited herein.
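The sketch below shows how such an extended RTCP message could be packed, using the fixed values above (V=2, P=0, PT=220) and the bitwise-OR level flags; the field widths beyond the standard RTCP header (a 32-bit producer ID and a 32-bit flag word per report block) are assumptions here, since the exact layout is defined by fig. 3.

```python
# Sketch of packing the extended RTCP message described above.
# The 32-bit producer ID and 32-bit flag word per report block are assumptions;
# fig. 3 defines the actual layout.
import struct

LOW, MIDDLE, HIGH = 1, 2, 4   # sub-stream level flags, combined with bitwise OR

def pack_push_request(report_blocks):
    """report_blocks: list of (producer_id, level_flags) tuples."""
    version, padding, pt = 2, 0, 220
    rc = len(report_blocks)
    body = b"".join(struct.pack("!II", pid, flags) for pid, flags in report_blocks)
    # length: following the standard RTCP convention (packet length in 32-bit
    # words minus one); this convention is an assumption here.
    length = (4 + len(body)) // 4 - 1
    header = struct.pack("!BBH", (version << 6) | (padding << 5) | rc, pt, length)
    return header + body

# Ask producer 1001 for LOW and MIDDLE (1 | 2 = 3) and producer 1002 for HIGH.
packet = pack_push_request([(1001, LOW | MIDDLE), (1002, HIGH)])
```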
Different publishing ends are distinguished by the publishing end ID, and a consumption end can specify the publishing end ID when sending its subscription signaling. In the overall processing flow, the production end first needs to publish the publishing end ID to the server end through signaling; the consumption end can then obtain the publishing end ID and specify that it subscribes to the stream published under that publishing end ID.
For example, after receiving the extended RTCP message sent by the server end, the production end sets up an encoder for each corresponding push stream: if LOW needs to be pushed, it encodes a low-quality sub-stream; if MIDDLE needs to be pushed, it encodes a medium-quality sub-stream; if HIGH needs to be pushed, it encodes a high-quality sub-stream. For example, HIGH, MIDDLE, and LOW correspond to 1080P, 720P, and 360P streams, respectively.
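On the production end, reacting to the received level flags might look like the sketch below; the level-to-resolution mapping follows the example above, and start_encoder/stop_encoder are placeholder callbacks rather than a real encoder API.

```python
# Sketch: the production end reads the level flags and (re)configures encoders.
# start_encoder / stop_encoder are placeholders for whatever encoder API is used.
LOW, MIDDLE, HIGH = 1, 2, 4
LEVEL_TO_RESOLUTION = {LOW: "360P", MIDDLE: "720P", HIGH: "1080P"}  # example mapping

def apply_push_request(level_flags, start_encoder, stop_encoder):
    for level, resolution in LEVEL_TO_RESOLUTION.items():
        if level_flags & level:
            start_encoder(resolution)   # encode and push this sub-stream
        else:
            stop_encoder(resolution)    # stop encoding it to save bandwidth and power

# Example: flags value 3 means push LOW and MIDDLE only.
apply_push_request(3, start_encoder=print, stop_encoder=lambda r: None)
```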
As an example of a scenario, when the production side is a WebRTC client, multiple encoding is performed on the same video stream by using different encoders to obtain multiple different sub-streams, and the multiple different sub-streams are sent to the SFU server.
In step S205, the sub-stream set uploaded by the production side is transmitted to a plurality of consumption sides.
According to the video processing method of the embodiment of the application, the target sub-stream value sets corresponding to the total requirements of a plurality of consumption ends are collected according to the stream publishing signaling information published by the production end and the stream subscription signaling information published by the consumption ends, then the target sub-stream value sets are sent to the production end, the production end is enabled to produce video streams according to the requirements of the consumption ends, and the produced video streams are sent to the consumption ends. By collecting the demands of the consumption end, the production end is controlled to push the stream as required, and the video stream required to be issued by the production end is dynamically adjusted, so that the bandwidth resource of the production end is saved, the power consumption loss is reduced, and the processing efficiency of the video stream is improved.
Fig. 4 is a flow diagram of a video processing method according to another embodiment of the present application. As shown in fig. 4, the video processing method may include the following steps, and the present embodiment is described with reference to fig. 5.
In step S401, stream distribution signaling information sent by the production end is acquired.
In step S402, stream subscription signaling information sent by a plurality of consumers is acquired.
In step S403, the bandwidth of the current consumption end among the multiple consumption ends is estimated, and an optimal sub-stream value within the bandwidth range is determined based on the sub-stream categories in the stream publishing signaling information.
It will be appreciated that the higher the quality of a video stream, the greater its bandwidth requirement. For example, transmitting a 1080P video stream requires a bandwidth of several megabits per second, while a bandwidth of about 100K may already satisfy a 480P video stream. Bandwidth estimation at the consumption end mainly judges whether the consumption end can support transmission of a high-quality sub-stream; if not, a lower-quality video stream is sent to that consumption end.
Taking a conferencing app such as Tencent Meeting as an example, if the terminal displays the video in a small window, the required video stream quality is low; if the small window is switched to a large window, the required quality becomes high.
Optionally, the bandwidth of the consuming end is estimated through a congestion control algorithm, and the algorithm for estimating the bandwidth of the consuming end is not limited herein, and other algorithms may also be selected.
Optionally, when the bandwidth range is not enough to transmit the sub-stream with the lowest quality in the sub-stream class, the sub-stream with the lowest quality in the sub-stream class is used as the optimal sub-stream value.
That is, when the bandwidth of the consumption end is so small that none of the sub-streams published by the production end satisfies the transmission condition, the lowest-quality sub-stream that the production end can provide is selected as the optimal sub-stream value.
In step S404, the smaller of the optimal sub-stream value and the expected sub-stream value of the current consumption end is used as the target sub-stream value of the current consumption end.
In this embodiment, the server end takes the smaller of the optimal sub-stream value and the expected sub-stream value of the current consumption end as the target sub-stream value of the current consumption end.
That is, the server compares the sub-stream desired by the consumer with the sub-stream suitable for bandwidth transmission, and takes the smaller sub-stream as the target sub-stream value of the consumer.
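A sketch combining steps S403 and S404 for a single consumption end: pick the highest-quality sub-stream that fits the estimated bandwidth (falling back to the lowest-quality category when nothing fits, as in step S403), then take the smaller of that value and the expected sub-stream value. The per-category bitrate figures are illustrative assumptions, not values from the patent.

```python
# Sketch: choose a target sub-stream value for one consumption end.
# The bitrate requirements per category and the estimated bandwidth are
# illustrative assumptions; a real system would use its congestion controller.
SUBSTREAM_BITRATE_KBPS = {"360P": 400, "720P": 1500, "1080P": 3000}  # assumed values
QUALITY_ORDER = ["360P", "720P", "1080P"]  # lowest to highest quality

def optimal_substream(categories, estimated_bandwidth_kbps):
    """Highest-quality sub-stream that fits the bandwidth; falls back to the
    lowest-quality category if even that does not fit (step S403)."""
    affordable = [c for c in QUALITY_ORDER
                  if c in categories and SUBSTREAM_BITRATE_KBPS[c] <= estimated_bandwidth_kbps]
    if affordable:
        return affordable[-1]
    return min(categories, key=QUALITY_ORDER.index)   # lowest quality offered

def target_substream(categories, estimated_bandwidth_kbps, expected):
    optimal = optimal_substream(categories, estimated_bandwidth_kbps)
    # step S404: take the smaller (lower-quality) of the optimal and expected values
    return min(optimal, expected, key=QUALITY_ORDER.index)

# Example: producer offers all three, consumer expects 1080P but only has ~2 Mbps.
print(target_substream(["360P", "720P", "1080P"], 2000, "1080P"))  # -> 720P
```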
In step S405, a target substream value set is obtained by aggregating target substream values of each of the plurality of consumers.
In this embodiment, the server collects target sub-stream values of each of the plurality of consumers to obtain a target sub-stream value set.
It can be understood that the server collects the demands of all consumers, and then sends the demands to the production end for production.
In step S406, the target sub-stream value set is sent to the production end, so that the production end encodes the sub-stream set based on the target sub-stream value set.
In step S407, the sub-stream set uploaded by the production side is transmitted to a plurality of consumption sides.
It should be noted that, in this embodiment, the implementation processes of steps S401 to S402 and steps S406 to S407 may refer to the descriptions of the implementation processes of steps S201 to S202 and S204 to S205, respectively, and are not described herein again.
In the video processing method of the embodiment of the application, the SFU server collects the target sub-stream value sets corresponding to the total demand of the plurality of consumption terminals according to the stream publishing signaling information published by the production terminal and the stream subscribing signaling information sent by the consumption terminal, and then sends the target sub-stream value sets to the production terminal, so that the production terminal produces the video stream according to the demand of the consumption terminal, and sends the produced video stream to the consumption terminal. The SFU server side collects the requirements of the consumption side, and then controls the production side to push the stream as required, and the video stream required to be released by the production side is dynamically adjusted, so that the bandwidth resource of the production side is saved, the power consumption loss is reduced, and the processing efficiency of the video stream is improved. When the SFU server side collects the target sub-stream value sets of a plurality of consumption sides, the expected demand and the actual bandwidth transmission capacity of each consumption side are comprehensively considered, the target sub-stream value of each consumption side is comprehensively evaluated, and therefore the more suitable video stream is finally obtained and sent to the consumption side.
Fig. 6 is a flow diagram of a video processing method according to another embodiment of the present application. As shown in fig. 6, the video processing method may include the following steps.
In step S601, stream issuing signaling information sent by the production end is obtained.
In step S602, stream subscription signaling information sent by a plurality of consumers is acquired according to the stream publishing signaling information.
In step S603, a target substream value set of the plurality of consumers is determined based on the stream publishing signaling information and the stream subscribing signaling information.
In step S604, it is detected whether the currently uploaded sub-stream set of the production end is consistent with the target sub-stream value set.
In this embodiment, the server end periodically detects whether the sub-stream set currently uploaded by the production end is consistent with the target sub-stream value set. As an example, the timing detection is performed once per second.
It can be understood that, when the production end acquires a plurality of corresponding sub-streams according to the target sub-stream value set sent by the service end and pushes the sub-streams to the service end, there may be a problem in the network or the bandwidth of the production end is insufficient, resulting in some sub-streams not being transmitted to the service end. Or, the RTCP message sent by the server may be lost, so that the production end does not receive the RTCP message sent by the server. All of the above situations may cause the sub-stream set currently uploaded by the production end to be inconsistent with the target sub-stream value set. There are other possible situations that may cause the currently uploaded sub-stream set at the production end to be inconsistent with the target sub-stream value set, and will not be described one by one here.
To handle such situations, the server end adds a timing detection mechanism to check whether the production end has uploaded the sub-streams corresponding to the target sub-stream value set as required. If it is detected that the production end has not uploaded the sub-stream set required by the consumption ends, the server end sends the target sub-stream value set required by the consumption ends to the production end again, and the production end obtains the required sub-streams according to it. Through this timing detection, it can be ensured that the server end receives the corresponding sub-streams obtained by the production end according to the target sub-stream value set sent by the server end.
When the consumption ends change, for example, when a certain consumption end switches a small window to a large window during a video conference, or a new participant joins in the middle of the conference, the server end obtains a new target sub-stream value set according to the change. Another function of the timing detection is to discover changes of the server end's target sub-stream value set in time and to send the new target sub-stream value set to the production end promptly, so as to obtain the corresponding new sub-stream set. Moreover, if the consumption ends changed very frequently and a new target sub-stream value set had to be sent for every change, the performance of the server end would suffer; the timing detection caps the maximum frequency at which the target sub-stream value set is sent, so that the load on the server end stays within a certain range.
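A sketch of the timed consistency check, assuming the 1-second interval mentioned above; get_uploaded_set, get_target_set and send_target_set are placeholder callables standing in for the server end's internal state and the extended RTCP transmission.

```python
# Sketch of the server end's periodic consistency check (1-second interval as in
# the example above). The three callables are placeholders, not an actual SFU API.
import threading

def start_consistency_check(get_uploaded_set, get_target_set, send_target_set,
                            interval_s=1.0):
    def check():
        if get_uploaded_set() != get_target_set():
            # the producer is not pushing what the consumers need: resend the demand
            send_target_set(get_target_set())
        timer = threading.Timer(interval_s, check)
        timer.daemon = True
        timer.start()
    check()
```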
In step S605, in response to that the sub-stream set currently uploaded by the production end is inconsistent with the target sub-stream value set, the target sub-stream value set is sent to the production end, so that the production end encodes the sub-stream set based on the target sub-stream value set.
In this embodiment, when the server end detects that the sub-stream set currently uploaded by the production end is inconsistent with the target sub-stream value set, the server end sends the target sub-stream value set to the production end, and the production end configures encoders based on the target sub-stream value set and encodes the corresponding sub-streams to obtain the sub-stream set.
It should be noted that, when the server finds that the sub-stream set currently uploaded by the production end is consistent with the target sub-stream value set through timing detection, the following steps are executed according to the operation flow.
In step S606, the sub-stream set uploaded by the production side is transmitted to a plurality of consumption sides.
It should be noted that, in this embodiment, the implementation processes of the steps S601 to S603 and S606 may refer to the descriptions of the implementation processes of the steps S201 to S203 and S205, respectively, and are not described herein again.
In the video processing method of the embodiment of the application, the SFU server collects the target sub-stream value sets corresponding to the total demand of the plurality of consumption terminals according to the stream publishing signaling information published by the production terminal and the stream subscribing signaling information sent by the consumption terminal, and then sends the target sub-stream value sets to the production terminal, so that the production terminal produces the video stream according to the demand of the consumption terminal, and sends the produced video stream to the consumption terminal. The SFU server side collects the requirements of the consumption side, and then controls the production side to push the stream as required, and the video stream required to be released by the production side is dynamically adjusted, so that the bandwidth resource of the production side is saved, the power consumption loss is reduced, and the processing efficiency of the video stream is improved. The SFU server side ensures that the server side can receive the corresponding sub-streams obtained by the production side according to the target sub-stream value set sent by the server side through a timing detection mode, and timely discovers the change of the target sub-stream value set of the server side and sends the change to the production side, so that the requirements of the consumption side are timely met, and the performance of the server side is ensured.
Fig. 7 is a flow diagram of a video processing method according to another embodiment of the present application. As shown in fig. 7, the video processing method may include the following steps.
In step S701, stream distribution signaling information sent by the production end is acquired.
In step S702, stream subscription signaling information sent by a plurality of consumers is obtained according to the stream publishing signaling information.
In step S703, a set of target substream values of the multiple consumers is determined based on the stream publishing signaling information and the stream subscribing signaling information.
In step S704, the target sub-stream value set is sent to the production end, so that the production end encodes the sub-stream set based on the target sub-stream value set.
In step S705, a target substream corresponding to each target substream value in the target substream value set is determined based on the substream set currently uploaded by the production end.
It can be understood that, although the server end has sent the target sub-stream value set required by the consumption ends to the production end so that the production end sets up encoders to obtain the corresponding sub-streams, there may be two cases. In one case, the sub-stream set currently uploaded to the server end by the production end is consistent with the sub-stream set corresponding to the target sub-stream value set required by the consumption ends; in the other case, it is inconsistent.
Under normal conditions, for example, under the condition that the network is normal, most of the substream sets which are currently uploaded to the server by the production end are consistent with the substream sets corresponding to the target substream value sets required by the consumption end, the speed of processing the video stream by the production end is high, and the video stream required by the consumption end can be sent by the service end in time.
For example, the production end has a poor network environment at some time, and cannot encode the sub-stream with high quality because its uplink bandwidth is relatively low, and can only encode the sub-stream with low quality corresponding to the target sub-stream value set, or the service end fails to successfully send the target sub-stream value set to the production end because of the network reason, and then the sub-stream set currently uploaded by the production end on the server is also the sub-stream set uploaded for the target sub-stream value set sent last time.
Therefore, the sub-stream set currently uploaded by the production end is not necessarily consistent with the sub-stream set corresponding to the target sub-stream value set, and the server end needs to determine, based on the sub-stream set currently uploaded by the production end, the target sub-stream corresponding to each target sub-stream value in the target sub-stream value set, and then send the target sub-streams to the consumption ends. That is, the server end selects an appropriate sub-stream from the sub-stream set currently uploaded by the production end and sends it to each consumption end.
Optionally, in response to that the sub-stream set currently uploaded by the production end includes a first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, the first sub-stream is taken as the target sub-stream.
That is to say, when there is a sub-stream corresponding to the target sub-stream value required by the consumer in the sub-stream set uploaded by the producer, the corresponding sub-stream is taken as the target sub-stream.
Optionally, in response to that the sub-stream set currently uploaded by the production end does not include the first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, the sub-stream with the highest quality in at least one sub-stream with the quality lower than that of the first sub-stream in the sub-stream set currently uploaded by the production end is taken as the target sub-stream.
That is to say, when there is no sub-stream corresponding to the target sub-stream value required by the consumer in the sub-stream set currently uploaded by the producer, one sub-stream is selected from the sub-stream set currently uploaded by the producer, and the sub-stream is a sub-stream having a quality lower than the target sub-stream value and having a highest quality among all the sub-streams having a quality lower than the target sub-stream value.
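A sketch of this selection rule: forward the exact match if the production end has uploaded it, otherwise the highest-quality sub-stream below the target value; the quality ordering is the illustrative 360P/720P/1080P ladder used earlier.

```python
# Sketch: pick the sub-stream to forward to a consumption end whose target value
# may not (yet) be present in what the production end currently uploads.
QUALITY_ORDER = ["360P", "720P", "1080P"]  # lowest to highest, illustrative

def select_forward_substream(uploaded, target_value):
    if target_value in uploaded:
        return target_value                         # exact match: forward it
    target_rank = QUALITY_ORDER.index(target_value)
    lower = [s for s in uploaded if QUALITY_ORDER.index(s) < target_rank]
    if lower:
        return max(lower, key=QUALITY_ORDER.index)  # best quality below the target
    return None                                     # nothing suitable uploaded yet

print(select_forward_substream(["360P", "720P"], "1080P"))  # -> 720P
```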
It should be noted that if the server end has sent the consumption ends sub-streams that are inconsistent with the target sub-stream value set, then once the production end uploads a sub-stream set consistent with the target sub-stream value set to the server end, the server end immediately sends the new sub-streams to the consumption ends, so as to meet their requirements in time.
In step S706, the target substream corresponding to the target substream value is sent to the consuming side corresponding to the target substream value.
In this embodiment, the server end sends the target sub-stream corresponding to each target sub-stream value to the consumption end corresponding to that target sub-stream value.
It should be noted that, in this embodiment, the implementation processes of the steps S701 to S704 may refer to the descriptions of the implementation processes of the steps S201 to S204, which are not described herein again.
In the video processing method of the embodiment of the application, the SFU server collects the target sub-stream value sets corresponding to the total demand of the plurality of consumption terminals according to the stream publishing signaling information published by the production terminal and the stream subscribing signaling information sent by the consumption terminal, and then sends the target sub-stream value sets to the production terminal, so that the production terminal produces the video stream according to the demand of the consumption terminal, and sends the produced video stream to the consumption terminal. The SFU server side collects the requirements of the consumption side, and then controls the production side to push the stream as required, and the video stream required to be released by the production side is dynamically adjusted, so that the bandwidth resource of the production side is saved, the power consumption loss is reduced, and the processing efficiency of the video stream is improved. And when the SFU server side does not timely receive the sub-streams required by the consumer side and uploaded by the production side, the appropriate sub-streams are selected from the current sub-stream set and sent to the consumer side, so that the application of the consumer side is timely met.
Fig. 8 is a block diagram illustrating a video processing device according to an example embodiment. Referring to fig. 8, the video processing apparatus may include: a production side acquisition module 801, a consumer side acquisition module 802, a demand aggregation module 803, a production control module 804, and a stream sending module 805.
Specifically, the production side obtaining module 801 is configured to obtain stream issuing signaling information sent by the production side;
a consuming side obtaining module 802, configured to obtain, according to the stream publishing signaling information, stream subscription signaling information sent by multiple consuming sides;
a demand aggregation module 803, configured to determine a target sub-stream value set of multiple consumers based on the stream publishing signaling information and the stream subscription signaling information;
the production control module 804 is used for sending the target substream value set to the production end, and enabling the production end to encode the substream set based on the target substream value set;
and a stream sending module 805, configured to send the sub-stream set uploaded by the production end to multiple consumption ends.
According to the video processing device disclosed by the embodiment of the application, the target sub-stream value sets corresponding to the total requirements of the plurality of consumption ends are collected according to the stream publishing signaling information issued by the production end and the stream subscription signaling information issued by the consumption end, and then the target sub-stream value sets are sent to the production end, so that the production end produces video streams according to the requirements of the consumption end, and the produced video streams are sent to the consumption end. By collecting the demands of the consumption end, the production end is controlled to push the stream as required, and the video stream required to be issued by the production end is dynamically adjusted, so that the bandwidth resource of the production end is saved, the power consumption loss is reduced, and the processing efficiency of the video stream is improved.
Fig. 9 is a block diagram illustrating a video processing apparatus according to another exemplary embodiment. Referring to fig. 9, the video processing apparatus may include: a production side acquisition module 901, a consumer side acquisition module 902, a demand aggregation module 903, a production control module 904, and a stream sending module 905.
It should be noted that the production side obtaining module 901, the consumption side obtaining module 902, the demand aggregation module 903, the production control module 904, and the stream sending module 905 in this embodiment have the same specific structures and functions as the production side obtaining module 801, the consumption side obtaining module 802, the demand aggregation module 803, the production control module 804, and the stream sending module 805.
In some embodiments of the present application, the stream publishing signaling information includes a sub-stream category provided by the production end, the stream subscription signaling information includes a desired sub-stream value, and the demand aggregation module 903 includes:
an estimating unit 9031, configured to estimate a bandwidth of a current consumer of the multiple consumers, and determine an optimal sub-stream value within a bandwidth range based on the sub-stream category;
a comparing unit 9032, configured to use the smaller of the optimal sub-stream value and the expected sub-stream value of the current consuming end as a target sub-stream value of the current consuming end;
the aggregation unit 9033 is configured to aggregate the target sub-stream values of the plurality of consumers to obtain a target sub-stream value set.
In some embodiments of the present application, the estimation unit 9031 is further configured to:
and in response to the bandwidth range being insufficient to transmit even the sub-stream with the lowest quality in the sub-stream categories, taking the sub-stream with the lowest quality as the optimal sub-stream value.
In some embodiments of the present application, the production control module 904 comprises:
a timing detection unit 9041, configured to periodically detect whether the sub-stream set currently uploaded by the production end is consistent with the target sub-stream value set;
the production end sending unit 9042 is configured to send the target sub-stream value set to the production end in response to that the currently uploaded sub-stream set of the production end is inconsistent with the target sub-stream value set, so that the production end encodes the sub-stream set based on the target sub-stream value set.
In some embodiments of the present application, the stream sending module 905 comprises:
the sub-stream acquiring unit 9051 is configured to determine, based on the sub-stream set currently uploaded by the production end, a target sub-stream corresponding to each target sub-stream value in the target sub-stream value set;
and the consuming side sending unit 9052 is configured to send the target sub-stream corresponding to the target sub-stream value to the consuming side corresponding to the target sub-stream value.
In some embodiments of the present application, the sub-stream obtaining unit 9051 is specifically configured to:
and in response to the fact that the sub-stream set uploaded by the production end currently comprises a first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, taking the first sub-stream as a target sub-stream.
In some embodiments of the present application, the sub-stream obtaining unit 9051 is further configured to:
and in response to that the sub-stream set currently uploaded by the production end does not include a first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, taking a sub-stream with the highest quality in at least one sub-stream with the quality lower than that of the first sub-stream in the sub-stream set currently uploaded by the production end as a target sub-stream.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The video processing device of the embodiment of the application collects the demands of the consumption ends and then controls the production end to push streams on demand, dynamically adjusting the video streams that the production end needs to publish, thereby saving bandwidth resources of the production end, reducing power consumption, and improving the processing efficiency of the video streams.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 10 is a block diagram of an electronic device for implementing a video processing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 10, the electronic apparatus includes: one or more processors 1001, memory 1002, and interfaces for connecting the various components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing some of the necessary operations (e.g., as an array of servers, a group of blade servers, or a multi-processor system). Fig. 10 illustrates an example of a processor 1001.
The memory 1002 is a non-transitory computer readable storage medium provided herein. Wherein the memory stores instructions executable by at least one processor to cause the at least one processor to perform the video processing method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the video processing method provided by the present application.
The memory 1002, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the video processing method in the embodiment of the present application (for example, the production side acquisition module 801, the consumer side acquisition module 802, the demand aggregation module 803, the production control module 804, and the stream transmission module 805 shown in fig. 8, or the production side acquisition module 901, the consumer side acquisition module 902, the demand aggregation module 903, the production control module 904, and the stream transmission module 905 shown in fig. 9). The processor 1001 performs the various functions of the server by executing non-transitory software programs, instructions, and modules stored in the memory 1002.
The memory 1002 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the video processing electronic device, and the like. Further, the memory 1002 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 1002 may optionally include memory located remotely from the processor 1001, which may be connected to video processing electronics over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device for implementing the video processing method may further include: an input device 1003 and an output device 1004. The processor 1001, the memory 1002, the input device 1003, and the output device 1004 may be connected by a bus or other means, and the bus connection is exemplified in fig. 10.
The input device 1003 may receive input numeric or character information and generate key signal inputs relating to user settings and function control of the video processing electronics, such as an input device such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, etc. The output devices 1004 may include a display device, auxiliary lighting devices (e.g., LEDs), and tactile feedback devices (e.g., vibrating motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In an exemplary embodiment, a computer program product is also provided, in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the above-described method.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in a different order, or simultaneously.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (17)

1. A video processing method, comprising:
acquiring stream publishing signaling information sent by a production end;
acquiring stream subscription signaling information sent by a plurality of consuming ends according to the stream publishing signaling information;
determining a target sub-stream value set of the plurality of consuming ends based on the stream publishing signaling information and the stream subscription signaling information;
sending the target sub-stream value set to the production end, and enabling the production end to encode a sub-stream set based on the target sub-stream value set;
and sending the sub-stream set uploaded by the production end to the plurality of consuming ends.
2. The method of claim 1, wherein the stream publishing signaling information comprises a sub-stream category provided by the production end, the stream subscription signaling information comprises a desired sub-stream value, and the determining a target sub-stream value set of the plurality of consuming ends based on the stream publishing signaling information and the stream subscription signaling information comprises:
estimating a bandwidth of a current consuming end in the plurality of consuming ends, and determining an optimal sub-stream value within the bandwidth range based on the sub-stream category;
taking the smaller of the optimal sub-stream value and the desired sub-stream value of the current consuming end as a target sub-stream value of the current consuming end;
and aggregating the target sub-stream values of the plurality of consuming ends to obtain the target sub-stream value set.
3. The method of claim 2, wherein the estimating a bandwidth of a current consuming end in the plurality of consuming ends and determining an optimal sub-stream value within the bandwidth range based on the sub-stream category further comprises:
and in response to the bandwidth range being insufficient to transmit the sub-stream with the minimum quality in the sub-stream category, taking the sub-stream with the minimum quality as the optimal sub-stream value.
4. The method of claim 1, wherein the sending the target sub-stream value set to the production end, and enabling the production end to encode a sub-stream set based on the target sub-stream value set comprises:
periodically detecting whether the sub-stream set currently uploaded by the production end is consistent with the target sub-stream value set;
and in response to the sub-stream set currently uploaded by the production end being inconsistent with the target sub-stream value set, sending the target sub-stream value set to the production end, and enabling the production end to encode the sub-stream set based on the target sub-stream value set.
5. The method of claim 1, wherein the sending the sub-stream set uploaded by the production end to the plurality of consuming ends comprises:
determining a target sub-stream corresponding to each target sub-stream value in the target sub-stream value set based on the sub-stream set currently uploaded by the production end;
and sending the target sub-stream corresponding to the target sub-stream value to the consuming end corresponding to the target sub-stream value.
6. The method of claim 5, wherein the determining a target sub-stream corresponding to each target sub-stream value in the target sub-stream value set based on the sub-stream set currently uploaded by the production end comprises:
and in response to the sub-stream set currently uploaded by the production end comprising a first sub-stream corresponding to a current target sub-stream value in the target sub-stream value set, taking the first sub-stream as the target sub-stream.
7. The method of claim 6, wherein the determining a target sub-stream corresponding to each target sub-stream value in the target sub-stream value set based on the sub-stream set currently uploaded by the production end further comprises:
and in response to the sub-stream set currently uploaded by the production end not comprising the first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, taking, as the target sub-stream, a sub-stream with the highest quality among at least one sub-stream whose quality is lower than that of the first sub-stream in the sub-stream set currently uploaded by the production end.
8. A video processing apparatus, comprising:
a production end acquisition module, configured to acquire stream publishing signaling information sent by a production end;
a consuming end acquisition module, configured to acquire stream subscription signaling information sent by a plurality of consuming ends according to the stream publishing signaling information;
a demand aggregation module, configured to determine a target sub-stream value set of the plurality of consuming ends based on the stream publishing signaling information and the stream subscription signaling information;
a production control module, configured to send the target sub-stream value set to the production end and enable the production end to encode a sub-stream set based on the target sub-stream value set;
and a stream sending module, configured to send the sub-stream set uploaded by the production end to the plurality of consuming ends.
9. The apparatus of claim 8, wherein the stream publishing signaling information comprises a sub-stream category provided by the production end, the stream subscription signaling information comprises a desired sub-stream value, and the demand aggregation module comprises:
an estimation unit, configured to estimate a bandwidth of a current consuming end in the plurality of consuming ends and determine an optimal sub-stream value within the bandwidth range based on the sub-stream category;
a comparison unit, configured to take the smaller of the optimal sub-stream value and the desired sub-stream value of the current consuming end as a target sub-stream value of the current consuming end;
and an aggregation unit, configured to aggregate the target sub-stream values of the plurality of consuming ends to obtain the target sub-stream value set.
10. The apparatus of claim 9, wherein the estimation unit is further configured to:
in response to the bandwidth range being insufficient to transmit the sub-stream with the minimum quality in the sub-stream category, take the sub-stream with the minimum quality as the optimal sub-stream value.
11. The apparatus of claim 8, wherein the production control module comprises:
a timing detection unit, configured to periodically detect whether the sub-stream set currently uploaded by the production end is consistent with the target sub-stream value set;
and a production end sending unit, configured to, in response to the sub-stream set currently uploaded by the production end being inconsistent with the target sub-stream value set, send the target sub-stream value set to the production end and enable the production end to encode the sub-stream set based on the target sub-stream value set.
12. The apparatus of claim 8, wherein the stream sending module comprises:
a sub-stream acquisition unit, configured to determine a target sub-stream corresponding to each target sub-stream value in the target sub-stream value set based on the sub-stream set currently uploaded by the production end;
and a consuming end sending unit, configured to send the target sub-stream corresponding to the target sub-stream value to the consuming end corresponding to the target sub-stream value.
13. The apparatus according to claim 12, wherein the sub-stream acquisition unit is specifically configured to:
in response to the sub-stream set currently uploaded by the production end comprising a first sub-stream corresponding to a current target sub-stream value in the target sub-stream value set, take the first sub-stream as the target sub-stream.
14. The apparatus of claim 13, wherein the sub-stream acquisition unit is further configured to:
in response to the sub-stream set currently uploaded by the production end not comprising the first sub-stream corresponding to the current target sub-stream value in the target sub-stream value set, take, as the target sub-stream, a sub-stream with the highest quality among at least one sub-stream whose quality is lower than that of the first sub-stream in the sub-stream set currently uploaded by the production end.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the video processing method of any of claims 1 to 7.
16. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the video processing method according to any one of claims 1 to 7.
17. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method of any of claims 1 to 7.
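
To make the claimed selection logic easier to follow, the sketch below illustrates in Python how a forwarding server might implement claims 2 and 3 (choosing each consuming end's target sub-stream value as the smaller of the bandwidth-constrained optimum and the desired value), claim 4 (periodically reconciling the production end's uploaded sub-stream set with the target set), and claims 6 and 7 (falling back to the highest-quality uploaded sub-stream below the requested one). This is a minimal sketch under stated assumptions, not the claimed implementation: the class names, fields, the assumption that a larger sub-stream value means higher quality, and the bitrate figures are all illustrative.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Set

# Hypothetical model: each entry of the sub-stream category maps a sub-stream
# value (assumed here to increase with quality) to the bitrate it needs.
@dataclass
class SubStream:
    value: int          # sub-stream value published by the production end
    bitrate_kbps: int   # bandwidth required to deliver this sub-stream

def optimal_value(category: List[SubStream], bandwidth_kbps: float) -> int:
    """Claims 2-3: pick the best sub-stream value that fits the estimated
    bandwidth; if even the minimum-quality sub-stream does not fit, use it."""
    affordable = [s for s in category if s.bitrate_kbps <= bandwidth_kbps]
    if not affordable:  # claim 3: bandwidth insufficient for any sub-stream
        return min(category, key=lambda s: s.bitrate_kbps).value
    return max(affordable, key=lambda s: s.bitrate_kbps).value

def target_value_set(category: List[SubStream],
                     desired: Dict[str, int],
                     bandwidth: Dict[str, float]) -> Set[int]:
    """Claim 2: per consuming end, target = min(optimal, desired); then
    aggregate the per-consumer targets into the target sub-stream value set."""
    targets = {}
    for consumer, desired_value in desired.items():
        opt = optimal_value(category, bandwidth[consumer])
        targets[consumer] = min(opt, desired_value)
    return set(targets.values())

def pick_target_substream(uploaded: Set[int], wanted: int) -> Optional[int]:
    """Claims 6-7: use the exact sub-stream if uploaded, otherwise the
    highest-quality uploaded sub-stream whose quality is below it.
    Returns None if no lower-quality sub-stream exists (not specified
    by the claims)."""
    if wanted in uploaded:
        return wanted
    lower = [v for v in uploaded if v < wanted]
    return max(lower) if lower else None

def reconcile(uploaded: Set[int], target_set: Set[int]) -> Optional[Set[int]]:
    """Claim 4: if the currently uploaded set differs from the target set,
    return the target set so it can be resent to the production end."""
    return target_set if uploaded != target_set else None
```

For example, if the production end publishes sub-stream values 1 (300 kbps), 2 (800 kbps) and 3 (2500 kbps), a consuming end that desires value 3 but whose estimated bandwidth is about 1000 kbps would be assigned target value 2; and if the production end currently uploads only sub-streams {1, 3}, the fallback of claims 6 and 7 would deliver sub-stream 1 to that consuming end.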
CN202210209872.3A 2022-03-04 2022-03-04 Video processing method and device, electronic equipment and storage medium Pending CN114640818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210209872.3A CN114640818A (en) 2022-03-04 2022-03-04 Video processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210209872.3A CN114640818A (en) 2022-03-04 2022-03-04 Video processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114640818A true CN114640818A (en) 2022-06-17

Family

ID=81948580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210209872.3A Pending CN114640818A (en) 2022-03-04 2022-03-04 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114640818A (en)

Similar Documents

Publication Publication Date Title
US9641559B2 (en) Methods and systems for dynamic adjustment of session parameters for effective video collaboration among heterogeneous devices
US9532002B2 (en) System for enabling meshed conferences to be seamlessly promoted to full MCU based conferences
JP6325741B2 (en) A framework that supports a hybrid of mesh and non-mesh endpoints
EP2583463B1 (en) Combining multiple bit rate and scalable video coding
AU2011305593B2 (en) System and method for the control and management of multipoint conferences
US20110161836A1 (en) System for processing and synchronizing large scale video conferencing and document sharing
EP2684346B1 (en) Method and apparatus for prioritizing media within an electronic conference according to utilization settings at respective conference participants
US20110157298A1 (en) System for processing and synchronizing large scale video conferencing and document sharing
US20100302346A1 (en) System for processing and synchronizing large scale video conferencing and document sharing
US9270937B2 (en) Real time stream provisioning infrastructure
CN105323651A (en) QoS-guaranteed video stream method and system, and transmitting server
US8791982B1 (en) Video multicast engine
KR20140056296A (en) Techniques for dynamic switching between coded bitstreams
CN104469259A (en) Cloud terminal video synthesis method and system
CN112929704A (en) Data transmission method, device, electronic equipment and storage medium
CN102957729A (en) Equipment and method for multimedia conference audio and video transmission
CN115209189B (en) Video stream transmission method, system, server and storage medium
CN114640818A (en) Video processing method and device, electronic equipment and storage medium
US11757967B2 (en) Video communications network with value optimization
CN111314738A (en) Data transmission method and device
CN113259730A (en) Code rate adjustment method and device for live broadcast
CN111404908B (en) Data interaction method and device, electronic equipment and readable storage medium
CN110708604B (en) Method and device for adapting IP channel bandwidth of video forwarding server
Ko et al. Enhancing QoE of WebRTC-based Video Conferencing using Deep Reinforcement Learning
Rodríguez et al. Cross-device Videoconferencing based on Adaptive Multimedia Streams.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination