WO2009156867A2 - Systems, methods, and media for providing cascaded multi-point video conferencing units - Google Patents

Systems, methods, and media for providing cascaded multi-point video conferencing units

Info

Publication number
WO2009156867A2
Authority
WO
WIPO (PCT)
Prior art keywords
mcu
representations
slave
scalable video
distributing
Prior art date
Application number
PCT/IB2009/006906
Other languages
French (fr)
Other versions
WO2009156867A3 (en)
Inventor
Yair Wiener
Ori Modai
Original Assignee
Radvision Ltd.
Priority date
Filing date
Publication date
Application filed by Radvision Ltd. filed Critical Radvision Ltd.
Priority to JP2011515672A priority Critical patent/JP5809052B2/en
Priority to CN200980131651.1A priority patent/CN102204244B/en
Priority to EP09769678.5A priority patent/EP2304941B1/en
Publication of WO2009156867A2 publication Critical patent/WO2009156867A2/en
Publication of WO2009156867A3 publication Critical patent/WO2009156867A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/152Multipoint control units therefor

Definitions

  • the master and slave MCUs may then transcode/decode the received layer(s).
  • the distribution may be accomplished in any suitable manner, such as by sending the layer(s) to each destination slave MCU directly (e.g., using a unicast mechanism), by using a multicast transport layer, by using a central network entity (acting as a routing mesh; not shown), etc.
  • MCUs may be cascaded in a full mesh configuration as illustrated in FIGS. 5 and 6.
  • a first MCU 5100 may be coupled to a plurality of endpoints 5101, 5102, 5103, and 5104 (which may be local to MCU 5100), a second MCU 5200, and a third MCU 5300.
  • Endpoints 5101, 5102, 5103, and 5104 may be any suitable endpoints for use in a video conferencing system, and any suitable number (including none) of endpoints may be used.
  • three MCUs 5100, 5200, and 5300 are shown, any suitable number of MCUs may be used.
  • the other MCUs 5200 and 5300 may also be coupled to a plurality of endpoints 5201, 5202, 5203, and 5204 (which may be local to MCU 5200) and endpoints 5301, 5302, and 5303 (which may be local to MCU 5300), respectively.
  • Endpoints 5201, 5202, 5203, 5204, 5301, 5302, and 5303 may be any suitable endpoints for use in a video conferencing system, and any suitable number of endpoints may be used.
  • 5100, 5200, and/or 5300 may provide any suitable MCU functions.
  • MCU 5100 may be coupled to MCU 5200 by streams
  • MCU 5200 may be coupled to MCU 5300 by streams 5015 and 5016. Any suitable number of streams 5011, 5012, 5013, 5014, 5015, and 5016 may be implemented in some embodiments.
  • Streams 5011, 5012, 5013, 5014, 5015, and 5016 may be transmitted between MCUs 5100, 5200, and 5300 using any suitable hardware and/or software, and may be transmitted on one or more physical and/or logical paths (e.g., such as via computer, telephone, satellite, and/or any other suitable network).
  • the source MCU may provide the video directly to each other MCU.
  • MCU 5200 (acting as a source MCU) may provide a video of one of its local participants to the other MCUs (acting as destination MCUs) by first encoding the video using a scalable video protocol, e.g., to form a layered video stream, based on the configuration parameters required by the other MCUs, as illustrated at step 6010 of FIG. 6.
  • the encoding may be performed by encoders 5211 and 5213, which may first form a layered video stream (e.g., by using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard) and then modify the layered stream using Coarse Grain Scalability (CGS), Medium Grain Scalability (MGS), Fine Grain Scalability (FGS), and/or any other suitable technique to match the layered stream to the required configuration parameters of the other MCUs.
  • the required layers of the video stream may be sent to destination MCU 5100 via stream 5011 and to destination MCU 5300 via stream 5015, as illustrated at steps 6012 and 6014 of FIG. 6.
  • This transmission can be performed directly (e.g., using a unicast mechanism), using a multicast transport layer, using a central network entity (acting as a routing mesh; not shown), etc.
  • the destination MCUs may then transcode/decode the received layer(s) (as illustrated at step 6016 of FIG. 6) and distribute the transcoded/decoded video and/or received layer(s) to the participants at the local endpoints (as illustrated at step 6018 of FIG. 6).
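The full-mesh flow described in the bullets above (each source MCU encodes once into a layered stream covering every destination's required configuration, then sends each destination only the layers it needs, with no master in the path) can be sketched as follows. This is a hypothetical illustration: the MCU names, layer numbering, and per-MCU layer requirements are invented for the example and are not taken from the patent.

```python
# Hypothetical sketch of the full-mesh flow (steps 6010-6014): each source
# MCU encodes its local video once into a layered stream, then forwards to
# every other MCU only the layers that MCU's required configuration of
# parameters calls for (base layer plus enhancements up to its limit).

def plan_full_mesh(mcus, requirements):
    """mcus: list of MCU names; requirements: dict mcu -> highest layer id
    its local endpoints can consume. Returns, for each source MCU, the
    layer ids it must send to each other (destination) MCU."""
    plan = {}
    for src in mcus:
        plan[src] = {
            dst: list(range(0, requirements[dst] + 1))
            for dst in mcus if dst != src  # no stream to itself
        }
    return plan

mcus = ["mcu_5100", "mcu_5200", "mcu_5300"]
# Highest layer each MCU's local endpoints can consume (illustrative).
needs = {"mcu_5100": 1, "mcu_5200": 0, "mcu_5300": 2}

plan = plan_full_mesh(mcus, needs)
# mcu_5200 (the source in the text's example) sends its base layer plus one
# enhancement toward mcu_5100, and all three layers toward mcu_5300.
```

In a real deployment the per-destination delivery could use any of the transports the text names: direct unicast, a multicast transport layer, or a central routing entity.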

Abstract

Systems, methods, and media for providing cascaded multi-point video conferencing units are provided. In some embodiments, systems for providing cascaded multi-point conference units are provided, the systems comprising: at least one encoder that encodes a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and at least one interface that distributes a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.

Description

SYSTEMS, METHODS, AND MEDIA FOR PROVIDING CASCADED MULTI-POINT VIDEO CONFERENCING UNITS
Technical Field
[0001] The disclosed subject matter relates to systems, methods, and media for providing cascaded multi-point video conferencing units.
Background
[0002] As organizations and individuals interact over ever-increasing distances, and communication technology advances and becomes less expensive, more and more people are using video conferencing systems. An important part of a typical video conferencing system is a multi-point conference unit (MCU) (also sometimes referred to as a multipoint control unit). An MCU is a device that enables conference endpoints (such as a video telephone, a video-enabled personal computer, etc.) to connect together.
[0003] Typically, MCUs are limited in the number of endpoints that can be connected to them. For example, an MCU may be limited to connecting ten endpoints. To host a conference larger than ten users, it is then necessary either to obtain a larger MCU or to cascade two or more smaller MCUs. Cascading is a process by which two or more MCUs communicate, and thus enable the endpoints connected to each of them to communicate (at least to some extent).
[0004] Cascading may also be used to decrease bandwidth on a wide area network (WAN) when a first set of users (who are local to each other) are connected to an MCU that is remotely located from the users and perhaps connected to a second set of users local to the MCU. For example, with a cascaded arrangement, such first users may be able to connect to a first MCU that is local to them (e.g., via a local area network), and that MCU may be able to connect, via a wide area network, to a second MCU that is remotely located from the first MCU. The first MCU may then locally handle the transfer of video between the first users, while the wide area network may only need to handle video being transmitted between the first set of users and the second set of users.
[0005] Current techniques for cascading MCUs, however, can present difficulties when different configurations of parameters (such as bit rate, frame rate, resolution, etc.) are required by one or more of the cascaded MCUs, or the endpoints connected to them.
Summary
[0006] Systems, methods, and media for providing cascaded multi-point video conferencing units are provided. In some embodiments, systems for providing cascaded multi-point conference units are provided, the systems comprising: at least one encoder that encodes a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and at least one interface that distributes a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
[0007] In some embodiments, methods for providing cascaded multi-point conference units are provided, the methods comprising: encoding a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and distributing a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
[0008] In some embodiments, computer-readable media containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for providing cascaded multi-point conference units are provided, the method comprising: encoding a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and distributing a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
Brief Description of the Drawings
[0009] FIG. 1 is a block diagram of a master-slave cascading configuration in accordance with some embodiments of the disclosed subject matter.
[0010] FIG. 2 is a diagram of a process for transmitting video in a master-slave cascading configuration in accordance with some embodiments of the disclosed subject matter.
[0011] FIG. 3 is a block diagram of a routed mesh cascading configuration in accordance with some embodiments of the disclosed subject matter.
[0012] FIG. 4 is a diagram of a process for transmitting video in a routed mesh cascading configuration in accordance with some embodiments of the disclosed subject matter.
[0013] FIG. 5 is a block diagram of a full mesh cascading configuration in accordance with some embodiments of the disclosed subject matter.
[0014] FIG. 6 is a diagram of a process for transmitting video in a full mesh cascading configuration in accordance with some embodiments of the disclosed subject matter.
Detailed Description
[0015] Systems, methods, and media for providing cascaded multi-point video conferencing units are provided. In accordance with various embodiments, two or more Multi-point Conferencing Units (MCUs) are cascaded and video is transmitted between the MCUs using a scalable video protocol.
[0016] A scalable video protocol may include any video compression protocol that allows decoding of different representations of video from data encoded using that protocol. The different representations of video may include different resolutions (spatial scalability), frame rates (temporal scalability), bit rates (SNR scalability), and/or any other suitable characteristic. Different representations may be encoded in different subsets of the data, or may be encoded in the same subset of the data, in different embodiments. For example, some scalable video protocols may use layering that provides one or more representations (such as a high resolution image of a user) of a video signal in one layer and one or more other representations (such as a low resolution image of the user) of the video signal in another layer. As another example, some scalable video protocols may split up a data stream (e.g., in the form of packets) so that different representations of a video signal are found in different portions of the data stream. Examples of scalable video protocols may include the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard (Annex G) from the International Telecommunication Union (ITU), the MPEG2 protocol defined by the Motion Picture Experts Group, the H.263 (Annex O) protocol from the ITU, and the MPEG4 part 2 FGS protocol from the Motion Picture Experts Group, each of which is hereby incorporated by reference herein in its entirety.
[0017] In some embodiments, MCUs may be cascaded in a master and slave configuration as illustrated in FIGS. 1 and 2. As shown, a master MCU 1100 may be coupled to a plurality of endpoints 1101, 1102, 1103, and 1104 (which may be local to MCU 1100), a first slave MCU 1200, and a second slave MCU 1300. Endpoints 1101, 1102, 1103, and 1104 may be any suitable endpoints for use in a video conferencing system, such as endpoints provided by LifeSize Communications, Inc. and Aethra, Inc., and any suitable number (including none) of endpoints may be used. Although two slave MCUs 1200 and 1300 are shown, any suitable number of slave MCUs may be used. Like master MCU 1100, slave MCUs 1200 and 1300 may also be coupled to a plurality of endpoints 1201, 1202, 1203, and 1204 (which may be local to MCU 1200), and endpoints 1301, 1302, 1303, 1304, and 1305 (which may be local to MCU 1300), respectively. Endpoints 1201, 1202, 1203, 1204, 1301, 1302, 1303, 1304, and 1305 may be any suitable endpoints for use in a video conferencing system, such as endpoints provided by LifeSize Communications, Inc. and Aethra, Inc., and any suitable number of endpoints may be used.
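The layered-representation idea in paragraph [0016] (a base layer plus enhancement layers, where decoding more layers yields a higher-resolution, higher-frame-rate, or higher-bit-rate representation) can be sketched as follows. This is a hypothetical illustration: the `Layer` fields and the three-layer stream are invented for the example and do not come from any particular codec's bitstream syntax.

```python
from dataclasses import dataclass

# Hypothetical sketch of scalable (layered) video: different representations
# of one video signal are decoded from different subsets of the same data.

@dataclass(frozen=True)
class Layer:
    layer_id: int       # 0 = base layer, 1..n = enhancement layers
    width: int          # spatial scalability
    height: int
    frame_rate: int     # temporal scalability
    bit_rate_kbps: int  # SNR scalability

def representation(layers, up_to_layer):
    """A representation is decoded from the base layer plus all
    enhancement layers up to (and including) `up_to_layer`."""
    selected = [l for l in layers if l.layer_id <= up_to_layer]
    if not selected or selected[0].layer_id != 0:
        raise ValueError("base layer is required to decode any representation")
    top = max(selected, key=lambda l: l.layer_id)
    # Decoded quality is governed by the highest layer used; total bit rate
    # is the sum of all layers that must be received.
    return (top.width, top.height, top.frame_rate,
            sum(l.bit_rate_kbps for l in selected))

stream = [
    Layer(0, 320, 180, 15, 128),    # base: low resolution / frame rate
    Layer(1, 640, 360, 30, 384),    # spatial + temporal enhancement
    Layer(2, 1280, 720, 30, 1024),  # further spatial enhancement
]

low = representation(stream, 0)   # what a constrained receiver might decode
high = representation(stream, 2)  # what a well-provisioned receiver might decode
```

The key property the patent relies on is visible here: one encoding yields several representations, so a sender never has to re-encode per receiver.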
[0018] In addition to providing one or more of the features described herein, master MCU 1100, slave MCU 1200, and/or slave MCU 1300 may provide any suitable MCU functions, such as those functions provided by MCUs provided by Tandberg Telecom AS and Polycom, Inc.
[0019] As illustrated in FIG. 1, master MCU 1100 may be coupled to slave MCU 1200 by streams 1012 and/or 1015 and to slave MCU 1300 by streams 1013 and/or 1014. Any suitable number of streams 1012 and 1015 may be implemented in some embodiments. For example, one stream 1012 and one stream 1015 may be implemented. Streams 1012 and 1015 may be transmitted between master MCU 1100 and slave MCU 1200 using any suitable hardware and/or software, and may be transmitted on one or more physical and/or logical paths (e.g., such as via computer, telephone, satellite, and/or any other suitable network). Similarly, any suitable number of streams 1013 and 1014 may be implemented in some embodiments. For example, one stream 1013 and one stream 1014 may be implemented. Streams 1013 and 1014 may be transmitted between master MCU 1100 and slave MCU 1300 using any suitable hardware and/or software, and may be transmitted on one or more physical and/or logical paths (e.g., such as via computer, telephone, satellite, and/or any other suitable network).
[0020] In some embodiments, streams 1012 and 1013 may be used to convey video from a source slave MCU 1200 or 1300, respectively, to master MCU 1100. This is illustrated at step 2008 of FIG. 2. These streams may include a composite video of multiple users (e.g., which may include one or more of the users at the local endpoints) or a video of a single user. Streams 1012 and 1013 may be implemented using any suitable protocol, such as the H.264, H.263, and H.261 protocols from the ITU, for example. For example, one or more encoders 1211 in slave MCU 1200 may send one or more (designated as N in FIG. 1) streams 1012 to master MCU 1100, and one or more encoders 1311 in slave MCU 1300 may send one or more (designated as N in FIG. 1) streams 1013 to master MCU 1100. These streams may then be received by N corresponding decoders 1111 (for streams 1012) and N corresponding decoders 1112 (for streams 1013).
[0021] As illustrated at step 2010 of FIG. 2, master MCU 1100 may encode (using one or more encoders 1122) a master stream using a scalable video protocol, such as the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard (Annex G). This master stream may include any suitable number of layers. Each layer may be configured for a different configuration of parameters as required by one or more of slave MCUs 1200 and 1300 and endpoints 1201, 1202, 1203, 1204, 1301, 1302, 1303, 1304, and 1305. A configuration of parameters may include any suitable settings of parameters for receiving a video signal, such as specified values of a bit rate, a frame rate, a resolution, etc.
[0022] After encoding the master stream, master MCU 1100 may distribute one or more layers to a decoder (e.g., 1212 or 1312) in each slave according to its required configuration(s) of parameters as illustrated at step 2012 of FIG. 2. For example, slave MCU 1200 may only receive one of the layers in the master stream, while slave MCU 1300 may receive two of the layers in the master stream. In some embodiments, multiple layers may be substantially simultaneously distributed to the slave MCUs using a multicast network.
[0023] Each slave MCU 1200 and 1300 may next transcode/decode the one or more layers to a required video format as illustrated at step 2014 of FIG. 2. For example, slave MCU 1200 may receive a single layer that may then be transcoded to two different types of video streams corresponding to the requirements of local endpoints 1201 and 1202 coupled to slave MCU 1200. Any suitable transcoding technique may be used in accordance with some embodiments.
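The selective distribution of step 2012 (each slave receives only the subset of master-stream layers its required configuration calls for, so slave MCU 1200 gets one layer while slave MCU 1300 gets two) can be sketched as follows. This is a hypothetical illustration: the layer table, slave names, and per-slave limits are invented for the example.

```python
# Hypothetical sketch of step 2012: the master MCU encodes one scalable
# master stream and forwards to each slave only the layers that slave's
# required configuration of parameters calls for.

MASTER_STREAM = {
    0: {"resolution": (320, 180),  "bit_rate_kbps": 128},   # base layer
    1: {"resolution": (640, 360),  "bit_rate_kbps": 384},   # enhancement
    2: {"resolution": (1280, 720), "bit_rate_kbps": 1024},  # enhancement
}

# Highest layer each slave's configuration of parameters allows it to take.
SLAVE_REQUIREMENTS = {"slave_1200": 0, "slave_1300": 1}

def distribute(master_stream, slave_requirements):
    """Return, per slave, the layer ids it should receive: the base layer
    plus all enhancement layers up to that slave's declared limit."""
    return {
        slave: [lid for lid in sorted(master_stream) if lid <= top]
        for slave, top in slave_requirements.items()
    }

routes = distribute(MASTER_STREAM, SLAVE_REQUIREMENTS)
# Mirrors the text's example: slave_1200 receives one layer of the master
# stream, while slave_1300 receives two.
```

Note that the master encodes the stream once; per-slave adaptation happens purely by forwarding different layer subsets, which is what makes the cascade cheap when slaves require different bit rates, frame rates, or resolutions.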
[0024] The video stream(s), and/or the received layer(s), can then be provided to the one or more local endpoints coupled to the master and slave MCUs based on the requirements of the endpoints as illustrated at steps 2016 and 2018 of FIG. 2.
[0025] In some embodiments, MCUs may be cascaded in a routed mesh configuration as illustrated in FIGS. 3 and 4. As shown, a master MCU 3100 may be coupled to a plurality of endpoints 3101, 3102, 3103, and 3104 (which may be local to MCU 3100), a first slave MCU 3200, a second slave MCU 3300, and a third slave MCU 3400. Endpoints 3101, 3102, 3103, and 3104 may be any suitable endpoints for use in a video conferencing system, and any suitable number (including none) of endpoints may be used. Although three slave MCUs 3200, 3300, and 3400 are shown, any suitable number of slave MCUs may be used. Like master MCU 3100, slave MCUs 3200, 3300, and 3400 may also be coupled to a plurality of endpoints 3201, 3202, 3203, and 3204 (which may be local to MCU 3200), endpoints 3301, 3302, and 3303 (which may be local to MCU 3300), and endpoints 3401 and 3402 (which may be local to MCU 3400), respectively. Endpoints 3201, 3202, 3203, 3204, 3301, 3302, 3303, 3401, and 3402 may be any suitable endpoints for use in a video conferencing system, and any suitable number of endpoints may be used.
[0026] In addition to providing one or more of the features described herein, master
MCU 3100, slave MCU 3200, slave MCU 3300, and/or slave MCU 3400 may provide any suitable MCU functions.
[0027] As illustrated in FIG. 3, master MCU 3100 may be coupled to slave MCU
3200 by stream 3012, to slave MCU 3300 by stream 3013, and to slave MCU 3400 by stream 3014. Any suitable number of streams 3012, 3013, and 3014 may be implemented in some embodiments, and streams going in the opposite direction between each slave MCU and the master MCU 3100 may additionally be present (although they are not illustrated to avoid overcomplicating FIG. 3). Streams 3012, 3013, and 3014 may be transmitted between master MCU 3100 and slave MCUs 3200, 3300, and 3400 using any suitable hardware and/or software, and may be transmitted on one or more physical and/or logical paths (e.g., such as via computer, telephone, satellite, and/or any other suitable network).

[0028] To provide video from one or more participants (such as the current speaker) at one or more local endpoints of slave MCU 3200 (a source slave MCU) to the master MCU and/or other slave MCUs (destination slave MCUs), slave MCU 3200 may encode (using encoder 3211) the video using a scalable video protocol into a layered video stream 3012 corresponding to configurations of parameters required by the master MCU and/or other slave MCUs, as illustrated at step 4010 of FIG. 4. This layered video stream may be implemented using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard, for example. A configuration of parameters may include any suitable settings of parameters for a video signal, such as specified values of a bit rate, a frame rate, a resolution, etc. This layered video stream 3012 may then be sent to decoder 3111 at the master MCU as shown at steps 4012 and 4014 of FIG. 4.

[0029] Master MCU 3100 may then extract layers 3013 and 3014 for each destination
MCU from the received layered video stream as illustrated at step 4016 of FIG. 4, and send the extracted layers to slave MCUs 3300 and 3400 using encoders 3112 and 3113 as illustrated at steps 4018 and 4024 of FIG. 4. The master MCU may extract the relevant layers of the stream for each slave MCU according to each slave MCU's required configuration parameters.
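As a rough sketch of step 4010, a "configuration of parameters" and the per-configuration layering might be modeled as follows. All names here are hypothetical; the actual SVC bitstream syntax is defined by the H.264/AVC standard, not by this sketch.

```python
from dataclasses import dataclass

# Illustrative model of a "configuration of parameters" (paragraph [0028]) and
# of a source MCU producing one scalable layer per required configuration.

@dataclass(frozen=True)
class Config:
    bit_rate_kbps: int
    frame_rate_fps: int
    resolution: str

def encode_layered_stream(required_configs):
    """Order configurations from lowest to highest bit rate and assign one
    layer per configuration, with the lowest-rate layer as the base layer."""
    ordered = sorted(required_configs, key=lambda c: c.bit_rate_kbps)
    return [(f"layer_{i}", cfg) for i, cfg in enumerate(ordered)]

stream = encode_layered_stream([
    Config(2048, 30, "1280x720"),  # e.g., required by one destination MCU
    Config(512, 15, "640x360"),    # e.g., required by another
])
```

A destination that needs only the low-rate configuration can then be sent just the base layer, without re-encoding.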
[0030] The master and slave MCUs may then transcode/decode the received layer(s)
(as illustrated at steps 4020 and 4026 of FIG. 4) and distribute the transcoded/decoded video and/or layer(s) to the participants at the local endpoints (as illustrated at steps 4022 and 4028 of FIG. 4). The distribution may be accomplished in any suitable manner, such as by sending the layer(s) to each destination slave MCU directly (e.g., using a unicast mechanism), by using a multicast transport layer, by using a central network entity (acting as a routing mesh; not shown), etc.
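The distribution options just mentioned (direct unicast versus a multicast transport) differ mainly in how many send operations the distributing MCU performs. A minimal sketch, with all names hypothetical:

```python
# Illustrative comparison of the distribution mechanisms in paragraph [0030]:
# unicast sends one copy of the layer(s) per destination slave MCU, while a
# multicast transport sends a single copy addressed to the whole group.

def distribute(layers, destinations, mode="unicast"):
    """Return the list of (payload, destination) send operations."""
    if mode == "unicast":
        return [(layers, dest) for dest in destinations]  # one send each
    if mode == "multicast":
        return [(layers, tuple(destinations))]            # one send total
    raise ValueError(f"unknown distribution mode: {mode}")

unicast_sends = distribute(["base_layer"], ["slave_3300", "slave_3400"])
multicast_sends = distribute(["base_layer"], ["slave_3300", "slave_3400"],
                             mode="multicast")
```

A central routing entity would sit between these extremes, receiving one copy and fanning it out on behalf of the sender.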
[0031] In some embodiments, MCUs may be cascaded in a full mesh configuration as illustrated in FIGS. 5 and 6. As shown, a first MCU 5100 may be coupled to a plurality of endpoints 5101, 5102, 5103, and 5104 (which may be local to MCU 5100), a second MCU 5200, and a third MCU 5300. Endpoints 5101, 5102, 5103, and 5104 may be any suitable endpoints for use in a video conferencing system, and any suitable number (including none) of endpoints may be used. Although three MCUs 5100, 5200, and 5300 are shown, any suitable number of MCUs may be used. Like first MCU 5100, the other MCUs 5200 and 5300 may also be coupled to a plurality of endpoints 5201, 5202, 5203, and 5204 (which may be local to MCU 5200) and endpoints 5301, 5302, and 5303 (which may be local to MCU 5300), respectively. Endpoints 5201, 5202, 5203, 5204, 5301, 5302, and 5303 may be any suitable endpoints for use in a video conferencing system, and any suitable number of endpoints may be used.

[0032] In addition to providing one or more of the features described herein, MCUs
5100, 5200, and/or 5300 may provide any suitable MCU functions.
[0033] As illustrated in FIG. 5, MCU 5100 may be coupled to MCU 5200 by streams
5011 and 5012, and to MCU 5300 by streams 5013 and 5014. Similarly, MCU 5200 may be coupled to MCU 5300 by streams 5015 and 5016. Any suitable number of streams 5011, 5012, 5013, 5014, 5015, and 5016 may be implemented in some embodiments. Streams 5011, 5012, 5013, 5014, 5015, and 5016 may be transmitted between MCUs 5100, 5200, and 5300 using any suitable hardware and/or software, and may be transmitted on one or more physical and/or logical paths (e.g., such as via computer, telephone, satellite, and/or any other suitable network).
[0034] To provide video from a source MCU to the other MCUs in a full mesh configuration, the source MCU may provide the video directly to each other MCU. For example, MCU 5200 (acting as a source MCU) may provide a video of one of its local participants to the other MCUs (acting as destination MCUs) by first encoding the video using a scalable video protocol, e.g., to form a layered video stream, based on the configuration parameters required by the other MCUs, as illustrated at step 6010 of FIG. 6. The encoding may be performed by encoders 5211 and 5213, which may first form a layered video stream (e.g., by using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard) and then modify the layered stream using Coarse Grain Scalability (CGS), Medium Grain Scalability (MGS), Fine Grain Scalability (FGS), and/or any other suitable technique to match the layered stream to the required configuration parameters of the other MCUs.
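The rate-matching idea behind the CGS/MGS/FGS step can be caricatured as truncating the layered stream at each destination's bit-rate budget. This is a toy model only: real SVC scalability operates on the coded bitstream itself, and the layer rates below are invented for illustration.

```python
# Toy model of matching a layered stream to destination configurations
# (paragraph [0034]): keep enhancement layers only while the cumulative bit
# rate stays within the destination MCU's required budget.

# (layer id, cumulative kbps of the stream truncated after that layer)
LAYER_CUMULATIVE_KBPS = [("L0", 256), ("L1", 768), ("L2", 2048)]

def match_stream_to_budget(max_kbps):
    """Return the layers whose cumulative rate fits within max_kbps."""
    kept = []
    for layer, cumulative_kbps in LAYER_CUMULATIVE_KBPS:
        if cumulative_kbps > max_kbps:
            break
        kept.append(layer)
    return kept

stream_for_1000k = match_stream_to_budget(1000)  # base plus one enhancement
stream_for_300k = match_stream_to_budget(300)    # base layer only
```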
[0035] The required layers of the video stream may be sent to destination MCU 5100 via stream 5011 and to destination MCU 5300 via stream 5015, as illustrated at steps 6012 and 6014 of FIG. 6. This transmission can be performed directly (e.g., using a unicast mechanism), using a multicast transport layer, using a central network entity (acting as a routing mesh; not shown), etc.
[0036] The destination MCUs may then transcode/decode the received layer(s) (as illustrated at step 6016 of FIG. 6) and distribute the transcoded/decoded video and/or received layer(s) to the participants at the local endpoints (as illustrated at step 6018 of FIG. 6).
[0037] Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is only limited by the claims which follow. Features of the disclosed embodiments can be combined and rearranged in various ways.

Claims

What is claimed is:
1. A system for providing cascaded multi-point conference units, comprising:
at least one encoder that encodes a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and
at least one interface that distributes a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
2. The system of claim 1, wherein the parameters include at least one of bit rate, frame rate, and resolution.
3. The system of claim 1, wherein the at least one interface is part of a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
4. The system of claim 1, wherein the at least one encoder is in a third, slave MCU, the at least one interface is in a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
5. The system of claim 1, wherein the at least one encoder is in a third MCU, and the first MCU, the second MCU, and the third MCU are arranged in a full mesh configuration.
6. The system of claim 1, further comprising a transcoder and/or decoder that transcodes and/or decodes at least one of the representations into a resulting signal.
7. The system of claim 6, further comprising an interface that sends the resulting signal to an endpoint.
8. The system of claim 1, wherein the scalable video protocol is implemented using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard.
9. The system of claim 1, wherein the representations are layers in a layered video stream.
10. A method for providing cascaded multi-point conference units, comprising:
encoding a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and
distributing a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
11. The method of claim 10, wherein the parameters include at least one of bit rate, frame rate, and resolution.
12. The method of claim 10, wherein the distributing is performed from a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
13. The method of claim 10, wherein the encoding is performed in a third, slave MCU, the distributing is performed in a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
14. The method of claim 10, wherein the encoding is performed in a third MCU, and the first MCU, the second MCU, and the third MCU are arranged in a full mesh configuration.
15. The method of claim 10, further comprising transcoding and/or decoding at least one of the representations into a resulting signal.
16. The method of claim 15, further comprising sending the resulting signal to an endpoint.
17. The method of claim 10, wherein the scalable video protocol is implemented using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard.
18. The method of claim 10, wherein the representations are layers in a layered video stream.
19. A computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for providing cascaded multi-point conference units, the method comprising:
encoding a video signal into representations using a scalable video protocol based on required configurations of parameters for a first multi-point conferencing unit (MCU) and a second MCU; and
distributing a first one of the representations to the first MCU and a second one of the representations to the second MCU without distributing the first one of the representations to the second MCU.
20. The medium of claim 19, wherein the parameters include at least one of bit rate, frame rate, and resolution.
21. The medium of claim 19, wherein the distributing is performed from a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
22. The medium of claim 19, wherein the encoding is performed in a third, slave MCU, the distributing is performed in a master MCU, the first MCU is a slave MCU, and the second MCU is a slave MCU.
23. The medium of claim 19, wherein the encoding is performed in a third MCU, and the first MCU, the second MCU, and the third MCU are arranged in a full mesh configuration.
24. The medium of claim 19, wherein the method further comprises transcoding and/or decoding at least one of the representations into a resulting signal.
25. The medium of claim 24, wherein the method further comprises sending the resulting signal to an endpoint.
26. The medium of claim 19, wherein the scalable video protocol is implemented using the Scalable Video Coding (SVC) protocol defined by the Scalable Video Coding Extension of the H.264/AVC Standard.
27. The medium of claim 19, wherein the representations are layers in a layered video stream.
PCT/IB2009/006906 2008-06-23 2009-06-22 Systems,methods, and media for providing cascaded multi-point video conferencing units WO2009156867A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2011515672A JP5809052B2 (en) 2008-06-23 2009-06-22 System, method and medium for providing a cascaded multipoint video conference device
CN200980131651.1A CN102204244B (en) 2008-06-23 2009-06-22 Systems, methods, and media for providing cascaded multi-point video conferencing units
EP09769678.5A EP2304941B1 (en) 2008-06-23 2009-06-22 Systems, methods, and media for providing cascaded multi-point video conferencing units

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/144,471 2008-06-23
US12/144,471 US8319820B2 (en) 2008-06-23 2008-06-23 Systems, methods, and media for providing cascaded multi-point video conferencing units

Publications (2)

Publication Number Publication Date
WO2009156867A2 true WO2009156867A2 (en) 2009-12-30
WO2009156867A3 WO2009156867A3 (en) 2010-04-22

Family

ID=41430805

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/006906 WO2009156867A2 (en) 2008-06-23 2009-06-22 Systems,methods, and media for providing cascaded multi-point video conferencing units

Country Status (5)

Country Link
US (1) US8319820B2 (en)
EP (1) EP2304941B1 (en)
JP (1) JP5809052B2 (en)
CN (1) CN102204244B (en)
WO (1) WO2009156867A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101895718A (en) * 2010-07-21 2010-11-24 杭州华三通信技术有限公司 Video conference system multi-image broadcast method, and device and system thereof
US10455196B2 (en) 2012-07-30 2019-10-22 Polycom, Inc. Method and system for conducting video conferences of diverse participating devices

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100149301A1 (en) * 2008-12-15 2010-06-17 Microsoft Corporation Video Conferencing Subscription Using Multiple Bit Rate Streams
US8380790B2 (en) * 2008-12-15 2013-02-19 Microsoft Corporation Video conference rate matching
GB0905184D0 (en) * 2009-03-26 2009-05-06 Univ Bristol Encryption scheme
CN101820418B (en) * 2010-03-19 2012-10-24 博康智能网络科技股份有限公司 Universal security equipment control method for extensible protocol and system
US8947492B2 (en) 2010-06-18 2015-02-03 Microsoft Corporation Combining multiple bit rate and scalable video coding
CN102790872B (en) * 2011-05-20 2016-11-16 南京中兴软件有限责任公司 A kind of realization method and system of video conference
US8866873B2 (en) * 2011-10-08 2014-10-21 Mitel Networks Corporation System for distributing video conference resources among connected parties and methods thereof
CN102611562B (en) 2012-02-06 2015-06-03 华为技术有限公司 Method and device for establishing multi-cascade channel
CN103428483B (en) * 2012-05-16 2017-10-17 华为技术有限公司 A kind of media data processing method and equipment
CN102847727B (en) * 2012-08-27 2015-06-17 天津天重中直科技工程有限公司 Pressure detection device of four-high mill
NO336150B1 (en) * 2012-12-19 2015-05-26 Videxio As Procedure and unit for optimizing large-scale video conferencing
US20150365244A1 (en) * 2013-02-22 2015-12-17 Unify Gmbh & Co. Kg Method for controlling data streams of a virtual session with multiple participants, collaboration server, computer program, computer program product, and digital storage medium
NO341411B1 (en) 2013-03-04 2017-10-30 Cisco Tech Inc Virtual endpoints in video conferencing
US10187433B2 (en) 2013-03-15 2019-01-22 Swyme Ip Bv Methods and systems for dynamic adjustment of session parameters for effective video collaboration among heterogenous devices
US8982177B2 (en) * 2013-07-08 2015-03-17 Avaya Inc. System and method for whiteboard collaboration
JP2015192230A (en) * 2014-03-27 2015-11-02 沖電気工業株式会社 Conference system, conference server, conference method, and conference program
US9338401B2 (en) * 2014-03-31 2016-05-10 Polycom, Inc. System and method for a hybrid topology media conferencing system
CN105704424A (en) * 2014-11-27 2016-06-22 中兴通讯股份有限公司 Multi-image processing method, multi-point control unit, and video system
US9911193B2 (en) 2015-11-18 2018-03-06 Avaya Inc. Semi-background replacement based on rough segmentation
US9948893B2 (en) 2015-11-18 2018-04-17 Avaya Inc. Background replacement based on attribute of remote user or endpoint
CN107317995A (en) * 2016-04-26 2017-11-03 中兴通讯股份有限公司 A kind of MCU cascade structures and its control method and control system
JP7113294B2 (en) * 2016-09-01 2022-08-05 パナソニックIpマネジメント株式会社 Multi-view imaging system
US9942517B1 (en) 2016-10-04 2018-04-10 Avaya Inc. Multi-mode video conferencing system
US10165159B2 (en) 2016-10-04 2018-12-25 Avaya Inc. System and method for enhancing video conferencing experience via a moving camera
JP6931815B2 (en) * 2018-02-27 2021-09-08 パナソニックIpマネジメント株式会社 Video conferencing equipment
US11431764B2 (en) * 2020-03-13 2022-08-30 Charter Communications Operating, Llc Combinable conference rooms

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2615459A1 (en) 2005-07-20 2007-01-20 Vidyo, Inc. System and method for a conference server architecture for low delay and distributed conferencing applications
EP1830568A2 (en) 2006-03-01 2007-09-05 Polycom, Inc. Method and system for providing continuous presence video in a cascading conference
WO2007115133A2 (en) 2006-03-29 2007-10-11 Vidyo, Inc. System and method for transcoding between scalable and non-scalable video codecs
WO2008042852A2 (en) 2006-09-29 2008-04-10 Vidyo, Inc. System and method for multipoint conferencing with scalable video coding servers and multicast

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0888842A (en) * 1994-09-19 1996-04-02 Oki Electric Ind Co Ltd Picture transmission system
US6122259A (en) * 1996-02-27 2000-09-19 Hitachi, Ltd. Video conference equipment and multipoint video conference system using the same
SE9703849L (en) * 1997-03-14 1998-09-15 Ericsson Telefon Ab L M Scaling down images
JP4244394B2 (en) * 1998-02-17 2009-03-25 富士ゼロックス株式会社 Multipoint conference system
US20020126201A1 (en) * 2001-03-08 2002-09-12 Star-Bak Communication Inc. Systems and methods for connecting video conferencing to a distributed network
GB2384932B (en) * 2002-01-30 2004-02-25 Motorola Inc Video conferencing system and method of operation
US7984174B2 (en) 2002-11-11 2011-07-19 Supracomm, Tm Inc. Multicast videoconferencing
JP4329358B2 (en) * 2003-02-24 2009-09-09 富士通株式会社 Stream delivery method and stream delivery system
US20050008240A1 (en) * 2003-05-02 2005-01-13 Ashish Banerji Stitching of video for continuous presence multipoint video conferencing
US7321384B1 (en) * 2003-06-03 2008-01-22 Cisco Technology, Inc. Method and apparatus for using far end camera control (FECC) messages to implement participant and layout selection in a multipoint videoconference
NO318974B1 (en) * 2003-07-07 2005-05-30 Tandberg Telecom As Distributed MCU
NO318911B1 (en) 2003-11-14 2005-05-23 Tandberg Telecom As Distributed composition of real-time media
NO320115B1 (en) * 2004-02-13 2005-10-24 Tandberg Telecom As Device and method for generating CP images.
US8614732B2 (en) * 2005-08-24 2013-12-24 Cisco Technology, Inc. System and method for performing distributed multipoint video conferencing
JP2009508454A (en) * 2005-09-07 2009-02-26 ヴィドヨ,インコーポレーテッド Scalable low-latency video conferencing system and method using scalable video coding
JP2009518981A (en) 2005-12-08 2009-05-07 ヴィドヨ,インコーポレーテッド System and method for error resilience and random access in video communication systems
CA2633366C (en) 2005-12-22 2015-04-28 Vidyo, Inc. System and method for videoconferencing using scalable video coding and compositing scalable video conferencing servers
US8436889B2 (en) * 2005-12-22 2013-05-07 Vidyo, Inc. System and method for videoconferencing using scalable video coding and compositing scalable video conferencing servers
EP1997236A4 (en) 2006-03-03 2011-05-04 Vidyo Inc System and method for providing error resilience, random access and rate control in scalable video communications
JP2008085677A (en) * 2006-09-27 2008-04-10 Toshiba Corp Information control device, information synthesizer and program
JP4256421B2 (en) * 2006-11-21 2009-04-22 株式会社東芝 Video composition apparatus, video composition method, and video composition processing program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2304941A4

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101895718A (en) * 2010-07-21 2010-11-24 杭州华三通信技术有限公司 Video conference system multi-image broadcast method, and device and system thereof
CN101895718B (en) * 2010-07-21 2013-10-23 杭州华三通信技术有限公司 Video conference system multi-image broadcast method, and device and system thereof
US10455196B2 (en) 2012-07-30 2019-10-22 Polycom, Inc. Method and system for conducting video conferences of diverse participating devices
US11006075B2 (en) 2012-07-30 2021-05-11 Polycom, Inc. Method and system for conducting video conferences of diverse participating devices
US11503250B2 (en) 2012-07-30 2022-11-15 Polycom, Inc. Method and system for conducting video conferences of diverse participating devices

Also Published As

Publication number Publication date
JP2011525770A (en) 2011-09-22
EP2304941A2 (en) 2011-04-06
EP2304941B1 (en) 2019-05-08
CN102204244A (en) 2011-09-28
CN102204244B (en) 2014-06-18
JP5809052B2 (en) 2015-11-10
EP2304941A4 (en) 2014-09-10
US8319820B2 (en) 2012-11-27
WO2009156867A3 (en) 2010-04-22
US20090315975A1 (en) 2009-12-24

Similar Documents

Publication Publication Date Title
US8319820B2 (en) Systems, methods, and media for providing cascaded multi-point video conferencing units
EP1683356B1 (en) Distributed real-time media composer
US9215416B2 (en) Method and system for switching between video streams in a continuous presence conference
US8442120B2 (en) System and method for thinning of scalable video coding bit-streams
JP4921488B2 (en) System and method for conducting videoconference using scalable video coding and combining scalable videoconference server
CN101427573B (en) System and method for thinning of scalable video coding bit-streams
EP2324640B1 (en) Systems, methods, and media for providing selectable video using scalable video coding
US9596433B2 (en) System and method for a hybrid topology media conferencing system
EP1997236A2 (en) System and method for providing error resilience, random access and rate control in scalable video communications
EP2965508B1 (en) Video conference virtual endpoints
JP2015097416A (en) System and method for providing error tolerance, random access and rate control in scalable video communication
US8934530B2 (en) Spatial scalability using redundant pictures and slice groups
JP2005535219A (en) Method and apparatus for performing multiple description motion compensation using hybrid prediction code
JP2013042492A (en) Method and system for switching video streams in resident display type video conference
Bailleul et al. Branched inter layer prediction structure for reduced bandwidth distribution of SHVC streams

Legal Events

WWE: WIPO information, entry into national phase. Ref document number: 200980131651.1; country of ref document: CN.
121: The EPO has been informed by WIPO that EP was designated in this application. Ref document number: 09769678; country of ref document: EP; kind code of ref document: A2.
ENP: Entry into the national phase. Ref document number: 2011515672; country of ref document: JP; kind code of ref document: A.
WWE: WIPO information, entry into national phase. Ref document number: 2009769678; country of ref document: EP.
NENP: Non-entry into the national phase. Ref country code: DE.