WO2011134228A1 - Multipoint control unit of a video conference system and video processing method thereof - Google Patents

Multipoint control unit of a video conference system and video processing method thereof

Info

Publication number
WO2011134228A1
WO2011134228A1 (PCT/CN2010/077080)
Authority
WO
WIPO (PCT)
Prior art keywords
universal port
terminal
data
ethernet
control unit
Prior art date
Application number
PCT/CN2010/077080
Other languages
English (en)
French (fr)
Inventor
高恩克
陈涛
李文
刘克华
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Publication of WO2011134228A1

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/152Multipoint control units therefor

Definitions

  • A conference television system is a system that enables real-time interaction and transmission of video and sound among multiple points. It is usually composed of a background conference management 105, a multipoint control unit (MCU) 104, and terminals 101 at the various points.
  • The background conference management 105 implements the conference management functions and can operate over the Internet; the MCU 104 performs media processing and other functions according to the conference requirements. A terminal 101 here is the equipment of one party participating in the conference: a collection of devices including a camera, video reception and display, microphones, and loudspeakers.
  • The system works as follows: each terminal sends its video over the transmission network 103 (including Ethernet, E1, and other networks) to the MCU 104, which decodes, exchanges, composites, and encodes the participating terminals' video according to the conference requirements from the background conference management 105, then sends the results back to each terminal, thereby implementing the conference-television function.
  • The unit in the MCU 104 that implements the video processing function is defined as the VPU (Video Processing Unit).
  • The MCU is the core of conference television, and the VPU is the core of the MCU: it determines the number of terminals the system can handle, the picture forms (such as 6-picture, 12-picture, or 16-picture) and picture quality (such as images at different resolutions like 1080p, 720p, D1, and CIF), that is, the capacity and performance of the conference television system.
  • Universal Port (UP) is a general term in the conference TV industry. It does not refer to a port or interface in the usual sense; it means that the MCU in a conference TV system can support terminals of various types (such as HD and SD terminals, or terminals with different processing capabilities, bit rates, and formats), that conferences can be held with any combination of terminal types, and that each terminal's own requirements on video/audio format, bit rate, and multi-picture form and content can be satisfied.
  • A conference TV system with such performance is as if it had "universal ports" supporting all kinds of terminals: any terminal can be connected and a conference can be held.
  • For example, suppose a conference system has 40 terminals, X1 to X40, where X1 to X20 are high-definition terminals with 8 Mb/s streams and X21 to X40 are standard-definition terminals with 2 Mb/s streams.
  • A conference is to be held with X1 to X10 and X21 to X30 participating. X1 requires HD (1080p resolution) 16-picture video (for example, from X2 to X10 and X21 to X27).
  • X21 only needs 720p resolution 4-picture video (for example, from X1, X2, X6, and X25).
  • The requirements of the other terminals also all differ. Therefore, to realize this conference, the MCU must have "universality" and meet the requirements of all participating terminals.
  • These terminal requirements are application-level requirements, not general ports or interfaces in the physical-layer sense (such as Ethernet ports or E1 interfaces).
  • Requirements at the terminal application level usually concern the processing of media streams, such as the aforementioned video/audio formats, bit rates, and various forms of multi-picture and multi-picture content.
  • The universal port here therefore means implementing the "universal port" at the media-processing level, enabling the processing and support of media streams from different types of terminals.
  • Media-stream processing centers on video processing, and video processing is mainly done in the VPU. Therefore, once the VPU implements a universal port, the video conference has implemented a universal port.
  • Implementing the universal port in the VPU means unitizing the VPU: each unit can correspond to one terminal and is responsible for fulfilling all of that terminal's requirements.
  • Such a unit is like a media-stream port of the VPU, and we define it as a Universal Port (UP).
  • The UPs may differ internally and in capability.
  • The Media Controller (MC) inside the MCU is responsible for coordinating the connections between the UPs and the terminals.
  • The universal port has become a trend in conference TV.
  • Rapid IO is a high-speed serial interface, currently up to 3.125 Gbit/s; some chips can support 4X mode (hardware-settable only, not changeable in software), that is, 4 parallel lanes reaching 12.5 Gbit/s.
  • Rapid IO supports connections within a board, through the backplane, and over custom copper coaxial cables of limited length. As shown in Figure 2, this scheme differs from the basic system above in that the baseband data between UPs is carried by Rapid IO. It works as follows: 1) In a conference, the MC assigns UP1 to X1 according to the capability and idle state of each UP. 2) UP1 and X1 establish a connection over Gigabit Ethernet, and UP1 receives X1's stream.
  • 3) UP1 decodes X1's stream and passes it through Rapid IO1 to the Rapid IO switch as material, i.e., baseband data, for the multi-picture composition of other UPs; at the same time, UP1 receives through Rapid IO1, from the Rapid IO switch, the baseband data output by the other UPs, which is used for its own multi-picture composition.
  • 4) UP1 encodes the composited multi-picture and passes it to X1 over Gigabit Ethernet, fulfilling X1's requirements. Thanks to the high bandwidth of Rapid IO, the exchange of baseband data between the UPs can be satisfied.
  • The Rapid IO interface also has dedicated switching chips and can form a network. However, this scheme has the following defects: 1) Chips with Rapid IO interfaces are relatively expensive and not widespread; the narrow selection limits the possible implementations.
  • 2) Although the Rapid IO interface (an intra-board technology) supports transmission over custom copper coaxial cables, the length is limited, generally to within a few meters, which restricts its application space.
  • 3) Rapid IO places high demands on board layout and routing, the resulting network is relatively unstable, and because Rapid IO is mainly used for intra-board interconnection, forming complex networks between boards carries high technical risk. In summary, the problems with the above scheme are the high cost of networking with Rapid IO, the narrow chip selection, the limited application range, and the high technical risk of the resulting network, all of which are unfavorable to the expansion and smooth upgrading of video conferencing systems.
  • a primary object of the present invention is to provide a multipoint control unit of a video conference system and a video processing method thereof to solve at least the above problems.
  • According to one aspect of the invention, a multipoint control unit of a video conference system is provided, including: a video processing unit comprising a plurality of universal port units, the universal port units being connected through Ethernet interfaces to a plurality of terminals of the video conference system, each universal port unit being configured to decompress the compressed video data from one terminal in the video conference system and send the decompressed data to an Ethernet switch;
  • and the Ethernet switch, which is connected to the universal port units through Ethernet interfaces to form an internal Ethernet and is configured to receive the decompressed data sent by the universal port units.
  • According to another aspect, a video processing method of a multipoint control unit of a video conference system is provided, including: a first universal port unit receiving, through an Ethernet interface, compressed video data from a terminal of the video conference system;
  • and the first universal port unit decompressing the compressed video data and sending the decompressed data to the Ethernet switch through the Ethernet interface.
  • Through the invention, based on Ethernet technology, each UP is connected to an Ethernet switch through an Ethernet interface, which is used for transmitting the baseband data (i.e., the decompressed data) between the UPs. This implements the universal port function in the media processing of the video conference multipoint control unit, and networking based on Ethernet technology offers better stability, lower cost, easy expansion, and strong compatibility.
  • FIG. 1 is a schematic diagram of a typical conference television system according to the related art
  • FIG. 2 is a schematic diagram of a conference television system implementing a Universal Port function based on Rapid IO according to the related art
  • FIG. 3 is a schematic diagram of a video conference system according to an embodiment of the present invention;
  • FIG. 4 is a schematic diagram of a video conferencing system implementing a Universal Port function based on Ethernet according to a preferred embodiment of the present invention
  • FIG. 5 is an internal block diagram of an Ethernet-based universal port unit according to a first preferred embodiment of the present invention.
  • Figure 6 is a block diagram showing an implementation of a universal port unit according to a second preferred embodiment of the present invention;
  • Figure 7 is a 6-picture layout diagram according to a second preferred embodiment of the present invention; and
  • Figure 8 is a flowchart of a video processing method of a multipoint control unit of a video conference system according to an embodiment of the present invention.
  • As shown in FIG. 3, the multipoint control unit (MCU) of the video conference system includes: a video processing unit (VPU) 20 comprising a plurality of universal port units (UP) 202, the universal port units 202 being connected to a plurality of terminals 50 of the video conferencing system via Ethernet interfaces, each of the universal port units 202 being configured to decompress the compressed video data from one terminal in the video conferencing system (obtaining decompressed data) and send the decompressed data to the Ethernet switch 204;
  • and the Ethernet switch 204, which is connected to the universal port units 202 through Ethernet interfaces to form an internal Ethernet, for receiving the decompressed data transmitted by the universal port units 202.
  • In the related art shown in FIG. 2, because each UP uses a Rapid IO interface to connect to a Rapid IO switch to form a network, there are problems of high cost, narrow chip selection, limited application range, and high technical risk in forming the network.
  • In this embodiment, based on inexpensive and mature Ethernet technology, each UP is connected to an Ethernet switch through an Ethernet interface to form an Ethernet network used to transmit the baseband data (that is, the above decompressed data) between the UPs, thereby implementing
  • the universal port function in the media processing of the video conference multipoint control unit, with better networking stability and lower cost thanks to Ethernet technology.
  • Moreover, since Ethernet transmission cables (ordinary network cables may be used) are not limited in length, networking based on Ethernet interfaces is widely applicable. In addition, because the UPs are networked over Ethernet, which is mature and inexpensive, more UPs can easily be added through the Ethernet switch to expand capacity, in theory without limit on the number of UPs or on space. UPs with different internal implementations and processing capabilities can form one Ethernet network: a less capable UP can handle low-demand terminals, while a more capable UP can handle both high-demand and low-demand terminals, so compatibility is strong, and during upgrades weaker UPs need not all be replaced or retired at once, allowing a smooth transition.
  • Preferably, each of the universal port units 202 is further configured to receive, through the Ethernet switch 204, decompressed data from one or more of the other universal port units, to process that data according to the requirements of its corresponding terminal, and to send the processed data to the corresponding terminal through the external Ethernet.
  • Preferably, the MCU further includes: a media control unit (MC) 30, configured to notify a first universal port unit among the universal port units (such as UP1 in FIG. 3) to receive the compressed video data of a first terminal in the video conference system (such as the terminal at X1 in FIG. 3), according to the first terminal's request message (which may include personalized requirements such as the picture format, stream rate, content, and form required by the first terminal).
  • The Media Controller (MC) inside the MCU controls and coordinates the work of all UPs.
  • After receiving the request message from the first terminal, the MC selects a universal port unit (such as the first universal port unit) to correspond to the first terminal, according to the current usage and processing capability of all universal port units, and notifies UP1 to receive the compressed video data that the first terminal sends to the MCU.
  • In this way, after receiving a request message from any terminal in the video conference system, the MC can select an idle UP from the VPU's UPs, as the UP allocated to that terminal, to receive the compressed video data it sends.
  • FIG. 4 is a schematic diagram of a conference television system implementing the Universal Port function based on Ethernet in accordance with a preferred embodiment of the present invention.
  • Preferably, as shown in FIGS. 4 and 5, the first universal port unit 202 includes: an external Ethernet interface A, connected to the first terminal through the external Ethernet and configured to receive, according to the notification instruction of the media control unit 30, the compressed video data sent by the first terminal; a decoding module 2021, configured to decompress the compressed video data received by external Ethernet interface A to obtain decompressed data; a first media processing module 2022, configured to scale the decompressed data to obtain images of predetermined sizes; a compression module 2023, configured to compress the images of predetermined sizes to obtain compressed data; and an internal Ethernet interface B, connected to the Ethernet switch 204 and configured to send the compressed data to the Ethernet switch 204.
  • This preferred embodiment gives a concrete implementation of the universal port unit; clearly, each of the universal port units in the VPU can include the above modules to enable the processing of compressed video data from a given terminal (including decompression, scaling, and compression).
  • The above modules work as follows: 1) According to the situation reported by the first terminal (at X1), the MC notifies external Ethernet interface A of UP1 to receive the compressed video data the first terminal sends to the MCU, and the decoding module 2021 of UP1 decodes it into an original image or an image in some intermediate format (i.e., the decompressed data described above); 2) the first media processing module 2022 in UP1 scales the decoded image to form large, medium, and small pictures (i.e., the images of the predetermined sizes mentioned above); 3) the compression module 2023 of UP1 compresses the large, medium, and small pictures with a specific algorithm; 4) UP1 sends the compressed data through internal network interface B, by unicast or multicast, to the Ethernet switch 204, as composition material for the multi-pictures of other UPx.
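The four send-side steps above (decode, scale, shallow-compress, hand to the switch) can be sketched as a toy pipeline. This is a minimal illustration only: the function names are invented here, and zlib stands in for the terminal codec and the "shallow compression" algorithm, neither of which is specified in code form by the text.

```python
# Toy model of the send-side UP pipeline (steps 1-4 above). zlib is a
# stand-in for the real codecs; all names are illustrative assumptions.
import zlib

def decode(compressed_stream: bytes) -> bytes:
    """Stand-in for decoding module 2021: recover baseband data."""
    return zlib.decompress(compressed_stream)

def scale(baseband: bytes) -> dict:
    """Stand-in for media processing module 2022: large/medium/small pictures."""
    return {"large": baseband,
            "medium": baseband[: len(baseband) // 4],
            "small": baseband[: len(baseband) // 16]}

def shallow_compress(pictures: dict) -> dict:
    """Stand-in for compression module 2023 (e.g. MJPEG in the text)."""
    return {size: zlib.compress(data) for size, data in pictures.items()}

# UP1 receives terminal X1's stream and processes it; the result would then
# go out of internal interface B to the Ethernet switch.
incoming = zlib.compress(b"\x80" * 4096)   # the terminal's compressed video
outgoing = shallow_compress(scale(decode(incoming)))
assert set(outgoing) == {"large", "medium", "small"}
```

The point of the structure is that every UP exposes the same three-stage send side regardless of its internal hardware, which is what makes the port "universal".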
  • Preferably, the Ethernet switch 204 is further configured to send compressed data from second universal port units among the plurality of universal port units (there may be several, for example the six units UP2 to UP7) to the first universal port unit (UP1); the internal Ethernet interface B of the first universal port unit is further configured to receive, according to the instruction of the media control unit 30, the compressed data from the Ethernet switch 204 (i.e., the data obtained after the second universal port units decompress, scale, and compress the compressed video data from the second terminals, for example the six terminals at X3 to X8).
  • The first universal port unit further includes: a decompression module 2024, configured to decompress the compressed data received by internal Ethernet interface B to obtain the images of predetermined sizes of the second terminals corresponding to the second universal port units (when there are N second universal port units, there are clearly also N second terminals);
  • a second media processing module 2025, configured to composite the images of predetermined sizes of the second terminals corresponding to the second universal port units to obtain a composite image in the picture form required by the first terminal; and an encoding module 2026, configured to compression-encode the composite image to obtain a compressed video code stream corresponding to the type of the first terminal and send it to the first terminal through external Ethernet interface A.
  • To implement the universal port function, each UP needs to exchange image data with the other UPs. For example, if a high-definition terminal requests to view a 16-picture high-definition image in a conference, the UP corresponding to that terminal needs to obtain, through the Ethernet switch, the small pictures of 16 other terminals from 16 other UPs.
  • In this way, a terminal can be provided with a code stream that meets its type requirements, and the user at the terminal can view any one or more videos of its own site or other sites.
  • For example: 1) According to the instruction of the MC, UP1 receives from internal Ethernet interface B the compressed pictures from other UPx (i.e., the compressed data described above), for example the small-picture compressed code streams of n other UPx (n being the number of sub-pictures).
  • 2) The decompression module 2024 of UP1 decodes the received small-picture compressed code streams into original images (i.e., the images of the predetermined sizes described above) and sends them to the second media processing module 2025, for example the decoded small pictures of the n UPx.
  • 3) The second media processing module 2025 composites and processes the related images to form the original image data required by the first terminal (a baseband image, i.e., the composite image described above), and sends it to the encoding module 2026.
  • 4) The encoding module 2026 of UP1 compression-encodes the baseband image into the image format and rate code stream required by the first terminal (i.e., the compressed video code stream corresponding to the type of the first terminal) and sends it to the first terminal for playback.
  • Preferably, the compression module 2023 and the decompression module 2024 may compress and decompress the received data using a shallow-compression algorithm (e.g., MJPEG or a low-complexity H.264 algorithm).
  • Transmitting baseband video streams, especially high-definition baseband video streams, over a GE (Gigabit Ethernet) interface is very difficult:
  • the original-image baseband data of one channel of 1080p30 (4:2:0) reaches 94 MByte/s.
  • Considering that many UPs are interconnected through Ethernet switches, and in order to avoid collisions and guarantee efficiency, the throughput of each GE port should also not be too large. For this reason, transmitting baseband data between the UPs requires a low-latency
  • image compression algorithm (e.g., with a total codec delay finally below 10 ms) for internal image exchange, which may be called a "shallow compression algorithm". If a DSP is used to implement this algorithm, then in order to meet the latency requirement the algorithm must have low complexity (e.g., about 30% of the complexity of the H.264 BP algorithm), with relaxed requirements on bandwidth, bit-rate fluctuation, and even image quality. For example, the MJPEG (Motion JPEG, i.e., Motion Joint Photographic Experts Group) algorithm can be used (MJPEG is a moving-image compression technique developed on the basis of the JPEG algorithm), or another low-complexity algorithm can be used to implement the above "shallow compression algorithm".
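The 94 MByte/s figure can be checked with a back-of-the-envelope calculation: 4:2:0 video carries 1.5 bytes per pixel. One assumption made here, which reproduces the quoted figure exactly, is that the height is rounded up to a multiple of 16 (macroblock alignment, 1080 to 1088); without it the result is about 93.3 MByte/s.

```python
# Baseband data rate for 4:2:0 video: width x height x 1.5 bytes x fps.
# Height is rounded up to a multiple of 16 (macroblock alignment), an
# assumption that reproduces the 94 MByte/s figure quoted for 1080p30.
def baseband_rate_mbyte_s(width, height, fps, bytes_per_pixel=1.5):
    aligned_h = (height + 15) // 16 * 16   # macroblock-aligned height
    return width * aligned_h * bytes_per_pixel * fps / 1e6

for name, (w, h) in {"1080p30": (1920, 1080), "720p30": (1280, 720),
                     "D1 30fps": (720, 480), "CIF 30fps": (352, 288)}.items():
    print(f"{name}: {baseband_rate_mbyte_s(w, h, 30):.1f} MByte/s")
# 1080p30 comes out at 94.0 MByte/s, roughly three quarters of a Gigabit
# Ethernet link on its own, which is why shallow compression is needed
# before baseband pictures are exchanged between UPs.
```

Even a modest shallow-compression ratio (for example 10:1) brings a 1080p30 stream down to under 10 MByte/s, leaving room on a GE port for several sub-picture streams at once.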
  • Preferably, the first universal port unit is further configured to receive the compressed data corresponding to the second terminals from the second universal port units by joining a multicast group.
  • In this way, multiple universal port units in the VPU join a multicast group, and any one of them can choose to receive the compressed data of one or more of the other universal port units belonging to the same multicast group. With multicast, data transmission is more efficient.
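The subscription pattern described above can be modeled in a few lines. This is a deliberately simplified in-memory model of group membership only; a real UP would use IP multicast on the internal Ethernet, and the class and method names here are invented for illustration.

```python
# In-memory model of the multicast arrangement: every UP publishes its
# shallow-compressed sub-pictures to a group, and any member UP can
# subscribe to the streams of one or more other members.
from collections import defaultdict

class MulticastGroup:
    def __init__(self):
        self.subscriptions = defaultdict(set)   # source UP -> subscriber UPs
        self.inbox = defaultdict(list)          # subscriber UP -> frames

    def subscribe(self, subscriber: str, source: str) -> None:
        self.subscriptions[source].add(subscriber)

    def publish(self, source: str, frame: bytes) -> None:
        # One send reaches every current subscriber, which is the
        # efficiency gain of multicast over per-receiver unicast.
        for subscriber in self.subscriptions[source]:
            self.inbox[subscriber].append((source, frame))

group = MulticastGroup()
# UP1 needs the small pictures of UP2..UP7 to build a six-picture view.
for source in ["UP2", "UP3", "UP4", "UP5", "UP6", "UP7"]:
    group.subscribe("UP1", source)
    group.publish(source, b"small-picture")
assert len(group.inbox["UP1"]) == 6
```

The design point is that a source UP sends each sub-picture stream once, regardless of how many other UPs are compositing it into their own multi-pictures.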
  • Preferably, the video processing unit 20 has a layered topology.
  • The VPU of the MCU is composed of a large number of distributed media processing points, and each media processing point may be called a Universal Port (UP).
  • The layered topology may be: several UPs form a media processing board, several media processing boards form a media processing subrack, several subracks form a media processing rack, and several racks form the VPU. In this way the MCU's capacity can be extended, and the expansion of the video conference system is not limited by board, subrack, rack, or even location; in theory it can be expanded without limit.
  • Preferably, the universal port unit 202 can be implemented by processing chips, such as DSP chips.
  • A UP is an independently operating unit. It can be composed of one or several DSPs (SoCs).
  • Externally, one Ethernet port A (the external Ethernet interface A) connects to a video conference terminal;
  • internally, one or more Ethernet ports B (the internal Ethernet interface B) connect to other UPs through the internal Ethernet switch.
  • According to the above scheme, a UP is an independent unit, and UPs under the same architecture may have different internal implementations. Referring to FIG. 4,
  • the (possibly differing) UPs (UP1 to UP8) form a VPU through an internal high-speed Ethernet, and at the same time connect to an external high-speed Ethernet to establish connections with the terminals according to the instructions of the MC.
  • This establishes a distributed media processing architecture based on differently configured UPs. The Ethernet networking of the overall architecture is in fact mature technology; the key to the scheme lies in the implementation of the UPs.
  • UP1 to UP8 may differ according to project requirements.
  • Even UPs that are each composed of multiple processing chips may use different chips internally; according to their interfaces and capabilities, these processing chips perform the corresponding module functions. For example, suppose the MC allocates UP1 in FIG. 4 to process the requests of terminal X1.
  • The internal structure and external interfaces of this UP1 are as shown in FIG. 6. Suppose terminal X1 requires a six-picture view as shown in FIG. 7.
  • This UP1 is composed of three DSPs.
  • Each DSP has a Gigabit Ethernet port GE (serving as internal Ethernet interface B or external Ethernet interface A) and a VP (Video Port).
  • The processing capability of each DSP can differ, and the functional modules they carry differ accordingly.
  • The VP (Video Port) can be used for connections between the DSPs; high-speed interfaces such as Rapid IO, PCIe, or 10GE network ports, or the internal high-speed links of multi-core processing chips, could also be used.
  • DSP1 performs the functions of the decoding module 2021, the first media processing module 2022, and the compression module 2023; DSP2 performs the functions of the decompression module 2024 and the second media processing module 2025; and DSP3 performs the function of the encoding module 2026.
  • The specific working of UP1 is as follows:
  • (1) After receiving the request of terminal X1, the MC notifies UP1 to receive the code stream of terminal X1 and to output to terminal X1 a six-picture view of terminals X2 to X7, in which the picture of terminal X2 is required to be the large picture and the others small pictures;
  • (2) DSP1 of UP1 decodes the code stream of terminal X1 and sends the video data through VP1 to DSP2 for multi-picture composition or loopback display; at the same time it applies shallow compression with the MJPEG algorithm to the decoded video data, producing large, medium, and small picture streams that are unicast or multicast onto the internal high-speed Ethernet as composition material for the multi-pictures of other UPs;
  • (3) DSP2 receives the "shallow compressed" code streams of UP2 to UP7 (that is, the streams shallow-compressed with the MJPEG algorithm as in (2)); the code stream of UP2 is the "shallow compressed" large-picture stream, and the others are small-picture streams;
  • (4) DSP2 decodes the "shallow compressed" code streams and composites the six-picture view shown in FIG. 7 (according to the terminal's requirements, the composited multi-picture may also contain the terminal's own picture, or it may be looped back and displayed separately);
  • (5) the multi-picture data composited by DSP2 is finally sent to DSP3 through VP2; (6) DSP3 encodes the picture according to the bit rate required by the terminal (such as 1 Mb/s, 2 Mb/s, 4 Mb/s, or 8 Mb/s)
  • and sends it to terminal X1 through the GE port connected to the external high-speed Ethernet, fulfilling terminal X1's six-picture request.
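The geometry of such a six-picture composition can be sketched as follows. Since the exact layout of Figure 7 is not reproduced in this text, the sketch assumes the common "1 large + 5 small" arrangement on a 3x3 cell grid, with the large picture (terminal X2) spanning a 2x2 block; the coordinates are illustrative.

```python
# Six-picture layout on a 1080p frame, assuming a 3x3 cell grid where the
# large picture (X2) spans 2x2 cells and five small pictures fill the rest.
# This layout is an assumption; Figure 7's exact arrangement may differ.
W, H = 1920, 1080
cw, ch = W // 3, H // 3                       # one grid cell: 640 x 360

def rect(col, row, span=1):
    """Pixel rectangle (x, y, w, h) for a cell block at (col, row)."""
    return (col * cw, row * ch, span * cw, span * ch)

layout = {
    "X2": rect(0, 0, span=2),                 # large picture, top-left 2x2
    "X3": rect(2, 0), "X4": rect(2, 1),       # right-hand column
    "X5": rect(0, 2), "X6": rect(1, 2), "X7": rect(2, 2),  # bottom row
}

# Sanity check: the six rectangles' areas sum to the whole frame, so DSP2
# fills the composite image completely before DSP3 encodes it.
assert sum(w * h for _, _, w, h in layout.values()) == W * H
```

DSP2's job in step (4) is then to scale-blit each decoded sub-picture into its rectangle of the shared baseband frame before handing the frame to DSP3 over VP2.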
  • The UPs in the above preferred embodiments are interconnected by Ethernet interfaces, can be expanded linearly, and are not limited by board, chassis, or rack, nor by conference; a faulty UP can be quickly isolated and replaced by another idle UP.
  • FIG. 8 is a flowchart of a video processing method of a multipoint control unit of a video conference system according to an embodiment of the present invention. Referring to FIG. 3, the method includes the following steps:
  • Step S802: a first universal port unit (such as UP1) receives, through an Ethernet interface, the compressed video data of a first terminal of the video conferencing system (such as terminal X1).
  • Step S804: the first universal port unit decompresses the compressed video data and sends the decompressed data to the Ethernet switch 204 through the Ethernet interface. Step S806: the Ethernet switch 204 sends the decompressed data through the Ethernet interface to the second universal port units (there may be one or more, such as UP3, UP5, and UP6) for composition processing, and the composited pictures are sent (through the external Ethernet) to the corresponding second terminals (such as terminals Xn, X7, and X10).
  • Preferably, the method further includes: the first universal port unit receiving, through the Ethernet switch 204, the decompressed data of one or more universal port units of the VPU other than itself (such as UP4 to UP8); and the first universal port unit compositing that data according to the requirements of the corresponding first terminal and transmitting the processed composite picture (via the external Ethernet) to the first terminal.
  • Step 1: According to the situation reported by the terminal, the MC notifies external network port A of the UP to receive the compressed video data the terminal sends to the MCU.
  • Step 2: The decoding module 2021 of the UP decodes the image into an original image or an image in an intermediate format. Step 3: The first media processing module 2022 in the UP scales the decoded image to form large, medium, and small pictures.
  • Step 4: The compression module 2023 of the UP compresses the large, medium, and small pictures with a specific algorithm (the MJPEG algorithm).
  • Step 5: The UP unicasts or multicasts the compressed data onto the internal Ethernet through internal Ethernet interface B, as composition material for the multi-pictures of other UPx. Step 6: At the same time, according to the instruction of the MC, the UP receives through internal Ethernet interface B the compressed images from other UPx, for example receiving the small-picture compressed code streams of n other UPx (n being the number of sub-pictures) by joining a multicast group. Step 7: The decompression module 2024 of the UP decodes the received compressed code streams into original images and sends them to the second media processing module 2025.
  • Step 8: The second media processing module 2025 composites and processes the related images to form the original image data required by the terminal, and sends it to the encoding module 2026.
  • Step 9: The encoding module 2026 of the UP compression-encodes the baseband image into the image format and rate code stream required by the terminal, and sends it to the terminal.
  • Through the above embodiments, the Universal Port function of video conferencing can be realized at lower cost, while improving the capacity-expansion and upgrade capability of the MCU.
  • A UP and the other UPs can form a distributed UP-based media processing architecture through the internal high-speed Ethernet, using "shallow compression" algorithms; the internal high-speed network has sufficient bandwidth to transfer baseband data between the UPs with low latency, so high-definition video conferencing is achieved. It will be apparent to those skilled in the art that the various modules or steps of the present invention described above can be implemented with a general-purpose computing device that can be centralized on a single computing device.

Description

Multipoint control unit of a video conference system and video processing method thereof

Technical field: The present invention relates to the field of communications, and in particular to a multipoint control unit of a video conference system and a video processing method thereof.

Background: A conference television system is a system that enables real-time interaction and transmission of video and sound among multiple points. It is usually composed of a background conference management 105, a multipoint control unit (Multipoint Controlling Unit, MCU) 104, and terminals 101 at the various points, as shown in Figure 1. The background conference management 105 implements the conference management functions and can operate over the Internet; the MCU 104 performs media processing and other functions according to the conference requirements. A terminal 101 here is the equipment of one party participating in the conference: a collection of devices including a camera, video reception and display, microphones, and loudspeakers. The system works as follows: each terminal sends its video over the transmission network
103 (including Ethernet, E1, and other networks) to the MCU 104; according to the conference requirements put forward by the background conference management 105, the MCU 104 decodes, exchanges, composites, and encodes the videos of the participating terminals and sends the results back to each terminal, thereby implementing the conference-television function. The unit in the MCU 104 that implements the video processing function is defined as the VPU (Video Processing Unit). The MCU is thus the core of conference television, and the VPU is the core of the MCU: it determines the number of terminals the conference television system can handle, the picture forms (such as 6-picture, 12-picture, or 16-picture) and picture quality (such as images at different resolutions like 1080p, 720p, D1, and CIF), that is, the capacity and performance of the conference television system. "Universal Port" (UP) is a common term in the conference television industry. It does not refer to a port or interface in the usual sense; rather, it means that the MCU in a conference television system can support terminals of various types (such as HD and SD terminals, or terminals with different processing capabilities, bit rates, and formats), that conferences can be held with any combination of terminal types without restriction, and that each terminal's own requirements on video/audio format, bit rate, and various forms of multi-picture and multi-picture content can be satisfied. A conference television system with such performance is as if it had "universal ports" supporting all kinds of terminals: any terminal can be connected and a conference can be held. For example, suppose a conference system has 40 terminals, X1 to X40, where X1 to X20 are HD terminals with 8 Mb/s streams and X21 to X40 are SD terminals with 2 Mb/s streams. A conference is to be held with X1 to X10 and X21 to X30 participating; X1 requires HD (1080p resolution) 16-picture video (for example, from X2 to X10 and X21 to X27), while X21 only needs 720p resolution 4-picture video (for example, from X1, X2, X6, and X25), and the requirements of the other terminals all differ as well. To realize this conference, the MCU must therefore have "universality" and meet the requirements of all participating terminals. These terminal requirements are application-level requirements, not general ports or interfaces in the physical-layer sense (such as Ethernet ports or E1 interfaces). Requirements at the terminal application level usually concern the processing of media streams, such as the video/audio formats, bit rates, and various forms of multi-picture and multi-picture content mentioned above. So the universal port here means implementing the "universal port" at the media-processing level, so that media streams of different terminal types can be processed and supported. In conference television, media-stream processing centers on video processing, which is mainly done in the VPU; therefore, once the VPU implements the universal port, the video conference has implemented the universal port. Implementing the universal port in the VPU means unitizing the VPU: each unit can correspond to one terminal and is responsible for fulfilling all of that terminal's requirements. Such a unit is like a media-stream port of the VPU, and we define it as a Universal Port (UP). The UPs may differ internally and in capability, and the Media Controller (MC) inside the MCU is responsible for coordinating the connections between UPs and terminals. The universal port has already become a trend in conference television; a common method of implementing the VPU universal port is described below: implementing the Universal Port function based on Rapid IO (a high-speed embedded interconnect bus).
Rapid IO is a high-speed serial interface, currently up to 3.125 Gbit/s; some chips can support 4X mode (set in hardware only, not changeable in software), that is, 4 parallel lanes reaching 12.5 Gbit/s. Rapid IO supports connections within a board, through the backplane, and over custom copper coaxial cables of a certain length. As shown in Figure 2, this scheme differs from the basic system above in that Rapid IO carries the baseband data between the UPs. It works as follows: 1) In a conference, the MC assigns UP1 to X1 according to the capability and idle state of each UP. 2) UP1 and X1 establish a connection over Gigabit Ethernet, and UP1 receives X1's code stream. 3) After decoding X1's stream, UP1 passes it through Rapid IO1 to the Rapid IO switch as material for the multi-picture composition of other UPs, i.e., baseband data; at the same time, UP1 receives, through Rapid IO1 from the Rapid IO switch, the baseband data output by the other UPs for its own multi-picture composition. 4) UP1 encodes the composited multi-picture and sends it to X1 over Gigabit Ethernet, fulfilling X1's requirements. Thanks to the high bandwidth of Rapid IO, the exchange of baseband data between the UPs can be satisfied; in addition, the Rapid IO interface has dedicated switching chips and can form a network. However, this scheme has the following defects:
1) Chips with Rapid IO interfaces are relatively expensive, which raises cost, and they are not widespread; the narrow selection limits the possible implementations;
2) Although the Rapid IO interface (an intra-board technology) supports transmission over custom copper coaxial cables, the length is limited, generally to within a few meters, which restricts its application space;
3) Rapid IO places high demands on board layout and routing, the resulting network is relatively unstable, and because Rapid IO is mainly used for intra-board interconnection, forming complex networks between boards carries high technical risk. In summary, the problems with the above scheme are the high cost of networking with Rapid IO, the narrow chip selection, the limited application range, and the high technical risk of the resulting network, all of which are unfavorable to the expansion and smooth upgrading of video conference systems.

Summary of the invention: The main purpose of the present invention is to provide a multipoint control unit of a video conference system and a video processing method thereof, so as to solve at least the above problems. According to one aspect of the present invention, a multipoint control unit of a video conference system is provided, including: a video processing unit comprising a plurality of universal port units, the universal port units being connected through Ethernet interfaces to a plurality of terminals of the video conference system, each universal port unit being used to decompress the compressed video data from one terminal in the video conference system and send the decompressed data to an Ethernet switch; and the Ethernet switch, which is connected to the universal port units through Ethernet interfaces to form an internal Ethernet and is used to receive the decompressed data sent by the universal port units. According to another aspect of the present invention, a video processing method of a multipoint control unit of a video conference system is provided, including: a first universal port unit receiving, through an Ethernet interface, compressed video data from a terminal of the video conference system; and the first universal port unit decompressing the compressed video data and sending the decompressed data to the Ethernet switch through the Ethernet interface. Through the present invention, based on Ethernet technology, the UPs are connected to an Ethernet switch through Ethernet interfaces for transmitting the baseband data (i.e., the decompressed data) between the UPs, thereby implementing the universal port function in the media processing of the video conference multipoint control unit; networking based on Ethernet technology is more stable, cheaper, easy to expand, and highly compatible. Moreover, since the length of Ethernet transmission cables is not restricted, networking based on Ethernet interfaces is widely applicable.

Brief description of the drawings: The drawings described here provide further understanding of the present invention and form part of this application; the exemplary embodiments of the present invention and their descriptions explain the invention and do not unduly limit it. In the drawings: Figure 1 is a schematic diagram of a typical conference television system according to the related art; Figure 2 is a schematic diagram of a conference television system implementing the Universal Port function based on Rapid IO according to the related art; Figure 3 is a schematic diagram of a video conference system according to an embodiment of the present invention; Figure 4 is a schematic diagram of a video conference system implementing the Universal Port function based on Ethernet according to a preferred embodiment of the present invention; Figure 5 is an internal block diagram of an Ethernet-based universal port unit according to a first preferred embodiment of the present invention; Figure 6 is an implementation block diagram of a universal port unit according to a second preferred embodiment of the present invention; Figure 7 is a 6-picture layout diagram of the second preferred embodiment of the present invention; and Figure 8 is a flowchart of a video processing method of a multipoint control unit of a video conference system according to an embodiment of the present invention.

Detailed description: The present invention is described in detail below with reference to the drawings and in conjunction with embodiments. Note that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with one another. Figure 3 is a schematic diagram of a video conference system according to an embodiment of the present invention. The multipoint control unit (MCU) of the video conference system includes: a video processing unit (VPU) 20 comprising a plurality of universal port units (UP) 202, the universal port units 202 being connected through Ethernet interfaces to a plurality of terminals 50 of the video conference system, each universal port unit being used to decompress the compressed video data from one terminal in the video conference system (obtaining decompressed data) and send the decompressed data to the Ethernet switch 204; and the Ethernet switch 204, which is connected to the universal port units 202 through Ethernet interfaces to form an internal Ethernet, for receiving the decompressed data sent by the universal port units 202. In the related art shown in Figure 2, because the UPs use Rapid IO interfaces to connect to a Rapid IO switch to form a network, there are problems of high cost, narrow chip selection, limited application range, and high technical risk in forming the network. The present embodiment, based on inexpensive and mature Ethernet technology, connects the UPs to an Ethernet switch through Ethernet interfaces to form an Ethernet network used to transmit the baseband data (i.e., the above decompressed data) between the UPs, thereby implementing the universal port function in the media processing of the video conference multipoint control unit, with better networking stability and lower cost thanks to Ethernet technology. Moreover, since Ethernet transmission cables (ordinary network cables may be used) are not limited in length, networking based on Ethernet interfaces is widely applicable. In addition, because the UPs in this embodiment are networked over Ethernet, which is mature and inexpensive, more UPs can conveniently be added through the Ethernet switch to expand capacity, in theory without limit on the number of UPs or on space; and the UPs (which may have different internal implementations and different processing capabilities) can form one Ethernet network through the Ethernet switch: a less capable UP can handle low-demand terminals, while a more capable UP can handle both high-demand and low-demand terminals,
因此兼容性较强,且在升级过程中,视情况不必完全更换或淘汰能力差的 UP , 可以实现平滑过渡。 优选地, 多个通用端口单元 202中的各个通用端口单元分别还用于通过 以太网交换机 204接收来自多个通用端口单元中除自己以外的其他通用端口 单元 (可以是一个或多个, 具体才艮据对应的终端的要求或者由媒体控制单元 30通过指令来协调)的解压缩数据, 并按照对应的终端的要求(由媒体控制 单元将对应的终端的要求通过指令发送给各个通用端口单元) 对该解压缩数 据进行处理, 将处理后的数据通过外部以太网发送给该对应的终端。 因此, 以太网交换机 204用于^ 1 VPU 中的多个通用端口单元组成以太网络, 实现 各个通用端口单元之间的数据交换。 优选地, 如图 3所示, 在上述的 MCU中, 还包括: 媒体控制单元(MC ) 30 , 用于根据视频会议系统中的第一终端 (可以为如图 3 中位于 XI处的终 端) 的请求消息 (该请求消息中可以包括第一终端所要求的画面格式、 码流 速率、 内容、 和形式等个性化要求), 通知多个通用端口单元中的第一通用端 口单元 (可以为如图 3 中 UP1 )接收第一终端的压缩视频数据。 MCU 内部 的媒体控制单元(Media Controller, MC )控制和协调所有 UP的工作。 接收 到来自第一终端的请求消息后, MC才艮据当前所有通用端口单元的使用情况 和处理能力, 选择一个通用端口单元 (如第一通用端口单元) 与第一终端相 对应, 并通知 UP1接收第一终端送往 MCU的压缩视频数据。 这样, MC可 以在接收到视频会议系统中任一终端的请求消息后,从 VPU的多个 UP中选 择一个空闲的 UP以作为为该终端分配的 UP来接收其发送的压缩视频数据。 图 4是根据本发明优选实施例的基于以太网实现 Universal Port功能的会 议电视系统的示意图。 图 5是根据本发明第一优选实施例的基于以太网的通 用端口单元的内部^ I图。 优选地, 如图 4和图 5所示, 第一通用端口单元 202包括: 外部以太网 接口 A, 通过外部以太网与第一终端连接, 用于才艮据媒体控制单元 30的通知 指令接收第一终端发送的压缩视频数据; 解码模块 2021 , 用于解压缩外部以 太网接口 A 接收到的压缩视频数据, 得到解压缩数据; 第一媒体处理模块 2022 , 用于对上述解压缩数据进行缩放处理, 得到预定尺寸的图像; 压缩模 块 2023 , 用于对上述预定尺寸的图像进行压缩, 得到压缩数据; 内部以太网 接口 B , 与以太网交换机 204连接, 用于将上述压缩数据发送给以太网交换 机 204。 该优选实施例提供了通用端口单元的具体实施方案, 显然, VPU中的多 个通用端口单元中的每一个均可包括上述模块以实现对来自某一终端的压缩 视频数据的处理(包括解压缩、 缩放、 和压缩处理)。 例如, 上述模块的工作 原理为: 1 ) MC根据第一终端 (位于 XI处)上报的情况, 通知 UP1的外部 以太网接口 A接收第一终端送往 MCU 的压缩视频数据, UP1 的解码模块 2021解码该图像成原始图像或某一中间格式的图像(即上述的解压缩数据 ); 2 ) UP1中的第一媒体处理模块 2022对解码图像 (即上述的解压缩数据)进 行缩放, 形成大、 中、 小画面 (即上述的预定尺寸的图像); 3 ) UP1的压缩 模块 2023对大、 中、 小画面釆用特定算法进行压缩; 4 ) UP1将压缩数据通 过内部网络接口 B以单播或组播方式发送到以太网交换机 204上, 作为其它 UPx多画面的合成素材。 优选地, 以太网交换机 204还用于将来自多个通用端口单元中的第二通 用端口单元 (可以为多个, 如为 UP2至 UP7的 6个通用端口单元) 的压缩 数据发送给第一通用端口单元 ( UP1 ); 第一通用端口单元的内部以太网接口 B还用于根据媒体控制单元 30的指令接收来自以太网交换机 204的压缩数据 (即第二通用端口单元对来自第二终端 (如为处于 X3至 X8处的 6个终端) 的压缩视频数据经解压缩、 缩放、 和压缩处理后的得到的数据); 第一通用端 口单元还包括: 解压缩模块 2024 , 用于对内部以太网接口 B接收的压缩数据 进行解压缩, 得到与第二通用端口单元对应的第二终端 (当第二通用端口单 元为 N个时, 显然第二终端也为 N个)的预定尺寸的图像; 第二媒体处理模 块 2025 ,用于将上述与第二通用端口单元对应的第二终端的预定尺寸的图像 进行合成处理, 得到第一终端所需的画面形式的合成图像; 编码模块 2026 , 用于对上述合成图像进行压缩编码, 得到与第一终端的类型相对应的压缩视 
频码流, 并通过外部以太网接口 A发送给第一终端。 该优选实施例提供了 UP1对其他 UPx发送的压缩数据进行解压缩、 合 成、 和压缩编码处理后, 将符合终端所需图像格式、 速率码流等要求的压缩 视频码流发送给终端进行播放的具体实施方案。 为实现通用端口的功能, 每 一个 UP都需要与其它 UP交互图像数据, 如: 某一个高清终端在会议中要 求看一个 16画面的高清图像,那么与这个终端对应的 UP需要通过以太网交 换机从其它 16个 UP获得 16个其它终端的小的画面图像。 这样, 可以为终 端提供符合其类型要求的码流, 位于终端处的用户可以观看任意一个或多个 自己会场或其他会场的视频。 例如, 1 ) UP1才艮据 MC的指令, 从内部以太 网接口 B接收来自其他 UPx的压缩画面 (即上述的压缩数据), 比如接收其 它 n (多画面数量 ) 个 UPx的小画面压缩码流; 2 ) UP1的解压缩模块 2024 将接收到的小画面压缩码流解码成原始图像(即上述的预定尺寸的图像), 然 后送到第二媒体处理模块 2025 , 比如, 解码后的 n个 UPx的小画面; 3 ) 第 二媒体处理模块 2025 合成和处理相关图像, 形成第一终端需要的原始图像 数据 (为基带图像, 即上述的合成图像), 然后送给编码模块 2026; 4 ) UP1 的编码模块 2026 将基带图像压缩编码成第一终端需要的图像格式和速率码 流(即与第一终端的类型相对应的压缩视频码流),发送给第一终端进行播放。 优选地,压缩模块 2023和解压缩模块 2024可以使用浅压缩(如 MJPEG、 H.264 低复杂度算法等) 算法对接收到的数据进行压缩和解压缩。 由于釆用 GE (以太网)接口传送基带视频流, 尤其是高清基带视频流有很大的困难, 一路 1080 p30(4:2:0)的原始图像基带数据就达到了 94M Byte/s„ 考虑到大量 UP通过以太网交换机互联, 为了避免冲突和保证效率, GE口的吞吐量也不 宜过大。 为此, 在各个 UP之间传递基带数据必须找到一种低延时 (如: 最 终达到编解码总延时要小于 10ms )的图像压缩算法, 用于内部图像交换, 可 以称为 "浅压缩算法,,。 如果釆用 DSP实现这种算法, 为了达到延时要求, 这种算法要求复杂度低(如: H.264 bp算法复杂度的 30% ), 在带宽、 码率 波动、甚至是图像质量方面要求有所放松,比如可以使用 MJPEG( Motion Join Photographic Expert Group )算法( MJPEG是在 JPEG算法基础上发展起来的 动态图像压缩技术), 也可以使用 4氏复杂度算法来实现上述的 "浅压缩 算法"。 不同分辨率图像基带数据流量如下: 1080p30 4:2:0 - 94 MByte/s; 720p30 4:2:0 - 42 MByte/s; Dl 30fps 4:2:0 - 16 MByte/s; CIF 30fps 4:2:0 - 5 MByte/s 优选地, 第一通用端口单元还用于通过加入组播组的方式接收来自第二 通用端口单元的对应于第二终端的压缩数据。 这样, VPU中的多个通用端口 单元加入一个组播组, 其中的任意一个通用端口单元都可以选择接收与其属 于同一组播组的其他通用端口单元中的一个或多个的压缩数据。 釆用组播方 式, 数据传输效率会更高。 优选地, 上述的第二终端可以为多个, 上述的第二通用端口单元也可以 为多个。 这样, 视频会议系统中的任意一个终端可以选择观看其他任意一个 或多个终端的视频。 优选地, 视频处理单元 20为分层拓朴结构。 MCU的 VPU由大量分散 的媒体处理点构成, 每个媒体处理点对外可以称为通用端口单元 ( Universal Port, UP )。 分层的拓朴结构可以是: 若千 UP组成媒体处理板, 若千媒体处 理板组成媒体处理^ I , 若千媒体处理 11组成媒体处理拒, 若千媒体处理拒组 成 VPU。 这样, 可以实现 MCU容量的扩展, 视频会议系统的扩容也将不受 单板、 机框、 机架、 甚至地域的限制, 理论上可以无限扩容。 优选地, 通用端口单元 202可以由处理芯片 (如 DSP芯片) 实现。 UP 是一个独立运作的单元, 内部可由一片或几片 DSP ( SOC )构成, 对外有一 个以太网口 A (即上述外部以太网接口 A ) 与视频会议终端连接, 对内有一 个或多个以太网口 B (即内部以太网接口 B ) 通过内部以太网交换机与其它 UP连接。 按照以上技术方案, UP是个独立的单元, 同一架构下的 UP , 其内部实 现方式可以不尽相同。 参照图 4, 不尽相同的 UP ( UP1-UP8 
)通过内部高速 以太网组成 VPU; 同时连接外部高速以太网, 才艮据 MC的指令和终端建立连 接。 这样就建立了一个基于不同 UP构成的分布式媒体处理架构。 而事实上, 整个架构的以太网网络是成熟的技术, 其实现的关键在于 UP的实现。
Although the UPs may be implemented differently, they all have the internal structure and external interfaces shown in Figure 5. In a concrete implementation, a Universal Port may consist of one high-performance processing chip or of several relatively cheap processing chips; UP1 to UP8 in Figure 4 may thus each differ according to project requirements. Even among UPs built from multiple processing chips, the internal chips may differ; according to their interfaces and capabilities, these chips each implement the corresponding module functions. For example, suppose the MC allocates UP1 in Figure 4 to handle the requests of terminal X1; the internal structure and external interfaces of UP1 are shown in Figure 6. Here, terminal X1 asks for a six-picture layout as shown in Figure 7, in which pictures one to six come from terminals X2 to X7 respectively, and terminals X2 to X7 are assumed to correspond to UP2 to UP7. As shown in Figure 6, UP1 consists of three DSPs, each with a Gigabit Ethernet port GE (serving as internal Ethernet interface B or external Ethernet interface A) and a VP (Video Port); the DSPs may differ in processing capability and accordingly in the function modules they implement. The DSPs are connected through VP (Video Port) interfaces; high-speed interfaces such as Rapid IO, PCIe, or 10-Gigabit Ethernet, or the internal high-speed links of a multi-core processing chip, may also be used. DSP1 implements the functions of the decoding module 2021, the first media processing module 2022, and the compression module 2023; DSP2 implements the functions of the decompression module 2024 and the second media processing module 2025; DSP3 implements the function of the encoding module 2026. UP1 works as follows:
(1) After receiving terminal X1's request, the MC notifies UP1 to receive terminal X1's bitstream and to output to terminal X1 a six-picture composition of terminals X2 to X7, in which terminal X2's picture is required to be the large picture and the others small pictures;
(2) DSP1 of UP1 decodes terminal X1's bitstream and sends the video data to DSP2 through VP1 for multi-picture composition or loopback display; at the same time it "shallow-compresses" the decoded video data with the MJPEG algorithm into large-, medium-, and small-picture bitstreams and unicasts or multicasts them onto the internal high-speed Ethernet, as material for the multi-picture compositions of the other UPs;
(3) According to the MC's notification, DSP2 receives the "shallow-compressed" bitstreams of UP2 to UP7 (i.e., bitstreams obtained by the same MJPEG "shallow compression" as in (2)), among which UP2's bitstream is the "shallow-compressed" large-picture bitstream and the others are small-picture bitstreams;
(4) After decoding the "shallow-compressed" bitstreams, DSP2 composites the six-picture layout shown in Figure 7 (at the terminal's request, the composed multi-picture may also include the terminal's own picture, or the own picture may be looped back alone);
(5) The multi-picture data composed by DSP2 is finally sent to DSP3 through VP2;

(6) DSP3 encodes the composed multi-picture at the bit rate the terminal needs (e.g. 1 Mb/s, 2 Mb/s, 4 Mb/s, 8 Mb/s) and sends it to terminal X1 through the GE port connected to the external high-speed Ethernet, thereby fulfilling terminal X1's six-picture request.

In the above preferred embodiments the UPs are interconnected over Ethernet interfaces, so capacity can be expanded linearly, unrestricted by board, frame, or rack, and unrestricted by conference. A faulty UP can be quickly isolated and replaced by another idle UP. The chips available for a UP's internals are plentiful, which lowers cost and allows high-end and low-end parts to be mixed. The system transitions smoothly as processing chips are upgraded: new capacity is added with new boards, while old boards remain in use and need not be replaced.

Figure 8 shows a video processing method of a multipoint control unit of a videoconferencing system according to an embodiment of the present invention. With reference to Figure 3, the method comprises the following steps:

Step S802: a first universal port unit (e.g. UP1) receives, through an Ethernet interface, the compressed video data of a first terminal (e.g. terminal X1) of the videoconferencing system;

Step S804: the first universal port unit decompresses the compressed video data and sends the decompressed data to the Ethernet switch 204 through an Ethernet interface;

Step S806: the Ethernet switch 204 sends the decompressed data through Ethernet interfaces to second universal port units (one or more, e.g. UP3, UP5, and UP6) for composition, and the composed pictures are sent (over the external Ethernet) to the corresponding second terminals (e.g. terminals Xn, X7, and X10).

Preferably, the above method further comprises: the first universal port unit receives, through the Ethernet switch 204, the decompressed data of other universal port units of the VPU (one or more, e.g. UP4 to UP8); the first universal port unit composites that decompressed data according to the requirements of its corresponding first terminal and sends the composed picture (over the external Ethernet) to the first terminal.

With reference to Figures 4 and 5, the flow implemented by one UP is as follows:

Step 1: according to what the terminal reports, the MC notifies external network port A of the UP to receive the compressed video data the terminal sends to the MCU;

Step 2: the decoding module 2021 of the UP decodes the image into the original image or an image in some intermediate format;

Step 3: the first media processing module 2022 of the UP scales the decoded image into large, medium, and small pictures;

Step 4: the compression module 2023 of the UP compresses the large, medium, and small pictures with a specific algorithm (the MJPEG algorithm);

Step 5: the UP unicasts or multicasts the compressed data onto the internal Ethernet through internal Ethernet interface B, as composition material for the multi-picture layouts of the other UPx;

Step 6: meanwhile, according to the MC's instruction, the UP receives from internal Ethernet interface B the compressed pictures of the other UPx, for instance receiving the small-picture compressed bitstreams of n (the multi-picture count) other UPx by joining a multicast group;

Step 7: the decompression module 2024 of the UP decodes the received compressed bitstreams into original images, for instance the small pictures of the n UPx, and passes them to the second media processing module 2025;

Step 8: the second media processing module 2025 composites and processes the relevant images into the original image data the terminal needs and passes it to the encoding module 2026;

Step 9: the encoding module 2026 of the UP compresses and encodes the baseband image into the image format and bit rate the terminal needs and sends it to the terminal.

From the above description it can be seen that the present invention achieves the following technical effects:
(1) The Universal Port function of videoconferencing can be realized at relatively low cost, while the capacity-expansion and upgrade capabilities of the MCU are improved;
(2) A UP can form, together with other UPs, a media processing architecture based on distributed UPs over the internal high-speed Ethernet; with the "shallow compression" algorithm, the bandwidth of the internal high-speed network suffices for passing baseband data between the UPs with small latency, enabling HD videoconferencing.

Obviously, those skilled in the art should understand that the modules or steps of the present invention described above can be implemented with general-purpose computing devices; they can be concentrated on a single computing device or distributed over a network of computing devices; optionally, they can be implemented with program code executable by computing devices, so that they can be stored in storage devices and executed by computing devices; and in some cases the steps shown or described can be executed in an order different from the one here, or they can be made into individual integrated-circuit modules, or multiple modules or steps among them can be made into a single integrated-circuit module. Thus, the present invention is not limited to any particular combination of hardware and software.

The above is only the preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention shall be included in the protection scope of the present invention.
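As a rough illustration of the two data paths the description assigns to each UP (decode, scale, shallow-compress, multicast on the uplink; fetch, decompress, composite, encode on the downlink), here is a minimal Python sketch. All class and function names are invented for illustration, the codecs are stubs, and the "switch" is just a dictionary standing in for the internal Ethernet multicast groups:

```python
def shallow_compress(frame):
    """Stub for the low-latency 'shallow compression' (e.g. MJPEG) codec."""
    return ("shallow", frame)

def shallow_decompress(blob):
    kind, frame = blob
    assert kind == "shallow"
    return frame

class InternalSwitch:
    """Stand-in for the internal Ethernet switch: (source UP, size) keyed groups."""
    def __init__(self):
        self.groups = {}
    def multicast(self, src, size, blob):
        self.groups[(src, size)] = blob
    def fetch(self, src, size):
        return self.groups[(src, size)]

class UniversalPort:
    def __init__(self, name, switch):
        self.name, self.switch = name, switch

    def on_terminal_stream(self, bitstream):
        # Uplink path: decode the terminal's stream, scale to three sizes,
        # shallow-compress, and multicast onto the internal Ethernet.
        raw = f"decoded({bitstream})"
        for size in ("large", "medium", "small"):
            scaled = f"{size}:{raw}"
            self.switch.multicast(self.name, size, shallow_compress(scaled))

    def compose_for_terminal(self, sources, rate="2Mb/s"):
        # Downlink path: fetch the other UPs' pictures, decompress,
        # composite, and encode at the rate the terminal asked for.
        frames = [shallow_decompress(self.switch.fetch(s, sz)) for s, sz in sources]
        return f"encoded@{rate}(" + "|".join(frames) + ")"

switch = InternalSwitch()
ups = {n: UniversalPort(n, switch) for n in ("UP1", "UP2", "UP3")}
ups["UP2"].on_terminal_stream("X2")
ups["UP3"].on_terminal_stream("X3")
# UP1 builds a two-picture view for its terminal: X2 large, X3 small.
print(ups["UP1"].compose_for_terminal([("UP2", "large"), ("UP3", "small")]))
```

The dictionary lookup hides what the patent actually relies on: because every size of every decoded picture is published to the internal network, any UP can assemble any layout without the source UPs knowing who is watching.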

Claims

1. A multipoint control unit of a videoconferencing system, characterized by comprising:

a video processing unit comprising multiple universal port units, the multiple universal port units being connected through Ethernet interfaces to multiple terminals of the videoconferencing system, each of the multiple universal port units being used to decompress the compressed video data from one terminal of the videoconferencing system and to send the decompressed data to an Ethernet switch;

the Ethernet switch, which is connected to the multiple universal port units through Ethernet interfaces to form an internal Ethernet and is used to receive the decompressed data sent by the multiple universal port units.

2. The multipoint control unit according to claim 1, characterized in that each universal port unit is further used to receive, through the Ethernet switch, the decompressed data of other universal port units among the multiple universal port units, to process the decompressed data according to the requirements of the corresponding terminal, and to send the processed data to the corresponding terminal.

3. The multipoint control unit according to claim 1, characterized by further comprising: a media control unit, used to notify, according to a request message of a first terminal of the videoconferencing system, a first universal port unit among the multiple universal port units to receive the compressed video data of the first terminal.

4. The multipoint control unit according to claim 3, characterized in that the first universal port unit comprises:

an external Ethernet interface, connected to the first terminal over an external Ethernet and used to receive, according to the notification instruction of the media control unit, the compressed video data sent by the first terminal;

a decoding module, used to decompress the compressed video data received by the external Ethernet interface to obtain the decompressed data;

a first media processing module, used to scale the decompressed data to obtain images of predetermined sizes;

a compression module, used to compress the images of predetermined sizes to obtain compressed data;

an internal Ethernet interface, connected to the Ethernet switch and used to send the compressed data to the Ethernet switch.

5. The multipoint control unit according to claim 4, characterized in that the Ethernet switch is further used to send the compressed data from a second universal port unit among the multiple universal port units to the first universal port unit;

the internal Ethernet interface is further used to receive the compressed data from the Ethernet switch according to the instruction of the media control unit;

the first universal port unit further comprises:

a decompression module, used to decompress the compressed data received by the internal Ethernet interface to obtain predetermined-size images of a second terminal corresponding to the second universal port unit;

a second media processing module, used to composite the predetermined-size images of the second terminal corresponding to the second universal port unit to obtain a composite image in the picture layout required by the first terminal;

an encoding module, used to compress and encode the composite image to obtain a compressed video bitstream corresponding to the type of the first terminal and to send it to the first terminal through the external Ethernet interface.
6. The multipoint control unit according to claim 5, characterized in that the compression module and the decompression module use a shallow compression algorithm for compression and decompression.
7. The multipoint control unit according to claim 5, characterized in that the first universal port unit is further used to receive, by joining a multicast group, the compressed data corresponding to the second terminal from the second universal port unit.
8. The multipoint control unit according to any one of claims 5 to 7, characterized in that there are multiple second terminals and multiple second universal port units.
9. The multipoint control unit according to claim 1, characterized in that the video processing unit has a layered topology.
10. The multipoint control unit according to any one of claims 1 to 7, characterized in that each universal port unit is implemented with processing chips.
11. A video processing method for a multipoint control unit of a videoconferencing system, characterized by comprising: a first universal port unit receives compressed video data from a terminal of the videoconferencing system through an Ethernet interface; the first universal port unit decompresses the compressed video data and sends the decompressed data to an Ethernet switch through an Ethernet interface.
12. The method according to claim 11, characterized by further comprising:

the first universal port unit receives, through the Ethernet switch, the decompressed data of a second universal port unit;

the first universal port unit processes the decompressed data according to the requirements of the terminal and sends the processed data to the terminal.
PCT/CN2010/077080 2010-04-30 2010-09-17 视频会议系统的多点控制单元及其视频处理方法 WO2011134228A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201010170172.5 2010-04-30
CN 201010170172 CN101867767B (zh) 2010-04-30 2010-04-30 Multipoint control unit of a videoconferencing system and video processing method thereof

Publications (1)

Publication Number Publication Date
WO2011134228A1

Family

ID=42959296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2010/077080 WO2011134228A1 (zh) 2010-04-30 2010-09-17 Multipoint control unit of a videoconferencing system and video processing method thereof

Country Status (2)

Country Link
CN (1) CN101867767B (zh)
WO (1) WO2011134228A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102932656A (zh) * 2011-08-08 2013-02-13 ZTE Corporation Video data transmission method and device
CN102984599B (zh) * 2012-12-21 2016-04-20 The 32nd Research Institute of China Electronics Technology Group Corporation Video acquisition and transmission device and method based on a RapidIO protocol network
CN105227547B (zh) * 2015-09-09 2018-10-12 Chongqing University of Posts and Telecommunications Streaming-media traffic generation system based on a many-core platform
CN110430386B (zh) * 2019-07-26 2020-08-11 Sichuan Xindongsheng Technology Development Co., Ltd. Video conference system based on cloud resource pool technology and working method thereof
CN113038183B (zh) * 2021-03-26 2023-03-17 Suzhou Keda Technology Co., Ltd. Video processing method, system, device and medium based on a multiprocessor system
CN117319675B (zh) * 2023-11-28 2024-02-27 Suzhou Yuannao Intelligent Technology Co., Ltd. Video compression system for a server management control chip

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6327276B1 (en) * 1998-12-22 2001-12-04 Nortel Networks Limited Conferencing over LAN/WAN using a hybrid client/server configuration
CN1372416A * 2002-03-29 2002-10-02 Wuhan Research Institute of Posts and Telecommunications Softswitch-based multipoint controller for a videoconferencing system
CN1832569A * 2005-03-08 2006-09-13 Huawei Technologies Co., Ltd. Conference television system and conference television implementation method
CN1849824A * 2003-10-08 2006-10-18 Cisco Technology, Inc. System and method for performing distributed video conferencing

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1486046A (zh) * 2002-09-27 Ramaxel Technology (Shenzhen) Co., Ltd. Small office and home office gateway
US7929012B2 (en) * 2006-01-05 2011-04-19 Cisco Technology, Inc. Method and architecture for distributed video switching using media notifications
US8144186B2 (en) * 2007-03-09 2012-03-27 Polycom, Inc. Appearance matching for videoconferencing
US7729299B2 (en) * 2007-04-20 2010-06-01 Cisco Technology, Inc. Efficient error response in a video conferencing system


Also Published As

Publication number Publication date
CN101867767A (zh) 2010-10-20
CN101867767B (zh) 2013-08-07


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10850559; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10850559; Country of ref document: EP; Kind code of ref document: A1)