GB2443969A - Method of transmitting scaled images using video processing hardware architecture - Google Patents
Method of transmitting scaled images using video processing hardware architecture
- Publication number
- GB2443969A (application GB0722701A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- data
- processing
- video
- architecture
- scaling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/80—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
- G06F15/8007—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4092—Image resolution transcoding, e.g. by using client-server architectures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/393—Enlarging or reducing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234363—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2365—Multiplexing of several video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25808—Management of client data
- H04N21/25833—Management of client data involving client hardware characteristics, e.g. manufacturer, processing or storage capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/434—Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
- H04N21/4347—Demultiplexing of several video streams
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/152—Multipoint control units therefor
Abstract
A method of transmitting differently scaled images to different destinations uses a video processing hardware architecture for a multipoint control unit (MCU) wherein a motherboard 20 carries a field programmable gate array (FPGA) connected to a plurality of additional video signal processors. These additional processors, illustrated by daughterboards 22, are interconnected with a plurality of high bandwidth (i.e. 3 Gb/s) links 26. Each daughterboard typically comprises a field programmable gate array (FPGA) (32, figure 4) and four digital signal processors (DSPs) (28, figure 4). Also provided are stream processors P (36, figure 4) which may carry out image scaling as well as compression, composition, motion compensation, encoding and other operations; this removes some of the processing load from the DSPs 28 and avoids using up available bandwidth passing data between processors. The hardware architecture presented ensures that data processing is ideally performed by local DSPs, thereby avoiding local bottlenecks, and, where necessary, data may be moved in an efficient manner using the high bandwidth links. This architecture is particularly suited to video conferencing systems wherein the MCU is required to provide differently scaled images to different participants, dependent upon the capabilities of their local equipment. This overcomes the disadvantage of present MCUs, which lack the processing capability or internal bandwidth to meet these processing-intensive demands. By processing the data internally using intelligent multicasting, scaling and compression decisions, and acknowledging that the video data may be compressed without loss of visual quality, the processing and bandwidth requirements may be met.
Description
HARDWARE ARCHITECTURE FOR VIDEO CONFERENCING This invention relates to
a hardware architecture for a multipoint control unit.
Video conferencing, and the associated hardware, falls broadly into two camps. In the first camp, "conferencing" occurs between only two participants and the participants are connected directly to one another through some form of data network. In this form of network, only two endpoints are involved and true conferencing only occurs if multiple participants are present at one of the two endpoint sites. Examples of this type of conferencing are, at the low technology end, PC-enabled endpoints interconnecting using software such as NetMeeting or Skype and, at the higher end, equipment using dedicated endpoint hardware interconnected, for example, via ISDN links.
In the second camp, video conferencing allows more than two endpoints to interact with one another. This is achieved by providing at least one centralised co-ordinating point; a so-called "multipoint control unit (MCU)", which receives video and audio streams from the endpoints, combines these in a desired way and re-transmits the combined composite video/audio stream to the participants. Typically the conference view transmitted to the endpoints is the same for each endpoint. The composition may change over time but is the same for all the participants.
The provision of only a single composition is a significant problem because each participant must therefore receive a conference stream tailored so that it is acceptable to the least capable endpoint in the conference. In this situation therefore many endpoints are not used to their full capacity and may experience degraded images and audio as a result.
More recently, modern MCUs such as the Codian MCU 4200 series have been designed to allow a unique view to be created for each participant. This allows the full capabilities of each endpoint to be utilised and also allows different compositions for different participants so that, for example, the emphasis of a particular participant in the conference may be different for a different user. However, the processing of video data in real time is a highly processor intensive task. It also requires the movement of large quantities of data. This is particularly so once the data has been decompressed in order to perform high quality processing. Thus processing power and bandwidth constraints are a significant bottleneck in the creation of high quality video conferencing MCUs which allow multiple views of the conference to be produced.
Figure 1 shows a typical prior art MCU architecture.
This architecture has a plurality of digital signal processors 2, such as the Texas Instruments TMS series, which are interconnected via a Time Division Multiplexed (TDM) bus 4. A controller and network interface 6 is also connected to the TDM bus.
Each DSP 2 is allocated one or more frames on the TDM bus. It will be appreciated that the TDM bus is a significant bottleneck. Whilst increased processing power for the MCU may be achieved by adding more powerful DSPs or additional DSPs, all the data flowing between DSPs and between the network 8 and the DSPs must fit into a finite number of time slots on the TDM bus 4. Thus, this form of architecture scales poorly and cannot accommodate the processing requirements of per-participant compositions.
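By way of illustration only, the following short sketch (with purely assumed figures for the bus and stream rates; the description does not give them) shows why a single shared bus caps the number of uncompressed video streams that can move between DSPs at once:

```python
# Minimal sketch, hypothetical figures only: a shared TDM-style bus gives a
# fixed aggregate capacity that all inter-DSP and network traffic must share.

def streams_supported(bus_mbps: float, stream_mbps: float) -> int:
    """How many full-rate streams fit on a shared bus, ignoring overheads."""
    return int(bus_mbps // stream_mbps)

# A few hundred Mb/s of shared capacity against ~50 Mb/s per uncompressed
# stream (the figure used later in this description) leaves room for only a
# handful of simultaneous inter-DSP streams, however many DSPs are added.
print(streams_supported(bus_mbps=250, stream_mbps=50))  # -> 5
```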
Figure 2 shows an alternative prior art configuration. A plurality of DSPs 2-1 are each connected to a PCI bus 10-1. Similarly, a plurality of DSPs 2-2, 2-3 and 2-4 are connected to respective PCI buses 10-2, 10-3 and 10-4. The PCI buses 10-2, 10-3 and 10-4 are in turn connected via buffers 12 to a further PCI bus 14. A significant advantage of this architecture over that shown in Figure 1 is that the DSPs in group 2-1 may communicate amongst one another with the only bottleneck being the PCI bus 10-1. This is true also for the groups 2-2, 2-3 and 2-4. However, should a DSP in group 2-1 wish to communicate with a DSP, for example, in group 2-3, the PCI bus 14 must be utilised. Thus although this architecture is a significant improvement on that shown in Figure 1 in terms of scalability and the ability to effectively use a plurality of DSPs, the PCI bus 14 must still be used for certain combinations of inter-DSP communication and thus becomes a performance limiting factor for the MCU.
Attempts have been made to offload processing from DSPs. For example, IDT has recently released details of a so-called "Pre-processing switch (PPS)" under part number IDT 70K2000. The PPS carries out predetermined functions before delivery to a processor such as a DSP or FPGA. Processing is determined based on the address range on the switch to which packets are sent. The chip is designed for use in 3G mobile telephony and is designed to offload basic tasks from DSPs which would normally be carried out inefficiently by the DSP. US6,883,084 also proposes the use of path processing although in that case it is proposed as an alternative to a Von Neumann type sequential processor. This patent proposes the use of a plurality of path processors carrying out simultaneous processing of alternative data sets so that the processing of unusable data sets does not delay programme flow. It teaches against a hybrid approach including path processors with other types of processors.
According to the invention there is provided a method of routing data in a multipoint control unit having a plurality of signal processing means, comprising storing a network map holding data representative of a network topology which interconnects the signal processing means, providing a plurality of switches to switch data between the signal processing means, and carrying out switching in unicast or multicast mode dependent on the network map.
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
Figure 1 is a schematic block diagram of a prior art MCU architecture; Figure 2 is a schematic block diagram of an alternative prior art MCU architecture; Figure 3 is a schematic block diagram showing a motherboard and a plurality of daughterboards in accordance with the present invention; and Figure 4 is a schematic block diagram of a daughterboard in accordance with the invention.
Figure 5 illustrates an exemplary computing system that may be employed to implement processing functionality in embodiments provided herein.
With reference to Figure 3, a motherboard 20 carries a field programmable gate array (FPGA) and other associated components. The motherboard 20 may include control circuitry which, for example, enables an auto attendant interface to be produced to allow users to configure the MCU and which may also control data flow in the MCU.
These components may alternatively be on a separate board as is known in the art.
The motherboard 20 also includes connectors which permit the mounting of one or more daughterboards 22. In the preferred embodiment, four daughterboards may be connected to the motherboard 20. The connection may for example be made using pluggable connectors. By using a plurality of such connectors, in the preferred embodiment the daughterboards are both electrically coupled and mechanically mounted to the motherboard by such connectors.
The motherboard 20 carries an FPGA 24 which carries out routing functions (among other functions). Primarily, the FPGA 24 routes data between the controller (not shown), network interface (not shown) and the plurality of daughterboards 22. The FPGA 24 preferably has four high bandwidth links 26, which may have a bandwidth of 3 Gb/sec or higher and which connect the motherboard with a first layer of daughterboards. It is noted that links 26 (and 38 below) may include physical links, a switch fabric, or other suitable structures or systems for connecting motherboards, daughterboards, and DSPs. Data flows to the distal daughterboards are routed through the first layer of daughterboards, as explained in more detail below.
With reference also to Figure 4, each daughterboard typically has four DSPs 28, each with associated memory 30. Each daughterboard also has an FPGA 32 which incorporates a switch 34. Switch 34 may include structure or logic for receiving packets on an input and sending the packets out in a selectable manner, e.g. similar to a network switch. The FPGA 32 includes stream processors 36, which are described in more detail below, and two high bandwidth links 38.
The daughterboards 22 are each identical and the links 38 may be used to connect to another daughterboard or to the motherboard. In this way, extra processing capability may be added to the architecture simply by adding additional daughterboards. In a minimal configuration, a single daughterboard may be mounted on the motherboard. In a maximal configuration, four daughterboards may be mounted to the motherboard and each daughterboard may have additional daughterboards (three in this example) stacked thereon. As explained above, each daughterboard itself may include four DSPs, and thus in this particular maximal configuration of sixteen daughterboards the architecture may have 64 DSPs. Of course, various numbers of DSPs and/or daughterboards may be used and the maximal configuration is with reference only to this particular example of 16 daughterboards, each including four DSPs.
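Purely as an illustrative aid, the stacked arrangement described above may be modelled as four branches of chained daughterboards; the tree shape below is an assumption consistent with the description, and reproduces the 64-DSP count of the maximal example:

```python
# Sketch of the assumed maximal topology: the motherboard FPGA fans out to
# four branches, each branch is a chain of four daughterboards, and each
# daughterboard carries four DSPs.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Daughterboard:
    dsps: int = 4            # DSPs 28 on each board

@dataclass
class Branch:
    boards: List[Daughterboard] = field(default_factory=list)

def build_mcu(branches: int = 4, boards_per_branch: int = 4) -> List[Branch]:
    """Build the example configuration: 4 branches x 4 boards x 4 DSPs."""
    return [Branch([Daughterboard() for _ in range(boards_per_branch)])
            for _ in range(branches)]

mcu = build_mcu()
print(sum(board.dsps for branch in mcu for board in branch.boards))  # -> 64
```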
Several strategies are used to alleviate bandwidth congestion on the interconnects between the DSPs. Firstly, each interconnect between daughterboards operates at a bandwidth of 3 Gb/sec or higher which is a substantially higher bandwidth than in the prior art. Secondly, the daughterboards each have only four DSPs sharing a local interconnect which may communicate amongst one another without using bandwidth on any other interconnect in the architecture. Thus with appropriate resource allocation, the DSPs on any one daughterboard may experience high utilisation without significant bandwidth impact for the architecture as a whole.
Furthermore, data may flow between DSPs in any one of the four branches shown in Figure 3, without using bandwidth available to the other branches.
A further strategy involves the use of stream processors 36 located in each of the daughterboard FPGAs. These stream processors take advantage of an unusual characteristic of video conferencing as explained below.
Typically, data flowing between endpoints in a video conference is highly compressed in view of bandwidth constraints, for example, with Internet connected endpoints.
However, this compression prevents manipulation of the images. Thus, within an MCU, video processing is carried out on uncompressed data. Typically this increases the volume of data by a factor of between 10 and 100, and commonly by a factor of about 80.
Thus a typical video stream may have a bandwidth requirement of 50 Mb/sec, for example. This is a significant problem peculiar to video conferencing since processing is carried out on many simultaneous streams and must be carried out in real time.
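A rough calculation, using an assumed 4:2:0 sampling format and illustrative resolution and bit-rate figures (only the ~50 Mb/sec order of magnitude comes from the description), makes the expansion factor concrete:

```python
# Back-of-envelope sketch: uncompressed bandwidth of planar YUV 4:2:0 video
# versus a heavily compressed endpoint stream (illustrative numbers).

def raw_mbps(width: int, height: int, fps: float, bytes_per_pixel: float = 1.5) -> float:
    """Uncompressed bandwidth in Mb/s (4:2:0 averages 1.5 bytes per pixel)."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

raw = raw_mbps(352, 288, 30)   # CIF at 30 frames/s: ~36.5 Mb/s uncompressed
compressed = 0.5               # an assumed 500 kb/s compressed endpoint stream
print(round(raw, 1), round(raw / compressed))  # -> 36.5 Mb/s, ~73x expansion
```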
However, since the end result of the processing will be transmitted in compressed form and also typically over a lossy network, it is acceptable to carry out compression within the MCU. Such compression may be lossless or, given the nature of the output network, lossy. Thus Applicants have appreciated that the bandwidth constraints within the MCU may be alleviated by performing compression and decompression within the MCU for data in transit between DSPs. However, this in itself is computationally expensive. Accordingly, the novel architecture of the present invention includes stream processors 36 which are formed in each daughterboard FPGA 32. The media stream processors 36 may act on several pixels when performing compression, and thus the FPGA may keep a frame or a portion of a video frame in memory 40, so that the processors 36 in this mode are not strictly stream processors.
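Purely as a toy illustration (the description does not specify which compression scheme the stream processors apply), the following run-length coder shows the idea of compressing samples on the fly and restoring them at the receiving DSP:

```python
# Toy stand-in for whatever lossless or lossy scheme the stream processors
# use on data in transit between DSPs: collapse runs of identical samples.

from itertools import groupby
from typing import List, Tuple

def rle_encode(row: List[int]) -> List[Tuple[int, int]]:
    """Collapse runs of identical samples into (value, run_length) pairs."""
    return [(value, len(list(run))) for value, run in groupby(row)]

def rle_decode(pairs: List[Tuple[int, int]]) -> List[int]:
    """Expand (value, run_length) pairs back into the original samples."""
    return [value for value, count in pairs for _ in range(count)]

row = [16] * 12 + [235] * 4            # flat background, then a bright edge
encoded = rle_encode(row)
assert rle_decode(encoded) == row      # lossless round trip
print(encoded)                         # -> [(16, 12), (235, 4)]
```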
The processors 36 may carry out further operations including, but not limited to, composition, alpha blending, motion compensation, variable length encoding and decoding, frame comparison, combinations thereof, and the like. By carrying out these steps on the fly as data is passed between DSPs 28, processing load is removed from the DSPs and bandwidth limitations are also mitigated.
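As one concrete example of the per-pixel operations listed above, a minimal alpha-blending sketch is given below; the fixed-point arithmetic is a common hardware-friendly formulation and is assumed here for illustration, not taken from the FPGA design itself:

```python
# Per-sample alpha blend as a stream processor might apply it on the fly:
# out = (alpha * foreground + (255 - alpha) * background) / 255, in integers.

def alpha_blend(fg: int, bg: int, alpha: int) -> int:
    """Blend two 8-bit samples with an 8-bit alpha (0 = background only)."""
    return (fg * alpha + bg * (255 - alpha) + 127) // 255

# Foreground sample composited over a background sample at roughly 50% opacity.
print(alpha_blend(fg=200, bg=40, alpha=128))  # -> 120 (about midway)
```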
As a further enhancement, data destined for several different DSPs may be sent in unicast format until a routing branch is required, in which case some data may be sent in multicast form. This avoids having multiple streams of the same data passing along the same link. For example, if the daughterboard 22 at the far left of Figure 3 wishes to communicate with a DSP on the daughterboard 22' at the bottom of the figure and also with the daughterboard 22" at the far right of the figure, the data may be unicast until it reaches the motherboard 20, at which point it may be multicast to each of the two respective branches of daughterboards radiating out from the motherboard and then unicast along each of the branches. This step may be carried out within the FPGA as part of its routing algorithm. To facilitate this, each switch may maintain a representation of the topology of the entire MCU architecture, for example in tree form, and is operable to manipulate the tree and to determine an appropriate multicast or unicast format for the next hop or hops. Alternatively, the route may be determined at the data source and routing information carried with the data which is interpreted by the switches en route.
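The unicast-until-a-branch rule may be sketched as follows; the node names, tree layout and data structures are illustrative assumptions, since the description leaves the exact switch implementation open:

```python
# Each switch forwards one copy of a stream per child subtree that contains
# at least one destination DSP, so duplicate copies never share a link.

from typing import Dict, List, Set

# Toy topology: motherboard FPGA 'mb' feeding two branches of chained boards.
TOPOLOGY: Dict[str, List[str]] = {
    "mb":   ["db1", "db2"],
    "db1":  ["db1a"],
    "db2":  ["db2a"],
    "db1a": [],
    "db2a": [],
}

def subtree(node: str) -> Set[str]:
    """All nodes reachable below (and including) `node`."""
    nodes = {node}
    for child in TOPOLOGY[node]:
        nodes |= subtree(child)
    return nodes

def forwards(node: str, destinations: Set[str]) -> List[str]:
    """Children of `node` that need a copy of the stream."""
    return [child for child in TOPOLOGY[node] if subtree(child) & destinations]

# A stream destined for DSPs on db1a and db2a is duplicated (multicast) at the
# motherboard, then carried as a single unicast copy down each branch.
print(forwards("mb",  {"db1a", "db2a"}))   # -> ['db1', 'db2']
print(forwards("db1", {"db1a", "db2a"}))   # -> ['db1a']
```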
The media stream processors 36 may also use factorised scaling to assist in reducing the bandwidth of communications between DSPs. For example, if different participant compositions require differently scaled versions of the same image, such as an image scaled to a half for one participant and to a quarter for another, the FPGAs may be configured to make sensible scaling decisions. In this example the FPGA may scale the whole image to a half, transmit the thereby reduced data as far as the routing branch which chooses between the DSP which will process the half image and the DSP which will process the quarter image, and at that point further scale the image down to a quarter for onward transmission to the DSP dealing with the quarter-scaled image.
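The saving from factorised scaling can be illustrated with assumed frame sizes and hop counts (neither figure appears in the description):

```python
# Factorised scaling sketch: carry only the half-scale image to the routing
# branch and derive the quarter-scale version there, instead of carrying both
# scaled copies all the way from the source. Area scales with the square of
# the linear scale factor, so a half-scale frame is a quarter of the data.

FULL_FRAME_MB = 0.6      # assumed uncompressed frame size, megabytes
HOPS_TO_BRANCH = 2       # assumed number of links between source and branch

def naive_mb(frame_mb: float, hops: int) -> float:
    half, quarter = frame_mb / 4, frame_mb / 16
    return (half + quarter) * hops          # both copies travel the whole way

def factorised_mb(frame_mb: float, hops: int) -> float:
    return (frame_mb / 4) * hops            # only the half-scale copy travels

print(naive_mb(FULL_FRAME_MB, HOPS_TO_BRANCH))       # -> 0.375 MB per frame
print(factorised_mb(FULL_FRAME_MB, HOPS_TO_BRANCH))  # -> 0.3 MB per frame
```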
The intelligent routing, multicast and scaling/compression operations are carried out by each daughterboard FPGA and accordingly the processing load for these intelligent routing decisions is distributed amongst each of the daughterboards.
In this way, therefore, the architecture described above maximises the utilisation of the DSPs by ensuring that data is ideally allocated to local DSPs and also, where data must be transmitted between more distant DSPs, that the data is transmitted in the most efficient format. Furthermore, by employing very high bandwidth links between the DSPs, any bandwidth bottlenecks are largely avoided. Accordingly, the architecture provides a highly scalable and very powerful processing platform for high definition, per-participant composed, multi-conference video conferencing.
Of course, other features and advantages will be apparent to those skilled in the art.
The foregoing system overview represents some exemplary implementations, but other implementations will be apparent to those skilled in the art, and all such alternatives are deemed equivalent and within the spirit and scope of the present invention, only as limited by the claims.
Those skilled in the art will further recognize that the operations of the various embodiments may be implemented using hardware, software, firmware, or combinations thereof, as appropriate. For example, some processes can be carried out using processors or other digital circuitry under the control of software, firmware, or hard-wired logic. (The term "logic" herein refers to fixed hardware, programmable logic and/or an appropriate combination thereof, as would be recognized by one skilled in the art to carry out the recited functions.) Software and firmware can be stored on computer-readable media. Some other processes can be implemented using analog circuitry, as is well known to one of ordinary skill in the art. Additionally, memory or other storage, as well as communication components, may be employed in embodiments of the invention.
Figure 5 illustrates a typical computing system 500 that may be employed to implement processing functionality in embodiments of the invention. Computing systems of this type may be used in any one or more of an MCU, controller, motherboard, daughterboard, or DSP, for example. Those skilled in the relevant art will also recognize how to implement embodiments of the invention using other computer systems or architectures. Computing system 500 can include one or more processors, such as a processor 504. Processor 504 can be implemented using a general or special purpose processing engine such as, for example, a microprocessor, microcontroller or other control logic. In this example, processor 504 is connected to a bus 502 or other communications medium.
Computing system 500 can also include a main memory 508, such as random access memory (RAM) or other dynamic memory, for storing information and instructions to be executed by processor 504. Main memory 508 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504. Computing system 500 may likewise include a read only memory (ROM) or other static storage device coupled to bus 502 for storing static information and instructions for processor 504.
The computing system 500 may also include information storage system 510, which may include, for example, a media drive 512 and a removable storage interface 520.
The media drive 512 may include a drive or other mechanism to support fixed or removable storage media, such as a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a compact disk (CD) or digital versatile disk (DVD) drive (R or RW), or other removable or fixed media drive. Storage media 518 may include, for example, a hard disk, floppy disk, magnetic tape, optical disk, CD or DVD, or other fixed or removable medium that is read by and written to by media drive 514.
As these examples illustrate, the storage media 518 may include a computer-readable storage medium having stored therein particular computer software or data.
In alternative embodiments, information storage system 510 may include other similar components for allowing computer programs or other instructions or data to be loaded into computing system 500. Such components may include, for example, a removable storage unit 522 and an interface 520, such as a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, and other removable storage units 522 and interfaces 520 that allow software and data to be transferred from the removable storage unit 518 to computing system 500.
Computing system 500 can also include a communications interface 524.
Communications interface 524 can be used to allow software and data to be transferred between computing system 500 and external devices. Examples of communications interface 524 can include a modem, a network interface (such as an Ethernet or other network interface card (NIC)), a communications port (such as for example, a USB port), a PCMCIA slot and card, etc. Software and data transferred via communications interface 524 are in the form of signals which can be electronic, electromagnetic, optical or other signals capable of being received by communications interface 524. These signals are provided to communications interface 524 via a channel 528. This channel 528 may carry signals and may be implemented using a wireless medium, wire or cable, fiber optics, or other communications medium. Some examples of a channel include a phone line, a cellular phone link, an RF link, a network interface, a local or wide area network, and other communications channels.
In this document, the terms "computer program product," "computer-readable medium" and the like may be used generally to refer to media such as, for example, memory 508, storage device 518, or storage unit 522. These and other forms of computer-readable media may store one or more instructions for use by processor 504, to cause the processor to perform specified operations. Such instructions, generally referred to as "computer program code" (which may be grouped in the form of computer programs or other groupings), when executed, enable the computing system 500 to perform functions of embodiments of the invention. Note that the code may directly cause the processor to perform specified operations, be compiled to do so, and/or be combined with other software, hardware, and/or firmware elements (e.g., libraries for performing standard functions) to do so.
In an embodiment where the elements are implemented using software, the software may be stored in a computer-readable medium and loaded into computing system 500 using, for example, removable storage drive 514, drive 512 or communications interface 524. The control logic (in this example, software instructions or computer program code), when executed by the processor 504, causes the processor 504 to perform the functions of embodiments of the invention as described herein.
It will be appreciated that, for clarity purposes, the above description has described embodiments of the invention with reference to different functional units and processors. However, it will be apparent that any suitable distribution of functionality between different functional units, processors or domains may be used without detracting from embodiments of the invention. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality, rather than indicative of a strict logical or physical structure or organization.
Although embodiments of the invention have been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein.
Rather, the scope of embodiments of the invention is limited only by the claims.
Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with embodiments of the invention.
Furthermore, although individually listed, a plurality of means, elements or method steps may be implemented by, for example, a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also, the inclusion of a feature in one category of claims does not imply a limitation to this category, but rather the feature may be equally applicable to other claim categories, as appropriate.
Claims (4)
- Claims 1. In a video processing hardware architecture for a multipoint control unit, comprising a plurality of signal processing means adapted to perform processing of data representative of video images and a plurality of links interconnecting the signal processing means, the architecture further comprising a plurality of stream processors arranged to process data as it passes between the signal processing means over the said links, a method of transmitting a scaled video image to different destinations at different respective scaling levels, comprising performing video scaling of video data in a first of the stream processors to a first scale level required by a first destination and performing subsequent scaling of the video data using a second stream processor to a second, smaller scaling level for a second destination.
- 2. A method according to claim 1, comprising storing a network map holding data representative of a network topology of the architecture and making scaling decisions with reference to the map.
- 3. A computer readable medium encoded with computer program instructions which, when executed on hardware for routing video data in a multipoint control unit having a plurality of signal processing means, cause the hardware to carry out the method of claim 1.
- 4. The computer readable medium of claim 3, wherein the computer program instructions further comprise instructions for storing a network map holding data representative of a network topology of the architecture and making scaling decisions with reference to the map.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB0623096.5A GB0623096D0 (en) | 2006-11-20 | 2006-11-20 | Hardware architecture for video conferencing |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0722701D0 (en) | 2007-12-27 |
GB2443969A (en) | 2008-05-21 |
GB2443969B (en) | 2009-02-25 |
Family
ID=37605580
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GBGB0623096.5A Ceased GB0623096D0 (en) | 2006-11-20 | 2006-11-20 | Hardware architecture for video conferencing |
GB0722701A Active GB2443969B (en) | 2006-11-20 | 2007-11-20 | Hardware architecture for video conferencing |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GBGB0623096.5A Ceased GB0623096D0 (en) | 2006-11-20 | 2006-11-20 | Hardware architecture for video conferencing |
Country Status (1)
Country | Link |
---|---|
GB (2) | GB0623096D0 (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2397964A (en) * | 2000-01-13 | 2004-08-04 | Accord Networks Ltd | Optimising resource allocation in a multipoint communication control unit |
WO2005112424A1 (en) * | 2004-05-19 | 2005-11-24 | Dstmedia Technology Co., Ltd. | Method for displaying image |
Also Published As
Publication number | Publication date |
---|---|
GB0623096D0 (en) | 2006-12-27 |
GB2443969B (en) | 2009-02-25 |
GB0722701D0 (en) | 2007-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2105015B1 (en) | Hardware architecture for video conferencing | |
US7535485B2 (en) | Delay reduction for transmission and processing of video data | |
US20060168637A1 (en) | Multiple-channel codec and transcoder environment for gateway, MCU, broadcast and video storage applications | |
EP2517425B1 (en) | Method and device for filtering media packets | |
US6442758B1 (en) | Multimedia conferencing system having a central processing hub for processing video and audio data for remote users | |
US7245660B2 (en) | Method and an apparatus for mixing compressed video | |
KR101017933B1 (en) | Fully redundant linearly expandable broadcast router | |
EP1360798A1 (en) | Control unit for multipoint multimedia/audio conference | |
US9560096B2 (en) | Local media rendering | |
GB2443966A (en) | Hardware architecture for video conferencing | |
GB2443968A (en) | Video conference multipoint control unit (MCU) allowing unicast and multicast transmission | |
GB2443969A (en) | Method of transmitting scaled images using video processing hardware architecture | |
GB2443967A (en) | Video processing hardware architecture for video conferencing | |
CN112788429B (en) | Screen sharing system based on network | |
JP4944377B2 (en) | Linearly expandable distribution router device | |
US6973080B1 (en) | System for using indirect memory addressing to perform cross-connecting of data transfers | |
JPH10290205A (en) | Data transmitter | |
WO1999021326A2 (en) | Resource optimization in a multiprocessor system for a packet network | |
JPH07297834A (en) | Self-routing switch | |
JP2005165396A (en) | Contents storage/distribution system and apparatus, and multi-point connecting apparatus |