GB2531774A - Video data transmission method in a multi-source display system - Google Patents

Video data transmission method in a multi-source display system

Info

Publication number
GB2531774A
Authority
GB
United Kingdom
Prior art keywords
image
source
data
data rate
composite
Prior art date
Legal status
Granted
Application number
GB1419328.8A
Other versions
GB2531774B (en)
GB201419328D0 (en)
Inventor
Tocze Lionel
Visa Pierre
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1419328.8A
Publication of GB201419328D0
Publication of GB2531774A
Application granted
Publication of GB2531774B
Current legal status: Active


Classifications

    • All codes fall under H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION:
    • H04N 1/41: Bandwidth or redundancy reduction
    • H04N 1/00095: Systems or arrangements for the transmission of the picture signal
    • H04N 1/32507: Control or supervision between transmitter and receiver, in systems having a plurality of input devices
    • H04N 1/38: Circuits or arrangements for blanking or otherwise eliminating unwanted parts of pictures
    • H04N 1/387: Composing, repositioning or otherwise geometrically modifying originals
    • H04N 1/413: Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N 7/181: Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

A method of controlling communication of image data comprising at least part of a first image from a first image source comprises: determining an available communication resource for communicating the at least part of the first image from the first source to a target device; obtaining a composite image arrangement comprising the at least part of the first image and at least part of at least one other image obtained from a source other than the first image source (e.g. from wireless sources 115-1 to 115-5, Figure 1); and determining a data rate adaptation for the communication, based on the available communication resource and the composite image arrangement. The composite arrangement is such that at least one portion of the first image does not form part of the composite image; determining the data rate adaptation comprises reducing (e.g. removing) the image data corresponding to a first image portion not forming part of the composite image. When part of a source image is overlapped by another, the overlapped part is under-sampled or eliminated, reducing the image data. The arrangement of the composite image may be set via a user interface (GUI). Reduced colour sampling, resolution or image quality may be used.

Description

VIDEO DATA TRANSMISSION METHOD IN A MULTI-SOURCE DISPLAY
SYSTEM
FIELD OF THE INVENTION
The present invention relates to control of communication of image data from a plurality of sources to a device. More specifically, the invention relates to control of communication of image data from a plurality of sources wherein the image data from the plurality of sources forms a single aggregate or composite image for display. The invention is particularly advantageous when the composite image comprises a combination of individual images sent from the plurality of sources and at least a part of one individual image from one source is overlain by a part of another individual image from another source, or is cropped out by the edges of the composite image. The aim of the invention is to efficiently utilise communication resources, in particular wireless communication resources shared by the plurality of image data sources, by adaptively modifying the data rate for the transmission of individual images, or source images, from their sources.
BACKGROUND OF THE INVENTION
Transmission of video content over a communication network, in particular a wireless communication network, is constrained by the capacity or bandwidth of the communication means connecting the network. If a wireless communication network is used, these constraints are even tighter due to signal interference, noise, larger data overhead in packets, etc. While efficient video encoding can to some extent reduce the amount of video data, it remains substantial, especially with the increasing deployment of high definition and ultra-high definition video standards. As wireless communication resources are often shared, the limited bandwidth is further constrained by that sharing, depending on the number of transmitters and receivers sharing the resources.
Display of a composite video image comprising individual video images received from multiple sources is required in a number of settings such as video conferencing, business presentations, displays in lecture rooms and classrooms, gaming, advertising, entertainment, etc. This requires receiving multiple source images, rendering them into a single composite image and displaying the composite image in the form of a video collage or spatial montage of individual video images (source images).
Transmission of multiple source images to a rendering and/or display means is preferably carried out over a wireless network. The wireless communication resource is shared and the bandwidth is split amongst the individual video image sources. This determines the maximum available throughput for each source and in turn imposes a limit on how many source images can be combined in a composite image and/or a limit on the resolution of the individual source images transmitted from each source. Therefore, there exists a trade-off between the number of sources and the image quality.
If, for example, for a system composed of six sources supporting resolutions up to 1080p30 (meaning about 1.5 Gbps of raw data each), the bandwidth available is 7 Gbps, a maximum of only four sources can be displayed at maximum resolution. Alternatively, up to five sources may be selected with different resolutions: for example, three using 1080p30 (1.5 Gbps) and two using 720p30 (0.67 Gbps) may be selected instead of 1080p30 for all. In this example the bandwidth then supports the required throughput (3*1.5 + 2*0.67 = 5.84 Gbps).
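The arithmetic behind such a feasibility check is simple enough to sketch. The following Python fragment (function and constant names are illustrative, assuming raw 24-bit video and ignoring protocol overhead) reproduces the figures above:

    def raw_rate_gbps(width, height, fps=30, bpp=24):
        """Raw video bit rate in Gbps (no blanking or protocol overhead)."""
        return width * height * fps * bpp / 1e9

    RATE_1080P30 = raw_rate_gbps(1920, 1080)  # ~1.49 Gbps
    RATE_720P30 = raw_rate_gbps(1280, 720)    # ~0.66 Gbps
    BANDWIDTH = 7.0                           # Gbps shared by all sources

    # Four full-resolution sources fit, five do not:
    assert 4 * RATE_1080P30 <= BANDWIDTH < 5 * RATE_1080P30
    # Mixing resolutions allows a fifth source (~5.8 Gbps in raw terms):
    assert 3 * RATE_1080P30 + 2 * RATE_720P30 <= BANDWIDTH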
Moreover, as the wireless medium is sensitive to environmental conditions, bandwidth availability is subject to variation, which requires adaptation by the system and increases the constraints on multi-source display support.
Indeed, if the bandwidth is reduced by 20% (from 7 Gbps to 5.6 Gbps), then none of the previous multi-source displays is possible without applying further resolution reduction, and therefore decreasing the video quality.
The display of a composite image as addressed above requires that an arrangement of the source images within the composite be determined using a rendering device. This may be predetermined or may be controllable through a user interface of the rendering device. The arrangement determines the size, position and orientation of the individual source images within the composite image. The arrangement also determines the position of the source images with respect to other source images within that composite image. For example, one source image may overlap a second source image at least partially, so as to obscure the overlapped part of the second source image. In such a case, the obscured part of the second source image is unnecessarily transmitted, resulting in an inefficient use of communication resources.
In an alternative solution, compression is used to reduce the source data throughput requirement. However, this requires additional encoding processing inside all the sources and decoding processing on the rendering device, which increases the cost and complexity of such devices.
State of the art multi-source display systems are based on a specific device which gathers all the video inputs and is in charge of the composition of the final display. Such systems therefore impose a predetermined fixed limit on the number of video sources, to match the number of inputs provided.
Published patent application US 2014/0082685 A1 discloses a method and apparatus for adjusting the data transmission rate in a wireless communication system and deals with the transmission of video data through a wireless medium. The data rate encoding of the video (and therefore its quality) is adapted to fit the bandwidth available, for example by use of pixel dropping. It therefore enables the video content to adapt to the variable link condition of the wireless channel for the transmission of one video stream. This document does not teach how image data may be wirelessly transmitted from several sources for a multi-view display on a shared communication resource.
Published patent application US 2013/0246576 A1 discloses an interconnection mechanism for multiple data streams and deals with a multi-display system able to handle the display of several source images, taking into account the shared bandwidth of the interconnection system. However, this solution always needs video interconnection through a "Capture node" and cannot manage cases in which the interconnection arrangements of individual sources differ, as for example when the interconnection is realised by different types of adapter.
Moreover, even solutions providing more flexibility for managing a variable number of sources always require adding specific hardware, and therefore need a reconfiguration of the system that cannot easily be managed by the user.
One aim of the present invention is to provide a data rate adaptation method which reduces the amount of video data transmitted from the sources to accommodate the available wireless bandwidth, without unnecessarily constraining the composition of the multi-source display layout either in terms of the number of sources selected or in terms of the chosen image resolution. This results in a scalable system in which the user can organise the rendering of multiple-source video content, enabling the user to determine quality and organisation.
Another aim of the invention is to provide a method that can adapt to different wireless conditions while minimising the video quality impact for the same layout organisation. This is supported by the possibility of adjusting the data rate of some portions of the video sources, keeping global quality as high as possible.
Another goal of the invention is to provide a method which can be supported either through a specific adapter or through a standard device implementing simple video processing.
SUMMARY OF INVENTION
It is a broad objective of the present invention to permit efficient use of communication resources, in particular wireless communication resources, by prioritising communication of those portions of the constituent individual images that are intended and capable of being displayed in the composite image by a target device. Efficient use of the communication resource is achieved by adapting the data rate of each source in accordance with the available communication resources while keeping the image quality of the prioritised portions at the maximum. Another objective is to provide scalability, whereby the number of sources supported by the system is limited neither by connections nor by the limited communication resource. Additionally, such a system is able to support any layout organisation and to support sources with different video resolutions.
According to the first aspect of the invention, there is provided a method of controlling communication of image data comprising at least part of a first image from a first image source, the method comprising: a step of determining an available communication resource for the communication of the at least part of the first image from the first image source to a target device; a step of obtaining an arrangement of a composite image, the composite image comprising the at least part of the first image and at least part of at least one other image, wherein the at least part of the at least one other image is obtained from a source other than the first image source; and a step of determining a data rate adaptation for the communication of the at least part of the first image from the first image source to the target device, the data rate adaptation being based on the available communication resource and the arrangement of the composite image; wherein the arrangement of the composite image is such that at least one portion of the first image does not form part of the composite image, and the step of determining the data rate adaptation comprises a step of reducing the image data corresponding to the at least one portion of the first image not forming part of the composite image.
By adapting the data rate based on the available communication resource and the arrangement of the composite image, any required reduction in image data relating to the first image may be obtained by a reduction relating to the portion of the first image that is not part of the composite image. This method therefore permits retaining as high an image quality as possible by not losing any, or much, image data relating to the displayed part of the image.
Preferably, according to an embodiment of the invention, in the above method the data rate adaptation comprises any one or more of the following: * a reduced colour sampling rate; * reduced image quality or resolution; and * increased lossy image compression.
This allows various means of reducing the image data corresponding to image portions, whether or not they are intended for display in the composite image at the target device.
Preferably, according to another embodiment, reducing the image data comprises removing from the first image all the data associated with the at least one portion of the first image not forming part of the composite image.
This allows complete removal of the data relating to the portion that is not displayed, thereby permitting maximum reduction with respect to that portion. By prioritising this type of image reduction, the image quality of the composite image as displayed may be kept as high as possible.

Preferably, according to another embodiment, reducing the image data comprises applying data compression to the at least one portion of the first image not forming part of the composite image.
This allows some data reduction and is particularly advantageous if the available communication resource is not too constrained, so that the portion of the image that is not displayed may nonetheless be communicated in diminished quality, permitting display if there is an inadvertent or deliberate change in the composition of the composite image.
Optionally in the above embodiment, applying data compression may comprise replacing the data of the at least one portion of the first image by a constant value, the method further comprising applying a lossless compression algorithm to the entire first image.
This alternative allows for an easy way of blanking all data relating to the portions not intended for display, and then applying lossless compression to the entire image to obtain a further overall reduction in the data to be sent from the respective source. As the compression is lossless, there is virtually no loss in image quality with respect to the displayed part of the image.
Preferably, according to an embodiment, the above method further comprises a step of determining the at least one portion of the first image that is not part of the composite image as a result of being overlapped in the composite image by at least a portion of the at least one other image, or of being cropped out by at least one peripheral edge of the composite image.
By determining the portions of the first image that are either overlapped by another source image or cropped out of the frame of the composite image, the portions not displayed in the composite image can easily be identified. Any image data reduction technique may then be applied to these portions as a matter of priority over the parts that are displayed, in order to make the best use of the limited communication resource without unduly undermining the quality of the displayed image.
Preferably according to another embodiment of the invention, the above method further comprises determining all portions of the first image and the at least one other image that are not part of the composite image.
By determining all portions of the individual images not for display, the data rate adaptation may take into account all possible image portions not intended for display, therefore allowing full capacity for rate reduction without first degrading the quality of the displayed parts.
Preferably according to another embodiment in the above method the data rate adaptation comprises reducing or removing all image data for at least one of the portions of the first image and the at least one other image that are not part of the composite image.
As an alternative to replacing with the constant, reducing or removing all data relating to image portions not intended for display allows for greater flexibility in adapting to the available communication resource. At the extreme, all the data relating to the portion may be removed and the lack of data at the receiver may be construed as a constant (i.e. a blanked-out portion).
In the moderate data reduction technique, only some data (such as a base layer) relating to the portions may be communicated from the source. This allows at least some of the image (in lower quality) to be visible in case the communication of an otherwise overlapping image is interrupted or stopped. Alternatively, it allows a user to change the arrangement of the composite image by assessing the overlapped or cropped-out image portions.

Preferably, according to another embodiment, in the above method the step of data rate adaptation comprises reducing the quality of the entire first image, or removing the entire first image, if the ratio of the sum of all portions of the first image not forming part of the composite image to the entire first image is above a predetermined threshold.
This embodiment ensures that if a sufficient proportion of a source image is not intended for display, then that image is unlikely to serve a useful purpose in the display. Therefore, for such source images, significant savings in the required communication resource may be achieved by reducing the quality of the entire image or by removing it altogether.
Preferably, according to another embodiment, in the above method the step of data rate adaptation further comprises reducing the image data corresponding to at least one other portion of the first image that does form part of the composite image.
The advantage of this embodiment is apparent when even the extreme data rate reduction with respect to portions not intended for display fails to accommodate the communication of the source images within the available communication resources.
Preferably, according to another embodiment, in the above method the step of data rate reduction is implemented if the ratio of the size of the parts of the first image intended for display to the total size of the composite image is below a predetermined threshold.
This embodiment is particularly advantageous if the size of the source image as displayed correlates to its significance in the composite. It allows data rate reduction with respect to source images that occupy a small, insignificant proportion of the composite image.
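A minimal sketch of these two threshold tests, assuming illustrative threshold values and function names that the method itself does not prescribe:

    def should_reduce_entire_source(hidden_pixels, total_pixels, threshold=0.9):
        # Mostly hidden source: reduce the quality of, or remove, the whole image.
        return hidden_pixels / total_pixels > threshold

    def may_reduce_visible_part(visible_pixels, composite_pixels, threshold=0.1):
        # Source occupying only a small fraction of the composite: its visible
        # portion may also be reduced in quality.
        return visible_pixels / composite_pixels < threshold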
Preferably according to another embodiment in the above method the step of determining the data rate adaptation is further based on the environmental conditions of the network over which the at least part of the first image data is communicated to the target device.
By adapting the data rate based on environmental conditions, the available communication resource may be adjusted based on noise, interference, etc.

Preferably, according to another embodiment, in the above method the step of determining the data rate adaptation is further based on a communication scheme and/or communication means used for the communication of the at least one part of the first image data from the first image source to the target device.
As different communication schemes and communication means have different levels of resilience (for example, error resilience), making allowance in the available communication resource for this resilience, or lack thereof, is advantageous.
Preferably according to another embodiment the above method further comprises a step of informing at least the first image source of the determined data rate adaptation.
This allows the adaptation of the existing rate, or the adapted new rates, to be communicated to the source.
Preferably, according to another embodiment, the above method further comprises a step of the first image source transmitting the at least part of the first image to the target device in accordance with the determined data rate adaptation.
This allows the source to transmit at the new adapted rate after the rate adaptation is determined.
Preferably, according to another embodiment, in the above method the arrangement of the composite image is determined using a user interface.
This allows a user to select the location, size, position and orientation of the individual images forming part of the composite image.
Preferably according to another embodiment in the above method the communication resource is a wireless communication resource.
Preferably according to another embodiment in the above method the first image is part of a video stream.
According to a second aspect, the invention may be embodied in an executable computer program comprising a sequence of instructions for implementing the method according to any one of the embodiments of the first aspect of the invention.
According to a third aspect, the invention may be embodied in a computer readable storage medium storing instructions of a computer program for implementing the method according to any one of the embodiments of the first aspect of the invention.
According to a fourth aspect of the invention, there is provided a device for determining the data rate for communication of image data, the image data comprising at least part of a first image from a first image source, the device comprising: means for obtaining an arrangement of a composite image, the composite image comprising the at least part of the first image and at least part of at least one other image, wherein the at least part of the at least one other image is obtained from a source other than the first image source; means for determining an available communication resource for the communication of the at least part of the first image from the first image source to a target device; and means for determining a data rate adaptation for communication of the at least part of the first image from the first image source to the target device, the data rate adaptation being based on the available communication resource and the arrangement of the composite image; wherein the arrangement of the composite image is such that at least one portion of the first image does not form part of the composite image, and the means for determining the data rate adaptation comprises means for reducing the image data corresponding to the at least one portion of the first image not forming part of the composite image.
Preferably according to an embodiment of the invention, in the above device the means for determining the data rate adaptation comprises any one or more of the following: a. means for reducing colour sampling rate; b. means for reducing image quality or resolution; and c. means for applying image compression.
Preferably, according to another embodiment, in the above device the means for reducing the image data comprises means for removing from the first image all the data of the at least one portion not forming part of the composite image.
Preferably according to another embodiment, in the above device the means for reducing the image data comprises means for applying data compression to the at least one portion not forming part of the composite image.
Preferably, according to another embodiment, in the above device the means for applying data compression comprises means for replacing, by a constant value, the data of the at least one portion not forming part of the composite image, the device further comprising means for applying a lossless compression algorithm to the entire first image.
Preferably, according to another embodiment, the above device further comprises means for identifying the at least one portion of the first image not forming part of the composite image.
Preferably, according to another embodiment, the above device further comprises means for determining all portions of the first image and the at least one other image that are not part of the composite image.
Preferably according to another embodiment, in the above device the means for data rate adaptation comprises means for reducing or removing all image data for at least one of the portions of the first image and the at least one other image that are not part of the composite image.
Preferably, according to another embodiment, in the above device the means for data rate adaptation comprises means for reducing the quality of the entire first image, or for removing the entire first image, if the ratio of the sum of all portions of the first image not forming part of the composite image to the entire first image is above a predetermined threshold.
Preferably, according to another embodiment, in the above device the means for determining the data rate adaptation further comprises means for reducing the image data corresponding to at least one other portion of the first image forming part of the composite image.
Preferably, according to another embodiment, the above device further comprises means for data rate adaptation based on the environmental conditions of the network over which the at least part of the first image data is communicated to the target device.
Preferably, according to another embodiment, the above device further comprises means for data rate adaptation based on a communication scheme and/or communication means used for the communication of the at least one part of the first image data from the first image source to the target device.

Preferably, according to another embodiment, the above device further comprises means for informing at least the first image source of the determined data rate adaptation.
According to a fifth aspect of the invention, there is provided a system comprising a plurality of sources and a display device interconnected by a communication network for transmitting images from the sources to the display device, the system further comprising a device according to any one of the embodiments of the fourth aspect of the invention for carrying out the steps of the method according to any embodiments of the first aspect of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
Further advantages of the present invention will become apparent to those skilled in the art upon examination of the drawings and detailed description. Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings.
Figure 1 illustrates a schematic representation of a communications network, in which the present invention can be applied.
Figures 2a and 2b show schematics of a communication device of the system in which the present invention can be implemented.
Figures 3a and 3b show exemplary layouts of individual source images in the composite image.
Figure 4 represents an algorithm for data rate adaption in accordance with an embodiment of the invention.
Figures 5a and 5b show examples of layout organisation information in accordance with an embodiment of the invention.
Figure 6 represents an algorithm for determining overlap of source images in accordance with an embodiment of the invention.
Figures 7 and 9 show the overlap area information tables for each source image in the exemplary layouts of Figures 3a and 3b respectively.
Figures 8 and 10 describe the pixel block information obtained according to an aspect of the invention for the exemplary layouts of Figures 3a and 3b respectively.
DETAILED DESCRIPTION
The invention will now be described by means of specific non-limiting exemplary embodiments and by reference to the figures.
Figure 1 illustrates a system where the method of data rate adaptation in a multi-source display system, in accordance with an embodiment of the invention, can be applied. Preferably the system 100 is a wireless network composed of several devices, where device 105 is a user interface controller, such as a tablet or a smartphone, enabling the user, via its user interface, to select from video sources 115-1 to 115-5 and define a layout organisation for these sources to form a composite image for display on a target device 110. These operations are preferably performed through a specific application that gathers source information and informs the display device via wireless communication means. Examples of wireless communication means are the 60 GHz RF network 100 or independent wireless interfaces such as Bluetooth, WiFi (802.11g as an example) or infrared. In this particular example the 60 GHz network is preferred.
The wireless network 100 further comprises a target device 110 for display, such as a video projector, a television set or, more generally, any type of display device. This device is either connected through a video connection 135 (HDMI or DisplayPort) to a wireless adapter 120 in order to communicate with the wireless network using, for example, a standard interface such as WiGig or 802.11ad or a specific 60 GHz wireless interface, or the wireless adapter is integrated in device 110, providing the wireless connection and direct video to the display. Preferably the display device is able to support a standard video format up to 4k2k (3840*2160 pixels) at 30 frames per second (fps).
Video sources 115-1 to 115-5 may provide different video formats such as 720p30 (1280*720) and 1080p30 (1920*1080), up to 4k2k at 30 fps. Preferably all video source devices (except 115-2) are connected to system 100 using an internal component using the same wireless standard as the display device (WiGig, 802.11ad, etc.). Device 115-2 is an example of connection through a video interface 130 (HDMI or DisplayPort) to an external wireless adapter 125 adapted to communicate with adapter 120.
Using the user interface controller 105, images received wirelessly from any source or set of sources (115-1 to 115-5) may be displayed on device 110. Preferably the wireless connection is at least one 60 GHz channel able to provide a global throughput of up to 3.5 Gbps. In order to support a high data rate such as is needed for raw 4k2k video transmission (3840*2160*30*24 ≈ 5.97 Gbps), several 60 GHz radio channels may be aggregated to provide the necessary bandwidth allocation for a 4k2k video source transmission.
It should be noted that the user interface controller may be an integrated function inside the wireless adapters (120 or 125) or inside any of the source or display devices.
Preferably, the adapter 120 is the master of the wireless network of the system 100, taking charge of the synchronisation of the wireless network and of the insertion and removal of wireless nodes using state of the art techniques (such as beacon message services).
Figure 2a shows the functional block diagram of the wireless external adapters 120 or 125 of Figure 1, connected either to the video projector device or to a source device (or integrated in these devices). Preferably an adapter comprises a main controller 201 and several physical layer units (denoted PHY A and PHY B) 211 and 212, able to provide wireless transmission/reception communication means and following a standard implementation such as WiGig (or 802.11ac). Such a standard implementation may be used when standard devices (for example device 115-1) are present in the system 100. Using an aggregation mechanism, these PHYs provide sufficient bandwidth to support a 4k2k display target device 110.
Preferably, the adapter further comprises a video output 239 enabling the connection of a target device such as the video projector 110, and a video input 240 enabling the connection of a video source device, as for example device 115-2.
Preferably, the main controller 201 is itself composed of: a Random Access Memory (denoted RAM) 233; an Electronically-Erasable Programmable Read-Only Memory (denoted EEPROM) 232, which stores information such as the layout of the sources on the display and/or the pixel block information of each source; a micro-controller or Central Processing Unit (denoted CPU) 231; a user interface 234 that either communicates with the external user interface controller 105 to receive layout information or provides the full interface to manage the definition of the layout, such as source selection and positioning; a medium access controller (denoted MAC) 238; a video processing controller 235; a video interface controller 236; and a video Random Access Memory (denoted video RAM) 237.
Preferably, the CPU 231, MAC 238, video processing controller 235 and user interface 234 exchange control information via a communication bus 244, to which the RAM 233 and EEPROM 232 are also connected. The CPU 231 controls the overall operation of the adapter, as it is capable of executing, from the RAM 233, instructions pertaining to a computer program once these instructions have been loaded from the EEPROM 232.
Preferably, via the user interface 234, configuration of how to manage the display of the several sources of the system on the video projector can be received. This interface may be a wired interface (such as Ethernet or Universal Serial Bus (USB)), a wireless interface (infrared, Bluetooth) or the 60 GHz wireless system 100 in use.
Preferably, the adapter 120, with the help of the video processing unit 235, generates for the video projector a video output signal on interface 239, by composition of the several sub-areas received from all the sources through the communication interfaces (211/212), as further explained with reference to Figure 4.
Preferably, the adapter 125, with the help of the video processing unit 235, receives the video signal connected to the video input 240 (HDMI for example) and detected by the video interface 236. Preferably the video interface is able to detect the format of the source, such as size and frame rate (720p30, 1080p30 or 4k2k at 30 fps) or chroma sub-sampling (4:4:4, 4:2:2 or 4:2:0). In that case, for data rate adaptation support, the video processing unit 235 performs all the necessary transformations of the video data, which are temporarily stored in the video RAM 237, such as pixel block generation or frame-rate/chroma sub-sampling adaptation to fit the available radio bandwidth, in accordance with the determination received after display layout analysis, as further explained with reference to Figure 4.
Preferably, the MAC 238 is in charge of controlling the emission and reception of MAC frames conveying control data and/or video data on the different PHYs (211-212).
Preferably, the control data is used for protocol management, such as link capacity determination or the sharing of the transmission scheme determination between the devices of the system. It is also used to provide the source pixel block information resulting from the data rate adaptation method, and may further be used to transfer all available source information (format) to the user interface controller 105.
For data communications between devices, the MAC 238 may rely on several physical layer units, 211 and 212 in this example. Each physical layer unit may be a wireless unit operating in the 60 GHz band, with a typical useful throughput of up to 3.5 Gbps under the best transmission conditions. By use of aggregation, the system is therefore able to manage the delivery of video data to serve a 4k2k video display device. Moreover, the MAC may allocate channels to each physical layer unit, thus enabling sharing of the bandwidth between a set of sources using a Time Division Multiple Access (TDMA) allocation scheme.
Preferably, a wireless physical layer unit 211 or 212 comprises a modem, a radio module and antennas. The radio module is responsible for processing the signal output by the modem before it is sent out by means of the antenna; for example, this processing can comprise frequency transposition and power amplification. Conversely, the radio module is also responsible for processing a signal received by the antenna before it is provided to the modem. The modem is responsible for modulating and demodulating the digital data exchanged with the radio module. For instance, the modulation and demodulation scheme applied is of the Orthogonal Frequency-Division Multiplexing (OFDM) type. Antennas, for both transmission and reception, may be set either with a quasi omni-directional radiation pattern (for control data sharing, for example) or with a quasi omni-directional or directional radiation pattern for video data transmission, enabling either broadcast of common video data or long range connection and better spatial reuse. The radio module also provides means to measure RSSI (Radio Signal Strength Indication) in order to identify the antenna positions that enable the best signal reception.
Preferably, the MAC 238 is able to manage reception on several PHYs simultaneously.
Figure 2b shows a functional block diagram of a wireless device 115-1, 115-3, 115-4 or 115-5 in an alternative implementation of the invention.
According to this alternative, the wireless device comprises a video memory 250 that contains the original video content of the device, parts of which may be transmitted according to the composite image layout on the final display. The wireless device further comprises a wireless transmission unit 270 (such as 802.11ac or WiGig). Preferably, the wireless transmission unit 270 itself comprises: a video codec unit 275, able to provide an entropy encoding function in order to reduce the size of the data transmitted (or a reduced video pixel block); a MAC unit 280, able to manage the medium access layer according to the wireless standard; and a PHY unit 285 that enables the transmission/reception of data for control or data emission.
The wireless device may further comprise a central unit 255 configured to at least support video content delivery, and preferably comprising a Central Processing Unit 260, which performs the steps of the algorithm of Figure 4 (such as the analysis and reception of source pixel block information and the generation of the required pixel blocks, or configuring the wireless module to perform the video transmission as determined in step 460). The central unit 255 may further comprise a mask memory 265, where the locations identified by pixel blocks of resolution "0:0:0" are set to a same value (as further explained in step 455); the mask is applied to the original video memory 250 in order to use the standard wireless entropy encoding capability of the standard wireless transmission unit 270.
Using such a wireless device 115-1, which may be a standard wireless device, source data rate adaptation according to an embodiment of the invention may be achieved, preferably by applying a software upgrade alone, therefore enabling greater compatibility and support of the method with a low adoption/implementation cost.
Figures 3a and 3b are two possible examples of video source layouts on the final display that could be used in implementing one or more aspects of the present invention.
The layout of the source images is defined preferably on the device 105, using the graphical user interface and drag and drop features of such a device. The user may first select one or more sources for display from all the available sources on the network, and then define their positions in the final display, knowing the size of each current source video.
Figure 3a is a first example of the layout, in which the full display size is a 4k2k (L=3840 and H=2160 pixels) display, in which the top left corner corresponds to the (X,Y) position of value (0,0) and the bottom right corner to the value (3839,2159). Preferably, five sources of format 1080p30 (1920*1080 pixels) are selected from the five source devices (115-1 to 115-5) and are respectively set on the final display to be displayed as follows: * in the top left corner (0,0), for video 300-1 issued from device 115-1; * in the top right corner (1920,0), for video 300-4 issued from device 115-4; * in the middle of the screen (960,540), for video 300-3 issued from device 115-3; * in the bottom left corner (0,1080), for video 300-2 issued from device 115-2; and * in the bottom right corner (1920,1080), for video 300-5 issued from device 115-5.
In this example, video 300-3 is on the upper layer of the layout and should be fully displayed. All the other sources are set on the background layer and will only be partially visible.
Figure 3b is another example of the layout, in which the full display size is a 4k2k (L=3840 and H=2160 pixels) display, in which the top left corner corresponds to the (X,Y) position of value (0,0) and the bottom right corner to the value (3839,2159); three sources of different formats are selected from the five source devices (115-1 to 115-5) and are respectively set on the final display to be displayed as follows: * in the top left corner (0, 0), for 4k2k video 305-1 of resolution (3840*2160), issued from device 115-1; this video is on the background layer and will be displayed partially; * at position (400, 380), for 720p30 video 305-2 of resolution (1280*720), issued from device 115-3; this video is set on an intermediate layer and, being partly covered by video 305-3, is also partially displayed; and * at position (1620, 980), for 1080p30 video 305-3 of resolution (1920*1080), issued from device 115-5; this video is set on the upper layer and is therefore fully visible.
These two exemplary layouts illustrate some of the advantages of the system. Figures 3a and 3b show that a variable number of video sources may be selected for the multi-source display, taking advantage of the wireless connectivity. As soon as a device is a member of the wireless network, it should be able to display its video, preferably with no limitation when implementing the method described hereafter, where only the visible part of the video content is transmitted.
Figure 3b also illustrates that the method according to the invention is able to support different source formats, such as a mix of 720p30, 1080p30 and 4k2k video as illustrated. By using a state of the art frame rate adaptor, formats such as 720p60, 1080p60 and others may also be supported.
Preferably, when managing the layout at the Graphical User Interface (GUI), any type of layout is supported, meaning that several layer levels (from level 0 for the background up to level N, for example N=2 for Figure 3b) are possible and the positioning of the video sources is unconstrained.
As will be explained in relation to Figure 4, for the multi-source display in accordance with a preferred embodiment, any hidden pixels of a source image at layer level N-1, due to overlay by another source image at layer level N, are identified, and this information is then used to reduce and adapt the video source data rate to the available wireless bandwidth. In the two examples provided, the system, even with an optimal limited bandwidth of 7 Gbps (2*3.5 Gbps), is able to sustain and adapt to support layouts that would normally require more data throughput, namely: 5*(1080*1920*24*30) = 7.46 Gbps (for Figure 3a); or [(1080*1920)+(720*1280)+(3840*2160)]*24*30 = 8.12 Gbps (for Figure 3b).
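The saving can be checked for the Figure 3a layout with a short sketch (the rectangle convention (x, y, width, height) and the names are assumed): removing the pixels of the corner sources hidden under the top-layer video 300-3 brings the raw requirement back under the 7 Gbps budget.

    def overlap_area(a, b):
        # Intersection area of two (x, y, w, h) rectangles.
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        w = max(0, min(ax + aw, bx + bw) - max(ax, bx))
        h = max(0, min(ay + ah, by + bh) - max(ay, by))
        return w * h

    top = (960, 540, 1920, 1080)              # video 300-3, upper layer
    corners = [(0, 0, 1920, 1080), (1920, 0, 1920, 1080),
               (0, 1080, 1920, 1080), (1920, 1080, 1920, 1080)]

    total = 5 * 1920 * 1080                   # 10,368,000 source pixels
    hidden = sum(overlap_area(c, top) for c in corners)  # 4 * 960 * 540
    visible = total - hidden                  # 8,294,400 = 3840 * 2160

    print(visible * 24 * 30 / 1e9)            # ~5.97 Gbps, below 7 Gbps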
Figure 4 describes the main steps of the data rate adaptation algorithm. The algorithm starts once the wireless system 100 is in place, meaning that all the devices 115-1 to 115-5, 110 and 105 are able to communicate together through the wireless network (with internal or external adapters 125, 120) or a dedicated interface (Bluetooth, infrared, etc.).
Any addition of a new device, or any removal, is then taken into account by the system and managed by adapter 120. Once added to the wireless network, any new device reports its video source connection status and provides information on the video format provided by the device, as detected by the video interface 236. A device joining the system also reports its capability to generate pixel blocks. In case the reported capability does not match the system requirement, the device may preferably be excluded from the system.
In a first step 400, adapter 120 provides to the user interface controller 105 all the necessary information to manage the GUI, such as: * the number of sources available in the wireless system and their identifiers, enabling selection of the videos to render on the display 110; * the source video formats detected, enabling size rendering for layout selection on the user interface controller (as described in Figure 3); and * the display format rendering (here assumed to be 4k2k).
This information is exchanged, for example, by use of wireless message support.
Then step 405, defining the layout organisation, is performed, preferably on the user interface controller 105 using the screen and control interface (touchpad, drag and drop facilities, etc.). For example, in this step the user is able to select the sources and obtain a preview of the disposal of the video sources on the display 110. This step provides a means to indicate the layer level of each video source, where layer level N means that all layers of a level below N are not visible on the final display rendering if overlapped by an image in layer N. In this step, the user is also able to select the format used at the source level, when several formats are supported, such as 720p or 1080p. Once the disposal of the sources on the display is finalised, step 410 is performed. It provides to the device 120, preferably through wireless message communication, the list of the selected source(s) to be rendered on the display 110, and for each of the selected sources the following information (see the sketch after this list): * the format selected for the rendering (e.g. 1080p, meaning 1920*1080); * the position of the top left corner of the video source (Xpos, Ypos) in the global coordinates of the display; and * the layer level of the rendering (from 0 for the background level up to level N for the top layer, which is the video that will not be hidden by any other video).
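A possible encoding of this per-source layout message, with assumed field names and the values of the Figure 3a example:

    # Layout organisation sent in step 410 (field names are illustrative).
    # Each entry: source ID, selected format, top left corner (Xpos, Ypos)
    # in display coordinates, and layer level (0 = background).
    layout = [
        {"id": 1, "format": (1920, 1080), "pos": (0, 0),       "layer": 0},
        {"id": 4, "format": (1920, 1080), "pos": (1920, 0),    "layer": 0},
        {"id": 3, "format": (1920, 1080), "pos": (960, 540),   "layer": 1},
        {"id": 2, "format": (1920, 1080), "pos": (0, 1080),    "layer": 0},
        {"id": 5, "format": (1920, 1080), "pos": (1920, 1080), "layer": 0},
    ]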
Knowing the user interface requirements, the device 120 is then ready to perform steps to adapt the data rate of each source to the wireless system capacity.
It first determines the available bandwidth (step 415), in order to be able to check the correct adaptation of the source data rates to the wireless network. This determination depends on the number of wireless channels available and on the link quality associated with each selected source in the system. Knowing the number of channels enables knowing the aggregate throughput the system may handle (for 2 PHYs with independent channels, up to 2*3.5 Gbps). Link qualities are estimated through specific communication that enables the measurement of either RSSI (Radio Signal Strength Indication) or BER (Bit Error Rate), and therefore enables the selection of the best transmission mode for each channel available. When link qualities are good, the most efficient transmission mode (16QAM for example) is used, to benefit from the maximum bandwidth available (here 7 Gbps, for example). In case link qualities are poorer, the transmission mode is adapted to provide more redundancy or a more robust modulation (QPSK), therefore decreasing the available bandwidth as a portion of the global bandwidth. The aim of this step is thus to characterise the available bandwidth in accordance with the communication environment.
It may be noted that using the same radio channel for bandwidth aggregation is possible if the system uses directional antennas, enabling in that case simultaneous parallel transmissions on the same channel by using spatial diversity.
To illustrate this example, we suppose that 2 channels with good quality are available, providing a maximum wireless data throughput of up to 7 Gbps.
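A sketch of how step 415 might derive this figure; the per-mode throughputs and the quality threshold are assumptions for illustration, not values prescribed by the method:

    # Per-channel throughput by transmission mode (assumed figures, Gbps).
    PHY_RATE_GBPS = {"16QAM": 3.5, "QPSK": 1.75}

    def available_bandwidth(channel_qualities):
        """channel_qualities: one normalised (0..1) RSSI/BER-derived metric
        per wireless channel."""
        total = 0.0
        for quality in channel_qualities:
            mode = "16QAM" if quality > 0.5 else "QPSK"  # assumed threshold
            total += PHY_RATE_GBPS[mode]
        return total

    print(available_bandwidth([0.9, 0.8]))  # two good channels -> 7.0 Gbps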
Then, for each source in the selected source list (received from the user interface controller in step 410), adapter 120 performs the following operations. In step 420, it performs an overlap determination (further explained in Figure 6) that identifies all the areas which overlay the current video source.
The results of this step for the two exemplary layouts of Figures 3a and 3b are further explained with Figures 7 and 9. In this step the number of overlap areas is identified and, for each overlap area, one or more of the following items of information may be determined: * the position of the top left corner of the overlap area (Xpos, Ypos) in the coordinate system of the source; * the width L along the X axis of the overlap area; and * the height H along the Y axis of the overlap area.

In step 425, it then determines a set of pixel block information, which will enable the video source transmitter to provide a video data rate adapted to the multi-display requirements and the limited available bandwidth of the wireless network.
Pixel block information may include one or more of the following elements: * the position of the top left corner of the block (X'pos, Y'pos) in the coordinate system of the source; * the size of the block area (here a rectangular area), with its width L' and height H' (L' following the X axis and H' the Y axis);
* a resolution to apply to the colour coding of the area, where several values exist and the selection of a value enables data rate adjustment; values of the resolution are, for example, "4:4:4" for the full colour value, "4:2:2" for a reduced 2/3 colour value and "4:2:0" for a reduced 1/2 colour value, while "0:0:0" is a specific value indicating that this area is either not transmitted or uses maximum throughput reduction, since in the final display it is overlapped by at least one other video source; and
* a block ID, which may also be specified and possibly used by the adapter in charge of the final display to identify without ambiguity the area received from one source, in order to correctly set the position of the content in the final rendering.
The results of step 425 for the two exemplary layouts of Figures 3a and 3b are further explained with reference to Figures 8 and 10.
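One possible representation of this pixel block information (field names assumed), shown for video 300-1 of Figure 3a, whose bottom-right quarter is hidden by video 300-3:

    from dataclasses import dataclass

    @dataclass
    class PixelBlock:
        block_id: int
        pos: tuple        # (X'pos, Y'pos) in the source's coordinate system
        size: tuple       # (L', H'): width along X, height along Y
        resolution: str   # "4:4:4", "4:2:2", "4:2:0" or "0:0:0"

    # Video 300-1 split into visible blocks plus the hidden quarter:
    blocks_300_1 = [
        PixelBlock(0, (0, 0),     (960, 1080), "4:4:4"),
        PixelBlock(1, (960, 0),   (960, 540),  "4:4:4"),
        PixelBlock(2, (960, 540), (960, 540),  "0:0:0"),  # not transmitted
    ]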
In step 430 it is checked whether the previous steps 420 and 425 have been performed for all of the selected sources. If yes, the output of these two steps is to be transmitted to the display for rendering. If there is still a selected source in the list for which steps 420 and 425 have not been performed, these steps are processed for that selected source.
In the next step of the algorithm, 435, it is determined whether the available bandwidth, as determined in step 415, is sufficient for the transmission of the pixel blocks determined for each selected source in step 425. This step is performed by first determining the manner of transmission that will occur between the selected source(s) and the display adapter using the different wireless channels and the TDMA transmission scheme, and secondly by calculating whether the bandwidth required for all the source transmissions during the TDMA is no more than the bandwidth availability determined during step 415. This calculation accounts for all the overhead introduced by the wireless transmission scheme, such as FEC, headers and inter-frame gap duration, as known in the state of the art.
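A sketch of this feasibility test over the PixelBlock objects of the previous sketch; the bits-per-pixel mapping follows the colour resolutions described above, while the 20% allowance for FEC, headers and inter-frame gaps is an assumed figure:

    BITS_PER_PIXEL = {"4:4:4": 24, "4:2:2": 16, "4:2:0": 12, "0:0:0": 0}

    def required_gbps(blocks, fps=30, overhead=0.20):
        payload = sum(b.size[0] * b.size[1] * BITS_PER_PIXEL[b.resolution] * fps
                      for b in blocks)
        return payload * (1 + overhead) / 1e9

    def fits(blocks, available_gbps):
        # Step 435 check: is the required throughput within the step 415 budget?
        return required_gbps(blocks) <= available_gbps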
If the bandwidth availability check fails, i.e. if the required bandwidth is more than the available bandwidth, then step 440 adapts a subset of the pixel blocks determined during step 425 in order to reduce their bandwidth transmission requirement. For that purpose, in order to keep the video quality as high as possible within the limited wireless bandwidth, it selects the smallest area(s) and applies to each of them a lower resolution of the colour coding (reducing from 4:4:4 to 4:2:2 for a 1/3 reduction gain, for example). After each colour reduction, a check against the available bandwidth is performed so as to stop the adjustment as soon as the requirement fits the available bandwidth.
The selection of the smallest areas is performed using a criterion level (for example, an area of less than 10% of the global display) so that the colour reduction applies only to these small areas, without degrading the quality of large displayed areas. Two alternatives for the reduction then exist, as sketched below. Either the colour reduction is applied to small areas only, first applying the (4:4:4 to 4:2:2) reduction, then the (4:2:2 to 4:2:0) reduction if necessary, without modifying the other visible areas, thereby enabling high quality rendering of the main source video of the multi-source display. Alternatively, the colour reduction is performed by applying the (4:4:4 to 4:2:2) reduction to the small areas and, if this is not sufficient, applying the same reduction to some of the other areas (even those above the criterion level) in order to obtain a more uniform quality level across the multi-source display.
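The first alternative might look as follows, reusing fits() from the previous sketch; the two-pass ordering follows the text above, while the remaining details are assumptions:

    DISPLAY_PIXELS = 3840 * 2160

    def adapt_small_areas(blocks, available_gbps, small_ratio=0.10):
        # Candidate blocks: visible and smaller than 10% of the display.
        small = sorted((b for b in blocks
                        if b.resolution != "0:0:0"
                        and b.size[0] * b.size[1] < small_ratio * DISPLAY_PIXELS),
                       key=lambda b: b.size[0] * b.size[1])
        # Pass 1: 4:4:4 -> 4:2:2; pass 2: 4:2:2 -> 4:2:0 if still needed.
        for step in ({"4:4:4": "4:2:2"}, {"4:2:2": "4:2:0"}):
            for block in small:
                if fits(blocks, available_gbps):
                    return True        # budget met, stop adjusting
                block.resolution = step.get(block.resolution, block.resolution)
        return fits(blocks, available_gbps)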
After step 435 (or 440), the algorithm has preferably determined, for all video sources, a set of pixel block(s) and the TDMA transmission scheme that each wireless device should apply (time of transmission, channel used). The device adapter 120 therefore informs all sources, using wireless control messages, about: * the TDMA transmission scheme; and * their own set of pixel block(s) information to provide (message 445). Once all sources that participate in the display are informed (step 450), each of them applies one of the techniques described hereinafter to provide the necessary video source content (step 455) to the wireless display adapter, in order to produce the multi-source layout as captured in the user interface controller.
A first possibility is to use the video processing unit 235 of wireless adapter 125 to read, inside video RAM 237, the pixels belonging to pixel blocks with a resolution different from "0:0:0", and to apply to them the determined resolution that guarantees the fitting of the data rate to the available bandwidth. The areas that are overlapped (in other words, hidden by another source) are not transmitted on the wireless medium. The same process can be applied by the standard wireless adapters (115-1 to 115-5, except 115-2), reading the video memory corresponding to the locations determined in the pixel blocks and applying the video colour resolution. This technique is the preferred one, as it provides better video quality.
Another possibility, for standard wireless device 115-1 for example, is to use an entropy encoding technique, which results in high compression of the useless video data of the non-visible pixel areas. This is performed by first using the pixel block information whose resolution is set to "0:0:0" to set all the source video memory corresponding to these locations to a same value; applying entropy encoding then ensures the encoding of these pixel block(s) using a minimum number of bits. The data rate of the video source is therefore greatly reduced and adapted to the available bandwidth. The adapter 120 is able to perform this kind of operation and therefore, when a mix of devices exists as in the system 100, this technique is the one selected, even if it provides a lower quality level.
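A sketch of this second technique is given below; zlib stands in for whatever lossless entropy coder the devices actually implement (an assumption for illustration), and the source frame is modelled as a NumPy array.

```python
import zlib

import numpy as np

def encode_with_hidden_regions(frame: np.ndarray, zero_blocks) -> bytes:
    """Blank every "0:0:0" pixel block to a constant value, then entropy-encode
    the whole frame; the constant runs compress to almost nothing, so the
    hidden areas cost a minimum number of bits."""
    frame = frame.copy()
    for b in zero_blocks:  # blocks whose resolution is "0:0:0"
        frame[b.y_pos:b.y_pos + b.height, b.x_pos:b.x_pos + b.width] = 0
    return zlib.compress(frame.tobytes(), level=6)
```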
Then, under the control of device adapter 120, the TDMA that enables data transmission using the aggregated wireless throughput is established in step 460. The device adapter thereby receives, from all the video sources selected in the layout, the video data as requested in the pixel block information exchange. The display adapter 120 is then able to compose the final display (step 465), gathering each source of video content and knowing their rendering positions thanks to the layout organisation (step 410) and the pixel block description information.
In the described preferred embodiment, the algorithm is performed by adapter device 120 near the display device 110 (or embedded in it). Any other adapter or device (for example the user interface controller), or more generally any processing device, may perform the same type of algorithm.
An example of the layout description information will be described with reference to Figures 5a and 5b, which depict the layout organisation information (transmitted in step 410) corresponding respectively to the layout examples of Figures 3a and 3b.
Figure 5a illustrates the layout organisation information of the five 1080p30 sources of the layout of Figure 3a. The number of lines in the table corresponds to the number of sources requested in the final display. As all 5 adapters of system 100 are involved in the final display, there are five ID entries. The ID value refers to the identifier allocated by the wireless network when the video source device enters the system 100. For example, ID 3 refers to node 105-3 and, more generally, 115-x indicates the node that was given the ID x when it was identified in the wireless network of the system 100.
As all video sources are 1080p30 sources, their respective size information (L*H) is set to a (1920*1080) pixel rectangle area. Then Xglo and Yglo refer to their position in the full 4k2k display, indicating respectively that: * Source 105-1 is in the top-left corner, covering an area of size 1920*1080 from position (0, 0); * Source 105-2 is in the bottom-left of the display, covering an area of size 1920*1080 from position (0, 1080); * Source 105-3 is in the middle of the display, covering an area of size 1920*1080 from position (960, 540); * Source 105-4 is in the top right of the display, covering an area of size 1920*1080 from position (1920, 0); and * Source 105-5 is in the bottom right corner, covering an area of size 1920*1080 from position (1920, 1080).
The level in the layout is finally indicated in the last column of Figure 5a, where 0 indicates the background level (for devices 105-1, 105-2, 105-4 and 105-5) and level 1 means that the video from device 105-3 is on the top layer, meaning that this full video image (1920*1080) is visible.
Figure 5b illustrates the layout organisation information of the sources of the layout of Figure 3b.
The number of lines in the table corresponds to the number of sources requested in the final display. Here 3 of the 5 adapters of system 100 are selected in the final display, corresponding to: * Source 105-1, which is a 4k2k source (size indicated as 3840*2160), appearing in the top-left corner at position (0, 0). This video is set at the background layer of the display (Level 0); * Source 105-3, which is a 720p30 source (size = 1280*720), set at position (400, 380). This video is set at an intermediate layer of the display (Level 1); and * Source 105-5, which is a 1080p30 source (size set to 1920*1080), set at position (1620, 980). This video is set at the top layer of the current display (Level 2). The image from this source will be fully displayed.
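For illustration, the layout organisation information of Figure 5b amounts to the following table; the (ID, L, H, Xglo, Yglo, level) tuple layout is merely an assumed representation of the columns described above.

```python
# (ID, L, H, Xglo, Yglo, level) -- one row per source selected in the layout
LAYOUT_FIG_5B = [
    (1, 3840, 2160,    0,   0, 0),  # source 105-1, 4k2k, background layer
    (3, 1280,  720,  400, 380, 1),  # source 105-3, 720p30, intermediate layer
    (5, 1920, 1080, 1620, 980, 2),  # source 105-5, 1080p30, top layer
]
```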
Figure 6 describes in detail step 420 of Figure 4, which determines the overlap area(s) for each source involved in the multi-source display. Using the layout organisation information, the first step 600 of the overlap determination is to retrieve sequentially, for each video source, its layer level. The source under consideration in each iteration is the current source i, and its layer level is denoted Li.
Then, for each other source j among the selected sources in the final display, steps 605 and 610 perform a check operation to verify whether an overlap exists between source j and the currently studied source i.
Step 605 performs the calculation of the following variables:
MaxLeft = Max(Xglo i, Xglo j)
MinRight = Min(Xglo i + Li, Xglo j + Lj)
MaxTop = Max(Yglo i, Yglo j)
MinBottom = Min(Yglo i + Hi, Yglo j + Hj)
Then step 610 verifies whether the two following conditions are fulfilled:
* Layer level of source j > layer level of source i (C1), determining that the source j is on a superior layer level, therefore potentially overlapping the video of the source i.
* (MaxLeft < MinRight) and (MaxTop < MinBottom) (C2), determining that overlap exists on both the X and Y axes, therefore that the rectangle areas are overlapping.
If the check of step 610 is positive, i.e. both of the above-mentioned conditions are met, then step 615 stores the overlap area information (relative to source i coordinates), where:
Xpos = MaxLeft - Xglo i
Ypos = MaxTop - Yglo i
L = MinRight - MaxLeft
H = MinBottom - MaxTop
At that step, a further check is performed to determine whether the previously stored overlap information overlaps with the current one. In case such an overlap occurs, the common overlap area is determined and extracted from the current one, resulting in the creation of a plurality of independent stored overlap area information entries (at least 2).
When either of the two conditions of step 610 is negative, the algorithm performs step 620 to check whether the overlapping study has been performed for all sources j other than the studied source i. If this is not the case, it reiterates steps 605, 610 and 615.
Otherwise, the study of overlapping is fully performed for the current source i, and the algorithm of Figure 4 continues with step 425.
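Steps 605 to 615 can be sketched as follows, reusing the layout rows above; the further splitting of mutually overlapping results into independent rectangles (the final check of step 615) is omitted for brevity.

```python
def overlap_areas(layout, i):
    """Return the rectangles hiding source i, in source-i coordinates."""
    id_i, L_i, H_i, x_i, y_i, level_i = layout[i]
    areas = []
    for j, (id_j, L_j, H_j, x_j, y_j, level_j) in enumerate(layout):
        if j == i or level_j <= level_i:  # condition C1: j must be above i
            continue
        max_left = max(x_i, x_j)
        min_right = min(x_i + L_i, x_j + L_j)
        max_top = max(y_i, y_j)
        min_bottom = min(y_i + H_i, y_j + H_j)
        if max_left < min_right and max_top < min_bottom:  # condition C2
            areas.append((max_left - x_i,         # Xpos
                          max_top - y_i,          # Ypos
                          min_right - max_left,   # L
                          min_bottom - max_top))  # H
    return areas
```

Applied to the Figure 5b layout above with i = 0 (source 105-1), this returns (400, 380, 1280, 720) and (1620, 980, 1920, 1080), matching the intermediate results derived for table 900 below.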
Figure 7 describes the five overlap area information tables obtained for each of the sources 115-1 to 115-5, respectively providing videos 300-1 to 300-5.
For source 115-1, whose layout organisation information corresponds to ID 1, only source 115-3 (of ID 3 in the layout organisation information of Figure 5a) has a possible overlapping area (fulfilling condition C1), as only this source is at a layer level above 0. The calculation obtained in step 610 is then as follows: o MaxLeft = Max(0, 960) = 960 o MinRight = Min(0+1920, 960+1920) = 1920 o MaxTop = Max(0, 540) = 540 o MinBottom = Min(0+1080, 540+1080) = 1080 Therefore, condition C2 is fulfilled as: (MaxLeft < MinRight) => 960 < 1920; and (MaxTop < MinBottom) => 540 < 1080.
The resulting overlap area information is a one-element table 700, where, as determined in step 615: o Xpos = MaxLeft - Xglo 1 = 960 - 0 = 960, o Ypos = MaxTop - Yglo 1 = 540 - 0 = 540, o L = MinRight - MaxLeft = 1920 - 960 = 960, o H = MinBottom - MaxTop = 1080 - 540 = 540.
Applying the same algorithm for source 115-2 (whose layout organisation information corresponds to ID 2), only source 115-3 has a possible overlapping area (fulfilling condition C1). The calculation obtained in step 610 is then as follows: o MaxLeft = Max(0, 960) = 960 o MinRight = Min(0+1920, 960+1920) = 1920 o MaxTop = Max(1080, 540) = 1080 o MinBottom = Min(1080+1080, 540+1080) = 1620 Therefore, condition C2 is fulfilled as: (MaxLeft < MinRight) => 960 < 1920; and (MaxTop < MinBottom) => 1080 < 1620.
The resulting overlap area information is a one-element table 710, determined in step 615 as: o Xpos = MaxLeft - Xglo 2 = 960 - 0 = 960, o Ypos = MaxTop - Yglo 2 = 1080 - 1080 = 0, o L = MinRight - MaxLeft = 1920 - 960 = 960, o H = MinBottom - MaxTop = 1620 - 1080 = 540.
The same algorithm applied to the sources 115-4 and 115-5 results in the overlap area information shown respectively in tables 730 and 740.
For source 115-3, as no other source is above layer 1 (ID 3 in the layout organisation information of Figure 5a), there is no overlap area information. Table 720 is therefore an empty table.
Figure 9 describes the three overlap area information tables (900, 910 and 920) obtained for the sources 115-1 (associated with video 305-1), 115-3 (associated with video 305-2) and 115-5 (associated with video 305-3).
For source 115-1, whose layout organisation information corresponds to ID 1, the two other sources have possible overlapping areas (fulfilling condition C1), as their respective layer levels are above 0. The calculation obtained in step 610 is then as follows: For source 115-3, * MaxLeft = Max(0, 400) = 400 * MinRight = Min(0+3840, 400+1280) = 1680 * MaxTop = Max(0, 380) = 380 * MinBottom = Min(0+2160, 380+720) = 1100 For source 115-5, * MaxLeft = Max(0, 1620) = 1620 * MinRight = Min(0+3840, 1620+1920) = 3540 * MaxTop = Max(0, 980) = 980 * MinBottom = Min(0+2160, 980+1080) = 2060 Therefore, for both sources, condition C2 is fulfilled as: For source 115-3, - (MaxLeft < MinRight) => 400 < 1680; and - (MaxTop < MinBottom) => 380 < 1100.
For source 115-5, - (MaxLeft < MinRight) => 1620 < 3540; and - (MaxTop < MinBottom) => 980 < 2060.
The resulting overlap area information from source 115-3 is a one-element entry (ID=1) of the table 900, as determined in step 615:
* Xpos (1) = MaxLeft - Xglo 1 = 400 - 0 = 400,
* Ypos (1) = MaxTop - Yglo 1 = 380 - 0 = 380,
* L (1) = MinRight - MaxLeft = 1680 - 400 = 1280,
* H (1) = MinBottom - MaxTop = 1100 - 380 = 720.
An intermediate resulting overlap area information from source 115-5 is determined in step 615 as:
* Xpos (1') = MaxLeft - Xglo 1 = 1620 - 0 = 1620,
* Ypos (1') = MaxTop - Yglo 1 = 980 - 0 = 980,
* L (1') = MinRight - MaxLeft = 3540 - 1620 = 1920,
* H (1') = MinBottom - MaxTop = 2060 - 980 = 1080.
However, as explained for step 615, this result (1') is then compared with the first determined overlap area (1) from source 115-3, identifying a common overlap area (using the same type of calculation) as follows:
* MaxLeft = Max(400, 1620) = 1620
* MinRight = Min(400+1280, 1620+1920) = 1680
* MaxTop = Max(380, 980) = 980
* MinBottom = Min(380+720, 980+1080) = 1100
Therefore, condition C2 is fulfilled as: * (MaxLeft < MinRight) => 1620 < 1680; and * (MaxTop < MinBottom) => 980 < 1100.
The overlap area between solutions (1) and (1') is therefore:
* Xpos com = MaxLeft - Xglo 1 = 1620 - 0 = 1620,
* Ypos com = MaxTop - Yglo 1 = 980 - 0 = 980,
* L com = MinRight - MaxLeft = 1680 - 1620 = 60,
* H com = MinBottom - MaxTop = 1100 - 980 = 120.
This area is subtracted from the intermediate result (1') and then defines the two new (complementary) overlap area information entries (see ID 2 and 3 of Table 900):
* Xpos (2) = Min(Xpos (1') + L (1'), Xpos com + L com) = 1680,
* Ypos (2) = Ypos com = 980,
* L (2) = L (1') - L com = 1920 - 60 = 1860,
* H (2) = H com = 120.
And:
* Xpos (3) = Xpos com = 1620,
* Ypos (3) = Min(Ypos (1) + H (1), Ypos com + H com) = 1100,
* L (3) = L (1') = 1920,
* H (3) = H (1') - H com = 1080 - 120 = 960.
For source 115-3, whose layout organisation information corresponds to ID 3, only source 115-5 is a possible overlapping area (fulfilling condition C1), as its layer level of 2 is above the value 1 of the layer of source 115-3.
The calculation obtained in step 610 is then the following: MaxLeft = Max(400, 1620) = 1620 MinRight = Min(400+1280, 1620+1920) = 1680 MaxTop = Max(380, 980) = 980 MinBottom = Min(380+720, 980+1080) = 1100 Therefore, condition C2 is fulfilled as: * (MaxLeft < MinRight) => 1620 < 1680; and * (MaxTop < MinBottom) => 980 < 1100.
The resulting overlap area information from source 115-5 is a one-element entry (ID=1) of the table 910, where, as determined in step 615: Xpos = MaxLeft - Xglo 3 = 1620 - 400 = 1220, Ypos = MaxTop - Yglo 3 = 980 - 380 = 600, L = MinRight - MaxLeft = 1680 - 1620 = 60, H = MinBottom - MaxTop = 1100 - 980 = 120. For source 115-5, as no other source is above layer 2 (ID 5 in the layout organisation information of Figure 5b), there is no overlap area information. Table 920 is therefore an empty table.
The source pixel block determination is performed in step 425 and determines a set of pixel block information corresponding to the full set of pixels of the video source used in the multi-source display.
It enables each source to provide data adapted to its allocated wireless bandwidth, by avoiding, or reducing to a minimum, the data corresponding to the hidden parts of the video source.
Using the result of the overlap area determination step, the pixel block information may be determined using the following exemplary steps.
A determination of an ordered Yedge table is made, containing:
* Yedge, the Y value (in the source coordinates) of either the top or the bottom edge of an overlapping area; and
* a set of overlap area identifiers, where an area identifier is
- included in the set when Yedge corresponds to the top coordinate of the identified overlap area; and
- removed from the set when Yedge corresponds to the bottom coordinate of the identified overlap area.
Then, for each region between two consecutive Yedge values of the ordered table, starting from initial position Y=0, pixel block areas are added as follows (a sketch of this sweep is given after the list): * As a block of resolution "4:4:4", when the (X,Y) coordinates are not in the range of any of the overlap areas of the set; * As a block of resolution "0:0:0", when the (X,Y) coordinates are in the range of one of the overlap areas of the set; * The size of the block is determined as follows: - Height H' is the size between two consecutive Yedge values (up to the height of the video source); - Width L' is the size of the X interval where the block is either "4:4:4" or "0:0:0".
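A compact sketch of this determination is given below, assuming each overlap area is an (Xpos, Ypos, L, H) tuple in source coordinates (as produced by the overlap step sketched earlier) and that mutually overlapping areas have already been split into independent rectangles. Adjacent hidden intervals in a Y band may come out as separate "0:0:0" blocks where the figures merge them into one; the transmitted area is the same.

```python
def pixel_blocks(src_w, src_h, overlaps):
    """Split a source into "4:4:4" (visible) and "0:0:0" (hidden) pixel
    blocks by sweeping the Y bands delimited by overlap-area edges."""
    y_edges = sorted({0, src_h} |
                     {v for (x, y, w, h) in overlaps for v in (y, y + h)})
    blocks = []  # (resolution, X'pos, Y'pos, L', H')
    for y0, y1 in zip(y_edges, y_edges[1:]):
        # X intervals hidden by an overlap area covering this whole Y band
        hidden = sorted((x, x + w) for (x, y, w, h) in overlaps
                        if y <= y0 and y + h >= y1)
        x = 0
        for x0, x1 in hidden:
            if x < x0:  # visible gap before the hidden interval
                blocks.append(("4:4:4", x, y0, x0 - x, y1 - y0))
            blocks.append(("0:0:0", x0, y0, x1 - x0, y1 - y0))
            x = x1
        if x < src_w:   # visible remainder of the band
            blocks.append(("4:4:4", x, y0, src_w - x, y1 - y0))
    return blocks
```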
Figures 8 and 10 describe the resulting pixel block information obtained to support the layouts of Figures 3a and 3b respectively, according to one aspect of the invention. In Figure 8, Table 800 is obtained by first determining from Table 700 the following Yedge table: [(540,{1}), (1080,{})] where the first element corresponds to the top of overlap area ID 1 of Table 700, and the second element to the bottom of the same overlap area. Therefore, starting from Y'pos = X'pos = 0, a first pixel block is added to table 800 as: * a "4:4:4" block resolution, as in the Y range [0,539] the set of overlapping areas is empty for all X values (from [0,1919], as source 115-1 is a 1080p video source), * with H' = 540 (size of the Y interval) and L' = 1920 (size of the X interval). Then, from the position starting at (X'pos = 0, Y'pos = 540), two pixel blocks are added.
The first one, where X is in the range [0,959], does not belong to overlap area ID 1 (starting at position 960, up to position 960+960 = 1920, the maximum X resolution of source 115-1) and is therefore identified as: * a "4:4:4" block resolution, * with H' = 540 (size of the Y interval (1080-540)) and L' = 960 (size of the X interval). The last one, where X is in the range [960,1919], belongs to overlap area ID 1 and is therefore identified as: * a "0:0:0" block resolution, * starting at (X'pos = 960, Y'pos = 540), the initial coordinates of the X and Y intervals, * with H' = 540 (size of the Y interval (1080-540)) and L' = 960 (size of the X interval).
The determination ends when all sets of coordinates (X,Y) in the maximum source range (1920,1080) have been checked.
In the same way, Table 810 is obtained by first determining from Table 710 the following Yedge table: [(0,{1}), (540,{})] where the first element corresponds to the top of overlap area ID 1 of Table 710, and the second element to the bottom of the same overlap area. Therefore, starting from Y'pos = X'pos = 0, a first pixel block, where X is in the range [0,959], does not belong to overlap area ID 1 (starting at position 960, up to position 960+960 = 1920, the maximum X resolution of source 115-2) and is therefore identified as: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 0), * with H' = 540 (size of the Y interval (540-0)) and L' = 960 (size of the X interval). A second pixel block is then added for the Yedge element value 0, where X is in the range [960,1919] and therefore belongs to overlap area ID 1 (starting at position 960, up to position 960+960 = 1920), identified as: * a "0:0:0" block resolution, * starting at (X'pos = 960, Y'pos = 0), * with H' = 540 (size of the Y interval (540-0)) and L' = 960 (size of the X interval). Then, starting from (X'pos, Y'pos) = (0,540), corresponding to the second Yedge element, as no more overlapping areas exist in the set, a final pixel block is added with the values: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 540), * with H' = 540 (size of the Y interval from the maximum Y resolution to the last Yedge (1080-540)) and L' = 1920 (size of the full X resolution interval).
The same principle is used to obtain tables 830 and 840.
Table 820 represents the specific case of source 115-3, where no overlap area information exists in table 720. Therefore, all the video should be transmitted and the table 820 contains a single pixel block definition of the video source resolution dimensions (here 1920*1080 => L' = 1920, H' = 1080), starting from the initial position X'pos = Y'pos = 0, at the maximum resolution "4:4:4".
It should be noted that a first data rate adaptation is estimated in these determination steps, using either the "4:4:4" full resolution indication, or "0:0:0" indicating that the block is useless for the display. A further adaptation adjustment may be performed using intermediate levels for useful pixel blocks, as described in step 440 with reference to Figure 4. Using the highest level for useful pixel blocks gives the adapter the ability to always provide the best possible resolution applicable to the system.
Adding all the elements of the tables of Figure 8 that are set at "4:4:4" resolution corresponds to the data rate, after adjustment, required to transmit all the sources to the display.
The value is then equal to: 4 * (1920*540 + 960*540) * 24 * 30 + (1920*1080 * 24 * 30) = 5.97 Gbps, which fits within the optimal limited bandwidth of 7 Gbps.
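This figure can be reproduced numerically; the sketch below assumes 24 bits per pixel and 30 frames per second, before any further chroma reduction.

```python
# Four corner sources each keep a full-width 1920x540 band plus a 960x540
# block beside the overlay, and the top-layer source 115-3 is sent in full.
visible_pixels = 4 * (1920 * 540 + 960 * 540) + 1920 * 1080
bits_per_second = visible_pixels * 24 * 30
print(bits_per_second / 1e9)  # ~5.97 Gbps, under the 7 Gbps available
```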
In Figure 10, table 1000 is obtained by first determining from table 900 the following Yedge table: [(380,{1}), (980,{1,2}), (1100,{3}), (2060,{})] where: * The first element corresponds to the top of overlap area ID 1 of Table 900, therefore ID 1 is inserted in the set.
* The second element corresponds to the top of overlap area ID 2, therefore overlap area ID 2 is inserted in the set, which already contains ID 1. As the bottom edge of area ID 1 has still not been reached, it must still be taken into account in the set.
* The third element corresponds to the top of overlap area ID 3, therefore overlap area ID 3 is inserted in the set. However, as the same coordinate corresponds to the bottom edge of overlap areas ID 1 (380+720=1100) and ID 2 (980+120=1100), these two IDs are removed from the set. This results in a set composed of the single overlap area ID 3.
* The last element corresponds to the bottom of overlap area ID 3, therefore overlap area ID 3 is removed from the set. The overlap area set is then empty.
From this Yedge table, to study the full scale of 2160 pixels of the Y axis (for source 115-1 of size 3840*2160), the Y intervals [0,379], [380,979], [980,1099], [1100,2059] and [2060,2159] need to be considered, where the initial position of each interval is the Y'pos of each pixel block added.
For the Y interval [0,379], as there is no overlap area ID in the set (no element of the Yedge table reached yet), a first "visible" pixel block is defined covering the full X interval [0,3839], as the X resolution of source 115-1 is 3840 pixels. This pixel block is defined by: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 0), * with H' = 380 (size of the Y interval) and L' = 3840 (size of the full X resolution interval).
For the Y interval [380,979], the overlap area information of table 900 at index (ID) 1 allows the definition of three sets of X coordinate intervals: * Two X intervals, [0,399] and [1680,3839], where there is no overlap, as the only overlap area in the set overlays the source 115-1 from Xpos = 400 to X = (Xpos+L)-1 = (400+1280)-1 = 1679.
* A third interval, [400,1679], where the source 115-1 is hidden by the overlap area ID 1.
Therefore, 3 pixel blocks are added to table 1000, following X in ascending order: - One for the X interval [0,399], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 380), * with H' = 600 (size of the Y interval) and L' = 400 (size of the X interval).
- One for the X interval [400,1679], defined by: * a "0:0:0" block resolution, * starting at (X'pos = 400, Y'pos = 380), * with H' = 600 and L' = 1280.
- One for the X interval [1680,3839], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 1680, Y'pos = 380), * with H' = 600 and L' = 2160.
For the Y interval [980,1099], taking into account the overlap area information from the set {1,2} of table 900 enables the definition of three sets of X coordinate intervals: - Two X intervals, [0,399] and [3540,3839], where there is no overlap, as the two overlap areas in the set overlay the source 115-1: * from Xpos = 400 to X = (Xpos+L)-1 = (400+1280)-1 = 1679, for overlap area ID 1; * from Xpos = 1680 to X = (Xpos+L)-1 = (1680+1860)-1 = 3539, for overlap area ID 2. - A third interval, [400,3539], where the source 115-1 is hidden by the complementary overlap areas of ID 1 and 2.
Therefore, 3 pixel blocks are added to table 1000, following X in ascending order: - One for the X interval [0,399], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 980), * with H' = 120 (size of the Y interval) and L' = 400 (size of the X interval). - One for the X interval [400,3539], defined by: * a "0:0:0" block resolution, * starting at (X'pos = 400, Y'pos = 980), * with H' = 120 and L' = 3140.
- One for the X interval [3540,3839], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 3540, Y'pos = 980), * with H' = 120 and L' = 300.
For the Y interval [1100,2059], taking into account the overlap area information from the set {3} of table 900 enables the definition of three sets of X coordinate intervals: - Two X intervals, [0,1619] and [3540,3839], where there is no overlap, as the overlap area in the set overlays the source 115-1 from Xpos = 1620 to X = (Xpos+L)-1 = (1620+1920)-1 = 3539. - A third interval, [1620,3539], where the source 115-1 is hidden by the overlap area ID 3.
Therefore, 3 pixel blocks are added to table 1000, following X in ascending order: - One for the X interval [0,1619], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 0, Y'pos = 1100), * with H' = 960 (size of the Y interval) and L' = 1620 (size of the X interval). - One for the X interval [1620,3539], defined by: * a "0:0:0" block resolution, * starting at (X'pos = 1620, Y'pos = 1100), * with H' = 960 and L' = 1920.
- One for the X interval [3540,3839], defined by: * a "4:4:4" block resolution, * starting at (X'pos = 3540, Y'pos = 1100), * with H' = 960 and L' = 300.
For the last Y interval [2060,2159], as the overlap area ID set is empty, a full "visible" pixel block is defined covering the X interval [0,3839], as the X resolution of source 115-1 is 3840 pixels. This pixel block is defined by: - a "4:4:4" block resolution, - starting at (X'pos = 0, Y'pos = 2060), - with H' = 100 (size of the Y interval) and L' = 3840 (size of the X interval).
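Running the pixel_blocks sweep sketched earlier on the three overlap rectangles of table 900 reproduces the blocks listed above; the only difference is that the sketch may emit the hidden span [400,3539] of the third band as two adjacent "0:0:0" blocks where table 1000 merges them into one.

```python
table_900 = [(400, 380, 1280, 720),    # overlap area ID 1
             (1680, 980, 1860, 120),   # overlap area ID 2 (complementary)
             (1620, 1100, 1920, 960)]  # overlap area ID 3 (complementary)
for block in pixel_blocks(3840, 2160, table_900):
    print(block)
# ('4:4:4', 0, 0, 3840, 380), ('4:4:4', 0, 380, 400, 600),
# ('0:0:0', 400, 380, 1280, 600), ('4:4:4', 1680, 380, 2160, 600), ...
```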
Using the same principle as for table 1000, Table 1010 is obtained by first determining from Table 910 the following Yedge table: [(600,{1}), (720,{})] where: - The first element corresponds to the top of overlap area ID 1 of Table 910, and therefore ID 1 is inserted in the set.
- The second element corresponds to the bottom of overlap area ID 1, and therefore overlap area ID 1 is removed from the set. From this Yedge table, to study the full scale of 720 pixels of the Y axis (for source 115-3 of size 1280*720), the Y intervals [0,599] and [600,719] need to be considered, where the initial position of each interval is the Y'pos of each pixel block added.
For the Y interval [0,599], as there is no overlap area ID in the set (no element of the Yedge table reached yet), a first "visible" pixel block is defined covering the full X interval [0,1279], as the X resolution of source 115-3 is 1280 pixels. This pixel block is defined by: - a "4:4:4" block resolution, - starting at (X'pos = 0, Y'pos = 0), - with H' = 600 (size of the Y interval) and L' = 1280 (size of the full X resolution interval).
For the Y interval [600,719], the overlap area information of table 910 at index (ID) 1 enables the definition of two sets of X coordinate intervals: - A first X interval, [0,1219], where there is no overlap, as the only overlap area in the set overlays the source 115-3 from Xpos = 1220 to X = (Xpos+L)-1 = (1220+60)-1 = 1279 (the maximum X resolution).
- A second interval, [1220,1279], where the source 115-3 is hidden by the overlap area ID 1.
Therefore, 2 pixel blocks are added to table 1010, following X in ascending order: - One for the X interval [0,1219], defined by: o a "4:4:4" block resolution, o starting at (X'pos = 0, Y'pos = 600), o with H' = 120 (size of the Y interval) and L' = 1220 (size of the X interval).
- One for the X interval [1220,1279], defined by: o a "0:0:0" block resolution, o starting at (X'pos = 1220, Y'pos = 600), o with H' = 120 and L' = 60.
Finally, Table 1020 represents the specific case of source 115-5, where no overlap area information exists in table 920. Therefore, all the video should be transmitted and the table 1020 contains a single pixel block definition of the video source resolution dimensions (here 1920*1080 => L' = 1920, H' = 1080), starting from the initial position X'pos = Y'pos = 0, at the maximum resolution "4:4:4".
Adding all the elements of the tables of Figure 10 that are set at "4:4:4" resolution corresponds to the data rate, after adjustment, required to transmit all the sources to the display.
The value is then equal to: ([3840*(380+100) + (400+2160)*600 + (400+300)*120 + (1620+300)*960] * 24 * 30) + ((1280*600 + 1220*120) * 24 * 30) + (1920*1080 * 24 * 30) = 5.97 Gbps, which fits within the optimal limited bandwidth of 7 Gbps.
The layouts in accordance with Figures 3a and 3b are merely two exemplary layouts supported by the present invention. Alternative layouts are envisaged, such as a layout where one or more source images lie partially outside the frame of the composite image, wherein the portions of source images lying outside the composite image are cropped out and treated in the same manner as the overlapped portions in the embodiments described above.

Claims (35)

  1. 1. A method of controlling communication of image data comprising of at least part of a first image from a first image source, the method comprising: determining available communication resource for the communication of the at least part of first image from the first image source to a target device; obtaining an arrangement of a composite image, the composite image comprising of the at least part of the first image and at least part of at least one other image; wherein at least one of the part of at least one other image is obtained from a source other than the first image source; determining a data rate adaptation for the communication of the at least part of the first image from the first image source to the target device, the data rate adaptation being based on the available communication resource and the arrangement of the composite image; wherein: the arrangement of composite image is such that at least one portion of the first image does not form part of the composite image; and the step of determining the data rate adaptation comprises a step of reducing the image data corresponding to the at least one portion of the first image not forming part of the composite image.
  2. 2. A method according to claim 1, wherein the step of data rate adaptation comprises of any one or more of the following: reducing colour sampling rate; reducing image quality or resolution; and applying image compression.
  3. 3. A method according to claim 1 or claim 2, wherein the step of reducing the image data comprises of removing from the first image all the data of the at least one portion not forming part of the composite image.
  4. 4. A method according to claim 1 or claim 2, wherein the step of reducing the image data comprises of applying data compression to the at least one portion of the first image not forming part of the composite image.
  5. 5. A method according to claim 4, wherein applying data compression comprises of replacing the data of the at least one portion of the first image by a constant value and the method further comprising applying a lossless compression algorithm to the entire first image.
  6. 6. A method according to any one of claims 1 to 5, further comprising a step of determining the at least one portion of the first image not forming part of the composite image as a result of being overlapped in the composite image by at least a portion of the at least one other image or being cropped out by at least one peripheral edge of the composite image.
  7. 7. A method according to any one of the preceding claims, further comprising determining all portions of the first image and the at least one other image that are not part of the composite image.
  8. 8. A method according to claim 7, wherein data rate adaptation comprises reducing or removing all image data for at least one of the portions of the first image and the at least one other image that are not part of the composite image.
  9. 9. A method according to claim 7, wherein the step of data rate adaptation comprises reducing the quality of the entire first image or removing the entire first image if the ratio of sum of all portions of the first image not forming part of the composite image to the entire first image is above a predetermined threshold.
  10. 10. A method according to any one of the preceding claims, wherein the step of data rate adaptation further comprises reducing the image data corresponding to at least one other portion of the first image forming part of the composite image.
  11. 11. A method according to claims 7 and 10, wherein the step of data rate reduction is implemented if the ratio of the total size of the first image in the composite image to the total size of the composite image is below a predetermined threshold.
  12. 12. A method according to any one of the preceding claims, wherein the step of determining the data rate adaptation is further based on the environmental conditions of the network over which the at least part of the first image data is communicated to the target device.
  13. 13. A method according to any one of the preceding claims, wherein the step of determining the data rate adaptation is further based on a communication scheme and/or communication means used for the communication of at least one part of the first image data from the first image source to the target device.
  14. 14. A method according to any one of the preceding claims, further comprising a step of informing at least the first image source of the determined data rate adaptation.
  15. 15. A method according to any one of the preceding claims, further comprising a step of the first image source transmitting the at least part of the first image to the target device in accordance with the determined data rate adaptation.
  16. 16. A method according to any one of the preceding claims, wherein 30 the arrangement of composite image is determined using a user interface.
  17. 17. A method according to any one of the preceding claims, wherein the communication resource is a wireless communication resource.
  18. 18. A method according to any preceding claim, wherein at least the first image is part of a video stream.
  19. 19. An executable computer program, comprising a sequence of instructions for implementing the method according to any one of claims 1 to 17.
  20. 20. A computer readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 16.
  21. 21. A method of controlling communication of image data as substantially described herein with reference to figures 3a, 3b, 4, 5a, 5b, 6, 7, 8, 9 and 10.
  22. 22. A device for determining the data rate for communication of image data, the image data comprising of at least part of a first image from a first image source, the device comprising: means for obtaining an arrangement of a composite image, the composite image comprising of the at least part of the first image and at least part of at least one other image; wherein at least one of the part of at least one other image is obtained from a source other than the first image source; means for determining available communication resource for the communication of the at least part of first image from the first image source to a target device; means for determining a data rate adaptation for communication of the at least part of the first image from the first image source to the target device, the data rate adaptation being based on the available communication resource and the arrangement of the composite image; wherein: the arrangement of composite image is such that at least one portion of the first image does not form part of the composite image; and the means for determining the data rate adaptation comprises means of reducing the image data corresponding to the at least one portion of the first image not forming part of the composite image.
  23. 23. A device according to claim 22, wherein the means for determining the data rate adaptation comprises any one or more of the following: a. means for reducing colour sampling rate; b. means for reducing image quality or resolution; and c. means for applying image compression.
  24. 24. A device according to claim 22 or claim 23, wherein the means for reducing the image data comprises means for removing from the first image all the data of the at least one portion not forming part of the composite image.
  25. 25. A device according to claim 22 or claim 23, wherein the means for reducing the image data comprises means for applying data compression to the at least one portion not forming part of the composite image.
  26. 26. A device according to claim 25, wherein the means for applying data compression comprises means for replacing the data of the at least one portion not forming part of the composite image, the device further comprising means for applying a lossless compression algorithm to the entire first image.
  27. 27. A device according to any one of claims 22 to 26, further comprising means of identifying the at least one portion of the first image not forming part of the composite image.
  28. 28. A device according to any one of claims 22 to 27, further comprising means for determining all portions of the first image and the at least one other image that are not part of the composite image.
  29. 29. A device according to claim 28, wherein the means for data rate adaptation comprises means for reducing or removing all image data for at least one of the portions of the first image and the at least one other image that are not part of the composite image.
  30. 30. A device according to claim 29, wherein the means for data rate adaptation comprises means for reducing the quality of the entire first image or for removing the entire first image if the ratio of the sum of all portions of the first image not forming part of the composite image to the entire first image is above a predetermined threshold.
  31. 31. A device according to any one of claims 22 to 30, wherein the means for determining the data rate adaptation further comprises means for reducing the image data corresponding to at least one other portion of the first image forming part of the composite image.
  32. 32. A device according to claim 22 or claim 31, further comprising means of data rate adaptation based on the environmental conditions of the network over which the at least part of the first image data is communicated to the target device.
  33. 33. A device according to any one of claims 22 to 32, the device further comprising means for data rate adaptation based on a communication scheme and/or communication means used for the communication of at least one part of the first image data from the first image source to the target device.
  34. 34. A device according to any one of claims 22 to 33, the device further comprising means for informing at least the first image source of the determined data rate adaptation.
  35. 35. A system comprising a plurality of sources and a display device interconnected by a communication network for transmitting images from the sources to the display device, the system further comprising a device according to any one of claims 22 to 34 for carrying out the steps of the method according to any one of claims 1 to 18.
  35. A system for controlling communication of image data as substantially described herein with reference to figures 1, 2a and 2b.
GB1419328.8A 2014-10-30 2014-10-30 Video data transmission method in a multi-source display system Active GB2531774B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1419328.8A GB2531774B (en) 2014-10-30 2014-10-30 Video data transmission method in a multi-source display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1419328.8A GB2531774B (en) 2014-10-30 2014-10-30 Video data transmission method in a multi-source display system

Publications (3)

Publication Number Publication Date
GB201419328D0 GB201419328D0 (en) 2014-12-17
GB2531774A true GB2531774A (en) 2016-05-04
GB2531774B GB2531774B (en) 2017-05-03

Family

ID=52118436

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1419328.8A Active GB2531774B (en) 2014-10-30 2014-10-30 Video data transmission method in a multi-source display system

Country Status (1)

Country Link
GB (1) GB2531774B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060886A1 (en) * 2011-09-02 2013-03-07 Microsoft Corporation Cross-Frame Progressive Spoiling Support for Reduced Network Bandwidth Usage


Also Published As

Publication number Publication date
GB2531774B (en) 2017-05-03
GB201419328D0 (en) 2014-12-17

Similar Documents

Publication Publication Date Title
US10659927B2 (en) Signal transmission method and apparatus
EP2750470B1 (en) Methods and system for transmitting data between television receivers
CN103828384B (en) Wireless channel perception self-adaption video bitrate coding based on software
US8437319B2 (en) Wireless network system and method of configuring the same
JP2008533913A (en) System, method, and apparatus for wireless content distribution between a general content source and a general content sink
WO2010067625A1 (en) Downlink reference signal transmitting method, base station, user equipment, and wireless communication system
CN105474691A (en) Information processing device and information processing method
US20210084513A1 (en) Multichannel communication systems
US20160080825A1 (en) Information processing apparatus and information processing method
CN105900481B (en) bandwidth selection method of wireless fidelity technology and access point AP
US20130172034A1 (en) Communication apparatus and communication method
US8761063B2 (en) Method and apparatus for transmitting a packet in a wireless network
US8963996B2 (en) Communication of stereoscopic three-dimensional (3D) video information including an uncompressed eye view video frames
JP2011082808A (en) Information distribution system, information distribution apparatus and information distribution method
KR101289937B1 (en) Headend apparatus for transmitting video broadcasting content using channel bonding, and broadcasting reciever and method for recieving video broadcasting content
GB2531774A (en) Video data transmission method in a multi-source display system
GB2522468A (en) Methods and devices for distributing video data in a multi-display system using a collaborative video cutting scheme
KR101608772B1 (en) Method of exchanging messages exchanging and a sink device
US9300979B2 (en) Methods for transmitting and receiving data contents, corresponding source and destination nodes and storage means
US11051058B2 (en) Real-time wireless video delivery system using a multi-channel communications link
US20220232417A1 (en) Device for transmitting data in wireless av system and device for receiving data in wireless av system
US10412360B2 (en) Wireless transmission system, method and device for stereoscopic video
JP5078793B2 (en) Radio communication apparatus and central control radio communication apparatus
US20220255601A1 (en) Device and method for performing channel selection in wireless av system
GB2544289B (en) Method and device for transmission of video data packets