GB2568460A - Encoding and transmission of display data - Google Patents

Encoding and transmission of display data

Info

Publication number
GB2568460A
GB2568460A GB1716982.2A GB201716982A
Authority
GB
United Kingdom
Prior art keywords
data
tile group
transport unit
output buffer
transport
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1716982.2A
Other versions
GB201716982D0 (en)
GB2568460B (en)
Inventor
Skinner Colin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DisplayLink UK Ltd filed Critical DisplayLink UK Ltd
Priority to GB1716982.2A priority Critical patent/GB2568460B/en
Priority to GB2210974.8A priority patent/GB2606502B/en
Publication of GB201716982D0 publication Critical patent/GB201716982D0/en
Priority to PCT/GB2018/052845 priority patent/WO2019077303A1/en
Publication of GB2568460A publication Critical patent/GB2568460A/en
Application granted granted Critical
Publication of GB2568460B publication Critical patent/GB2568460B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/391Resolution modifying circuits, e.g. variable screen formats
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00Traffic control in data switching networks
    • H04L47/10Flow control; Congestion control
    • H04L47/28Flow control; Congestion control in relation to timing considerations
    • H04L47/286Time to live
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/127Prioritisation of hardware or computational resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/33Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/64Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00Solving problems of bandwidth in display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/12Frame memory handling
    • G09G2360/122Tiling
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/10Use of a protocol of communication by packets in interfaces along the display data pipeline
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/16Use of wireless transmission of display information

Abstract

A method involves dividing a frame of the data into a plurality of tile groups. Each group is encoded to generate encoded tile group data, which is then encapsulated into a payload of a tile group atom 632. A time-to-live (TTL) value is generated for the atom. The atom is encapsulated with a transport header which includes the TTL to form a transport unit 634, which is then written to an output buffer. It is determined whether to transmit the unit wirelessly based on the TTL. A second method (Fig. 7) involves dividing a frame into a plurality of tile groups, in which each group is encoded at a plurality of resolutions to produce a plurality of passes of encoded tile group data. Each pass of encoded data is encapsulated into a respective payload of a corresponding tile group atom, and each atom is encapsulated with a transport header to form a plurality of transport units. Each unit is written to one of a plurality of queues in an output buffer, wherein each queue has a different priority level and units in a higher priority queue are sent before units in a lower priority queue.

Description

The following terms are registered trade marks and should be read as such wherever they occur in this document:
HDMI (Page 8)
Wi-Fi (Page 9)
Encoding and Transmission of Display Data
The invention relates to the processing of data for transmission to a peripheral display device, particularly where the data is transmitted over a wireless link.
Background
Short-range wireless communication links can be useful in enabling the local transmission of data. However, in systems where a large amount of data needs to be transmitted in a timely manner from a source to a destination device, any reduction in bandwidth over the link or temporary loss of the link can significantly affect data transmission.
It is desirable for Virtual Reality (VR) and Augmented Reality (AR) systems to be implemented with a headset or display device that is connected wirelessly to a central control device such as a display controller, computer, games terminal or host system. A high bandwidth link is also desirable to enable the transmission of video or display data between the devices at a high frame rate. However, maintaining a high and consistent bandwidth for the transmission of high quality video data is difficult over a wireless connection. Furthermore, when the bandwidth varies or the communication link is lost, this can cause significant disruption to the display of graphical information to the user.
With some encoding and decoding techniques, in particular encoding based on partial update techniques, it is more important to transmit all of the data for a particular tile group than to enable some of the data to be delivered quickly at the expense of other data for the tile group. In some systems, the bandwidth on the path between the encoder and decoder is not the limiting factor in displaying an image to a user. Therefore, data from all encoded passes for a tile group can be transmitted to the decoder in time for it to be processed and displayed to the user. Since processing time at the decoder is often the limiting factor or bottleneck in displaying the information to the user, developments in this technology have focussed on improved encoding and decoding techniques, such as partial updating of images. However, many of these techniques rely on the delivery of all of the encoded data for a tile group before the decoding process can be implemented. They may also introduce latency between production of the image data and its display to a user.
Overview
In a first aspect, there is provided a method of processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the method comprising:
dividing a frame of the display data into a plurality of tile groups;
encoding each tile group to generate encoded tile group data;
encapsulating the encoded tile group data into a payload of a tile group atom;
generating a time-to-live value for the tile group atom;
encapsulating the tile group atom with a transport header to form a transport unit, wherein the transport header comprises the time-to-live value;
writing the transport unit to an output buffer for transmission over the wireless link to the display device;
determining at the output buffer, based on the time-to-live value, whether to transmit the transport unit from the output buffer to the display device over the wireless link.
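The encapsulation steps of this method can be sketched as follows. The byte layout (field names, widths, and ordering) is purely an assumption for illustration; the patent text does not specify a wire format:

```python
import struct

def make_transport_unit(encoded_tile_group: bytes, ttl_ms: int, tile_group_id: int) -> bytes:
    """Encapsulate encoded tile group data into a transport unit.

    Hypothetical layout, big-endian:
      atom           = [tile_group_id:u16][payload_len:u32][payload]
      transport unit = [ttl_ms:u16][atom]
    """
    atom = struct.pack(">HI", tile_group_id, len(encoded_tile_group)) + encoded_tile_group
    header = struct.pack(">H", ttl_ms)  # transport header carries the time-to-live value
    return header + atom

def parse_ttl(transport_unit: bytes) -> int:
    """Read back the TTL so the output buffer can decide whether to transmit."""
    (ttl_ms,) = struct.unpack_from(">H", transport_unit, 0)
    return ttl_ms
```

The output buffer only needs to inspect the fixed-size header, so the discard decision can be made without parsing the payload.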
In a VR or AR system the latency between the time at which the display data is created and the time at which it is displayed to the user should be as short as possible, preferably less than 20ms, in order to optimise the user experience. However, particularly where the link between the display control device and the display device includes a portion that is wireless, the bandwidth may not be consistently high enough in order to maintain the required speed of transfer of the data. It has been appreciated that, once the target latency time of around 20ms starts to be exceeded, it would be better to skip one or more tile groups of the data than to display delayed data. Furthermore, preventing transmission of delayed data can enable the system to catch up so that the new data falls within acceptable latency boundaries.
Adding a time-to-live value to the packets or atoms of encoded data can prevent the output buffer from getting backed up with data to send that is already delayed. Inspecting the TTL value prior to sending the data out of the output buffer also leads to a more efficient use of network resources.
In one embodiment, determining at the output buffer whether to transmit the transport unit comprises comparing the TTL value against a reference value to determine whether the transport unit meets a transmission criterion, transmitting the transport unit if the transport unit meets the transmission criterion and discarding the transport unit if the transport unit does not meet the transmission criterion.
In one embodiment, determining at the output buffer whether to transmit the transport unit comprises determining whether a time-to-live value is greater than a transmission threshold value (and therefore meets the transmission criterion), and transmitting the transport unit if the time-to-live value is greater than the transmission threshold value and discarding the transport unit if the time-to-live is equal to or less than the transmission threshold value. For example, the TTL may count down to zero at which point the transport unit is discarded rather than being sent.
As will be appreciated by the skilled person, in other embodiments, the TTL value may be reviewed against a counter that is counting upwards and therefore the output buffer determines whether to transmit the transport unit by determining whether the time-to-live value is less than a reference value (and therefore meets the transmission criterion). For example, the TTL may be set as a future clock time of the output buffer. If the clock time has exceeded the clock time in the transport unit when the transport unit is ready to be sent, the transport unit can be discarded rather than being sent.
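The two TTL conventions described above — a count-down value compared against a threshold, and a future clock time compared against the output buffer's clock — might be sketched as follows (function names are illustrative):

```python
def should_transmit_countdown(ttl_remaining: int, threshold: int = 0) -> bool:
    """Count-down variant: transmit while the TTL is above the threshold;
    once it reaches the threshold (e.g. zero), the unit is discarded."""
    return ttl_remaining > threshold

def should_transmit_deadline(ttl_deadline: float, now: float) -> bool:
    """Clock-time variant: the TTL is set as a future clock time of the
    output buffer; the unit is discarded once that time has passed."""
    return now < ttl_deadline
```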
In one embodiment, encoding each tile group comprises encoding each tile group at a plurality of resolutions to produce a plurality of passes of encoded tile group data. Where the tile group is hierarchically encoded in this way, encapsulating the encoded tile group data may comprise encapsulating a first pass of the encoded tile group data in a first transport unit and encapsulating a second pass of the encoded tile group data in a second transport unit.
Optionally, encapsulating the encoded tile group data comprises encapsulating at least one further pass of the encoded tile group data in at least one further transport unit. In a preferred embodiment, the encoded tile group data is encoded at three different resolutions and the encoding data at each resolution is encapsulated into a separate transport unit.
Preferably, where there is data common to the multiple passes of the tile group data (such as AC / DC encoding information) that information is encapsulated together with the lowest resolution tile group data.
Optionally, writing the transport unit to an output buffer comprises writing the first transport unit to a first queue in the output buffer having a first associated priority level and writing the second transport unit to a second queue in the output buffer having a second associated priority level. That is, the output buffer is implemented with multiple prioritised queues. Data in higher priority output buffer queues is sent before data in lower priority output buffer queues. This can enable the transport unit containing the lowest-resolution data to be transmitted prior to the transport unit containing the highest-resolution data. This increases the likelihood that the decoder will receive at least some data in time to display the next frame, even if the data received is only the lowest-resolution data. Where timing and latency are important, it can be better to ensure the receipt of the lowest-resolution data and display a frame based on that data than to wait for the highest-resolution data to be transmitted.
Preferably, as highlighted above, data in the buffer with the first associated priority level is transmitted prior to data in the buffer with the second associated priority level. Optionally, all data in a higher-priority buffer is transmitted prior to any data in a lower-priority buffer, so if data for the next tile group or frame is available before the data for the previous tile group has been sent from the lower-priority buffer, the data from the higher priority buffer is sent. Data from the lower-priority buffer may then be discarded.
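A minimal sketch of an output buffer with prioritised queues, assuming three priority levels and a strict drain order (both assumptions — the patent text leaves the queue count and scheduling policy open):

```python
from collections import deque

class PriorityOutputBuffer:
    """Output buffer implemented as multiple prioritised queues."""

    def __init__(self, num_priorities: int = 3):
        # Index 0 is the highest priority (lowest-resolution pass).
        self.queues = [deque() for _ in range(num_priorities)]

    def write(self, transport_unit, priority: int):
        self.queues[priority].append(transport_unit)

    def next_unit(self):
        # Drain higher-priority queues completely before lower-priority ones,
        # so low-resolution data is transmitted first.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

    def purge(self, priority: int):
        # Discard stale data from a lower-priority queue, e.g. when data
        # for the next tile group becomes available.
        self.queues[priority].clear()
```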
Optionally, the time-to-live value is determined based on a number of pixels in the tile group. Hence tile groups with a larger number of pixels are given a higher TTL value than those with a smaller number of pixels. The pixel clock can be used to convert the number of pixels to a time value. The resulting time value can be multiplied or increased to provide jitter tolerance, depending on system requirements; for example, low-latency modes may use smaller values than high-quality modes.
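A possible conversion from pixel count to TTL using the pixel clock, as described above; the jitter factor here is a hypothetical tuning parameter:

```python
def ttl_from_pixels(num_pixels: int, pixel_clock_hz: float, jitter_factor: float = 1.5) -> float:
    """Convert a tile group's pixel count to a TTL in milliseconds.

    The pixel clock turns pixels into time, so larger tile groups get
    larger TTLs. The jitter factor widens the window; a low-latency mode
    would use a smaller factor than a high-quality mode (the value 1.5
    is an assumed default, not taken from the patent).
    """
    base_ms = num_pixels / pixel_clock_hz * 1000.0
    return base_ms * jitter_factor
```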
A further, closely related aspect provides a method of processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the method comprising:
dividing a frame of the display data into a plurality of tile groups;
encoding each tile group at a plurality of resolutions to produce a plurality of passes of encoded tile group data;
encapsulating each pass of encoded tile group data into a respective payload of a corresponding tile group atom;
encapsulating each tile group atom with a transport header to form a plurality of transport units; and
writing each transport unit to one of a plurality of queues in an output buffer for transmission over the wireless link to the display device;
wherein each queue in the output buffer has a different priority level and wherein transport units in a higher priority output buffer queue are sent before transport units in a lower priority output queue.
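The packaging step of this second aspect might be sketched as follows, assuming the encoded passes are ordered from lowest to highest resolution and that queue 0 is the highest priority (dictionary field names are illustrative):

```python
def package_passes(passes):
    """Map each encoded pass of one tile group to its own transport unit
    and output-buffer queue. passes[0] is assumed to be the lowest
    resolution, which goes to the highest-priority queue (index 0)."""
    units = []
    for priority, payload in enumerate(passes):
        atom = {"payload": payload}                       # tile group atom
        unit = {"transport_header": {"priority": priority}, "atom": atom}
        units.append((priority, unit))                    # (queue index, transport unit)
    return units
```

Because each pass travels in its own transport unit, the lowest-resolution pass can arrive and be displayed even when the higher-resolution units are delayed or discarded.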
Hierarchical encoding of tile group data can enable data from that tile group to be displayed to a user at a low resolution if the higher-resolution data cannot be processed in time or if a low-resolution display is being used by the end user.
In previous systems, hierarchically-encoded data at various resolution levels has been packaged and sent within the same transport unit so that all the data for one tile group at the different resolutions is received at the same time. In systems implemented in accordance with the present aspect, however, the data generated for the tile group at each resolution is encapsulated in a different transport unit so that the data at each resolution can be sent separately. In a system where bandwidth is variable and unpredictable and where there is a large amount of jitter, this can be helpful because if the only transport unit that gets through to the decoder and display device in time is the transport unit with the lowest-resolution data, at least this data can be displayed to the user.
In this system, there are multiple output buffer queues at different priority levels. For a single tile group encoded at different resolutions, the data encoded at different resolutions and packed into different transport units is added to the different priority output buffer queues. In particular, the transport unit containing the lowest-resolution tile group data together with the basic data for that tile group, such as the DC data, is placed in the highest-priority output buffer queue and the transport unit containing the highest-resolution data is placed in the lowest-priority output buffer queue. Hence low-resolution encoded data is sent in preference to high-resolution encoded data.
In one embodiment, the transport header of each transport unit comprises a time-to-live value. Additional features of the time-to-live value may include those set out in relation to the first aspect above. In particular, the time-to-live value may be set such that a transport unit is discarded from the output buffers without being sent if the transport unit has not been transmitted before the next encoded data for the next tile group is ready to be sent. This can prevent a backlog of transport units from building up in the output buffers if the bandwidth or transmission rate of the network or wireless connection falls.
In some embodiments, transport units destined for higher-priority output buffers are given a longer time-to-live value than transport units destined for lower-priority output buffers. This may give the higher-priority lower-resolution data for each tile group a greater chance of being sent, even if the bandwidth on the connection falls a little, while the higher-resolution less important data for the tile group is discarded more quickly.
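One possible way to give higher-priority transport units longer TTLs, with a hypothetical per-level decay factor:

```python
def ttl_for_priority(base_ttl_ms: float, priority: int, decay: float = 0.5) -> float:
    """Scale a base TTL by priority level (0 = highest priority).

    Higher-priority, lower-resolution units keep the full TTL and so
    survive brief bandwidth dips; lower-priority, higher-resolution
    units expire sooner and are discarded more quickly. The decay
    factor is an assumed tuning parameter.
    """
    return base_ttl_ms * (decay ** priority)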
Optionally, the method further includes determining at the output buffer, based on the time-to-live value, whether to transmit the transport unit from the output buffer to the display device over the wireless link. If the time-to-live value has expired or reached a predetermined threshold value (either upwards or downwards depending on the implementation, as discussed above), then the transport unit may be discarded rather than being transmitted.
In one embodiment, the system purges the output buffer on receipt of transport units comprising data for the next tile group. This may be implemented in addition to or as an alternative to the use of time-to-live values. For example, rather than marking each transport unit with a time-to-live value, the contents of one or more queues of the output buffer may simply be discarded just before data for the next tile group is written into the output buffer.
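A sketch of this purge-on-new-tile-group alternative, in which a queue is cleared when data for a different tile group arrives (the class and its interface are illustrative):

```python
from collections import deque

class PurgingQueue:
    """Single output-buffer queue purged on arrival of the next tile
    group's data, as an alternative to per-unit TTL values."""

    def __init__(self):
        self.q = deque()
        self.current_tile_group = None

    def write(self, tile_group_id, transport_unit):
        if tile_group_id != self.current_tile_group:
            # New tile group: discard any stale units still queued for
            # the previous one, then start queueing the fresh data.
            self.q.clear()
            self.current_tile_group = tile_group_id
        self.q.append(transport_unit)
```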
In one embodiment, the plurality of queues is divided across a plurality of output buffers.
In each of the embodiments described herein, the display data can include multimedia data including video and computer-generated image data, audio data, graphical data and any other data output to the user via the display device.
The display data is designed for display to a user at a high frame rate of at least 50 frames per second, preferably 60 frames per second or 90 frames per second. In some ultra-high frame rate systems, a frame rate of 120 frames per second may be used.
Systems and apparatus for implementing the methods described herein, including network nodes, computer programs, computer program products, computer readable media and logic encoded on tangible media for implementing the methods are also described.
Brief Description of Drawings
Embodiments of the systems and methods described herein are further exemplified in the following numbered drawings:
Figure 1 illustrates a block diagram overview of a system according to one embodiment;
Figure 2 illustrates a headset or display device for displaying an output to a user according to one embodiment;
Figure 3 illustrates a display device according to a further embodiment;
Figure 4 illustrates a method of Haar encoding according to one embodiment;
Figure 5 provides a further illustration of the method of Haar encoding according to one embodiment;
Figure 6a illustrates an embodiment of a tile group to which Haar encoding is applied;
Figure 6b illustrates the packaging of data from an encoded tile group according to one embodiment;
Figure 7 illustrates the packaging of data from an encoded tile group according to another embodiment.
Example System Configuration
Figure 1 shows a block diagram overview of a system according to one embodiment. A host computer [11] is connected to a display control device [12], which is in turn connected to a display device [13]. The host [11] contains an application [14], which produces display data. The display data may be produced and sent for compression either as complete frames or as canvasses, which may, for example, be separate application windows. In either case, they are made up of tiles of pixels, where each tile is a geometrically-shaped collection of one or more pixels.
The display data is sent to a compression engine [15], which may comprise software running in a processor or an appropriate hardware engine. The compression engine [15] first performs an encoding of the data, for example using a Haar transformation, to convert the data into a format that may then be further compressed, minimising data loss.
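A one-level Haar-style transform of the kind referred to above might look like the following; this uses the reversible integer (S-transform) variant as an illustration, and the patent's actual encoding scheme may differ:

```python
def haar_1d(values):
    """One level of a 1-D Haar-style transform.

    Produces pairwise averages (the low band, a half-resolution version
    of the input) followed by pairwise differences (the high band).
    Integer averaging with exact differences keeps the transform
    losslessly invertible, which supports further compression with
    minimal data loss.
    """
    lows, highs = [], []
    for a, b in zip(values[::2], values[1::2]):
        lows.append((a + b) // 2)   # low band: half-resolution approximation
        highs.append(a - b)         # high band: detail needed to reconstruct
    return lows + highs
```

Applying the transform repeatedly to the low band yields the hierarchy of resolutions ("passes") discussed earlier, with the DC/low-band data being the most important to deliver.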
The compression engine [15] may then further compress the data and thereafter sends the compressed data to an output engine [16]. The output engine [16] manages the connection with the display control device [12] and may, for example, include a socket for a cable to be plugged into for a wired connection or a radio transmitter for a wireless connection. In either case, it is connected to a corresponding input engine [17] on the display control device [12].
The input engine [17] is connected to a decompression engine [18]. When it receives compressed data it sends it to the decompression engine [18] or to a memory from which the decompression engine [18] can fetch it according to the operation of a decompression algorithm. In any case, the decompression engine [18] may decompress the data, if necessary, and performs a decoding operation, optionally using a reverse Haar transform. In the illustrated system, the decompressed data is then sent to a scaler [19]. In the case where the display data was produced and compressed as multiple canvasses, it may be composed into a frame at this point.
If scaling is necessary, it is preferable for it to be carried out on a display control device [12] as this minimises the volume of data to be transmitted from the host [11] to the display control device [12], and the scaler [19] operates to convert the received display data to the correct dimensions for display on the display device [13]. In some embodiments, the scaler may be omitted or may be implemented as part of the decompression engine. The data is then sent to an output engine [110] for transmission to the display device [13]. This may include, for example, converting the display data to a display-specific format such as VGA, HDMI, etc.
In one embodiment, the display device is a virtual reality headset [21], as illustrated in Figure 2, connected to a host device [22], which may be a computing device, gaming station, etc. The virtual reality headset [21] incorporates two display panels [23], which may be embodied as a single panel split by optical elements. In use, one display is presented to each of a viewer's eyes. The host device [22] generates image data for display on these panels [23] and transmits the image data to the virtual reality headset [21].
In another embodiment, the headset is a set of augmented reality glasses. As in the virtual reality headset [21] described in Figure 2, there are two display panels, each associated with one of the user’s eyes, but in this example the display panels are translucent.
The host device [22] may be a static computing device such as a computer, gaming console, etc., or may be a mobile computing device such as a smartphone or smartwatch. As previously described, it generates image data and transmits it to the augmented reality glasses or virtual reality headset [21] for display.
The display device may be connected to the host device [11, 22] or display control device [12] if one is present by a wired or wireless connection. While a wired connection minimises latency in transmission of data from the host to the display, wireless connections give the user much greater freedom of movement within range of the wireless connection and are therefore preferable. A balance must be struck between high compression of data, in particular video data, which can be used to enable larger amounts of data (e.g. higher resolution video) to be transmitted between the host and display, and the latency that will be introduced by processing of the data.
Ideally, the end-to-end latency between sensing a user’s head movement, generating the pixels in the next frame of the VR scene and streaming the video should be kept below 20ms, preferably below 10ms, further preferably below 5ms.
The wireless link should be implemented as a high-bandwidth short-range wireless link, for example of at least 1 Gbit/s, preferably at least 2 Gbit/s, further preferably at least 3 Gbit/s. An “extremely high frequency” (EHF) radio connection, such as a 60 GHz radio connection, is suitable for providing such high-bandwidth connections over short-range links. Such a radio connection can implement the WiFi standard IEEE 802.11ad.
The 71-76, 81-86 and 92-95 GHz bands may also be used in some implementations.
The wireless links described above can provide transmission between the host and the display of more than 50 frames per second, preferably around 60 fps, or around 90 fps in other embodiments. In a very high frame rate embodiment, a rate of around 120 fps may be used.
In some embodiments, the headset or other display device uses directional antennae and the display control device uses beamforming techniques in order to focus the signal towards the receiver. While this can increase the transmission bandwidth when the receiver remains static, in a wireless system, and particularly in VR systems designed for the user to move, such techniques can increase the variation in bandwidth available for transmission of the data to the display device.
Figure 3 shows a system which is similar in operation to the embodiment shown in Figure 2. In this case, however, there is no separate host device [22]. The entire system is contained in a single casing [31], for example in a smartphone or other such mobile computing device. The device contains a processor [33], which generates display data for display on the integral display panel [32]. The mobile computing device may be mounted such that the screen is held in front of the user’s eyes as if it were the screen of a virtual reality headset.
Haar Encoding
A Haar transform process that may be implemented in conjunction with the present system will now be explained with reference to Figures 4 and 5. As previously mentioned, the Haar transform takes place on the host [11], specifically in the compression engine [15]. Decompression takes place on the display control device [12], specifically in the decompression engine [18], where the data is put through an inverse Haar transform to return it to its original form.
In the example shown in Figure 4, a group of four tiles [41] has been produced by the application [14] and passed to the compression engine [15]. In this example, each tile [41] comprises one pixel, but may be larger. Each pixel [41] has a value indicating its colour, here represented by the pattern of hatching. The first pixel [41A] is marked with dots and considered to have the lightest colour. The second pixel [41B] is marked with diagonal hatching and is considered to have the darkest colour. The third pixel [41C] is marked with vertical hatching and is considered to have a light colour, and the fourth pixel [41D] is marked with horizontal hatching and is considered to have a dark colour. The values of the four pixels [41] are combined using the formulae [44] shown to the right of the Figure to produce a single pixel value [42], referred to as “W”, which is shaded in grey to indicate that its value is derived from the original four pixels [41], as well as a set of coefficients [43] referred to in Figure 4 as “x, y, z”. The pixel value [42] is generated from a sum of the values of all four pixels: ((A+B)+(C+D)). The three coefficients [43] are generated using the other three formulae [44] as follows:
• x: (A-B)+(C-D)
• y: (A+B)-(C+D)
• z: (A-B)-(C-D)
Any or all of these values may then be quantised: divided by a constant and rounded in order to produce a smaller number which will be less accurate but can be more effectively compressed.
The reverse transform process is carried out on the single pixel value [42] and coefficients [43] produced in the transform as described above. This process will be carried out after a decompression process, which might involve, for example, multiplying quantised coefficients to restore an approximation of their original values.
The decompression engine [18] combines the coefficients [43] with the pixel value [42] transmitted by the host [11] to recreate the original four pixels [45], using the formulae [46] shown to the right of Figure 4.
• A: W+x+y+z
• B: W-x+y-z
• C: W+x-y-z
• D: W-x-y+z
This is repeated the same number of times that the data was transformed. These pixels [45] are then transmitted to the scaler [19] if a scaler is used.
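The single-group transform and its inverse can be sketched as follows. This is an illustrative Python sketch, not the patented implementation: note that the inverse formulae [46] are applied here with an additional factor of 1/4, which the unnormalised forward formulae [44] make necessary if the original pixel values are to be recovered exactly, and the quantisation divisor of 2 is an arbitrary example.

```python
def haar_forward(a, b, c, d):
    """Forward 2x2 Haar step using the formulae [44] of Figure 4."""
    w = (a + b) + (c + d)   # combined pixel value "W"
    x = (a - b) + (c - d)   # coefficient "x"
    y = (a + b) - (c + d)   # coefficient "y"
    z = (a - b) - (c - d)   # coefficient "z"
    return w, x, y, z

def haar_inverse(w, x, y, z):
    """Inverse step per the formulae [46]; the 1/4 factor restores the
    original scale, which the unnormalised forward step quadruples."""
    a = (w + x + y + z) // 4
    b = (w - x + y - z) // 4
    c = (w + x - y - z) // 4
    d = (w - x - y + z) // 4
    return a, b, c, d

# Round trip on the tile values of the top-left group in Figure 5.
w, x, y, z = haar_forward(0, 1, 8, 9)
assert (w, x, y, z) == (18, -2, -16, 0)
assert haar_inverse(w, x, y, z) == (0, 1, 8, 9)

# Optional quantisation: divide the coefficients by a constant (here 2,
# an assumed example) and round, then multiply back on decompression;
# the result is an approximation rather than the exact original.
Q = 2
qx, qy, qz = (round(v / Q) for v in (x, y, z))
ax, ay, az = (v * Q for v in (qx, qy, qz))
```

With the example divisor of 2 the coefficients here happen to be restored exactly; larger divisors trade accuracy for smaller transmitted numbers.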
Figure 5 shows an example of the actual process of a Haar transform. The top part of the encode section shows 64 tiles, each numbered in order from 0. These numbers are used to indicate the values of the tiles as previously mentioned. The tiles are divided into groups of four: for example, the top-left group comprises tiles 0, 1, 8, and 9.
At each pass, the same calculations are performed on a larger range of tile groups to produce combined pixel values and coefficients. In the first pass, Step S31, the tiles in each group are processed using the previously-mentioned formulae [44]. This converts the values in the circled first group to 18, -2, -16, 0, and these values are stored and used in place of the original values. In this example, 18 is the pixel value “W” [42] described in Figure 4 and -2, -16, and 0 are the coefficients “x”, “y”, and “z” [43] described in Figure 4. The same process is carried out on all the groups. These results are shown in the second section of the process, after Step S31.
The second pass, Step S32, applies the same formulae [44] to the top-left tiles in each set of four tile groups. The values to be used in the top-left quarter of the frame in the second pass are shown circled: 18 from the top-left group, 26 from the group to the immediate right, 82 from the group below, and 90 from the final group in the upper-left quarter. The same formulae [44] are then applied to these values to produce 216, -16, -128, and 0, which are put in the places of the original values. Again, these values correspond to W [42], x, y, and z [43] as described in Figure 4. The same process is carried out on all four quarters, and all other values are unchanged: for example, in the top-left group the three values not used in the second pass of the transform and not circled are unchanged from -2, -16, and 0.
The third pass, Step S33, is carried out on one value from each quarter, as shown circled in Figure 5: 216 from the top-left quadrant, 280 from the top-right quadrant, 728 from the bottom-left quadrant, and 792 from the bottom-right quadrant. This produces the final results shown at the bottom of the encode section: 2016 (W), -128 (x), -1024 (y), and 0 (z). Once again, all the other values are unchanged.
The values can then be rearranged so that the different coefficients are grouped together. The pixel values at each level are transmitted first, prioritising the results of later passes, followed by the coefficients. This will result in many small numbers, including many identical numbers: for example, there is a 0 in the same position in each group after the third pass, and these can be grouped and sent as a single number. The values may also be quantised: divided by a constant and rounded to produce smaller coefficients, if desired.
At Step S34, the data is transmitted from the host [11] to the display control device [12], where it is decompressed, de-quantised and re-ordered as appropriate prior to decoding. In this example, these processes produce the same data as was generated by the initial transform, and this table is shown at the beginning of the Decode section. A similar process is then performed to reverse the transform process.
At Step S35, the first pass is performed and the formulae [46] described in Figure 4 are applied to the circled top-left tile from each quadrant. As mentioned after the third pass of the encode stage, in this example the figures are: 2016 (W [42]), -128 (x [43]), -1024 (y [43]), and 0 (z [43]). This produces a new W value [42] for each quadrant: 216, 280, 728, and 792.
At Step S36, the second pass is carried out. It takes the top-left value from each group in each quadrant (W:216 [42], x: -16, y: -128, z: 0 [43]) and applies the same formulae [46] to them. Finally, the same formulae [46] are applied to every value in each group in the third pass: Step S37. This produces the same values as were input at the beginning of the encode section.
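The three encode passes (Steps S31 to S33) and three decode passes (Steps S35 to S37) of Figure 5 can be reproduced with a short sketch. This is an illustrative Python sketch under the same assumption as noted for the single-group case, namely that a factor of 1/4 is applied in each inverse step; it starts from the 8x8 grid of tile values 0 to 63 and checks the final values 2016, -128, -1024 and 0 given in the text.

```python
def encode_pass(grid, step):
    """Apply the formulae [44] to the top-left values of each 2x2
    arrangement of positions spaced `step` apart, in place."""
    n = len(grid)
    for r in range(0, n, 2 * step):
        for c in range(0, n, 2 * step):
            a, b = grid[r][c], grid[r][c + step]
            cc, d = grid[r + step][c], grid[r + step][c + step]
            grid[r][c] = (a + b) + (cc + d)                 # W
            grid[r][c + step] = (a - b) + (cc - d)          # x
            grid[r + step][c] = (a + b) - (cc + d)          # y
            grid[r + step][c + step] = (a - b) - (cc - d)   # z

def decode_pass(grid, step):
    """Apply the formulae [46], with the 1/4 normalisation, in place."""
    n = len(grid)
    for r in range(0, n, 2 * step):
        for c in range(0, n, 2 * step):
            w, x = grid[r][c], grid[r][c + step]
            y, z = grid[r + step][c], grid[r + step][c + step]
            grid[r][c] = (w + x + y + z) // 4
            grid[r][c + step] = (w - x + y - z) // 4
            grid[r + step][c] = (w + x - y - z) // 4
            grid[r + step][c + step] = (w - x - y + z) // 4

grid = [[r * 8 + c for c in range(8)] for r in range(8)]
for step in (1, 2, 4):       # Steps S31, S32, S33
    encode_pass(grid, step)

# Final results from the third pass (bottom of the encode section):
assert grid[0][0] == 2016    # W
assert grid[0][4] == -128    # x
assert grid[4][0] == -1024   # y
assert grid[4][4] == 0       # z

for step in (4, 2, 1):       # Steps S35, S36, S37
    decode_pass(grid, step)
assert grid == [[r * 8 + c for c in range(8)] for r in range(8)]
```

Because no quantisation is applied in this sketch, every division by 4 is exact and the decode reproduces the original 64 tile values precisely.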
Such a transform is useful because it not only allows the host [11] to transmit a smaller number of pixel values than are present in the full image data, combined with a collection of coefficients, but also because the coefficients can be compressed more efficiently than pixel data, with less loss of information: they are small numbers and so can be transmitted in fewer bits without any further compression being applied.
Time-To-Live
Figures 6a and 6b illustrate a further embodiment in which Haar encoding is used to encode a video frame. In this embodiment, three encoding passes are used to encode portions of the frame, or tile groups, at three different resolutions or scales.
In the encoding, “A” 610 is processed to provide the DC elements of the encoded tile group, “B” 612 provides the horizontal components, “C” 614 provides the vertical components and “D” 616 provides the diagonal components. However, it will be appreciated that other encoding arrangements may be used. The “A” of the third pass 610 is the “A” component of the “A” part of the second pass, which is the “A” component of the “A” part of the first pass. Hence the “A” of pass 3 is the “A of the A of the A” and provides an indication of the constant background values in the tile group.
The DC component, or “A”, of the tile group forms part of the GC, or metadata, for that tile group. The GC for a tile group also includes other metadata needed in the decoding of the tile group, such as the zero count for the tile group, the tile select and optionally the tile normalisation, which provides factors for normalisation of the tile. The zero count is the number of ‘trailing’ zeros for each component in each tile. The tile select is a value that points the decoder to pre-arranged metadata describing how to decode that tile (a pointer to a quantisation table, for instance). The tile normalisation value is a factor by which the quantisation value is multiplied.
As can be seen in Figure 6a, the data for a particular tile from the third pass 618 has a lower resolution and therefore takes up fewer bytes (is smaller) than the data from the second pass 620, which is smaller than the data from the first pass 622. However, in a system where the priority is the speed and consistency of display of the image to the user, the data from pass 3, as well as the GC data, is the most important to transmit to the user. In the encoding embodiments described herein, the pass 2 and pass 1 data relies on the pass 3 data as its basis.
To deliver the encoded data to the user, in one embodiment, the data is formatted into a compressed tile group (TG) atom 632 as shown in Figure 6b. The atom has a variable length since the data for each tile will have a different length depending on the results of the encoding.
The TG atom has an Atom Header (HDR) 624 which includes information relating to the TG atom such as its length and the type of encoding used. This is followed by the GC section 626 containing the GC information as set out above, including the DC for the tile group. This is followed by AC data for each span of tiles. A span of tiles may be, for example, two raster lines in the image. In the atom shown in Figure 6b, “AC Span 0” 628 is the data from the first row of tiles in the image shown in Figure 6a, tiles 0 to 7, and “AC Span 1” 630 is the data from the second row of tiles in the image, tiles 8 to 15. As indicated by the dashed lines in AC Span 0 of Figure 6b, the length required in the TG atom for each tile will vary, but this data includes the information from each of passes 1, 2 and 3 for that tile. That is, the information from the first, second and third passes for a particular tile are collated together so that information for each tile is grouped within the TG atom.
As illustrated in Figure 6b, a plurality of compressed tile group (TG) atoms are grouped and encapsulated into a transport unit 634 of variable length, but around 4kB, with a transport unit header. Multiple transport units are then grouped in a transport block 636 of around 64kB and queued into output buffers for handing over to the network stack for transmission to the display device. The transport units grouped into the transport block are provided with output buffer metadata in a header 638.
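A minimal sketch of the grouping described above is given below. The atom and header layouts (field order, sizes, and names such as `make_tg_atom`) are hypothetical: the description specifies only that the atoms are variable-length, that transport units are around 4 kB, and that each transport unit has its own header.

```python
import struct

# Assumed wire layouts for illustration only.
ATOM_HDR = struct.Struct("<HB")   # atom length, encoding-type id
TU_HDR = struct.Struct("<I")      # transport unit payload length

def make_tg_atom(encoding_type, payload: bytes) -> bytes:
    """Prefix compressed tile group data with an Atom Header (HDR)."""
    return ATOM_HDR.pack(len(payload), encoding_type) + payload

def pack_transport_units(atoms, max_tu=4096):
    """Group variable-length TG atoms into transport units of roughly
    max_tu bytes, each carrying its own transport unit header.
    (An atom larger than max_tu would get a unit of its own.)"""
    units, current = [], b""
    for atom in atoms:
        if current and len(current) + len(atom) > max_tu:
            units.append(TU_HDR.pack(len(current)) + current)
            current = b""
        current += atom
    if current:
        units.append(TU_HDR.pack(len(current)) + current)
    return units

# Four TG atoms of differing compressed sizes (zero bytes as dummies).
atoms = [make_tg_atom(1, bytes(n)) for n in (1000, 2500, 1800, 600)]
units = pack_transport_units(atoms)
assert all(len(u) <= 4096 + TU_HDR.size for u in units)
```

Grouping the resulting transport units into ~64 kB transport blocks with output buffer metadata would follow the same pattern at the next level up.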
As described in more detail below, the output buffer metadata can include a time-to-live (TTL) value. The TTL value increases in proportion to the number of pixels in the tile group or transfer block so that blocks with higher numbers of pixels are assigned a higher TTL value. In one simplified example, the TTL value may be calculated as the number of pixels in the tile group multiplied by the pixel clock rate of the system.
Embodiments of the system may also be implemented in which TTL values are applied to encoded tile group data where a different encoding technique has been used, such as DCT transforms or encoding techniques based on other types of wavelets. In some embodiments, the encoding technique may not be a hierarchical encoding technique. In such embodiments, a TTL value can be added to one or more packets of encoded data that represent a tile group. If there are multiple packets of data for a particular tile group, the same TTL value can be used for all of the packets associated with that tile group. The TTL value may prevent encoded tile group data being sent when it is no longer of use to the decoder, for example when it will arrive too late.
The transfer block is then released onto the network via the output buffer queue for transmission over the network to the display device. In particular, the decoder can process the output buffer metadata to ensure an appropriate distribution of transport units to the decoders to enable the tile groups to be decoded.
On receipt, the decoder processes the transfer block to release the transport units and enable decompression of the tile groups.
Alternative Packaging Method
Figure 7 illustrates a development of the system shown in Figures 6a and 6b which provides an alternative mechanism for packaging the hierarchically encoded data.
In this embodiment, encoded data that makes up the tile group is split over three atoms, which can be encapsulated into three separate transport units. As the skilled person will recognise, more or fewer than three atoms can be used in particular if the data is encoded using more or fewer than three passes of the hierarchical encoding system.
The first atom 702 includes a length header, which is followed by the HDR, GC and Pass 3 data from the first tile group, that is tiles 0 to 15 in the present embodiment. As the skilled person will be aware, the HDR and GC data is the data required to enable decoding of the tile group. The pass 3 data is the lowest resolution encoded data for that particular tile group. Hence the first atom is likely to be the shortest atom in the method described.
A second atom 704 includes a length header and data for all the tiles in the tile group (0 to 15) for the second encoding pass. Similarly, a third atom 706 includes all of the data for the first encoding pass. Since pass 1 is the highest resolution pass, the third atom 706 is the longest of the three atoms described.
In the embodiment illustrated in Figure 7, four separate transport units 708, each of around 4kB, are shown to carry the atoms associated with the separate encoding passes of the frame, although where there are only three encoding passes as in Figure 6a, only three atoms may be used. Each transport unit has its own transport unit header 710 and transport units are grouped into transport blocks or transfer blocks 712 of around 64kB for transmission, with metadata inserted into the output buffer header 714 of the transfer block.
In the output buffer 716, a number of queues are established, each queue having a different priority level. In particular, the output buffer may implement a plurality of first-in-first-out (FIFO) queues that are labelled with priority values. Packets within those queues are sent in accordance with the priority values: for example, all transport units or packets in a higher-priority queue may be sent before any transport units from a lower-priority queue, so that a transport unit is not sent while a transport unit in a higher-priority queue is waiting to be sent.
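The strict-priority behaviour described here can be sketched as follows; the class and method names are hypothetical, and a real implementation would also handle the TTL and other output buffer metadata.

```python
from collections import deque

class PriorityOutputBuffer:
    """Strict-priority FIFO queues: a transport unit is sent only when
    every higher-priority queue is empty (0 = highest priority)."""
    def __init__(self, levels=4):
        self.queues = [deque() for _ in range(levels)]

    def enqueue(self, priority, transport_unit):
        self.queues[priority].append(transport_unit)

    def next_to_send(self):
        for q in self.queues:        # scan from highest priority down
            if q:
                return q.popleft()
        return None                  # nothing waiting

buf = PriorityOutputBuffer()
buf.enqueue(2, "TG0 pass 1")             # high-resolution, low priority
buf.enqueue(0, "TG1 HDR+GC+pass 3")      # next tile group's base data
assert buf.next_to_send() == "TG1 HDR+GC+pass 3"
assert buf.next_to_send() == "TG0 pass 1"
```

This reproduces the behaviour described for limited bandwidth: the low-resolution data of the next tile group overtakes the high-resolution remainder of the previous one.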
The separate transport units of Figure 7 are added to the prioritised output buffers 716 for handing over to the network stack in the priority order. In particular, the first atom, including the HDR, GC and data from Pass 3 is added to the highest priority queue and the second, third and fourth atoms, including data from Pass 2, Pass 1 and Pass 0 are added to queues with decreasing priority levels.
In a situation where plenty of bandwidth is available on the network or connection between the output buffer and the decoder in the display device, all transport units containing all passes of data from a particular tile group are sent before the encoded data from the next tile group arrives at the output buffer. However, if transmission is delayed, for example by limited bandwidth on the network, the transport units for the next tile group may start to arrive at the output buffers before all of the previous transport units have been sent. In this situation, the highest priority transport units for the next tile group will be sent in preference to the lower-priority transport units from the previous tile group.
In a particular embodiment, the transfer blocks containing the first encoded pass or passes of the tile are allocated a higher priority value than transfer blocks containing later encoded passes. The priority value for each transfer block is included in the output buffer metadata. The output buffers of the network can then prioritise the sending of transfer blocks with higher priority values over the sending of transfer blocks with lower priority values. This can enable the system to ensure that at least some detail from each tile of the frame is sent, even if that data is of low resolution. If bandwidth allows, higher-resolution data from subsequent passes of the encoder can be sent to enable the decoder to display the tile at a higher resolution.
In virtual reality systems where users use a personal display device such as a VR headset or screen of a mobile telephone or tablet mounted in front of their eyes, it is important to reduce latency as much as possible. For example if there is a significant delay (of more than around 20ms) between a user moving around the virtual reality scene and that scene being displayed to the user in an updated image, many users can start to feel motion sickness. Particularly for wireless headsets, bandwidth between the encoder and transmitter and the headset receiver and decoder can often limit the amount of data that can be transmitted to the user to allow the image to be updated in the 20ms interval. Techniques described herein, in particular the packaging of different passes of the data into different transport units, can be used to enable at least some data to be transmitted to the decoder and displayed to the user within the 20ms (or similar) time limit.
On receipt, if the decoder has received all of the data relating to all of the (in this case three) passes of the tile group when it is ready to decode that tile group, it can perform the decoding and display the tile group at the highest, pass 1, resolution. If it has received data from only the third or second encoding passes, it can display lower-resolution data for that tile group. In a system where the refresh rate of the frames is high and where the image is changing quickly (such as for a VR system with a headset), displaying a tile group at a lower resolution will not significantly affect the experience of the user.
As a further development of the system described above, a time to live (TTL) value may be added to the data; preferably, the TTL value is added to the output buffer metadata in the transfer block.
It has been appreciated by the present inventors that, in cases such as that described above, where the final images are displayed to the user on the basis of image data at lower resolutions when data at higher resolutions is not available, the high-resolution data is no longer needed once the relevant tile group has already been displayed at the lower resolution. However, in existing systems, if the high-resolution data is still in the output buffer even after the lower-resolution data for that tile group has been used, the high-resolution data will still be transmitted to the display device when bandwidth becomes available. The transmission of this higher-resolution data, which is essentially now unnecessary, will in turn block transmission of data for the next tile group. Over time, where bandwidth over a link becomes restricted, a backlog of data can build up in the output buffers waiting to be transmitted, inhibiting the efficient delivery of data that is needed for the latest frame, and interrupting the user’s experience.
A TTL value may be used to address this issue, as described in more detail below. The TTL value is preferably assigned to at least some of the transfer blocks as part of the output buffer metadata. The time to live value enables the system to discard any transfer block that is too old so that the transport block is simply not sent onto the network towards the decoder.
In particular, the TTL value may be assigned to the transfer block and inserted into the output buffer metadata before the transfer block is passed to the output buffers. If the TTL value has expired before the output buffer has time to send the data, the data is discarded rather than being transmitted.
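The discard-on-expiry behaviour can be sketched as below. This assumes the TTL is carried as an absolute deadline (one of the forms the TTL value may take, as discussed in this description); the names `TransferBlock` and `drain` are hypothetical.

```python
import time
from collections import namedtuple

# Hypothetical representation: the TTL travels in the output buffer
# metadata as a future clock time by which the block must be sent.
TransferBlock = namedtuple("TransferBlock", "payload deadline")

def drain(queue, now=None):
    """Send blocks whose deadline has not yet passed; silently discard
    expired ones rather than transmitting stale data."""
    now = time.monotonic() if now is None else now
    sent = [blk.payload for blk in queue if blk.deadline > now]
    queue.clear()
    return sent

queue = [TransferBlock("fresh", deadline=100.0),
         TransferBlock("stale", deadline=10.0)]
assert drain(queue, now=50.0) == ["fresh"]
assert queue == []
```

Discarding rather than sending the expired block is what clears the backlog described above, freeing the link for the next tile group's data.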
The use of a TTL value can prevent the build-up of data in the output buffers that is no longer relevant or useful and clear the way for the most useful data to be transmitted to the display device.
The skilled person will appreciate that the TTL value may take a number of forms. It may be a numerical value tied to the clock of the output buffer; it may be a time stamp, with the output buffer being programmed to discard data whose time stamp is earlier than a particular distance from the current time; or it may be an indication of a future clock time by which the transfer block must be sent.
Preferably, the TTL value is set such that it expires at around the time the data from the next tile group is expected at the output buffer. In this way, if it has not already been sent, the data with the higher-resolution passes of the tile group will be discarded when the low resolution data for the next tile group is available to be sent.
The TTL value may be tied to parameters relating to the network conditions, for example the available bandwidth for the transmission of packets, the expected variation in that bandwidth over time and/or the network jitter. This may allow greater leeway to be given for the transmission of packets when a network connection is displaying a large amount of variation in the available bandwidth, on the assumption that an increase in bandwidth is likely to follow a temporary restriction in the available bandwidth.
TTL values may also be managed actively and adjusted dynamically, for example based on average available bandwidth and/or the amount of data in the output buffers. For example, if bandwidth falls over time, the TTL applied to each transfer block may be decreased to reduce the likelihood of data building up in the output buffers. The amount of data in the output buffer queues may be another way to recognise that the TTL value should be reduced.
It will be appreciated that, in some embodiments, the transport unit containing the tile group data with the lowest resolution, in this case the pass 3 data, may not be assigned a TTL value. This is because this data, in particular the GC data, is necessary in order to display any information for that tile group and it can be assumed that the data will be delivered as a minimum in any functioning system.
Conclusion
Although particular embodiments have been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa, and engines/modules which are described as separate may be combined into single engines/modules and vice versa. Functionality of the engines or other modules may be embodied in one or more hardware processing device(s) e.g. processors and/or in one or more software modules, or in any appropriate combination of hardware devices and software modules. Furthermore, software instructions to implement the described methods may be provided on a computer readable medium.

Claims (25)

1. A method of processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the method comprising:
dividing a frame of the display data into a plurality of tile groups; encoding each tile group to generate encoded tile group data; encapsulating the encoded tile group data into a payload of a tile group atom; generating a time-to-live, TTL, value for the tile group atom;
encapsulating the tile group atom with a transport header to form a transport unit, wherein the transport header comprises the time-to-live value;
writing the transport unit to an output buffer for transmission over the wireless link to the display device; and determining at the output buffer, based on the time-to-live value, whether to transmit the transport unit from the output buffer to the display device over the wireless link.
2. The method according to claim 1 further comprising discarding the transport unit based on the TTL value if it is determined not to transmit the transport unit.
3. The method according to claim 1 or 2 wherein determining at the output buffer whether to transmit the transport unit comprises comparing the TTL value against a reference value to determine whether the transport unit meets a transmission criterion, transmitting the transport unit if the transport unit meets the transmission criterion and discarding the transport unit if the transport unit does not meet the transmission criterion.
4. The method according to any preceding claim wherein determining at the output buffer whether to transmit the transport unit comprises:
determining whether a TTL value is greater than a transmission threshold value;
transmitting the transport unit if the TTL value is greater than the transmission threshold value; and discarding the transport unit if the time-to-live is equal to or less than the transmission threshold value.
5. The method according to any preceding claim wherein the TTL value is based on a clock time of the output buffer.
6. The method according to any preceding claim wherein the TTL value is based on a number of pixels in the tile group.
7. The method according to any preceding claim wherein encoding each tile group comprises encoding each tile group at a plurality of resolutions to produce a plurality of passes of encoded tile group data.
8. The method according to claim 7 wherein encapsulating the encoded tile group data comprises encapsulating a first pass of the encoded tile group data in a first transport unit and encapsulating a second pass of the encoded tile group data in a second transport unit.
9. The method according to claim 8 wherein encapsulating the encoded tile group data comprises encapsulating at least one further pass of the encoded tile group data in at least one further transport unit.
10. The method according to any preceding claim wherein writing the transport unit to an output buffer comprises writing the first transport unit to a first queue in the output buffer having a first associated priority level and writing the second transport unit to a second queue in the output buffer having a second associated priority level.
11. The method according to claim 10 wherein data in the buffer with the first associated priority level is transmitted prior to data in the buffer with the second associated priority level.
12. A method of processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the method comprising:
dividing a frame of the display data into a plurality of tile groups;
encoding each tile group at a plurality of resolutions to produce a plurality of passes of encoded tile group data;
encapsulating each pass of encoded tile group data into a respective payload of a corresponding tile group atom;
encapsulating each tile group atom with a transport header to form a plurality of transport units; and writing each transport unit to one of a plurality of queues in an output buffer for transmission over the wireless link to the display device;
wherein each queue in the output buffer has a different priority level and wherein transport units in a higher priority output buffer queue are sent before transport units in a lower priority output queue.
13. The method according to claim 12 wherein the transport unit containing the lowest-resolution tile group data is placed in the highest-priority output buffer queue and the transport unit containing the highest-resolution data is placed in the lowest-priority output buffer.
14. The method according to claim 12 or 13 wherein the transport header of each transport unit comprises a time-to-live value.
15. The method according to any of claims 12 to 14 wherein transport units destined for higher-priority output buffers are given a longer time-to-live value than transport units destined for lower-priority output buffers.
16. The method according to any of claims 12 to 15 further comprising determining at the output buffer, based on the time-to-live value, whether to transmit the transport unit from the output buffer to the display device over the wireless link.
17. The method according to any of claims 12 to 16 wherein the system purges the output buffer on receipt of transport units comprising data for the next tile group.
18. The method according to any of claims 12 to 17 wherein the plurality of queues is divided across a plurality of output buffers.
19. The method according to any preceding claim wherein the display data comprises multimedia data including video, computer-generated image data, audio data, graphical data and/or any other data output to the user via the display device.
20. The method according to any preceding claim wherein the display data is arranged for display to a user at a high frame rate of at least 50 frames per second, preferably 60 frames per second or 90 frames per second.
21. Apparatus for processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the apparatus comprising:
means for dividing a frame of the display data into a plurality of tile groups; means for encoding each tile group to generate encoded tile group data;
means for encapsulating the encoded tile group data into a payload of a tile group atom;
means for generating a time-to-live, TTL, value for the tile group atom;
means for encapsulating the tile group atom with a transport header to form a transport unit, wherein the transport header comprises the time-to-live value;
means for writing the transport unit to an output buffer for transmission over the wireless link to the display device; and
means for determining at the output buffer, based on the time-to-live value, whether to transmit the transport unit from the output buffer to the display device over the wireless link.
22. Apparatus according to claim 21 further comprising means for implementing the method according to any of claims 2 to 11.
23. Apparatus for processing display data comprising a plurality of frames at a display control device for transmission to a display device over a wireless link, wherein the frames are transmitted for display at a rate of at least 50 frames per second, the apparatus comprising:
means for dividing a frame of the display data into a plurality of tile groups;
means for encoding each tile group at a plurality of resolutions to produce a plurality of passes of encoded tile group data;
means for encapsulating each pass of encoded tile group data into a respective payload of a corresponding tile group atom;
means for encapsulating each tile group atom with a transport header to form a plurality of transport units; and
means for writing each transport unit to one of a plurality of queues in an output buffer for transmission over the wireless link to the display device;
wherein each queue in the output buffer has a different priority level and wherein transport units in a higher-priority output buffer queue are sent before transport units in a lower-priority output queue.
24. Apparatus according to claim 23 further comprising means for implementing the method according to any of claims 13 to 20.
25. A computer program or computer program product comprising instructions that, when executed by a processor, implement the method of any of claims 1 to 20.
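The prioritised, TTL-governed output buffer described in claims 12 to 21 can be sketched as follows. This is a minimal illustrative model only, not the patented implementation: the names (`TransportUnit`, `OutputBuffer`) are invented for illustration, and the plurality of priority queues is modelled here with a single heap keyed on priority level, which is an assumption about one possible realisation.

```python
# Hypothetical sketch of the claims' output-buffer behaviour; all names are
# illustrative and do not come from the patent text.
import heapq
import itertools
import time
from dataclasses import dataclass, field


@dataclass
class TransportUnit:
    """A tile group atom wrapped in a transport header carrying a TTL."""
    tile_group_id: int   # which tile group of the frame this pass belongs to
    priority: int        # 0 = highest priority (lowest-resolution pass, claim 13)
    ttl: float           # time-to-live in seconds (claim 14)
    payload: bytes       # encoded tile group data
    created: float = field(default_factory=time.monotonic)

    def expired(self, now=None):
        # Claim 16: the output buffer checks the TTL before transmitting.
        now = time.monotonic() if now is None else now
        return (now - self.created) > self.ttl


class OutputBuffer:
    """Models the plurality of queues with distinct priorities (claims 12, 23)."""

    def __init__(self):
        self._heap = []                  # entries: (priority, seq, unit)
        self._seq = itertools.count()    # preserves FIFO order within a priority
        self._current_tile_group = None

    def write(self, unit):
        # Claim 17: receipt of data for the next tile group purges the buffer,
        # since undelivered passes of the previous group are no longer useful.
        if unit.tile_group_id != self._current_tile_group:
            self._heap.clear()
            self._current_tile_group = unit.tile_group_id
        heapq.heappush(self._heap, (unit.priority, next(self._seq), unit))

    def next_to_send(self, now=None):
        # Higher-priority queues drain first (claim 12); expired units are
        # dropped rather than transmitted (claim 16). Returns None if empty.
        while self._heap:
            _, _, unit = heapq.heappop(self._heap)
            if not unit.expired(now):
                return unit
        return None
```

In this sketch the low-resolution pass of a tile group, written at priority 0, is always sent before the higher-resolution passes of the same group, and a pass that outlives its TTL is silently discarded, which matches the intent of claims 13 to 16.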
GB1716982.2A 2017-10-16 2017-10-16 Encoding and transmission of display data Active GB2568460B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1716982.2A GB2568460B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data
GB2210974.8A GB2606502B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data
PCT/GB2018/052845 WO2019077303A1 (en) 2017-10-16 2018-10-05 Encoding and transmission of display data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1716982.2A GB2568460B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data

Publications (3)

Publication Number Publication Date
GB201716982D0 GB201716982D0 (en) 2017-11-29
GB2568460A true GB2568460A (en) 2019-05-22
GB2568460B GB2568460B (en) 2022-11-16

Family

ID=60419100

Family Applications (2)

Application Number Title Priority Date Filing Date
GB2210974.8A Active GB2606502B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data
GB1716982.2A Active GB2568460B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB2210974.8A Active GB2606502B (en) 2017-10-16 2017-10-16 Encoding and transmission of display data

Country Status (2)

Country Link
GB (2) GB2606502B (en)
WO (1) WO2019077303A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220264129A1 (en) * 2018-04-27 2022-08-18 V-Nova International Limited Video decoder chipset

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN116634177B (en) * 2023-06-16 2024-02-20 北京行者无疆科技有限公司 Video communication decoding processing method based on HDMI communication equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
US20100274970A1 (en) * 2009-04-23 2010-10-28 Opendns, Inc. Robust Domain Name Resolution
US20140150011A1 (en) * 2011-07-01 2014-05-29 Chiyo Ohno Content transmission device and content transmission method
US20150281737A1 (en) * 2014-03-31 2015-10-01 Sony Corporation Image processing device and image processing method
US20150341645A1 (en) * 2014-05-21 2015-11-26 Arris Enterprises, Inc. Signaling for Addition or Removal of Layers in Scalable Video
GB2536299A (en) * 2015-03-13 2016-09-14 Gurulogic Microsystems Oy Method of communicating data packets within data communication systems

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US20070201365A1 (en) * 2006-01-23 2007-08-30 Frederick Skoog Video packet multiplexer with intelligent packet discard
GB2484736B (en) * 2010-10-22 2014-11-05 Displaylink Uk Ltd Image generation
GB2485576B (en) * 2010-11-19 2013-06-26 Displaylink Uk Ltd Video compression
US10015527B1 (en) * 2013-12-16 2018-07-03 Amazon Technologies, Inc. Panoramic video distribution and viewing



Also Published As

Publication number Publication date
GB2606502A (en) 2022-11-09
GB2606502B (en) 2023-01-04
GB201716982D0 (en) 2017-11-29
WO2019077303A1 (en) 2019-04-25
GB2568460B (en) 2022-11-16
GB202210974D0 (en) 2022-09-07

Similar Documents

Publication Publication Date Title
JP5161130B2 (en) Adaptive bandwidth footprint matching for multiple compressed video streams in fixed bandwidth networks
US7844848B1 (en) Method and apparatus for managing remote display updates
US9699099B2 (en) Method of transmitting data in a communication system
US9883180B2 (en) Bounded rate near-lossless and lossless image compression
AU2018280337B2 (en) Digital content stream compression
US20140085314A1 (en) Method for transmitting digital scene description data and transmitter and receiver scene processing device
US20140187331A1 (en) Latency reduction by sub-frame encoding and transmission
AU2017285700B2 (en) Image compression method and apparatus
CN108702513B (en) Apparatus and method for adaptive computation of quantization parameters in display stream compression
EP2272237B1 (en) Method of transmitting data in a communication system
CN101854456A (en) Image source configured to communicate with image display equipment
CN105025347B (en) A kind of method of sending and receiving of GOP images group
WO2019077303A1 (en) Encoding and transmission of display data
US9142053B2 (en) Systems and methods for compositing a display image from display planes using enhanced bit-level block transfer hardware
US20120033727A1 (en) Efficient video codec implementation
US20240121406A1 (en) Content Compression for Network Transmission
CN101065760B (en) System and method for processing image data
US20230395041A1 (en) Content Display Process
US20140098852A1 (en) Compression bandwidth overflow management using auxiliary control channel
KR101251879B1 (en) Apparatus and method for displaying advertisement images in accordance with screen changing in multimedia cloud system
JP2016149770A (en) Minimization system of streaming latency and method of using the same
WO2019092392A1 (en) Method and system for processing display data
WO2014057809A1 (en) Motion video transmission system and method