WO2019135064A1 - Decoding image data at a display device - Google Patents

Decoding image data at a display device

Info

Publication number
WO2019135064A1
Authority
WO
WIPO (PCT)
Prior art keywords
blocks
frame
decoded
streams
image data
Application number
PCT/GB2018/053726
Other languages
French (fr)
Inventor
Paul James
Original Assignee
Displaylink (Uk) Limited
Application filed by Displaylink (Uk) Limited
Publication of WO2019135064A1

Classifications

    • H: ELECTRICITY
      • H04: ELECTRIC COMMUNICATION TECHNIQUE
        • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
            • H04N19/10: … using adaptive coding
              • H04N19/169: … characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
                • H04N19/17: … the unit being an image region, e.g. an object
                  • H04N19/176: … the region being a block, e.g. a macroblock
            • H04N19/42: … characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
              • H04N19/423: … characterised by memory arrangements
                • H04N19/426: … using memory downsizing methods
                  • H04N19/427: Display on the fly, e.g. simultaneous writing to and reading from decoding memory
              • H04N19/436: … using parallelised computational arrangements
            • H04N19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
            • H04N19/50: … using predictive coding
              • H04N19/503: … involving temporal prediction
              • H04N19/597: … specially adapted for multi-view video sequence encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method of outputting image data at a display device having a plurality of decoders is disclosed. The method includes receiving image data of a source frame from a host device, the image data comprising a plurality of streams of blocks, each block comprising encoded image data for a portion of the source frame. A set of first blocks associated with a first stream of blocks is selected and passed to a first decoder. A set of second blocks associated with a second stream of blocks is selected and passed to a second decoder. The first decoder decodes the first blocks of the first stream and outputs decoded image data for the first blocks to a shared memory to form a first part of a decoded frame. The second decoder decodes the second blocks of the second stream and outputs the decoded image data for the second blocks to the shared memory to form a second part of the decoded frame. The decoded frame is then output to a display.

Description

Decoding image data at a display device
The present invention relates to systems and methods for decoding image data at a display device.
Virtual reality (VR) headsets have recently become popular for gaming and entertainment uses. Typically, such headsets work in conjunction with a host device, such as a personal computer, which generates image data (as a sequence of stereoscopic frame pairs) for transmission to the headset. This generally requires compression of the image data at the host device, especially if the headset is connected via a comparatively low-bandwidth wireless connection. However, in an effort to reduce the cost and weight of headsets, and to increase battery life, headsets are typically equipped with limited processing power. This creates challenges in efficiently decoding received image data, especially to meet the low latency requirements of typical three-dimensional (3D) VR experiences and games.
The present invention seeks to alleviate some of the above problems.
Accordingly, in a first aspect of the invention, there is provided a method of outputting image data at a display device having a plurality of decoders, the method comprising: receiving image data of a source frame from a host device, the image data comprising a plurality of streams of blocks, each block comprising encoded image data for a portion of the source frame; selecting first blocks associated with a first stream of blocks and passing the selected blocks to a first decoder; selecting second blocks associated with a second stream of blocks and passing the selected blocks to a second decoder; decoding the first blocks of the first stream by the first decoder and outputting decoded image data for the first blocks to a shared memory to form a first part of a decoded frame; decoding the second blocks of the second stream by the second decoder and outputting the decoded image data for the second blocks to the shared memory to form a second part of the decoded frame; and outputting the decoded frame to a display.
Distributing blocks from different streams of blocks to separate decoders can allow more efficient decoding whilst smoothing out differences in bit rates of data supplied to the decoders. Each decoder may, for example, comprise a separate hardware decoder (e.g. a dedicated integrated circuit to perform decoding), or a separate software decoding module, process or thread running on one or more general-purpose processors. In one example, separate software decoding engines could be provided running on different cores of a multicore processor.
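Purely by way of a hedged illustration (none of the names below appear in the disclosure, and decode_block is a hypothetical callback), separate software decoding engines could be realised as worker threads, each draining its own queue of blocks:

    import threading
    import queue

    def start_decoders(num_decoders, decode_block):
        """Spawn one decoding worker per decoder; each worker consumes
        blocks from its own queue until it receives a None sentinel."""
        queues = [queue.Queue() for _ in range(num_decoders)]

        def worker(q):
            for block in iter(q.get, None):  # stop on the None sentinel
                decode_block(block)

        for q in queues:
            threading.Thread(target=worker, args=(q,), daemon=True).start()
        return queues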
The blocks of the plurality of streams of blocks are preferably taken from respective areas of the source frame in accordance with a predetermined encoding pattern, and wherein the decoding steps preferably recreate a version of the source frame by writing decoded image data for the blocks to corresponding areas of the decoded frame in accordance with the predetermined encoding pattern. Thus, the decoding preferably reconstitutes the source frame (or an approximation of it in the case of lossy compression) from the streams of blocks using the same block-to-stream allocation used at the encoder.
The encoding (and matching decoding) pattern is preferably selected such that each of the plurality of streams comprises a set of blocks evenly distributed over the source frame, preferably wherein blocks of a given stream are evenly spaced in one or both of the horizontal and the vertical directions. Blocks of different streams are preferably interleaved in the horizontal and/or vertical direction.
Preferably, the blocks of a given stream are non-contiguous in one or both of the horizontal direction and the vertical direction. Preferably, the blocks decoded by a given decoder are non-contiguous in one or both of the horizontal direction and the vertical direction.
Horizontally adjacent blocks in the decoded frame are preferably from different ones of the plurality of streams and/or are preferably decoded by different ones of the plurality of decoders. Additionally (or alternatively), vertically adjacent blocks in the decoded frame are preferably from different ones of the plurality of streams and/or are preferably decoded by different ones of the plurality of decoders. Preferably, any two adjacent blocks in the decoded frame are from different streams and/or decoded by different decoders.
Preferably, a plurality of first streams of blocks are decoded by the first decoder and a plurality of second streams of blocks are decoded by the second decoder. Each decoder preferably decodes the same number of streams.
Each stream is preferably decoded by a respective pre-assigned decoder. The method may further comprise encoding the source frame at the host device. Preferably, the first and second streams are encoded by respective ones of a plurality of encoders at the host device, preferably wherein each of the plurality of streams is encoded by a respective encoder. The plurality of encoders may comprise a plurality of hardware encoders, or a plurality of encoding modules each running on a respective processor or a respective processor core of a multicore processor.
Preferably, the source and decoded frame correspond to one frame of a stereoscopic frame pair, or the source frame and decoded frame comprise a stereoscopic frame pair encoded as a single frame. The outputting step then preferably comprises outputting respective left-eye and right-eye images to respective displays of the display device based on the decoded image data in the shared frame memory.
The received blocks preferably comprise compressed image data, and decoding a block preferably comprises decompressing the block. The blocks of the source frame are preferably compressed using a variable bit rate encoder. Thus, the compressed blocks of the source frame preferably vary in data size (that is, the blocks are not all the same fixed size).
Each block preferably corresponds to a rectangular area of the source frame. The blocks preferably have a fixed image area size (i.e. fixed pixel width and height).
The source/decoded frame is preferably divided into a plurality of rows of blocks, the method comprising initiating a transfer of pixel data from the shared memory to the display after the first row of blocks has been completely decoded but before the complete frame has been decoded, preferably before a second row of blocks has been completely decoded. Preferably, the method comprises synchronizing the rate of output of decoded blocks by the decoders to a raster scan performed by the display, preferably such that blocks are written by the decoders to the shared memory ahead of the raster scan but while earlier blocks written to the frame are still being scanned.
In a further aspect of the invention (which may be combined with the above aspect), the invention provides a method of encoding display data for transmission to a display device, the method comprising: dividing a source frame into a plurality of blocks; assigning each block to one of a plurality of streams of blocks in accordance with an encoding pattern; encoding each of the streams of blocks; and transmitting the plurality of streams of blocks to a display device.
The encoding pattern may be as set out above and preferably interleaves blocks of different streams horizontally and/or vertically across the source frame. Thus, the assigning step preferably comprises assigning respective horizontally and/or vertically adjacent blocks to different ones of the streams. Encoding preferably comprises compressing the blocks, and blocks are preferably compressed using a variable bit rate encoding.
The method preferably comprises grouping multiple compressed blocks of a given stream having a combined size less than or equal to a predefined transport unit size into a transport unit and outputting the transport unit to a transport layer for transmission over a communications medium.
Preferably the method comprises encoding the source frame for decoding in accordance with a method as set out in the first aspect of the invention (and thus any features of that first aspect may be applied to this aspect).
In any of the aspects set out herein, the display device is preferably a stereoscopic display device comprising at least two displays; preferably wherein the display device is a virtual reality, VR, or augmented reality, AR, headset. The decoders may decode separate frames of a stereoscopic frame pair for output to respective displays. Alternatively, a stereoscopic frame pair may be encoded as two sections of a single frame, with the combined frame processed by the encoder(s) and decoders in the manner set out above, and separate sections of the decoded combined frame then output to each respective display to form a left-eye and right-eye image.
The display device is preferably separate from the host device and selectively connectable to the host device. The display device may be connected to the host device via a wireless data connection for reception of the image data.
In a further aspect, the invention provides a display device having means (e.g. in the form of a processor and associated memory) for performing any method as set out above or as described in more detail below. The invention further provides a host device having means (e.g. in the form of a processor and associated memory) for performing any method as set out above or as described in more detail below.
The invention further provides a tangible computer readable medium or computer program product comprising software code adapted, when executed on a data processing apparatus, to perform any method as set out herein.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus and computer program aspects, and vice versa.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
Preferred features of the present invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:-
Figure 1 illustrates a display device such as a VR headset connected to a host device;
Figure 2 illustrates a process of encoding an image frame based on a predetermined block pattern;
Figure 3 illustrates a method of formatting compressed blocks for transmission over a communications medium; and
Figure 4 illustrates decoding of the image frame at the display device.
Overview
A display system in accordance with an embodiment of the invention is illustrated in overview in Figure 1 and principally comprises a host device 100 and a display device 102. In a typical embodiment, the host device 100 is a computer device such as a personal or tablet computer, smartphone or games console, and the display device 102 is a VR headset, augmented reality (AR) headset or similar. However, the described principles can more generally be applied to any system where a host device generates and sends display data to a display device for display. The host device runs an application 104 (e.g. a VR experience or game) which generates display output for the display device. The application produces display data using a graphics subsystem of the host, including various conventional graphics subsystem elements such as graphics APIs (Application Programming Interfaces), rendering engines, display drivers and the like (not shown). The graphics subsystem additionally includes an encoder module 106 which encodes the display data for transmission to the display device, where the encoding includes compressing the display data.
The display device 102 includes two decoding engines 108 and 110 for decoding and decompressing the received display data and a shared frame memory 112 for storing the decoded display data which is accessible by both decoding engines. Decoded display data is supplied from the frame memory to a stereoscopic pair of display panels 114, 116 for displaying respective left-eye and right-eye frames, which together create a three-dimensional effect when viewed by the user.
The host device is connected to the display device via a wired or wireless display connection for transmission of the encoded and compressed display data. The connection could be a dedicated display connection (e.g. HDMI) or a general-purpose connection (e.g. a network or peripheral bus connection such as USB). In the case of a wireless connection, a wireless communication infrastructure and protocol such as an 802.11-based Wi-Fi or Bluetooth connection could be used.
In operation, the application 104 generates display data as a sequence of frames (or stereoscopic frame pairs). The frames are compressed and encoded for transport by encoder module 106 and transmitted, e.g. via a wireless connection, to the display device 102, where they are decoded by the decoder engines and written to the shared frame memory. For stereoscopic images, one typical approach is for the left-eye and right-eye images to be encoded as a single frame (e.g. with a left-hand half of the frame corresponding to the left-eye image and a right-hand half of the frame corresponding to the right-eye image).
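As an illustrative sketch only (assuming the decoded frame is held as a NumPy-style array of shape (height, width, channels); the function name is hypothetical), splitting such a combined frame back into per-eye images is a simple slice:

    def split_stereo(frame):
        """Left half of the combined frame -> left-eye image,
        right half -> right-eye image."""
        w = frame.shape[1]
        return frame[:, :w // 2], frame[:, w // 2:]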
In the following discussion, encoding and decoding will be described with respect to individual frames. It should be noted that the described process could be applied individually to each frame of a stereoscopic pair, or alternatively to a single combined frame including left-eye and right-eye sections. While a single frame memory is shown, this may in practice combine separate frame buffers for each of the display panels. Also, while in this example two decoding engines are used, this is purely by way of example, and more than two such engines could be employed.
Figure 2 illustrates aspects of the encoding process. The image data for a source frame 200 is divided into a plurality of streams by splitting the frame into rectangular blocks, with different blocks assigned to different streams in an interleaved pattern (i.e. such that blocks of each stream are interleaved horizontally and/or vertically with blocks of other streams). In this case, four streams are used, with the blocks numbered 0-3 in the diagram to indicate the respective stream (0, 1, 2, 3) to which a block belongs.
The allocation of blocks to streams preferably alternates such that horizontally and vertically adjacent blocks are assigned to different streams. In a preferred embodiment, a checkerboard pattern is used as illustrated: blocks in the first horizontal row alternate between streams 0-3 in sequence, and the same pattern repeats in each subsequent row, offset by one stream relative to the preceding row.
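Purely as an illustration of this checkerboard pattern (the sketch is not part of the original disclosure, and the function name is hypothetical), the stream index for a block can be computed from its row and column position:

    def stream_for_block(row, col, num_streams=4):
        """Checkerboard assignment: the first row cycles through streams
        0..num_streams-1 in sequence, and each successive row repeats the
        pattern offset by one stream relative to the preceding row."""
        return (col + row) % num_streams

    # Example with four streams:
    #   row 0: 0 1 2 3 0 1 ...
    #   row 1: 1 2 3 0 1 2 ...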
The blocks of each stream of blocks are input to a set of encoder engines in the encoder 106; for example, one encoder engine may be provided per stream.
Figure 3 illustrates formatting of compressed blocks for transmission. The compression codec used preferably implements variable bit rate compression. Therefore, while all blocks are the same size prior to compression (and occupy the same image area, e.g. n x m pixels), after compression the compressed blocks will typically vary in compressed data size. For transmission, the compressed blocks of each stream are arranged into equally sized transport units (TUs) 302, 304, 306. Multiple TUs of each stream are combined into a TU container 308, 309 etc. (each preferably comprising a fixed number of TUs, in this example three), and the sequence of TU containers thus formed (alternating between streams s0, s1, s2, s3) is handed off to the transport layer 310, which is responsible for packaging the TU containers in payloads of transport frames for transmission and transmitting them via a suitable network interface (e.g. a wireless interface). In one implementation, the host device manages a pool of TU buffers, to which encoding threads write as data blocks are compressed. The TU containers 308, 309 may similarly correspond to buffers in the transport layer. As soon as the relevant number of TU buffers is full, the data is copied to a TU container buffer at the transport layer and the TU buffers are made available again in the pool. If the TU buffer pool runs low, the host device may temporarily stop assigning encoding tasks (blocks) to the encoders until TU buffers become available again.
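A minimal sketch of this grouping step, assuming each compressed block is a bytes object no larger than the transport unit size (names and the greedy strategy are illustrative assumptions, not taken from the patent):

    def pack_into_tus(compressed_blocks, tu_size):
        """Greedily group consecutive compressed blocks of one stream into
        transport units whose combined payload does not exceed tu_size."""
        tus, current, used = [], [], 0
        for block in compressed_blocks:
            if used + len(block) > tu_size and current:
                tus.append(current)          # close the current TU
                current, used = [], 0
            current.append(block)
            used += len(block)
        if current:
            tus.append(current)              # flush the final, partial TU
        return tus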
The described organization of data into TUs and TU containers is by way of example, and other ways of organizing the compressed blocks for transport over the communication medium can be employed. Note that the specifics of the transport frame encapsulation will depend on the communication medium and protocol (e.g. TUs or TU containers could be carried in Ethernet frames), and the number and sizes of TU containers and TUs can be varied to suit requirements. Thus the TU and/or TU container size is typically chosen to enable efficient transport layer handoff (e.g. based on the maximum Ethernet frame payload size or the like).
Recovery of the frame at the display device is illustrated in Figure 4. The display device includes a transport layer 400 (e.g. including a network interface) which receives packets (e.g. Ethernet frames) over the communication medium and extracts the original TU containers 308, 309. The TU containers are then passed to the decoding engines 108, 110. Each TU container and its TUs are associated with a specific stream and each stream is allocated to a particular one of the decoding engines. Preferably, the streams are assigned to decoders in alternating order (of transmission); thus the TUs (and their compressed blocks) for streams s0 and s2 are assigned to the first decoding engine 108 and the TUs (and their compressed blocks) for streams s1 and s3 are passed to the second decoding engine 110. Since the TU containers are transmitted in stream order (each containing a number of blocks for a particular stream), compressed blocks will also be received and decompressed by the decoders in that stream order. The transport layer preferably identifies the correct decoder to which each block (or TU/TU container of blocks) should be forwarded and transmits the blocks to the identified decoder (alternatively, decoders could proactively select and read the correct blocks from a memory buffer populated by the transport layer).
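The alternating stream-to-decoder allocation described above can be expressed very compactly; as a hedged sketch (the function name is hypothetical):

    def decoder_for_stream(stream_index, num_decoders=2):
        """Alternating allocation: with four streams and two decoders this
        reproduces the s0/s2 -> decoder 0 and s1/s3 -> decoder 1 split."""
        return stream_index % num_decoders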
The decoders extract the TUs and, from the TUs, the sequence of compressed blocks for their respective streams, decompress the blocks to recover the image data, and output the decompressed blocks to the shared frame memory 112 to form reconstituted frame 402 (note that lossy compression is typically used, so that the reconstituted frame is an approximation of the corresponding source frame). This involves writing each decompressed block to its correct location within the frame, based on the same checkerboard encoding pattern used at the encoder(s).
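For example, placement of a decoded block into the shared frame memory might look like the following sketch (the array shapes and names are assumptions for illustration, with frames held as NumPy arrays):

    def write_block(frame, block_pixels, row, col):
        """Copy one decoded block into its position in the reconstituted
        frame, using the same (row, col) block coordinates as the encoding
        pattern. frame: (H, W, 3) array; block_pixels: (bh, bw, 3) array."""
        bh, bw = block_pixels.shape[:2]
        y, x = row * bh, col * bw
        frame[y:y + bh, x:x + bw] = block_pixels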
In the above example, the frame is divided into four streams of blocks. The number of streams chosen will typically be determined based on the number of separate encoder instances at the host device and/or the number of decoders at the display device. There will typically be at least one stream per decoding engine (and possibly multiple streams per decoding engine). Thus, in an embodiment with two decoding engines, there will be at least two streams of blocks.
In this example it is assumed that there are four encoder instances at the host device, e.g. in the form of four software encoder threads running on respective cores of a quad-core processor. In other examples there could be separate hardware encoders at the host, or separate encoder instances running on CPU (Central Processing Unit) and GPU (Graphics Processing Unit) respectively, on different CPUs etc. The streams are then assigned to the decoders at the display device such that each decoder handles the same number of streams (here two of the four streams per decoder).
As illustrated in Figures 2 and 4, the blocks are assigned to different streams in a checkerboard pattern. As already mentioned, the compression ratio achieved will typically vary between different blocks. This is due to variations in image content across the frame (and the fact that variable bit rate encoding is used). Image content with lots of detail will generally compress less easily than fairly uniform content (e.g. a single colour), with the result that complex blocks will be larger after compression than uniform blocks. For example, a block of an area of blue sky may achieve a higher compression ratio (lower compressed size) than a block of an area depicting part of a tree (with branches, leaves, background etc.).
By distributing the blocks to streams based on an interleaved or checkerboard pattern (i.e. alternating between streams horizontally and preferably also vertically), variations in compression ratio across the image can be evened out across the different streams, making it statistically likely that data rates for different streams remain approximately equal (though this depends on the image content). As a result, input sizes for the different decoders should generally not differ too much. Evening out data rates across streams can prevent bottlenecks, e.g. buffers saturating in the transport layer or at the decoders (which could occur if one stream included considerably more data than another). It also enables more efficient utilisation of available network and memory bandwidth in the host and at the display device (for example memory bandwidth between the decoders and shared frame memory 112).
While one particular pattern for allocating blocks to streams has been described, alternative patterns could be used. The exact pattern will also depend on the number of streams chosen. However, it is generally preferred that the encoding pattern (assignment of blocks to streams), and the allocation of streams to decoders, are chosen such that the blocks decoded by each particular decoder (constituting a subset of the total set of blocks of the source frame) are uniformly distributed across the whole of the frame and are evenly spaced from each other horizontally and/or vertically, and such that adjacent blocks (at least horizontally but preferably also vertically) are decoded by different decoders. The pattern can be adapted as needed if more than two decoders are used.
A further consideration in the choice of pattern is the raster scan order at the display device. A typical device will scan pixels in a zig-zag pattern, scanning left-to-right along a row of pixels before jumping to the next row and repeating the process.
Low latency is desirable in many applications such as video games and especially VR applications. Thus, to minimize latency, the blocks are assigned to streams in horizontal rows, alternating between streams in a defined order (e.g. s0-s1-s2-s3), such that each stream receives a block in turn. This ensures that, upon decoding, a horizontal row of blocks can be received and decoded completely, before the next horizontal row of blocks is processed (which follows the same pattern, though preferably offset as already described).
As soon as the first row of blocks has been decoded and written to the frame memory, data can be sent to the display in accordance with the raster scan order employed by the display. This reduces latency, since the display does not need to wait until a whole frame is decoded but instead transfer of data to the display can start as soon as a row of blocks has been completely decoded. Subsequent blocks are written by the decoders to the shared memory ahead of the raster scan but while earlier blocks written to the frame are still being scanned in raster scan order (which proceeds row-by-row).
In a preferred embodiment, the timing of the decoders is synchronized with the raster scan performed by the display such that decompressed blocks are written to the memory just ahead of the raster scan. In this way, the decoders essentially chase the raster scan, reducing latency. In one embodiment, this can be achieved by synchronizing the rate of output of decoded blocks by the decoders to the raster scan performed by the display directly, e.g. based on a scan signal or timing information from the display.
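One way such synchronization might be sketched is shown below (all names and the polling approach are assumptions; a real implementation would wait on a vsync or line-timing signal rather than busy-wait):

    def chase_raster(decode_row, current_scanline, num_block_rows,
                     block_height, lead_rows=1):
        """Decode each row of blocks just ahead of the display's raster scan.

        decode_row(r)       -- decodes block-row r into the shared memory
        current_scanline()  -- raster line currently being scanned out
        """
        for r in range(num_block_rows):
            # Wait until the raster scan is within `lead_rows` block-rows of
            # this row; for the first rows the condition holds immediately,
            # so scanout can begin as soon as row 0 is decoded.
            while current_scanline() < (r - lead_rows) * block_height:
                pass  # in practice: sleep, or block on a timing interrupt
            decode_row(r)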
All blocks in the encoding pattern are typically of the same, fixed size. A smaller block size may typically allow the bit rates to be balanced more evenly across streams but may incur larger overheads in terms of processing and packaging/unpackaging blocks for transport. Thus, a small block size is generally preferred subject to avoiding excessive overhead in block handling.
In some embodiments, frames are encoded in smaller units of tiles, which may themselves be grouped into tile groups. Each tile represents a self-contained unit of pixel data (e.g. m x n pixels) from the source image which is compressed as a unit. In that case each block in the Figure 2 pattern typically consists of multiple tiles and/or tile groups (preferably being at least one tile group high).
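A small sketch of how a block might map onto its constituent tiles (the tile-grid dimensions and names are assumptions for illustration):

    def tiles_in_block(block_row, block_col, tiles_high, tiles_wide):
        """Yield (tile_row, tile_col) coordinates of every tile making up
        the block at (block_row, block_col) in the encoding pattern."""
        for dy in range(tiles_high):
            for dx in range(tiles_wide):
                yield (block_row * tiles_high + dy,
                       block_col * tiles_wide + dx)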
The decoding engines 108, 110 at the display device may, for example, be separate hardware decoders (e.g. dedicated integrated circuits), or may be software decoders running on separate processors or separate cores of a multicore processor, or running as separate parallel processes/threads on a single processor.
It will be understood that the present invention has been described above purely by way of example, and modification of detail can be made within the scope of the invention.

Claims

1. A method of outputting image data at a display device having a plurality of decoders, the method comprising:
receiving image data of a source frame from a host device, the image data comprising a plurality of streams of blocks, each block comprising encoded image data for a portion of the source frame;
selecting first blocks associated with a first stream of blocks and passing the selected blocks to a first decoder;
selecting second blocks associated with a second stream of blocks and passing the selected blocks to a second decoder;
decoding the first blocks of the first stream by the first decoder and outputting decoded image data for the first blocks to a shared memory to form a first part of a decoded frame;
decoding the second blocks of the second stream by the second decoder and outputting the decoded image data for the second blocks to the shared memory to form a second part of the decoded frame; and
outputting the decoded frame to a display.
2. A method according to claim 1, wherein the blocks of the plurality of streams of blocks are taken from respective areas of the source frame in accordance with a predetermined encoding pattern, and wherein the decoding steps recreate a version of the source frame by writing decoded image data for the blocks to corresponding areas of the decoded frame in accordance with the predetermined encoding pattern.
3. A method according to claim 2, wherein the pattern is selected such that each of the plurality of streams comprises a set of blocks evenly distributed over the source frame, preferably wherein blocks of a given stream are evenly spaced in one or both of the horizontal and the vertical direction.
4. A method according to any of the preceding claims, wherein the blocks of a given stream are non-contiguous in one or both of the horizontal direction and the vertical direction.
5. A method according to any of the preceding claims, wherein the blocks decoded by a given decoder are non-contiguous in one or both of the horizontal direction and the vertical direction.
6. A method according to any of the preceding claims, wherein horizontally adjacent blocks in the decoded frame are from different ones of the plurality of streams.
7. A method according to any of the preceding claims, wherein horizontally adjacent blocks in the decoded frame are decoded by different ones of the plurality of decoders.
8. A method according to any of the preceding claims, wherein vertically adjacent blocks in the decoded frame are from different ones of the plurality of streams.
9. A method according to any of the preceding claims, wherein vertically adjacent blocks in the decoded frame are decoded by different ones of the plurality of decoders.
10. A method according to any of the preceding claims, wherein any two adjacent blocks in the decoded frame are from different streams and/or decoded by different decoders.
11. A method according to any of the preceding claims, wherein a plurality of first streams of blocks are decoded by the first decoder and wherein a plurality of second streams of blocks are decoded by the second decoder.
12. A method according to any of the preceding claims, wherein each stream is decoded by a respective pre-assigned decoder.
13. A method according to any of the preceding claims, comprising encoding the source frame at the host device.
14. A method according to claim 13, comprising encoding the first and second streams by respective ones of a plurality of encoders at the host device, preferably wherein each of the plurality of streams is encoded by a respective encoder.
15. A method according to claim 14, wherein the plurality of encoders comprise a plurality of encoding modules each running on a respective processor or a respective processor core of a multicore processor.
16. A method according to any of the preceding claims, wherein the source and decoded frame correspond to one frame of a stereoscopic frame pair, or wherein the source frame and decoded frame comprise a stereoscopic frame pair encoded as a single frame; preferably wherein the outputting step comprises outputting respective left-eye and right-eye images to respective displays of the display device based on the decoded image data in the shared frame memory.
17. A method according to any of the preceding claims, wherein decoding a block comprises decompressing the block.
18. A method according to any of the preceding claims, wherein each block corresponds to a rectangular area of the source frame and/or wherein the blocks have a fixed image area size.
19. A method according to any of the preceding claims, wherein the blocks of the source frame are compressed using a variable bit rate encoder.
20. A method according to any of the preceding claims, wherein the source frame is divided into a plurality of rows of blocks, the method comprising initiating a transfer of pixel data from the shared memory to the display after the first row of blocks has been completely decoded but before the complete frame has been decoded, preferably before a second row of blocks has been completely decoded.
21. A method according to any of the preceding claims, comprising synchronizing the rate of output of decoded blocks by the decoders to a raster scan performed by the display, preferably such that blocks are written by the decoders to the shared memory ahead of the raster scan but while earlier blocks written to the frame are still being scanned.
22. A method of encoding display data for transmission to a display device, the method comprising:
dividing a source frame into a plurality of blocks; assigning each block to one of a plurality of streams of blocks in accordance with an encoding pattern;
encoding each of the streams of blocks; and
transmitting the plurality of streams of blocks to a display device.
23. A method according to claim 22, wherein the assigning step comprises assigning respective horizontally and/or vertically adjacent blocks to different ones of the streams.
24. A method according to claim 22 or 23, wherein blocks are compressed using variable bit rate encoding.
25. A method according to any of claims 22 to 24, comprising grouping multiple compressed blocks of a given stream having a combined size less than or equal to a predefined transport unit size into a transport unit and outputting the transport unit to a transport layer for transmission over a communications medium.
26. A method according to any of claims 22 to 25, comprising encoding the source frame for decoding in accordance with a method as set out in any of claims 1 to 21.
27. A method according to any of the preceding claims, wherein the display device is a stereoscopic display device comprising at least two displays; preferably wherein the display device is a virtual reality, VR, or augmented reality, AR, headset.
28. A method according to any of the preceding claims, wherein the display device is separate from the host device and selectively connectable to the host device.
29. A method according to any of the preceding claims, wherein the display device is connected to the host device via a wireless data connection for reception of the image data.
30. A display device having means for performing a method as set out in any of claims 1 to 21 or 27 to 29.
31. A host device having means for performing a method as set out in any of claims 22 to 29.
32. A computer readable medium comprising software code adapted, when executed on a data processing apparatus, to perform a method as set out in any of claims 1 to 29.
PCT/GB2018/053726 2018-01-03 2018-12-20 Decoding image data at a display device WO2019135064A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1800067.9A GB2569959B (en) 2018-01-03 2018-01-03 Decoding image data at a display device
GB1800067.9 2018-01-03

Publications (1)

Publication Number Publication Date
WO2019135064A1

Family

ID=61158114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2018/053726 WO2019135064A1 (en) 2018-01-03 2018-12-20 Decoding image data at a display device

Country Status (2)

Country Link
GB (2) GB2569959B (en)
WO (1) WO2019135064A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117063468A (en) * 2021-03-30 2023-11-14 高通股份有限公司 Video processing using multiple bit stream engines

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110023066A1 (en) * 2009-07-27 2011-01-27 Samsung Electronics Co., Ltd. Method and apparatus for generating 3-dimensional image datastream including additional information for reproducing 3-dimensional image, and method and apparatus for receiving the 3-dimensional image datastream
US20110249741A1 (en) * 2010-04-09 2011-10-13 Jie Zhao Methods and Systems for Intra Prediction
US20130101035A1 (en) * 2011-10-24 2013-04-25 Qualcomm Incorporated Grouping of tiles for video coding
US8660193B2 (en) * 2009-01-12 2014-02-25 Maxim Integrated Products, Inc. Parallel, pipelined, integrated-circuit implementation of a computational engine

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69719365T2 * 1996-12-18 2003-10-16 Thomson Consumer Electronics EFFICIENT COMPRESSION AND DECOMPRESSION OF FIXED-LENGTH BLOCKS
US8238437B2 (en) * 2007-09-20 2012-08-07 Canon Kabushiki Kaisha Image encoding apparatus, image decoding apparatus, and control method therefor
CN105659594A (en) * 2013-10-17 2016-06-08 联发科技股份有限公司 Data processing apparatus for transmitting/receiving compressed pixel data groups of picture and indication information of pixel data grouping setting and related data processing method

Also Published As

Publication number Publication date
GB2608575A (en) 2023-01-04
GB2569959A (en) 2019-07-10
GB201800067D0 (en) 2018-02-14
GB2608575B (en) 2023-03-15
GB202215657D0 (en) 2022-12-07
GB2569959B (en) 2022-12-21

Legal Events

Code — Description
121 — EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 18829944; country of ref document: EP; kind code of ref document: A1)
NENP — Non-entry into the national phase (ref country code: DE)
122 — EP: PCT application non-entry into European phase (ref document number: 18829944; country of ref document: EP; kind code of ref document: A1)