GB2581822A - Image data encoding

Image data encoding

Info

Publication number
GB2581822A
GB2581822A
Authority
GB
United Kingdom
Prior art keywords
image data
encoding
frame
encoded
display control
Prior art date
Legal status
Granted
Application number
GB1902715.0A
Other versions
GB201902715D0 (en)
GB2581822B (en)
Inventor
Nemouchi Yazid
Stroba Szymon
Kunc Szymon
Current Assignee
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date
Filing date
Publication date
Application filed by DisplayLink UK Ltd filed Critical DisplayLink UK Ltd
Priority to GB2300088.8A (GB2611668B)
Priority to GB1902715.0A (GB2581822B)
Publication of GB201902715D0
Publication of GB2581822A
Application granted
Publication of GB2581822B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363 Graphics controllers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156 Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/02 Handling of images in compressed format, e.g. JPEG, MPEG
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00 Solving problems of bandwidth in display systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2352/00 Parallel handling of streams of display data
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/06 Use of more than one graphics processor to process data before displaying to one or more screens

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

Encoding image data in a device 11 having several different encoding engines 16A, B, C, the encoded image data being transmitted to a display control device 12 where it is decoded for display. The method includes selecting an encoding engine from among the encoding engines to use for encoding at least part of a frame of image data. The selecting is based on heuristics related to the image data to be encoded, and/or a performance of the device, the data connection and/or the display control device. In another aspect, at least part of the frame of image data can be encoded using at least two different encoding engines to produce at least two versions of the image data encoded differently, and then one of the two versions is selected based on heuristics related to the encoded image data, and/or a performance of the host computing device, the data connection and/or the display control device. The encoded or selected encoded image data is then sent to the display control device.

Description

Image Data Encoding
Background
Image data is often transmitted between a device where it is generated and a device on which it is displayed. Often, the image data is transmitted over a bandwidth-limited connection, and it is therefore often compressed (or encoded) prior to transmission in order to minimise the bandwidth required on the connection. The compression is carried out using a compression (or encoding) algorithm, which may be run in a dedicated encoding engine or as one of many programs run on a multi-purpose programmable processor.
As the graphics capabilities of computing devices become more advanced and encoding of display data becomes more widespread, a computing device may have several processors or encoding engines capable of carrying out encoding. Current systems do not take full advantage of this capability, as they are commonly arranged to use only one of the processors or engines capable of carrying out encoding. This results in a loss of efficiency.
The invention seeks to solve or at least mitigate this problem.
Summary
Accordingly, in a first aspect, the invention provides a method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; selecting an encoding engine from among the at least two different encoding engines to use for encoding at least part of the frame of image data, wherein the selecting is based on heuristics related to the image data to be encoded, and/or a performance of the host computing device, the data connection and/or the display control device; encoding the at least part of the frame of image data using the selected encoding engine; and sending the encoded at least part of the frame of image data for transmittal to the display control device over the data connection.
According to a second aspect, the invention provides a method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; encoding at least part of the frame of image data using the at least two different encoding engines to produce at least two versions of the at least part of the frame of image data encoded differently; selecting one of the two versions based on heuristics related to the encoded image data, and/or a performance of the host computing device, the data connection and/or the display control device; and sending the encoded at least part of the frame of image data of the selected version for transmittal to the display control device over the data connection.
In one embodiment, a whole of the frame of image data is encoded using the selected encoding engine. Preferably, a whole of the frame of image data is encoded using the at least two different encoding engines.
In one embodiment, the frame of image data comprises a plurality of parts, and at least two different parts are encoded using the at least two different encoding engines. Preferably, the frame of image data comprises a plurality of parts, and at least two different parts are encoded using both of the at least two different encoding engines. The plurality of parts of the frame of image data may comprise different areas of the frame, such that there is a central foveal area and an annular peripheral area. In an embodiment, the plurality of parts of the frame of image data comprise different areas of the frame, such that different areas have different types of image data. In an embodiment, the plurality of parts of the frame of image data may comprise different planes of the frame, such that different planes have image data perceived at different depths by a user.
The selecting may be based on heuristics relating to a type of image data forming the at least part of the frame of image data, and on the capabilities of the encoding engines for encoding different types of image data. Preferably, if the type of image data is photographic, selecting is based on encoding most suited to photographic image data, and if the image data is textual, selecting is based on encoding most suited to textual image data. The selecting may be based on heuristics including any one or more of: bandwidth of the data connection; current use and availability of resources on the host computing device; tolerance for latency in the host computing device, data connection and display control device; and current use and availability of resources on the display control device.
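By way of a purely illustrative sketch (the classification rule, the engine names and the mapping below are assumptions made for the example, not part of the claimed method), selection of an encoding engine by content type might look as follows:

```python
def classify_content(pixels):
    """Toy classifier: treat very low colour diversity as textual content, otherwise photographic."""
    return "textual" if len(set(pixels)) <= 16 else "photographic"

def select_engine_for_content(pixels, engines_by_strength):
    """Pick the engine whose declared strength matches the detected content type."""
    return engines_by_strength[classify_content(pixels)]

# Hypothetical mapping: one engine tuned for textual data, another for photographic data.
engines_by_strength = {"textual": "cpu", "photographic": "hw"}
engine = select_engine_for_content([0, 0, 255, 255, 0], engines_by_strength)  # -> "cpu" in this toy example
```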
In an embodiment, control information is sent, together with the encoded at least part of the frame of image data, for transmittal to the display control device over the data connection.
The control information preferably includes information indicating which of the encoding engines was used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection. The control information may include information indicating parameters used by the encoding engine used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
According to a third aspect, the invention provides a host computing device comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the host computing device to perform operations as described above.
In a fourth aspect, there is provided a system comprising a host computing device as described above, a display control device, and a data connection therebetween.
In another aspect, there is provided a system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the system to perform operations as described above.
According to a further aspect, there is provided a method for improving the efficiency of display data encoding in a host computing device that has two or more encoding engines and is connected to a display control device via a data connection, comprising: 1. An application on the host computing device generating an image for display on a display panel; 2. An encoding controller determining the most appropriate encoding engine to use, based on heuristics connected to the nature of the image and/or the performance of the host computing device, the data connection, and/or the display control device; 3. The encoding controller instructing the appropriate encoding engine to encode the data; 4. The encoding engine encoding the data; 5. The host computing device transmitting the encoded data to the display control device together with control information; 6. The display control device decoding the data as appropriate; 7. The display control device displaying the decoded data on a display panel.
The heuristics may be based on any one or more of: the bandwidth of the data connection, the current use and availability of resources on the host computing device, tolerance for latency in the system; or the use and availability of resources on the display control device. The heuristics may also be based on the display data to be encoded such that, for example, if a frame of display data consists of a photographic image it is encoded by one encoding engine with characteristics most suited to photographic images, but if it consists mostly of text it is encoded by a second encoding engine with characteristics most suited to text.
The control information transmitted with the encoded display data may comprise a flag or other signal indicating which encoding engine was used, so that the display control device can use an appropriate decoding engine or algorithm. It may also include information such as any parameters used by the encoding engine, such as quantisation level or number of passes of a Haar transform.
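As a non-limiting illustration, such control information might be carried as a small header alongside the encoded payload; the field names (engine_id, quantisation_level, haar_passes) and the JSON serialisation below are assumptions made only for the example:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class ControlInfo:
    """Hypothetical control information sent with each encoded frame or component."""
    engine_id: str                             # which engine produced the data, e.g. "cpu", "gpu", "hw"
    quantisation_level: Optional[int] = None   # encoder parameter, if applicable
    haar_passes: Optional[int] = None          # number of Haar transform passes, if applicable

    def to_bytes(self) -> bytes:
        # Serialise as a small JSON header preceding the encoded payload.
        return json.dumps(asdict(self)).encode("utf-8")

# Example: flag that the hardware encoder was used, with two Haar passes.
header = ControlInfo(engine_id="hw", haar_passes=2).to_bytes()
```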
Alternatively, there may be a method of encoding using two or more encoding engines, comprising: 1. An application on the host computing device generating a frame of display data for display on a display panel; 2. The frame of display data being encoded by all available encoding engines; 3. A transmission controller selecting the encoded frame most suitable for transmission, based on heuristics connected to the performance of the data connection and/or the display control device; 4. The host computing device transmitting the selected encoded display data to the display control device together with control information; 5. The display control device decoding the data as appropriate; 6. The display control device displaying the decoded data on a display panel.
The heuristics used in this method may also be based on the bandwidth of the data connection, latency tolerance in the system, and/or the current use and availability of resources on the display control device. It has the benefit over the first version of the method that the decision as to which encoding method should be used is made closer to the time of transmission, reducing the chance that circumstances will change during the time required for encoding. However, the use of all available encoding engines when only one encoded frame of display data is to actually be transmitted may result in wasted time and processing power.
As previously mentioned, the control information transmitted with the encoded display data may comprise a flag or other signal indicating which encoding engine was selected. It may also include any parameters used by that encoding engine.
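A minimal sketch of this second method is given below, assuming each engine exposes a simple encode callable and that selection favours the highest-quality version that still fits the currently available transmission budget; the zlib calls merely stand in for engines with different characteristics and are not part of the method itself:

```python
import zlib

def encode_with_all_and_select(frame: bytes, engines: dict, bytes_available: int):
    """Encode one frame with every available engine, then pick the version to transmit.

    `engines` maps an engine name to a callable returning encoded bytes. Selection here
    is a simple heuristic: the largest (assumed highest-quality) result that still fits
    the transmission budget, falling back to the smallest result otherwise.
    """
    candidates = {name: encode(frame) for name, encode in engines.items()}
    fitting = {n: d for n, d in candidates.items() if len(d) <= bytes_available}
    if fitting:
        name = max(fitting, key=lambda n: len(fitting[n]))       # best quality that fits
    else:
        name = min(candidates, key=lambda n: len(candidates[n]))  # smallest otherwise
    return name, candidates[name]

# Stand-in engines: a real system would invoke the CPU, GPU and hardware encoders.
engines = {
    "cpu": lambda f: zlib.compress(f, 1),
    "gpu": lambda f: zlib.compress(f, 6),
    "hw":  lambda f: zlib.compress(f, 9),
}
name, payload = encode_with_all_and_select(b"\x00" * 100_000, engines, bytes_available=5_000)
```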
Alternatively, there may be a method of encoding using two or more encoding engines, comprising: 1. One or more applications on the host computing device generating components of a frame of display data for display on a display panel; 2. The components being encoded by separate encoding engines, selected based on heuristics connected to the capabilities of the encoding engines and/or the nature of the components; 3. The encoded components being transmitted to the display control device together with control information; 4. The display control device decoding the components as appropriate; 5. The display control device combining the components into a frame of display data; 6. The display control device displaying the decoded data on a display panel.
The components may be generated by the division of a single frame produced by an application, or they may be generated and encoded separately and only combined into a single frame at the display control device. In the first case, the frame may be divided by area, for example into a central foveal area and an annular peripheral area, or in a second example into an area which contains text and an area which contains images. Alternatively, the frame may be divided by plane such that the planes are perceived at different depths: for example, a first plane may comprise a desktop background and a second plane may comprise windows shown "on" the desktop background. Similarly, and potentially more usefully, in a computer-generated image or composite image a first plane might comprise parts of the image which are perceived as being in the far distance, such as the background of a world, while a second plane might comprise detailed objects to be viewed close up.
Accordingly, the heuristics for selection of which component of a frame should be encoded by a particular encoding engine may depend on the capabilities of the encoding engines: for example, one may be arranged to be more suitable for encoding text than another, in which case it would be beneficial to use the first encoding engine to encode a component that consists mostly of text.
In this case, the control information may indicate which encoding engine was used to encode each component, together with information on how the components should be combined by the display control device in order to generate the final frame for display. As previously mentioned, it may also include parameters used by each encoding engine.
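For illustration only, per-component control information might carry both the engine identifier and the placement needed for recomposition; the field names and layout below are invented for the example and are not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class ComponentControlInfo:
    """Hypothetical per-component metadata: which engine encoded it and where it sits in the frame."""
    component_id: str   # e.g. "fovea", "periphery1"
    engine_id: str      # e.g. "cpu", "gpu", "hw"
    x: int              # placement of the component within the composed frame
    y: int
    width: int
    height: int
    z_order: int = 0    # for plane-based division: higher values composited on top

# Example: a foveal region encoded by the CPU, composited above the peripheral plane.
fovea_info = ComponentControlInfo("fovea", "cpu", x=880, y=420, width=160, height=160, z_order=1)
```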
These methods take advantage of the fact that it is becoming increasingly common for a host computing device to incorporate multiple processors and hardware engines that can be used for encoding display data. They allow the best possible encoding to be used in different circumstances, for example by improving load balancing and allowing different algorithms to be used for data with different requirements. A further example of such requirements is a case where one area of a frame is content-protected and therefore should be encrypted as well as encoded, but the remainder does not need such protection.
Brief Description of the Drawings
Figure 1 shows a basic block diagram of a display system; Figure 2 shows a more detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 3 shows a second detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 4 shows a third detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 5 shows a fourth detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 6 shows two example frames of display data; Figure 7 shows a high-level overview of the methods of the invention; Figure 8 shows a more detailed example process of some methods of the invention; Figure 9 shows a second example process of some methods of the invention; Figure 10 shows a third example process of some methods of the invention; and Figure 11 shows a block diagram of a computer system suitable for implementing one or more embodiments of the invention.
Detailed Description of the Drawings
Figure 1 shows a display system comprising a host computing device [11] connected to a display control device [12] over a limited-bandwidth connection. The connection may be wired or wireless and may be over a network connection, including the internet. The host computing device [11] may be any computing device capable of generating display data, including a mobile device such as a smartphone or tablet, a static computing device, or a games console. The display control device [12] is in turn connected to a display device [13], which may be a single display panel as shown here or may be any other suitable display device including a projector, video wall, or virtual-reality headset.
The display control device [12] and the display device [13] may be co-located so that they share a single casing and appear to be a single device. For example, a virtual-reality headset may incorporate the workings of both the display panel [13] and the display control device [12]. Alternatively, the functionality of any of the devices may be split over several devices, for example by the connection of an adapter.
Figure 2 shows a first example of a host computing device [11] arranged to carry out embodiments of the invention. The host computing device [11] includes an application [14] running on a processor which generates frames of display data and is connected to an encoding block [15], which is in turn connected to a connection controller [17]. The connection controller [17] controls the connection to the display control device [12] and accordingly is connected to the display control device [12], which, as previously mentioned, is connected to a display device [13]. It is also able to monitor the connection for, among other things, available bandwidth, and receive signals from the display control device [12].
The encoding block [15] comprises a number of encoding engines [16]. Here three [16A, 16B, 16C] are shown, and for the purposes of this description they are the Central Processing Unit (CPU) [16A] of the host computing device [11], the Graphics Processing Unit (GPU) [16B] of the host computing device [11], and a purpose-built hardware encoding engine (Hardware Encoder) [16C]. The CPU [16A] and GPU [16B] are programmable processors capable of running many different sets of instructions of which encoding algorithms are a subset, but the Hardware Encoder [16C] may be designed to run only one specific algorithm for a specified purpose. Naturally, in other embodiments there may be any plural number of encoding engines [16], and any number or combination of them may be multi-purpose processors and/or "dumb" engines.
Figure 2 also shows an encoding controller [21] which is connected to the connection controller [17] and the encoding block [15]. This is able to select which encoding engine [16] should be used for encoding data at any given time, depending on signals from the connection controller [17]. It is outlined with a dashed line since for some methods using this arrangement of host computing device [11] it may not be required.
Figure 3 shows a second example of a host computing device [11] arranged to carry out embodiments of the invention. As previously described in Figure 2, the host computing device [11] includes an application [14] running on a processor which is connected to an encoding block [15], which incorporates three encoding engines [16]: in this example, a CPU [16A], a GPU [16B], and a Hardware Encoder [16C]. The encoding block [15] is connected to a connection controller [17] as previously described, and it in turn is connected to a display control device [12] and display device [13].
In this embodiment, the host computing device [11] also includes an encoding controller [31] which is connected to the encoding block [15] and can both receive signals from it and transmit signals to it. It may also have signalling connections from other components of the host computing device [11] which are not shown here.
Figure 4 shows a third example of a host computing device [11], together with an example display control device [12] arranged to carry out embodiments of the invention. As previously described, it includes an application [14] running on a processor which generates frames of display data, but in this case the application [14] is connected to a divider [41] which splits each frame into components. The divider [41] is then connected to the encoding block [15] and arranged to transmit each component to a different encoding engine [16] within the encoding block [15]. As previously described, in this example the engines [16] are a CPU [16A], a GPU [16B], and a Hardware Encoder [16C], but there may be other encoding engines [16] in other combinations in other host computing devices [11].
The encoding engines [16] are connected to a connection controller [17] and arranged to transmit their respective encoded frame components to the connection controller [17] for transmission to the display control device [12].
In this embodiment, the display control device [12] incorporates a decoder [18] which can decode the received encoded components as appropriate, and also a compositor [19] which re-combines the decoded components into a frame for display. Accordingly, the display control device [12] is connected to a display device [13] as previously described.
Figure 5 shows a fourth example of a host computing device [11] and display control device [12] arranged to carry out embodiments of the invention. In this example, the host computing device [11] has multiple applications [14], each of which is running on a processor, which may be the same processor or a different processor from that on which other applications [14] are running, and each of which produces a component of a frame of display data. For example, the first application [14A] may be a component of the operating system which generates a plain background colour, the second application [14B] may be a word-processing application which generates a window mostly comprising text, and the third application [14C] may be a video player. Alternatively, the three applications [14] shown here may be components of a single application such as a video game, in which case the first application [14A] may be the component which generates the background, the second application [14B] may be the component which generates moving objects in the middle distance, and the third application [14C] may be the component which generates small, detailed objects which will appear close to the viewer. In both cases, the components generated by different applications [14] may have different encoding requirements.
The applications [14] are connected to a director [51], such as a multiplexer, which directs the frames from each application [14] to an encoding engine [16] within the encoding block [15]. The director [51] is also shown as having a signalling connection from the encoding block [15], which could be used for load-balancing depending on the use of the encoding engines [16], but this is optional.
As previously described in Figure 4, each encoding engine [16] is connected to the connection controller [17], which is in turn connected to the display control device [12] so that the encoded components can be transmitted to the display control device [12]. The display control device [12] comprises a decoder [18] and a compositor [19] as previously described, so it is able to decode the encoded components, compose them into a single frame, and transmit them to the connected display device [13] for display.
Figure 6 shows two example frames that might be displayed on the display device [13] in order to demonstrate components of frames that could be used in the systems shown in Figures 4 and 5.
Figure 6a shows a desktop image [61a] comprising a plain background [64], which might be a single colour but is here shown hatched with dots. Two application windows [62, 63] are shown "on" the background [64]. The window on the left [62] contains text, and the window on the right [63] contains a moving video, for example a film played from a DVD or the internet. In systems such as those shown in Figures 2, 3, and 4 the entire frame [61a] is generated by a single application [14], which may in practice be a compositor which takes input from multiple applications (in this case, for example, the operating system, a word processing application, and an internet browser). In the system shown in Figure 5, the first application [14A] might be the word processor, the second application [14B] might be the internet browser, and the third application [14C] might be the operating system.
Figure 6a also shows a point of focus [68a]. This is presumed to be the point on the frame [61a] on which a user's eyes are focused and could be determined by eye-tracking techniques or by using some other method of interaction such as a cursor and assuming that the user is looking at the point on the image with which he or she is interacting. The point of focus [68a] can be used in foveal encoding techniques, which ensure that the area around the point of focus [68a] is displayed at as high a quality as possible, since it is the area in which the user is interested and is likely to be seen with the most sensitive part of the eye.
Figure 6b shows an image [61b] generated by an application [14] such as a computer game, in this case a space combat game in which a user flies a spaceship and destroys enemy spaceships. The image [61b] shows a background [67] comprising open space and an enemy space station, together with several enemy spaceships [66] which appear to be between the user and the background [67]. In the foreground, the frame [61b] shows the frame of a cockpit [65b] between the user and the enemy spaceships [66], and finally in the extreme foreground the user sees a head-up display showing game statistics [65a]. Again, in systems such as those shown in Figures 2, 3, and 4 the entire frame [61b] is generated by a single application [14], in this case a computer game application. However, in a system such as that shown in Figure 5 different parts of the image might be generated by different components of the game application: for example, the background [67] might be generated separately to the enemy spaceships [66], since the background [67] is likely to be relatively static while the enemy spaceships [66] are much more mobile relative to the background [67] and each other.
Figure 6b also shows a point of focus [68b] which behaves in much the same way as the point of focus [68a] shown in Figure 6a.
Figure 7 shows an overview process which describes the various embodiments of the invention at a high level.
At Step S71, the application [14] generates a frame [61] of display data, or the applications [14] generate their respective components of a frame [61] of display data. This is then passed to the encoding block [15], via a divider [41] or director [51] if appropriate, and encoded according to any instructions from an encoding controller [21/31] at Step S72. The encoded data is then passed to the connection controller [17].
At Step S73, the appropriate encoded data is transmitted to the display control device [12]. This may mean all the display data that was received from the encoding block [15], or it may mean only the display data received from one encoding engine [16]. In any case, it will be accompanied by control information giving instructions on how the encoded data should be decoded and prepared for display by the display control device [12].
At Step S74, the decoder [18] on the display control device [12] decodes the received encoded data according to the control information received from the host computing device [11]. The display control device [12] may also carry out other processing such as scaling, rotation, or composition of frame components into a frame. Finally, the frame of display data is sent to the display device [13] for display in the conventional way.
Figure 8 shows a more detailed version of one method of the invention, which will be described with reference to the system shown in Figure 2.
At Step S81, the application [14] generates a frame [61] of display data in the conventional way.
This may be part of a stream of regular frames of display data, for example where the application [14] is playing a video, or it may be part of an irregular stream, for example where the application [14] is a desktop application that only generates a new frame [61] of display data where there has actually been a change, for example due to user input. This frame [61] is passed to the encoding block [15] for encoding.
At Step S82, the frame is passed to the CPU [16A], the GPU [16B], and the Hardware Encoder [16C] and all three encoding engines [16] encode the frame according to their programming. For example, the CPU [16A] may run an encoding algorithm on a relatively serial basis allowing more changes to the algorithm used for different parts of the frame, the GPU [16B] may run an encoding algorithm which incorporates a high degree of parallelisation, and the Hardware Encoder [16C] may run an encoding algorithm which is fast and designed to be appropriate for most frames of display data it might encounter but is not very customisable.
Accordingly, the CPU [16A] may output an encoded frame which is high-quality but relatively bulky and which may be complex to decode, the GPU [16B] may output an encoded frame which is relatively well compressed but which has lost a greater degree of detail than the version of the frame encoded by the CPU [16A], and the Hardware Encoder [16C] may output an encoded frame which is very well compressed but poor quality. These three encoded frames are passed to the connection controller [17].
At Step S83, the connection controller [17] determines the most appropriate encoded frame to transmit. This determination may be based on the bandwidth available in the connection such that, for example, the connection controller [17] selects the best-quality encoded frame that can be transmitted across the available bandwidth in the connection before it will be required for decoding. Alternatively, the determination may be based on the processing power available at the display control device [12] such that, for example, the connection controller [17] selects the best-quality encoded frame that can be transmitted across the connection and decoded before it will be required for display. This will be especially useful where the bandwidth is constant but the decoding time may vary. Furthermore, the determination may be based on the latency required, such that the fastest encoding engine [16] (most likely the Hardware Encoder [16C]) will be selected if it is crucial that there is as little delay as possible between generation and display of the frame, for example in an augmented reality system. Naturally, these heuristics may be combined such that the encoded frame must be both transmitted and decoded and both of these stages in the display pipeline are variable.
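A minimal sketch of this determination is shown below, assuming that frame size is the dominant cost and that rough transmission and decoding rates are known; all names and figures are illustrative rather than prescribed by the method:

```python
def pick_frame_for_deadline(candidates, bandwidth_bytes_per_s, decode_bytes_per_s, deadline_s):
    """Choose the best-quality encoded frame that can be both transmitted and decoded in time.

    `candidates` is a list of (quality_rank, encoded_bytes) tuples, higher rank meaning better quality.
    """
    viable = []
    for quality_rank, data in candidates:
        transmit_time = len(data) / bandwidth_bytes_per_s
        decode_time = len(data) / decode_bytes_per_s
        if transmit_time + decode_time <= deadline_s:
            viable.append((quality_rank, data))
    if not viable:
        # Fall back to the smallest frame if nothing meets the deadline.
        return min(candidates, key=lambda c: len(c[1]))[1]
    return max(viable, key=lambda c: c[0])[1]

# Example with three stand-in encoded frames (CPU best quality but largest, Hardware Encoder smallest).
frames = [(3, b"\x00" * 900_000), (2, b"\x00" * 400_000), (1, b"\x00" * 120_000)]
chosen = pick_frame_for_deadline(frames, bandwidth_bytes_per_s=10e6, decode_bytes_per_s=50e6, deadline_s=1 / 60)
```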
At Step S84, the connection controller [17] on the host computing device [11] transmits the selected frame to the display control device [12] for decoding. It also transmits an indication of which encoding engine [16] encoded the frame, so that the decoder on the display control device [12] can correctly decode the frame. If appropriate, it may also transmit parameters that were used as part of the encoding process, especially where that process is variable as may be the case with the CPU [16A] and GPU [16B], or in some cases with the Hardware Encoder [16C] if, for example, it uses variable starting values for its processing.
The display control device [12] receives the encoded frame and decodes it at Step S85, using an inbuilt decoder which is capable of using the decoding algorithms corresponding to all of the available encoding engines [16], together with the transmitted control information. The decoded frame is then transmitted to the display device [13] for display at Step S86.
Depending on the encoding and decoding algorithms, it may be possible for part of the transmitted frame to be taken from one encoding engine [16] and part from another. For example, where the encoding algorithms used by the CPU [16A] and GPU [16B] both encode the frame in stripes, it may be possible for the connection controller [17] to monitor, for example, the bandwidth of the connection throughout the transmission process and re-evaluate the frame to transmit on a stripe-by-stripe basis. This means that if, for example, the initial bandwidth is high and it is possible to send the frame encoded by the CPU [16A], but after three out of six stripes have been transmitted the bandwidth falls due to interference, the connection controller [17] might be able to transmit the final three stripes from the frame compressed by the GPU [16B], with appropriate control information to indicate the change in the encoding algorithm used. Naturally, this is an example only and encoding can in practice be carried out in tiles, tile groups, or any other such division, and the frame produced by the Hardware Encoder [16C] could also or instead be interleaved as appropriate. However, in this embodiment such a determination is made by the connection controller [17] at the time of transmission, rather than by an encoding controller or any controller earlier in the process.
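The stripe-by-stripe re-evaluation could, as a rough sketch, look like the following; the bandwidth threshold, the measurement callback and the send interface are all assumptions made for the example:

```python
def transmit_stripes(cpu_stripes, gpu_stripes, measure_bandwidth, threshold_bytes_per_s, send):
    """Send each stripe from the CPU-encoded frame while bandwidth holds up,
    switching to the GPU-encoded (better compressed) frame if it drops.

    Each call to `send` includes which engine the stripe came from so that the
    display control device can switch decoding algorithm mid-frame.
    """
    for index, (cpu_stripe, gpu_stripe) in enumerate(zip(cpu_stripes, gpu_stripes)):
        if measure_bandwidth() >= threshold_bytes_per_s:
            send(index, "cpu", cpu_stripe)
        else:
            send(index, "gpu", gpu_stripe)

# Example run with six stripes and a bandwidth reading that collapses half-way through.
readings = iter([20e6, 20e6, 20e6, 4e6, 4e6, 4e6])
log = []
transmit_stripes([b"c"] * 6, [b"g"] * 6, lambda: next(readings), 10e6,
                 lambda i, engine, data: log.append((i, engine)))
# log == [(0, 'cpu'), (1, 'cpu'), (2, 'cpu'), (3, 'gpu'), (4, 'gpu'), (5, 'gpu')]
```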
Figure 9 shows a second example process which could be used in the system shown in Figure 2 or the system shown in Figure 3.
At Step S91a, the application [14] generates a frame [61] of display data as previously described.
Simultaneously (for the purposes of this description; this determination may in practice not be perfectly simultaneous), at Step S91b the encoding controller [21] selects the encoding engine [16] that would be most appropriate based on a signal from the connection controller [17]. This signal is based on a similar measurement to that used to determine the encoded frame to transmit at Step S83 of Figure 8: for example, the bandwidth available in the connection to the display control device [12], the latency requirements, and/or the processing power available on the display control device [12] for decoding. The encoding controller [21] selects the most appropriate encoding engine [16] to use based on this signal and its knowledge of the usual characteristics of the encoding engines [16] available. In the examples already described, this means that it is aware that: * The CPU [16A] generally produces good-quality but bulky encoded frames and its encoding algorithm is relatively slow; * The GPU [16B] generally produces medium-quality, medium-bulk frames at an average latency; and * The Hardware Encoder [16C] generally produces low-quality, small frames relatively quickly. Therefore if, for example, the bandwidth available is low, the encoding controller [21] might determine that the Hardware Encoder [16C] should be used.
In the embodiment shown in Figure 3, the determination at Step S91b may be carried out differently due to the different inputs available to the encoding controller [31]. In this system, the encoding controller [31] selects the most appropriate encoding engine [16] to use based on signals from other parts of the host computing device [11] system. For example, in Figure 3 a signalling connection is shown from the encoding block [15] to the encoding controller [31]. This could carry signalling indicating the current use levels of the encoding engines [16], which could enable the encoding controller [31] to determine which of the encoding engines [16] is least busy and assign the frame [61] to that encoding engine [16]. Alternatively, there may be other inputs. For example, if the Hardware Encoder [16C] produces less heat when in use, the encoding controller [31] could receive input from a thermometer indicating the current temperature of the host computing device [11] and, when that temperature rises above a threshold, assign all encoding to the Hardware Encoder [16C] rather than the CPU [16A] and/or GPU [16B] until the temperature falls again. Furthermore, where some of the encoding engines [16] are multi-use processors, such as a CPU [16A] which also performs other processing for the operation of the host computing device [11], information on such use could be passed to the encoding controller [31] for use in load-balancing. This system could also be used for determination based on required latency such that, for example, the source of the frame is known to the encoding controller [31] and if it comes from one source which requires low latency, such as a gaming application, there is a presumption that the lowest-latency encoding engine [16] should be used (in this example the Hardware Encoder [16C]), while other applications may have greater tolerance for latency, allowing a higher-latency encoding engine [16] to be used.
Finally, the encoding controller [31] could also receive input from the connection controller [17], indicating the available bandwidth and any signals on the capabilities of the display control device [12], in the same way as the connection controller [17] in Figure 2.
The inputs and their appropriate thresholds may be used simply such that, for example, the encoding controller [31] always selects the encoding engine [16] with the shortest queue of data to be encoded. This means that if the previous two frames have been sent to the CPU [16A] and the GPU [16B] respectively, the encoding controller [31] might determine that the current frame [61] should be encoded by the Hardware Encoder [16C].
Alternatively, the inputs could be balanced in order to make the optimal determination possible in the circumstances. For example: * A temperature input indicates that the temperature has risen above a threshold and this creates a presumption that the Hardware Encoder [16C] should be used; * The CPU [16A] is currently busy with the operation of the application [14] and this creates a presumption that one of the other encoding engines [16B, 16C] should be used; * The connection currently has a high bandwidth level and this creates a presumption that the CPU [16A] should be used; * The Hardware Encoder [16C] has a queue of incoming data and this creates a presumption that one of the other encoding engines [16A, 16B] should be used; * There has been damage to one of the cores of the GPU [16B] and this creates a presumption that one of the other encoding engines [16A, 16C] should be used; * The application [14] requires as low a latency as possible and this creates a presumption that the Hardware Encoder [16C] should be used.
These rules could be weighted such that, for example, damage to an encoding engine [16] prohibits that encoding engine [16] from being used and temperature is considered a key factor. This means that the GPU [16B] cannot be used and the temperature means that if possible the Hardware Encoder [16C] should be used, so the encoding controller [31] determines that the Hardware Encoder [16C] should encode the frame [61].
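One possible, and purely illustrative, way to express such weighted presumptions is as a score per engine with a hard exclusion for a damaged engine; the signal names and weights below are invented for the example and are not part of the claimed method:

```python
def choose_engine(signals, weights):
    """Score each engine from the active presumptions and pick the highest-scoring one.

    `signals` maps a condition name to the engine it favours (or, for "damaged_" conditions,
    excludes); `weights` gives the relative importance of each condition. A damaged engine
    is excluded outright rather than merely penalised.
    """
    engines = {"cpu", "gpu", "hw"}
    excluded = {signals[name] for name in signals if name.startswith("damaged_")}
    scores = {engine: 0.0 for engine in engines - excluded}
    for condition, favoured in signals.items():
        if condition.startswith("damaged_") or favoured not in scores:
            continue
        scores[favoured] += weights.get(condition, 1.0)
    return max(scores, key=scores.get)

signals = {
    "temperature_high": "hw",    # presumption: the hardware encoder runs cooler
    "cpu_busy_with_app": "gpu",
    "high_bandwidth": "cpu",
    "hw_queue_long": "cpu",
    "damaged_gpu_core": "gpu",   # excludes the GPU entirely
    "low_latency_needed": "hw",
}
weights = {"temperature_high": 3.0, "low_latency_needed": 3.0}  # temperature and latency dominate
engine = choose_engine(signals, weights)  # -> "hw" under these example inputs
```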
In any case, at Step S92 the application [14] passes the generated frame [61] to the encoding block [15], and it is received by the encoding engine [16] selected by the encoding controller [31]: in the above examples, the Hardware Encoder [16C]. This may mean that the application [14] stores the frame [61] in a common frame buffer and the Hardware Encoder [16C] fetches it when instructed to do so by the encoding controller [31], or it may mean that the encoding controller [31] in fact sends a signal to the application [14] indicating to which encoding engine [16] it should transmit the frame.
At Step S93, the selected encoding engine [16] encodes the received frame [61]. The other encoding engines [16] are idle or may be used for other functionality. For example, where the Hardware Encoder [16C] is used for encoding display data, the CPU [16A] and GPU [16B] -as programmable processors -may be used for other processing required by the host computing device [11]. The encoding engine [16] then passes the encoded frame to the connection controller [17].
At Step S94 the connection controller [17] transmits the encoded frame to the display control device [12]. Unlike in the process described in Figure 8, it does not have to make any determination as to which data to transmit since it has only received one frame. It may be aware of which encoding engine [16] provided the data and attach control information as previously described, or the control information may be attached by the encoding engine [16] as part of the encoding process and the connection controller [17] may have no knowledge of which encoding engine [16] encoded the data.
The display control device [12] receives the encoded frame and decodes it as previously described at Step S95. It then passes the decoded frame to the display device [13] for display in the conventional way at Step S96.
If it is possible for the application [14] to provide pre-knowledge of the content of the frame [61] to the encoding controller [21/31] in Figure 2 or Figure 3, this input could also be used. For example, in the system shown in Figure 2, if the connection controller [17] indicates to the encoding controller [21] that there is a large bandwidth available, this might create a presumption that the CPU [16A] should be used, but if the application [14] indicates that the frame [61] it is generating is ideally suited to the algorithm used by the Hardware Encoder [16C] and therefore can be compressed significantly with very little loss of quality, the encoding controller [21] might instead select the Hardware Encoder [16C]. A similar determination could be used by the encoding controller [31] in Figure 3, if its heuristics indicated that the CPU [16A] should be used but the application [14] indicated that the frame would be best suited for the Hardware Encoder [16C].
The method described in Figure 9 has advantages over the method described in Figure 8 because it results in fewer wasted resources: only the frame that is to be transmitted is encoded, so the other encoding engines [16] do not waste time and power encoding data that will not be used.
Figure 10 shows a third example process which could be used in the systems shown in Figure 4 and Figure 5. In this case, the frame [61] is divided into areas or planes, which can then be encoded separately.
At Step S101, the frame of display data [61] is generated. In the system shown in Figure 4, an entire frame [61] is generated by a single application [14] as previously described. In the system shown in Figure 5, different components of the frame [61] are generated by different applications [14] (or different application components as previously mentioned). The frame [61] is then passed to a divider [41], or the components to a director [51].
Step S102 is only carried out in a system with a divider [41] like that shown in Figure 4. It is therefore shown outlined with dashes in the Figure. The divider [41] receives the frame [61] generated by the application [14] and divides it into components. These may be based on area, such as different application windows or foveal and annular peripheral regions, or they may be planes based on depth. The determination of depth may be based on information received from the application [14] based on the frame generation process, but in any case, the divider [41] splits the frame [61] into components which may have different encoding requirements.
For example, in the frame [61a] shown in Figure 6a, the divider [41] may divide the frame [61a] by area such that the application window containing text [62] is one area (Area1), the application window containing the video [63] is a second area (Area2) and the background [64] is a third area (Area3). Alternatively, it may determine that a circle centring on the point of focus [68a] with a radius of ten units is one area (Fovea), an annular region beginning at the edge of Fovea and extending a further thirty units is a second area (Periphery1) and the remainder of the frame [61a] is a third area (Periphery2).
In the frame [61b] shown in Figure 6b, the divider [41] may receive information indicating the depths of the objects such that the background [67] and the enemy space station have depth value DepthD, the enemy spaceships [66] have depth value DepthC, the cockpit surroundings [65b] have depth value DepthB, and the head-up display [65a] has depth value DepthA and determine that a first plane (Plane I) comprises all objects with depth values DepthB or DepthA [65], a second plane (Plane2) comprises all objects with depth value DepthC [66], and a third plane (Plane3) comprises all objects with depth value DepthD [67].
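As an illustrative sketch of this plane-based division (the object list, depth values and plane boundaries below are invented for the example), objects could be grouped into planes by depth as follows:

```python
def group_into_planes(objects, plane_boundaries):
    """Group drawable objects into planes according to their depth value.

    `plane_boundaries` lists the maximum depth belonging to each plane, in increasing
    order; anything deeper falls into the last plane.
    """
    planes = [[] for _ in range(len(plane_boundaries) + 1)]
    for obj in objects:
        for plane_index, boundary in enumerate(plane_boundaries):
            if obj["depth"] <= boundary:
                planes[plane_index].append(obj)
                break
        else:
            planes[-1].append(obj)
    return planes

# Example mirroring Figure 6b: HUD and cockpit nearest, spaceships in the middle, background far away.
objects = [
    {"name": "hud", "depth": 0},
    {"name": "cockpit", "depth": 1},
    {"name": "spaceships", "depth": 5},
    {"name": "background", "depth": 9},
]
plane1, plane2, plane3 = group_into_planes(objects, plane_boundaries=[1, 5])
```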
Step S102 is not carried out in a system such as that shown in Figure 5 because in that system the frame [61] has already been divided. In that case, Area1, Fovea, or Plane1 (collectively, Component1) might be generated by the first application [14A], Area2, Periphery1, or Plane2 (collectively, Component2) might be generated by the second application [14B], and Area3, Periphery2, or Plane3 (collectively, Component3) might be generated by the third application [14C].
At Step S103, the components are sent to the appropriate encoding engines [16]. The mechanism by which the divider [41] or director [51] determines which component(s) to send to a particular encoding engine [16] depends on the characteristics of the components and the encoding engines [16], and the divider [41] or director [51] may also act in a similar way to an encoding controller [21/31] as previously described, determining which encoding engine [16] should be used based on external inputs and the current loads of the encoding engines [16], as well as their characteristics.
For example, in the above examples all three components [62/65] that are referred to as Component1 must be displayed at high quality but do not comprise much data: the text in Area1 [62] may be easy to encode since it is monochrome, Plane1 [65] comprises a small amount of graphic data, and Fovea is a relatively small area on the screen. It may therefore be appropriate to send Component1 to the CPU [16A] if the CPU [16A] has the characteristics earlier described: little data loss, but relatively poor compression ratios. Similar determinations could be made to send Component2 (which may consist of moving data for which slightly lowered quality could be less noticeable but speed of transmission is key, in the case of Area2 [63] and Plane2 [66], or may consist of data which will appear further towards the edge of the user's vision in the case of Periphery1) to the GPU [16B] and Component3 (which consists of relatively simple data which may appear to be in the far distance and therefore has little requirement for fine detail) to the Hardware Encoder [16C].
However, additional considerations such as which encoding engines [16] are currently under most load, tolerance for latency, or the bandwidth available and therefore requirement for fast encoding and/or low volume may change these assumptions and result in, for example, Component2 also being sent to the Hardware Encoder [16C].
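A small sketch of such an assignment follows, assuming a content-based preference per component and a simple queue-depth fallback for load balancing; the component and engine names are illustrative only:

```python
def assign_components(components, preferred_engine, queue_depths, max_queue=4):
    """Assign each frame component to an engine, overriding the content-based
    preference when that engine's queue is already too long.

    `preferred_engine` maps a component name to the engine best suited to its content;
    `queue_depths` tracks how many items each engine currently has waiting.
    """
    assignment = {}
    for component in components:
        engine = preferred_engine[component]
        if queue_depths[engine] >= max_queue:
            # Fall back to the least-loaded engine for load balancing.
            engine = min(queue_depths, key=queue_depths.get)
        assignment[component] = engine
        queue_depths[engine] += 1
    return assignment

preferred = {"component1": "cpu", "component2": "gpu", "component3": "hw"}
queues = {"cpu": 0, "gpu": 5, "hw": 1}   # the GPU is already saturated in this example
plan = assign_components(["component1", "component2", "component3"], preferred, queues)
# plan == {'component1': 'cpu', 'component2': 'cpu', 'component3': 'hw'}
```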
In any case, at Step S104 each encoding engine [16] encodes the component(s) it has been sent in accordance with its operation and may attach control information to it/them as previously described. It then passes the encoded data to the connection controller [17]. If control information has not been attached by the encoding engines [16], the connection controller [17] may attach such information.
The connection controller [17] may also have received an indication from the divider [41] or director [51] of which encoding engine [16] is encoding which component and may attach an indication of which component is which to the encoded components, or this information could have been passed directly from the divider [41] or director [51]. Similarly, information on how to compose or re-compose the final frame may be passed to the connection controller [17] from the application(s) [14] or the divider [41] or director [51]. All of this information is included in the control information transmitted along with the encoded components at Step S105.
At Step S106, the decoder [18] on the display control device [12] receives the encoded components and decodes each one using the details of the encoding used that were contained in its respective control information. The decoder [18] then passes the decoded components to the compositor [19] on the display control device [12], together with the instructions on how to composite the frame contained in the control information. The compositor [19] then composites or re-composites the components into at least an approximation of the frame [61] at Step S107. Finally, it passes the frame to the display device [13] for display in the conventional way at Step S108.
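For illustration, recomposition from placement information carried in the control information could be sketched as below; the flat pixel-list frame model and the field names are simplifications made only for the example:

```python
def composite(frame_width, frame_height, decoded_components):
    """Re-assemble decoded components into a single frame using the placement carried
    in the control information (here, simple x/y offsets).

    The frame is modelled as a flat list of pixel values for brevity.
    """
    frame = [0] * (frame_width * frame_height)
    for comp in decoded_components:
        for row in range(comp["height"]):
            for col in range(comp["width"]):
                x, y = comp["x"] + col, comp["y"] + row
                frame[y * frame_width + x] = comp["pixels"][row * comp["width"] + col]
    return frame

# Example: a 4x4 frame composed from a background component and a 2x2 foveal patch.
background = {"x": 0, "y": 0, "width": 4, "height": 4, "pixels": [1] * 16}
fovea = {"x": 1, "y": 1, "width": 2, "height": 2, "pixels": [9] * 4}
frame = composite(4, 4, [background, fovea])  # the foveal patch overwrites the centre of the background
```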
Fig. 11 is a block diagram of a computer system [600] suitable for implementing one or more embodiments of the present disclosure, including the host computing device [11], and the display control device [12]. As mentioned above, in various implementations, the host computing device [11] may include any computing device capable of generating display data, such as a mobile cellular phone, personal computer (PC), laptop, etc. adapted for wireless communication, and the display control device [12] may include a computing device, such as a wearable computing device, adapted for wireless communication with the host computing device [11]. Thus, it should be appreciated that the devices [11] and [12] may be implemented as the computer system [600] in a manner as follows.
The computer system [600] includes a bus [612] or other communication mechanism for communicating information data, signals, and information between various components of the computer system [600]. The components include an input/output (I/O) component [604] that processes a user (i.e., sender, recipient, service provider) action, such as selecting keys from a keypad/keyboard, selecting one or more buttons or links, etc., and sends a corresponding signal to the bus [612]. The I/O component [604] may also include an output component, such as a display [602] and a cursor control [608] (such as a keyboard, keypad, mouse, etc.). The display [602] may be configured to present a login page for logging into a user account or a checkout page for purchasing an item from a merchant. An optional audio input/output component [606] may also be included to allow a user to use voice for inputting information by converting audio signals. The audio I/O component [606] may allow the user to hear audio. A transceiver or network interface [620] transmits and receives signals between the computer system [600] and other devices, such as another user device, a merchant server, or a service provider server via network [622]. In one embodiment, the transmission is wireless, although other transmission mediums and methods may also be suitable. A processor [614], which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on the computer system [600] or transmission to other devices via a communication link [624]. The processor [614] may also control transmission of information, such as cookies or IP addresses, to other devices.
The components of the computer system [600] also include a system memory component [610] (e.g., RAM), a static storage component [616] (e.g., ROM), and/or a disk drive [618] (e.g., a solid-state drive, a hard drive). The computer system [600] performs specific operations by the processor [614] and other components by executing one or more sequences of instructions contained in the system memory component [610]. For example, the processor [614] can perform the display data encoding functionalities described herein.
Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to the processor [614] for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various implementations, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as the system memory component [610], and transmission media includes coaxial cables, copper wire, and fiber-optics, including wires that comprise the bus [612]. In one embodiment, the logic is encoded in a non-transitory computer readable medium. In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications.
Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read.
In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by the computer system [600]. In various other embodiments of the present disclosure, a plurality of computer systems [600] coupled by the communication link [624] to the network (e.g., a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another.
Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
Software in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
The various features and steps described herein may be implemented as systems comprising one or more memories storing various information described herein and one or more processors coupled to the one or more memories and a network, wherein the one or more processors are operable to perform steps as described herein, as a non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform a method comprising steps described herein, and as methods performed by one or more devices, such as a hardware processor, user device, server, and other devices described herein.

Claims (17)

  1. A method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; selecting an encoding engine from among the at least two different encoding engines to use for encoding at least part of the frame of image data, wherein the selecting is based on heuristics related to the image data to be encoded, and/or a performance of the host computing device, the data connection and/or the display control device; encoding the at least part of the frame of image data using the selected encoding engine; and sending the encoded at least part of the frame of image data for transmittal to the display control device over the data connection.
  2. A method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; encoding at least part of the frame of image data using the at least two different encoding engines to produce at least two versions of the at least part of the frame of image data encoded differently; selecting one of the two versions based on heuristics related to the encoded image data, and/or a performance of the host computing device, the data connection and/or the display control device; and sending the encoded at least part of the frame of image data of the selected version for transmittal to the display control device over the data connection.
  3. A method according to claim 1, wherein a whole of the frame of image data is encoded using the selected encoding engine.
  4. A method according to claim 2, wherein a whole of the frame of image data is encoded using the at least two different encoding engines.
  5. A method according to claim 1, wherein the frame of image data comprises a plurality of parts, and at least two different parts are encoded using the at least two different encoding engines.
  6. A method according to claim 2, wherein the frame of image data comprises a plurality of parts, and at least two different parts are encoded using both of the at least two different encoding engines.
  7. A method according to either claim 5 or claim 6, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that there is a central foveal area and an annular peripheral area.
  8. A method according to either claim 5 or claim 6, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that different areas have different types of image data.
  9. A method according to either claim 5 or claim 6, wherein the plurality of parts of the frame of image data comprise different planes of the frame, such that different planes have image data perceived at different depths by a user.
  10. A method according to any preceding claim, wherein the selecting is based on heuristics relating to a type of image data forming the at least part of the frame of image data and the selecting is based on the capabilities of the encoding engines for encoding different types of image data.
  11. A method according to claim 10, wherein if the type of image data is photographic, selecting is based on encoding most suited to photographic image data, and if the image data is textual, selecting is based on encoding most suited to textual image data.
  12. A method according to any preceding claim, wherein the selecting is based on heuristics including any one or more of: bandwidth of the data connection; current use and availability of resources on the host computing device; tolerance for latency in the host computing device, data connection and display control device; and current use and availability of resources on the display control device.
  13. A method according to any preceding claim, wherein control information is sent, together with the encoded at least part of the frame of image data, for transmittal to the display control device over the data connection.
  14. A method according to claim 13, wherein the control information includes information indicating which of the encoding engines was used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
  15. A method according to either claim 13 or claim 14, wherein the control information includes information indicating parameters used by the encoding engine used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
  16. A host computing device comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the host computing device to perform operations comprising the steps of any one of the preceding claims.
  17. A system comprising a host computing device according to claim 16, a display control device, and a data connection therebetween.
  18. A system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the system to perform operations comprising the steps of any one of claims 1 to 15.
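For illustration only, the two selection approaches recited in claims 1 and 2, together with the heuristics listed in claim 12, could be sketched in Python as follows. The engine interface (content_type, good_for_text, good_for_photo, cpu_cost, expected_bits, encode), the load thresholds and the byte budget are all assumptions made for this non-limiting example and are not part of the claims.

def select_engine_by_heuristics(part, engines, link_bandwidth, host_load, device_load):
    """Claim 1 style: choose one engine up-front from heuristics about the image
    data and the performance of the host, the data connection and the display control device."""
    if part.content_type == "text":
        candidates = [e for e in engines if e.good_for_text]
    else:
        candidates = [e for e in engines if e.good_for_photo]
    # Under heavy load on either end, prefer the cheapest engine; otherwise
    # prefer the engine with the shortest expected transmission time over the link.
    if host_load > 0.8 or device_load > 0.8:
        return min(candidates or engines, key=lambda e: e.cpu_cost)
    return min(candidates or engines, key=lambda e: e.expected_bits(part) / link_bandwidth)


def select_version_after_encoding(part, engines, max_bytes):
    """Claim 2 style: encode the same part with every engine, then keep the version
    that best matches the heuristics (here: the smallest output within a byte budget)."""
    versions = [(engine, engine.encode(part)) for engine in engines]
    within_budget = [v for v in versions if len(v[1]) <= max_bytes] or versions
    engine_used, payload = min(within_budget, key=lambda v: len(v[1]))
    return engine_used, payload   # both would be reported in the control information

The first function commits to one engine before encoding; the second spends the extra work of encoding with every engine and keeps the version that best satisfies the heuristics, reflecting the trade-off between the two independent claims.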
GB1902715.0A 2019-02-28 2019-02-28 Image data encoding Active GB2581822B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2300088.8A GB2611668B (en) 2019-02-28 2019-02-28 Image data encoding
GB1902715.0A GB2581822B (en) 2019-02-28 2019-02-28 Image data encoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1902715.0A GB2581822B (en) 2019-02-28 2019-02-28 Image data encoding

Publications (3)

Publication Number Publication Date
GB201902715D0 GB201902715D0 (en) 2019-04-17
GB2581822A true GB2581822A (en) 2020-09-02
GB2581822B GB2581822B (en) 2023-03-15

Family

ID=66377296

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1902715.0A Active GB2581822B (en) 2019-02-28 2019-02-28 Image data encoding

Country Status (1)

Country Link
GB (1) GB2581822B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000048400A1 (en) * 1999-02-11 2000-08-17 Loudeye Technologies, Inc. Distributed production system for digitally encoding information
US20020101367A1 (en) * 1999-01-29 2002-08-01 Interactive Silicon, Inc. System and method for generating optimally compressed data from a plurality of data compression/decompression engines implementing different data compression algorithms
US6778291B1 (en) * 2000-06-12 2004-08-17 Hewlett-Packard Development Company, L.P. Fast page analyzer for proper selection of compression engine for rendered data
US20110082842A1 (en) * 2009-10-06 2011-04-07 International Business Machines Corporation Data compression algorithm selection and tiering
US8036265B1 (en) * 2001-09-26 2011-10-11 Interact Devices System and method for communicating media signals
WO2012100071A1 (en) * 2011-01-20 2012-07-26 Openwave Systems Inc. A method and a transcoding broker for managing the delivery of content over a content network
US20140010289A1 (en) * 2012-07-09 2014-01-09 Derek Lukasik Video stream
US20160314140A1 (en) * 2013-05-22 2016-10-27 Amazon Technologies, Inc. Efficient data compression and analysis as a service
WO2018143992A1 (en) * 2017-02-02 2018-08-09 Hewlett-Packard Development Company, L.P. Video compression


Also Published As

Publication number Publication date
GB201902715D0 (en) 2019-04-17
GB2581822B (en) 2023-03-15

Similar Documents

Publication Publication Date Title
TWI528787B (en) Techniques for managing video streaming
US9239661B2 (en) Methods and apparatus for displaying images on a head mounted display
WO2020008284A1 (en) Virtual reality media content generation in multi-layer structure based on depth of field
US11662975B2 (en) Method and apparatus for teleconference
WO2015170410A1 (en) Image playback device, display device, and transmission device
EP4008103B1 (en) Parameters for overlay handling for immersive teleconferencing and telepresence for remote terminals
CN114641998A (en) Method and apparatus for machine video encoding
CN114268626A (en) Window processing system, method and device
CN115606170A (en) Multi-grouping for immersive teleconferencing and telepresence
GB2581822A (en) Image data encoding
US10983746B2 (en) Generating display data
GB2611668A (en) Image data encoding
JP6781445B1 (en) Information processing method
CN115486058A (en) Techniques to signal multiple audio mixing gains for teleconferencing and telepresence of remote terminals
CN110072108B (en) Image compression method and device
KR20160131829A (en) System for cloud streaming service, method of image cloud streaming service using alpha value of image type and apparatus for the same
US10025550B2 (en) Fast keyboard for screen mirroring
US12044845B2 (en) Towards subsiding motion sickness for viewport sharing for teleconferencing and telepresence for remote terminals
US20230164330A1 (en) Data codec method and apparatus
US20220391167A1 (en) Adaptive audio delivery and rendering
KR20170022599A (en) System for cloud streaming service, method of image cloud streaming service using reduction of color bit and apparatus for the same
WO2024063928A1 (en) Multi-layer foveated streaming
JP2023502789A (en) Immersive teleconferencing and telepresence interactive overlay processing for remote terminals
CN115668369A (en) Audio processing method and device
CN116324683A (en) Immersive media interoperability