GB2611668A - Image data encoding - Google Patents

Image data encoding

Info

Publication number
GB2611668A
GB2611668A
Authority
GB
United Kingdom
Prior art keywords
encoding
frame
image data
encoded
control device
Prior art date
Legal status
Granted
Application number
GB2300088.8A
Other versions
GB2611668B (en)
GB202300088D0 (en)
Inventor
Nemouchi Yazid
Stroba Szymon
Kunc Szymon
Current Assignee
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date
Filing date
Publication date
Application filed by DisplayLink UK Ltd
Priority to GB2300088.8A
Priority claimed from GB1902715.0A (GB2581822B)
Publication of GB202300088D0
Publication of GB2611668A
Application granted
Publication of GB2611668B
Status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N21/2355Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages
    • H04N21/2358Processing of additional data, e.g. scrambling of additional data or processing content descriptors involving reformatting operations of additional data, e.g. HTML pages for generating different versions, e.g. for different recipient devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363Graphics controllers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements the reformatting operation being performed only on part of the stream, e.g. a region of the image or a time segment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2402Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N21/2405Monitoring of the internal components or processes of the server, e.g. server load
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25825Management of client data involving client display capabilities, e.g. screen resolution of a mobile phone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25833Management of client data involving client hardware characteristics, e.g. manufacturer, processing or storage capabilities
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/02Handling of images in compressed format, e.g. JPEG, MPEG
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00Solving problems of bandwidth in display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2352/00Parallel handling of streams of display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/08Power processing, i.e. workload management for processors involved in display operations, such as CPUs or GPUs

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

A method for encoding image data in a host computing device (Figure 2, 11) having at least two different encoding engines (Figure 2, 16A-16C), the encoded image data being transmitted to a display control device (Figure 2, 12) where it is decoded for display. The method includes receiving at least a frame of image data and encoding at least part of the frame of image data using the different encoding engines to produce at least two versions of the frame/part frame of image data which have been encoded differently, S82. One of the two versions is selected, S83, based on heuristics related to one or more of: a) the encoded image data, b) a performance of the host computing device, c) the data connection, or d) the display control device. The encoded at least part of the frame of image data of the selected version is transmitted to the display control device, S84. The encoding engines may be a CPU encoding engine, a GPU encoding engine, or a hardware encoder.

Description

Image Data Encoding
Background
Image data is often transmitted between a device where it is generated and a device on which it is displayed. Often, the image data is transmitted over a bandwidth-limited connection, and it is therefore often compressed (or encoded) prior to transmission in order to minimise the bandwidth required on the connection. The compression is carried out using a compression (or encoding) algorithm, which may be run in a dedicated encoding engine or as one of many programs run on a multi-purpose programmable processor.
As the graphics capabilities of computing devices become more advanced and encoding of display data becomes more widespread, it is sometimes the case that a computing device may have many processors or encoding engines which can carry out encoding. Current systems do not take full advantage of this capability, as they are commonly arranged to use only one of the processors or engines capable of carrying out encoding. This results in a loss of efficiency.
The invention seeks to solve or at least mitigate this problem.
Summary
Accordingly, in a first aspect, the invention provides a method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; selecting an encoding engine from among the at least two different encoding engines to use for encoding at least part of the frame of image data, wherein the selecting is based on heuristics related to the image data to be encoded, and/or a performance of the host computing device, the data connection and/or the display control device; encoding the at least part of the frame of image data using the selected encoding engine; and sending the encoded at least part of the frame of image data for transmittal to the display control device over the data connection.
According to a second aspect, the invention provides a method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; encoding at least part of the frame of image data using the at least two different encoding engines to produce at least two versions of the at least part of the frame of image data encoded differently; selecting one of the two versions based on heuristics related to the encoded image data, and/or a performance of the host computing device, the data connection and/or the display control device; and sending the encoded at least part of the frame of image data of the selected version for transmittal to the display control device over the data connection.
In one embodiment, a whole of the frame of image data is encoded using the selected encoding engine. Preferably, a whole of the frame of image data is encoded using the at least two different encoding engines.
In one embodiment, the frame of image data comprises a plurality of parts, and at least two different parts are encoded using the at least two different encoding engines. Preferably, the frame of image data comprises a plurality of parts, and at least two different parts are encoded using both of the at least two different encoding engines. The plurality of parts of the frame of image data may comprise different areas of the frame, such that there is a central foveal area and an annular peripheral area. In an embodiment, the plurality of parts of the frame of image data comprise different areas of the frame, such that different areas have different types of image data. In an embodiment, the plurality of parts of the frame of image data may comprise different planes of the frame, such that different planes have image data perceived at different depths by a user.
The selecting may be based on heuristics relating to a type of image data forming the at least part of the frame of image data, and the selecting is based on the capabilities of the encoding engines for encoding different types of image data. Preferably, if the type of image data is photographic, selecting is based on encoding most suited to photographic image data, and if the image data is textual, selecting is based on encoding most suited to textual image data. The selecting may be based on heuristics including any one or more of: bandwidth of the data connection; current use and availability of resources on the host computing device; tolerance for latency in the host computing device, data connection and display control device; and current use and availability of resources on the display control device. In an embodiment, control information is sent, together with the encoded at least part of the frame of image data, for transmittal to the display control device over the data connection.
The control information preferably includes information indicating which of the encoding engines was used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection. The control information may include information indicating parameters used by the encoding engine used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
According to a third aspect, the invention provides a host computing device comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the host computing device to perform operations as described above.
In a fourth aspect, there is provided a system comprising a host computing device as described above, a display control device, and a data connection therebetween.
In another aspect, there is provided a system comprising: a non-transitory memory storing instructions and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the system to perform operations as described above.
According to a further aspect, there is provided a method for improving the efficiency of display data encoding in a host computing device that has two or more encoding engines and is connected to a display control device via a data connection, comprising: 1. An application on the host computing device generating an image for display on a display panel; 2. An encoding controller determining the most appropriate encoding engine to use, based on heuristics connected to the nature of the image and/or the performance of the host computing device, the data connection, and/or the display control device; 3. The encoding controller instructing the appropriate encoding engine to encode the data; 4. The encoding engine encoding the data; 5. The host computing device transmitting the encoded data to the display control device together with control information; 6. The display control device decoding the data as appropriate; 7. The display control device displaying the decoded data on a display panel.
The heuristics may be based on any one or more of: the bandwidth of the data connection; the current use and availability of resources on the host computing device; tolerance for latency in the system; or the use and availability of resources on the display control device. The heuristics may also be based on the display data to be encoded such that, for example, if a frame of display data consists of a photographic image it is encoded by one encoding engine with characteristics most suited to photographic images, but if it consists mostly of text it is encoded by a second encoding engine with characteristics most suited to text.
The control information transmitted with the encoded display data may comprise a flag or other signal indicating which encoding engine was used so that the display control device can use an appropriate decoding engine or algorithm. It may also include information such as any parameters used by the encoding engine such as quantisation level or number of passes of a Haar transform.
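One plausible shape for such control information is sketched below in Python (a minimal sketch: the field names and the length-prefixed wire layout are illustrative assumptions rather than anything the text specifies).

    import json

    def make_control_info(engine_id, quantisation_level=None, haar_passes=None):
        # Per-frame control information: a flag naming the encoding engine
        # used, plus any parameters the decoder needs, such as the
        # quantisation level or the number of passes of a Haar transform.
        info = {"engine": engine_id}
        if quantisation_level is not None:
            info["quantisation_level"] = quantisation_level
        if haar_passes is not None:
            info["haar_passes"] = haar_passes
        return json.dumps(info).encode("utf-8")

    def frame_packet(control_info, encoded_frame):
        # One of many possible layouts: a 2-byte header length, the control
        # information, then the encoded payload.
        return len(control_info).to_bytes(2, "big") + control_info + encoded_frame

    packet = frame_packet(make_control_info("gpu", quantisation_level=4), b"...")

The display control device would read the header first, choose the matching decoding algorithm, and then decode the payload.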
Alternatively, there may be a method of encoding using two or more encoding engines, comprising: 1. An application on the host computing device generating a frame of display data for display on a display panel; 2. The frame of display data being encoded by all available encoding engines; 3. A transmission controller selecting the encoded frame most suitable for transmission, based on heuristics connected to the performance of the data connection and/or the display control device; 4. The host computing device transmitting the selected encoded display data to the display control device together with control information; 5. The display control device decoding the data as appropriate; 6. The display control device displaying the decoded data on a display panel.
The heuristics used in this method may also be based on the bandwidth of the data connection, latency tolerance in the system, and/or the current use and availability of resources on the display control device. It has the benefit over the first version of the method that the decision as to which encoding method should be used is made closer to the time of transmission, reducing the chance that circumstances will change during the time required for encoding. However, the use of all available encoding engines when only one encoded frame of display data is to actually be transmitted may result in wasted time and processing power.
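The difference between the two methods is where the heuristic decision sits in the pipeline, which the following minimal Python sketch illustrates; the engine objects and choosing functions are hypothetical stand-ins for the components described above.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Engine:
        name: str
        encode: Callable[[bytes], bytes]  # stands in for a CPU/GPU/hardware encoder

    def select_then_encode(frame, engines, choose_engine):
        # First method: heuristics choose an engine up front; encode once.
        engine = choose_engine(engines, frame)
        return engine.name, engine.encode(frame)

    def encode_then_select(frame, engines, choose_version):
        # Second method: every engine encodes; heuristics then pick one output,
        # closer to transmission time but with redundant encoding work.
        versions = {e.name: e.encode(frame) for e in engines}
        chosen = choose_version(versions)
        return chosen, versions[chosen]

    # Toy demo: "encoders" that copy or halve the data; pick the smallest output.
    engines = [Engine("cpu", lambda f: f), Engine("hw", lambda f: f[: len(f) // 2])]
    name, data = encode_then_select(b"\x00" * 64, engines,
                                    lambda v: min(v, key=lambda k: len(v[k])))
    print(name, len(data))  # prints: hw 32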
As previously mentioned, the control information transmitted with the encoded display data may comprise a flag or other signal indicating which encoding engine was selected. It may also include any parameters used by that encoding engine.
Alternatively, there may be a method of encoding using two or more encoding engines, comprising: 1. One or more applications on the host computing device generating components of a frame of display data for display on a display panel; 2. The components being encoded by separate encoding engines, selected based on heuristics connected to the capabilities of the encoding engines and/or the nature of the components; 3. The encoded components being transmitted to the display control device together with control information; 4. The display control device decoding the components as appropriate; 5. The display control device combining the components into a frame of display data; 6. The display control device displaying the decoded data on a display panel.
The components may be generated by the division of a single frame produced by an application, or they may be generated and encoded separately and only combined into a single frame at the display control device. In the first case, the frame may be divided by area, for example into a central foveal area and an annular peripheral area, or in a second example into an area which contains text and an area which contains images. Alternatively, the frame may be divided by plane such that the planes are perceived at different depths: for example, a first plane may comprise a desktop background and a second plane may comprise windows shown "on" the desktop background. Similarly, and potentially more usefully, in a computer-generated image or composite image a first plane might comprise parts of the image which are perceived as being in the far distance, such as the background of a world, while a second plane might comprise detailed objects to be viewed close up.
Accordingly, the heuristics for selection of which component of a frame should be encoded by a particular encoding engine may depend on the capabilities of the encoding engines -for example, one may be arranged to be more suitable for encoding text than another, in which case it would be beneficial to use the first encoding engine to encode a component that consists mostly of text.
In this case, the control information may indicate which encoding engine was used to encode each component, together with information on how the components should be combined by the display control device in order to generate the final frame for display. As previously mentioned, it may also include parameters used by each encoding engine.
These methods take advantage of the fact that it is becoming increasingly common for a host computing device to incorporate multiple processors and hardware engines that can be used for encoding display data. They allow the best possible encoding to be used in different circumstances, for example by improving load balancing and allowing different algorithms to be used for data with different requirements. A further example of such requirements is a case where one area of a frame is content-protected and therefore should be encrypted as well as encoded, but the remainder does not need such protection.
Brief Description of the Drawings
Figure 1 shows a basic block diagram of a display system; Figure 2 shows a more detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 3 shows a second detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 4 shows a third detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 5 shows a fourth detailed block diagram of a display system arranged to carry out some methods of the invention; Figure 6 shows two example frames of display data; Figure 7 shows a high-level overview of the methods of the invention; Figure 8 shows a more detailed example process of some methods of the invention; Figure 9 shows a second example process of some methods of the invention; Figure 10 shows a third example process of some methods of the invention; and Figure 11 is a block diagram of a computer system suitable for implementing one or more embodiments of the present disclosure.
Detailed Description of the Drawings
Figure 1 shows a display system comprising a host computing device [11] connected to a display control device [12] over a limited-bandwidth connection. The connection may be wired or wireless and may be over a network connection, including the internet. The host computing device [11] may be any computing device capable of generating display data, including a mobile device such as a smartphone or tablet, a static computing device, or a games console. The display control device [12] is in turn connected to a display device [13], which may be a single display panel as shown here or may be any other suitable display device including a projector, video wall, or virtual-reality headset.
The display control device [12] and the display device [13] may be co-located so that they share a single casing and appear to be a single device. For example, a virtual-reality headset may incorporate the workings of both the display panel [13] and the display control device [12]. Alternatively, the functionality of any of the devices may be split over several devices, for example by the connection of an adapter.
Figure 2 shows a first example of a host computing device [11] arranged to carry out embodiments of the invention. The host computing device [11] includes an application [14] running on a processor which generates frames of display data and is connected to an encoding block [15], which is in turn connected to a connection controller [17]. The connection controller [17] controls the connection to the display control device [12] and accordingly is connected to the display control device [12], which, as previously mentioned, is connected to a display device [13]. It is also able to monitor the connection for, among other things, available bandwidth, and receive signals from the display control device [12].
The encoding block [15] comprises a number of encoding engines [16]. Here three [16A, 16B, 16C] are shown, and for the purposes of this description they are the Central Processing Unit (CPU) [16A] of the host computing device [11], the Graphics Processing Unit (GPU) [16B] of the host computing device [11], and a purpose-built hardware encoding engine (Hardware Encoder) [16C]. The CPU [16A] and GPU [16B] are programmable processors capable of running many different sets of instructions, of which encoding algorithms are a subset, but the Hardware Encoder [16C] may be designed to run only one specific algorithm for a specified purpose. Naturally, in other embodiments there may be any plural number of encoding engines [16], and any number or combination of them may be multi-purpose processors and/or "dumb" engines.
Figure 2 also shows an encoding controller [21] which is connected to the connection controller [17] and the encoding block [15]. This is able to select which encoding engine [16] should be used for encoding data at any given time, depending on signals from the connection controller [17]. It is outlined with a dashed line since for some methods using this arrangement of host computing device [11] it may not be required.
Figure 3 shows a second example of a host computing device [11] arranged to carry out embodiments of the invention. As previously described in Figure 2, the host computing device [11] includes an application [14] running on a processor which is connected to an encoding block [15], which incorporates three encoding engines [16]: in this example, a CPU [16A], a GPU [16B], and a Hardware Encoder [16C]. The encoding block [15] is connected to a connection controller [17] as previously described, and it in turn is connected to a display control device [12] and display device [13].
In this embodiment, the host computing device [11] also includes an encoding controller [31] which is connected to the encoding block [15] and can both receive signals from it and transmit signals to it. It may also have signalling connections from other components of the host computing device [11] which are not shown here.
Figure 4 shows a third example of a host computing device [11], together with an example display control device [12] arranged to carry out embodiments of the invention. As previously described, it includes an application [14] running on a processor which generates frames of display data, but in this case the application [14] is connected to a divider [41] which splits each frame into components. The divider [41] is then connected to the encoding block [15] and arranged to transmit each component to a different encoding engine [16] within the encoding block [15]. As previously described, in this example the engines [16] are a CPU [16A], a GPU [16B], and a Hardware Encoder [16C], but there may be other encoding engines [16] in other combinations in other host computing devices [11].
The encoding engines [16] are connected to a connection controller [17] and arranged to transmit their respective encoded frame components to the connection controller [17] for transmission to the display control device [12].
In this embodiment, the display control device [12] incorporates a decoder [18] which can decode the received encoded components as appropriate, and also a compositor [19] which re-combines the decoded components into a frame for display. Accordingly, the display control device [12] is connected to a display device [13] as previously described.
Figure 5 shows a fourth example of a host computing device [11] and display control device [12] arranged to carry out embodiments of the invention. In this example, the host computing device [11] has multiple applications [14], each of which runs on a processor, which may be the same processor or a different processor from that on which other applications [14] are running, and each of which produces a component of a frame of display data. For example, the first application [14A] may be a component of the operating system which generates a plain background colour, the second application [14B] may be a word-processing application which generates a window mostly comprising text, and the third application [14C] may be a video player. Alternatively, the three applications [14] shown here may be components of a single application such as a video game, in which case the first application [14A] may be the component which generates the background, the second application [14B] may be the component which generates moving objects in the middle distance, and the third application [14C] may be the component which generates small, detailed objects which will appear close to the viewer. In both cases, the components generated by different applications [14] may have different encoding requirements.
The applications [14] are connected to a director [51], such as a multiplexer, which directs the frames from each application [14] to an encoding engine [16] within the encoding block [15]. The director [51] is also shown as having a signalling connection from the encoding block [15], which could be used for load-balancing depending on the use of the encoding engines [16], but this is optional.
As previously described in Figure 4, each encoding engine [16] is connected to the connection controller [17], which is in turn connected to the display control device [12] so that the encoded components can be transmitted to the display control device [12]. The display control device [12] comprises a decoder [18] and a compositor [19] as previously described, so it is able to decode the encoded components, compose them into a single frame, and transmit them to the connected display device [13] for display.
Figure 6 shows two example frames that might be displayed on the display device [13] in order to demonstrate components of frames that could be used in the systems shown in Figures 4 and 5.
Figure 6a shows a desktop image [61a] comprising a plain background [64], which might be a single colour but is here shown hatched with dots. Two application windows [62, 63] are shown "on" the background [64]. The window on the left [62] contains text, and the window on the right [63] contains a moving video, for example a film played from a DVD or the internet. In systems such as those shown in Figures 2, 3, and 4 the entire frame [61a] is generated by a single application [14], which may in practice be a compositor which takes input from multiple applications (in this case, for example, the operating system, a word processing application, and an internet browser). In the system shown in Figure 5, the first application [14A] might be the word processor, the second application [14B] might be the internet browser, and the third application [14C] might be the operating system.
Figure 6a also shows a point of focus [68a]. This is presumed to be the point on the frame [61a] on which a user's eyes are focused, and could be determined by eye-tracking techniques or by using some other method of interaction such as a cursor and assuming that the user is looking at the point on the image with which he or she is interacting. The point of focus [68a] can be used in foveal encoding techniques, which ensure that the area around the point of focus [68a] is displayed at as high a quality as possible, since it is the area in which the user is interested and is likely to be seen with the most sensitive part of the eye.
Figure 6b shows an image [61b] generated by an application [14] such as a computer game, in this case a space combat game in which a user flies a spaceship and destroys enemy spaceships. The image [61b] shows a background [67] comprising open space and an enemy space station, together with several enemy spaceships [66] which appear to be between the user and the background [67]. In the foreground, the frame [61b] shows the frame of a cockpit [65b] between the user and the enemy spaceships [66], and finally in the extreme foreground the user sees a head-up display showing game statistics [65a]. Again, in systems such as those shown in Figures 2, 3, and 4 the entire frame [61b] is generated by a single application [14], in this case a computer game application. However, in a system such as that shown in Figure 5 different parts of the image might be generated by different components of the game application; for example, the background [67] might be generated separately from the enemy spaceships [66], since the background [67] is likely to be relatively static while the enemy spaceships [66] are much more mobile relative to the background [67] and each other.
Figure 6b also shows a point of focus [68b] which behaves in much the same way as the point of focus [68a] shown in Figure 6a.
Figure 7 shows an overview process which describes the various embodiments of the invention at a high level.
At Step S71, the application [14] generates a frame [61] of display data, or the applications [14] generate their respective components of a frame [61] of display data. This is then passed to the encoding block [15], via a divider [41] or director [51] if appropriate, and encoded according to any instructions from an encoding controller [21/31] at Step S72. The encoded data is then passed to the connection controller [17].
At Step S73, the appropriate encoded data is transmitted to the display control device [12]. This may mean all the display data that was received from the encoding block [15], or it may mean only the display data received from one encoding engine [16]. In any case, it will be accompanied by control information giving instructions on how the encoded data should be decoded and prepared for display by the display control device [12].
At Step S74, the decoder [18] on the display control device [12] decodes the received encoded data according to the control information received from the host computing device [11]. The display control device [12] may also carry out other processing such as scaling, rotation, or composition of frame components into a frame. Finally, the frame of display data is sent to the display device [13] for display in the conventional way.
Figure 8 shows a more detailed version of one method of the invention, which will be described with reference to the system shown in Figure 2.
At Step S81, the application [14] generates a frame [61] of display data in the conventional way. This may be part of a stream of regular frames of display data, for example where the application [14] is playing a video, or it may be part of an irregular stream, for example where the application [14] is a desktop application that only generates a new frame [61] of display data where there has actually been a change, for example due to user input. This frame [61] is passed to the encoding block [15] for encoding.
At Step S82, the frame is passed to the CPU [16A], the GPU [16B], and the Hardware Encoder [16C], and all three encoding engines [16] encode the frame according to their programming. For example, the CPU [16A] may run an encoding algorithm on a relatively serial basis allowing more changes to the algorithm used for different parts of the frame, the GPU [16B] may run an encoding algorithm which incorporates a high degree of parallelisation, and the Hardware Encoder [16C] may run an encoding algorithm which is fast and designed to be appropriate for most frames of display data it might encounter but is not very customisable.
Accordingly, the CPU [16A] may output an encoded frame which is high-quality but relatively bulky and which may be complex to decode, the GPU [16B] may output an encoded frame which is relatively well compressed but which has lost a greater degree of detail than the version of the frame encoded by the CPU [16A], and the Hardware Encoder [16C] may output an encoded frame which is very well compressed but poor quality. These three encoded frames are passed to the connection controller [17].
At Step S83, the connection controller [17] determines the most appropriate encoded frame to transmit. This determination may be based on the bandwidth available in the connection such that, for example, the connection controller [17] selects the best-quality encoded frame that can be transmitted across the available bandwidth in the connection before it will be required for decoding. Alternatively, the determination may be based on the processing power available at the display control device [12] such that, for example, the connection controller [17] selects the best-quality encoded frame that can be transmitted across the connection and decoded before it will be required for display. This will be especially useful where the bandwidth is constant but the decoding time may vary. Furthermore, the determination may be based on the latency required, such that the fastest encoding engine [16] (most likely the Hardware Encoder [16C]) will be selected if it is crucial that there is as little delay as possible between generation and display of the frame, for example in an augmented reality system. Naturally, these heuristics may be combined such that the encoded frame must be both transmitted and decoded and both of these stages in the display pipeline are variable.
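A minimal sketch of that determination, assuming the connection controller knows each version's size and a quality ranking, and that it must fit within a display deadline; the numbers and field names are invented for illustration.

    def pick_version(versions, bandwidth_bps, deadline_s, decode_rate_bps=None):
        # Choose the best-quality encoded frame that can be transmitted (and,
        # if a decode rate is known, decoded) before the deadline; otherwise
        # fall back to the smallest version.
        def total_time(v):
            t = v["size"] * 8 / bandwidth_bps          # transmission time
            if decode_rate_bps:
                t += v["size"] * 8 / decode_rate_bps   # decoding time
            return t

        feasible = [v for v in versions if total_time(v) <= deadline_s]
        if feasible:
            return max(feasible, key=lambda v: v["quality"])
        return min(versions, key=lambda v: v["size"])

    versions = [
        {"engine": "cpu", "quality": 3, "size": 900_000},  # high quality, bulky
        {"engine": "gpu", "quality": 2, "size": 400_000},
        {"engine": "hw",  "quality": 1, "size": 150_000},  # small, low quality
    ]
    print(pick_version(versions, bandwidth_bps=500e6, deadline_s=0.016))

With the ample bandwidth in this toy example the CPU version fits the deadline and is selected; at lower bandwidths the selection falls down the quality ranking.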
At Step S84, the connection controller [17] on the host computing device [11] transmits the selected frame to the display control device [12] for decoding. It also transmits an indication of which encoding engine [16] encoded the frame, so that the decoder on the display control device [12] can correctly decode the frame. If appropriate, it may also transmit parameters that were used as part of the encoding process, especially where that process is variable, as may be the case with the CPU [16A] and GPU [16B], or in some cases with the Hardware Encoder [16C] if, for example, it uses variable starting values for its processing.
The display control device [12] receives the encoded frame and decodes it at Step S85, using an inbuilt decoder which is capable of using the decoding algorithms corresponding to all of the available encoding engines [16], together with the transmitted control information. The decoded frame is then transmitted to the display device [13] for display at Step S86.
Depending on the encoding and decoding algorithms, it may be possible for part of the transmitted frame to be taken from one encoding engine [16] and part from another. For example, where the encoding algorithms used by the CPU [16A] and GPU [16B] both encode the frame in stripes, it may be possible for the connection controller [17] to monitor, for example, the bandwidth of the connection throughout the transmission process and re-evaluate the frame to transmit on a stripe-by-stripe basis. This means that if, for example, the initial bandwidth is high and it is possible to send the frame encoded by the CPU [16A], but after three out of six stripes have been transmitted the bandwidth falls due to interference, the connection controller [17] might be able to transmit the final three stripes from the frame compressed by the GPU [16B], with appropriate control information to indicate the change in the encoding algorithm used. Naturally, this is an example only and encoding can in practice be carried out in tiles, tile groups, or any other such division, and the frame produced by the Hardware Encoder [16C] could also or instead be interleaved as appropriate. However, in this embodiment such a determination is made by the connection controller [17] at the time of transmission, rather than by an encoding controller or any controller earlier in the process.
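The stripe-by-stripe re-evaluation might look like the following sketch, where the bandwidth is re-sampled before each stripe and the per-stripe control information records which engine's output was sent; the probe function and threshold are illustrative assumptions.

    def transmit_stripes(encoded, stripe_count, bandwidth_probe, threshold_bps):
        # `encoded` maps engine name -> list of encoded stripes. Before each
        # stripe, re-check the bandwidth and switch source if it has dropped.
        sent = []
        for i in range(stripe_count):
            source = "cpu" if bandwidth_probe() >= threshold_bps else "gpu"
            sent.append({"stripe": i, "engine": source,
                         "data": encoded[source][i]})
        return sent  # per-stripe control info names the decoder to use

    # Toy demo: bandwidth falls after three of six stripes, as in the example.
    readings = iter([80e6, 80e6, 80e6, 20e6, 20e6, 20e6])
    stripes = {"cpu": [b"C%d" % i for i in range(6)],
               "gpu": [b"G%d" % i for i in range(6)]}
    for s in transmit_stripes(stripes, 6, lambda: next(readings), 50e6):
        print(s["stripe"], s["engine"])  # cpu for stripes 0-2, gpu for 3-5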
Figure 9 shows a second example process which could be used in the system shown in Figure 2 or the system shown in Figure 3.
At Step S91a, the application [14] generates a frame [61] of display data as previously described. Simultaneously (for the purposes of this description; this determination may in practice not be perfectly simultaneous), at Step S91b the encoding controller [21] selects the encoding engine [16] that would be most appropriate based on a signal from the connection controller [17]. This signal is based on a similar measurement to that used to determine the encoded frame to transmit at Step S83 of Figure 8: for example, the bandwidth available in the connection to the display control device [12], the latency requirements, and/or the processing power available on the display control device [12] for decoding. The encoding controller [21] selects the most appropriate encoding engine [16] to use based on this signal and its knowledge of the usual characteristics of the encoding engines [16] available. In the examples already described, this means that it is aware that:
* The CPU [16A] generally produces good-quality but bulky encoded frames and its encoding algorithm is relatively slow;
* The GPU [16B] generally produces medium-quality, medium-bulk frames at an average latency; and
* The Hardware Encoder [16C] generally produces low-quality, small frames relatively quickly.
Therefore if, for example, the bandwidth available is low, the encoding controller [21] might determine that the Hardware Encoder [16C] should be used.
In the embodiment shown in Figure 3, the determination at Step S91b may be carried out differently due to the different inputs available to the encoding controller [31]. In this system, the encoding controller [31] selects the most appropriate encoding engine [16] to use based on signals from other parts of the host computing device [11] system. For example, in Figure 3 a signalling connection is shown from the encoding block [15] to the encoding controller [31]. This could carry signalling indicating the current use levels of the encoding engines [16], which could enable the encoding controller [31] to determine which of the encoding engines [16] is least busy and assign the frame [61] to that encoding engine [16]. Alternatively, there may be other inputs. For example, if the Hardware Encoder [16C] produces less heat when in use, the encoding controller [31] could receive input from a thermometer indicating the current temperature of the host computing device [11] and, when that temperature rises above a threshold, assign all encoding to the Hardware Encoder [16C] rather than the CPU [16A] and/or GPU [16B] until the temperature falls again. Furthermore, where some of the encoding engines [16] are multi-use processors, such as a CPU [16A] which also performs other processing for the operation of the host computing device [11], information on such use could be passed to the encoding controller [31] for use in load-balancing. This system could also be used for determination based on required latency such that, for example, the source of the frame is known to the encoding controller [31], and if it comes from a source which requires low latency, such as a gaming application, there is a presumption that the lowest-latency encoding engine [16] should be used (in this example the Hardware Encoder [16C]), while other applications may have greater tolerance for latency, allowing a higher-latency encoding engine [16] to be used.
Finally, the encoding controller [31] could also receive input from the connection controller [17], indicating the available bandwidth and any signals on the capabilities of the display control device [12], in the same way as the connection controller [17] in Figure 2.
The inputs and their appropriate thresholds may be used simply such that, for example, the encoding controller [31] always selects the encoding engine [16] with the shortest queue of data to be encoded. This means that if the previous two frames have been sent to the CPU [16A] and the GPU [16B] respectively, the encoding controller [31] might determine that the current frame [61] should be encoded by the Hardware Encoder [16C].
Alternatively, the inputs could be balanced in order to make the optimal determination possible in the circumstances. For example:
* A temperature input indicates that the temperature has risen above a threshold, and this creates a presumption that the Hardware Encoder [16C] should be used;
* The CPU [16A] is currently busy with the operation of the application [14], and this creates a presumption that one of the other encoding engines [16B, 16C] should be used;
* The connection currently has a high bandwidth level, and this creates a presumption that the CPU [16A] should be used;
* The Hardware Encoder [16C] has a queue of incoming data, and this creates a presumption that one of the other encoding engines [16A, 16B] should be used;
* There has been damage to one of the cores of the GPU [16B], and this creates a presumption that one of the other encoding engines [16A, 16C] should be used;
* The application [14] requires as low a latency as possible, and this creates a presumption that the Hardware Encoder [16C] should be used.
These rules could be weighted such that, for example, damage to an encoding engine [16] prohibits that encoding engine [16] from being used and temperature is considered a key factor. This means that the GPU [16B] cannot be used and the temperature means that if possible the Hardware Encoder [16C] should be used, so the encoding controller [31] determines that the Hardware Encoder [16C] should encode the frame [61].
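One way to realise such weighted rules is a scoring pass in which hard rules (such as damage) exclude an engine outright and soft rules (temperature, bandwidth, latency, queue depth) add or subtract weighted votes; the weights and signal names below are invented for illustration, and the shortest-queue policy mentioned earlier is simply the queue term with all other weights at zero.

    def choose_engine(signals):
        # Return the preferred encoding engine given the observed signals.
        scores = {"cpu": 0.0, "gpu": 0.0, "hw": 0.0}

        for name in signals.get("damaged", []):   # hard rule: exclude engine
            scores.pop(name, None)

        if signals.get("temperature_c", 0) > 80 and "hw" in scores:
            scores["hw"] += 5.0                   # key factor: prefer hardware
        if signals.get("bandwidth_bps", 0) > 100e6 and "cpu" in scores:
            scores["cpu"] += 2.0                  # bandwidth headroom: prefer CPU
        if signals.get("low_latency") and "hw" in scores:
            scores["hw"] += 3.0                   # latency-sensitive source
        for name, depth in signals.get("queues", {}).items():
            if name in scores:
                scores[name] -= depth             # busy queues score down

        return max(scores, key=scores.get)

    print(choose_engine({"damaged": ["gpu"], "temperature_c": 85,
                         "bandwidth_bps": 200e6, "queues": {"hw": 1}}))  # hw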
In any case, at Step S92 the application [14] passes the generated frame [61] to the encoding block [15], and it is received by the encoding engine [16] selected by the encoding controller [31]: in the above examples, the Hardware Encoder [16C]. This may mean that the application [14] stores the frame [61] in a common frame buffer and the Hardware Encoder [16C] fetches it when instructed to do so by the encoding controller [31], or it may mean that the encoding controller [31] in fact sends a signal to the application [14] indicating to which encoding engine [16] it should transmit the frame.
At Step S93, the selected encoding engine [16] encodes the received frame [61]. The other encoding engines [16] are idle or may be used for other functionality. For example, where the Hardware Encoder [16C] is used for encoding display data, the CPU [16A] and GPU [16B], as programmable processors, may be used for other processing required by the host computing device [11]. The encoding engine [16] then passes the encoded frame to the connection controller [17].
At Step S94 the connection controller [17] transmits the encoded frame to the display control device [12]. Unlike in the process described in Figure 8, it does not have to make any determination as to which data to transmit, since it has only received one frame. It may be aware of which encoding engine [16] provided the data and attach control information as previously described, or the control information may be attached by the encoding engine [16] as part of the encoding process and the connection controller [17] may have no knowledge of which encoding engine [16] encoded the data.
The display control device [12] receives the encoded frame and decodes it as previously described at Step S95. It then passes the decoded frame to the display device [13] for display in the conventional way at Step S96.
If it is possible for the application [14] to provide pre-knowledge of the content of the frame [61] to the encoding controller [21/31] in Figure 2 or Figure 3, this input could also be used. For example, in the system shown in Figure 2, if the connection controller [17] indicates to the encoding controller [21] that there is a large bandwidth available, this might create a presumption that the CPU [16A] should be used, but if the application [14] indicates that the frame [61] it is generating is ideally suited to the algorithm used by the Hardware Encoder [16C] and therefore can be compressed significantly with very little loss of quality, the encoding controller [21] might instead select the Hardware Encoder [16C]. A similar determination could be used by the encoding controller [31] in Figure 3, if its heuristics indicated that the CPU [16A] should be used but the application [14] indicated that the frame would be best suited for the Hardware Encoder [16C].
The method described in Figure 9 has advantages over the method described in Figure 8 because it results in fewer wasted resources: only the frame that is to be transmitted is encoded, so the other encoding engines [16] do not waste time and power encoding data that will not be used.
Figure 10 shows a third example process which could be used in the systems shown in Figure 4 and Figure 5. In this case, the frame [61] is divided into areas or planes, which can then be encoded separately.
At Step S101, the frame of display data [61] is generated. In the system shown in Figure 4, an entire frame [61] is generated by a single application [14] as previously described. In the system shown in Figure 5, different components of the frame [61] are generated by different applications [14] (or different application components as previously mentioned). The frame [61] is then passed to a divider [41], or the components to a director [51].
Step S102 is only carried out in a system with a divider [41] like that shown in Figure 4. It is therefore shown outlined with dashes in the Figure. The divider [41] receives the frame [61] generated by the application [14] and divides it into components. These may be based on area, such as different application windows or foveal and annular peripheral regions, or they may be planes based on depth. The determination of depth may be based on information received from the application [14] based on the frame generation process, but in any case, the divider [41] splits the frame [61] into components which may have different encoding requirements.
For example, in the frame [61a] shown in Figure 6a, the divider [41] may divide the frame [61a] by area such that the application window containing text [62] is one area (Area1), the application window containing the video [63] is a second area (Area2), and the background [64] is a third area (Area3). Alternatively, it may determine that a circle centring on the point of focus [68a] with a radius of ten units is one area (Fovea), an annular region beginning at the edge of Fovea and extending a further thirty units is a second area (Periphery1), and the remainder of the frame [61a] is a third area (Periphery2).
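A sketch of that radial division, using the same arbitrary units; the per-pixel classification below is purely illustrative.

    import math

    def classify_pixel(x, y, focus, fovea_r=10.0, periphery_r=40.0):
        # Fovea within ten units of the point of focus, an annulus extending
        # a further thirty units (Periphery1), and the remainder (Periphery2).
        d = math.hypot(x - focus[0], y - focus[1])
        if d <= fovea_r:
            return "Fovea"
        if d <= periphery_r:
            return "Periphery1"
        return "Periphery2"

    print(classify_pixel(12, 5, focus=(10, 10)))  # prints: Fovea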
In the frame [61b] shown in Figure 6b, the divider [41] may receive information indicating the depths of the objects such that the background [67] and the enemy space station have depth value DepthD, the enemy spaceships [66] have depth value DepthC, the cockpit surroundings [65b] have depth value DepthB, and the head-up display [65a] has depth value DepthA, and determine that a first plane (Plane1) comprises all objects with depth values DepthB or DepthA [65], a second plane (Plane2) comprises all objects with depth value DepthC [66], and a third plane (Plane3) comprises all objects with depth value DepthD [67]. Step S102 is not carried out in a system such as that shown in Figure 5 because in that system the frame [61] has already been divided. In that case, Area1, Fovea, or Plane1 (collectively, Component1) might be generated by the first application [14A], Area2, Periphery1, or Plane2 (collectively, Component2) might be generated by the second application [14B], and Area3, Periphery2, or Plane3 (collectively, Component3) might be generated by the third application [14C].
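Grouping objects into planes by depth value could be as simple as the following sketch; the depth labels follow the example above, while the object list itself is invented.

    def split_into_planes(objects):
        # objects: (name, depth) pairs with depth 'A' nearest the viewer.
        # Plane1 holds DepthA/DepthB, Plane2 DepthC, Plane3 DepthD.
        plane_of = {"A": "Plane1", "B": "Plane1", "C": "Plane2", "D": "Plane3"}
        planes = {"Plane1": [], "Plane2": [], "Plane3": []}
        for name, depth in objects:
            planes[plane_of[depth]].append(name)
        return planes

    scene = [("hud", "A"), ("cockpit", "B"), ("enemy_ship", "C"),
             ("background", "D")]
    print(split_into_planes(scene))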
At Step S103, the components are sent to the appropriate encoding engines [16]. The mechanism by which the divider [41] or director [51] determines which component(s) to send to a particular encoding engine [16] depends on the characteristics of the components and the encoding engines [16], and the divider [41] or director [51] may also act in a similar way to an encoding controller [21/31] as previously described, determining which encoding engine [16] should be used based on external inputs and the current loads of the encoding engines [16], as well as their characteristics.
For example, in the above examples all three components [62/65] that are referred to as Component1 must be displayed at high quality but do not comprise much data: the text in Area1 [62] may be easy to encode since it is monochrome, Plane1 [65] comprises a small amount of graphic data, and Fovea is a relatively small area on the screen. It may therefore be appropriate to send Component1 to the CPU [16A] if the CPU [16A] has the characteristics earlier described: little data loss, but relatively poor compression ratios. Similar determinations could be made to send Component2 (which may consist of moving data for which slightly lowered quality could be less noticeable but speed of transmission is key for Area2 [63] and Plane2 [66], or may consist of data which will appear further towards the edge of the user's vision in the case of Periphery1) to the GPU [16B] and Component3 (which consists of relatively simple data which may appear to be in the far distance and therefore has little requirement for fine detail) to the Hardware Encoder [16C].
However, additional considerations such as which encoding engines [16] are currently under most load, tolerance for latency, or the bandwidth available and therefore requirement for fast encoding and/or low volume may change these assumptions and result in, for example, Component2 also being sent to the Hardware Encoder [16C].
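A sketch of such routing logic is given below, reusing the hypothetical Engine class from the earlier sketch (only its current_load field is needed here); the preference table and the load threshold are illustrative assumptions, not part of the described embodiments.

```python
ROUTING_PREFERENCES = {
    "Component1": ["cpu", "gpu", "hw"],   # text/HUD/fovea: quality first
    "Component2": ["gpu", "hw", "cpu"],   # video/mid planes: speed first
    "Component3": ["hw", "gpu", "cpu"],   # background: compression first
}

def route_component(component_name, engines_by_name, max_load=0.9):
    """Send a component to the first preferred engine that is not overloaded."""
    for name in ROUTING_PREFERENCES[component_name]:
        engine = engines_by_name[name]
        if engine.current_load < max_load:
            return engine
    # Every preferred engine is saturated: fall back to the least loaded.
    return min(engines_by_name.values(), key=lambda e: e.current_load)
```

This reproduces the behaviour described above: if the GPU is under heavy load, Component2 falls through its preference list and is sent to the Hardware Encoder instead.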
In any case, at Step S104 each encoding engine [16] encodes the component(s) it has been sent in accordance with its operation and may attach control information to it/them as previously described. It then passes the encoded data to the connection controller [17]. If control information has not been attached by the encoding engines [16] the connection controller [17] may attach such information.
The connection controller [17] may also have received an indication from the divider [41] or director [51] of which encoding engine [16] is encoding which component and may attach an indication of which component is which to the encoded components, or this information could have been passed directly from the divider [41] or director [51]. Similarly, information on how to compose or re-compose the final frame may be passed to the connection controller [17] from the application(s) [14] or the divider [41] or director [51]. All of this information is included in the control information transmitted along with the encoded components at Step S105.
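The control information itself might, purely as a sketch (the field names are assumptions, not taken from the embodiments), take the form of a small header prefixed to each encoded component before transmission at Step S105:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ControlInfo:
    component_id: str      # e.g. "Fovea" or "Plane2"
    engine: str            # which encoding engine [16] produced the payload
    codec_params: dict     # parameters the decoder [18] needs, e.g. quantiser
    compose_order: int     # z-order for re-compositing the frame [61]
    offset_xy: tuple       # where the component sits in the full frame

def pack(info: ControlInfo, payload: bytes) -> bytes:
    """Prefix the encoded component with a length-delimited JSON header."""
    header = json.dumps(asdict(info)).encode()
    return len(header).to_bytes(4, "big") + header + payload
```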
At Step S106, the decoder [18] on the display control device [12] receives the encoded components and decodes each one using the details of the encoding used that were contained in its respective control information. The decoder [18] then passes the decoded components to the compositor [19] on the display control device [12], together with the instructions on how to composite the frame contained in the control information. The compositor [19] then composites or re-composites the components into at least an approximation of the frame [61] at Step S107. Finally, it passes the frame to the display device [13] for display in the conventional way at Step S108.
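On the display control device [12] side, Steps S106 to S108 might then be sketched as follows; unpack() inverts the hypothetical pack() above, and the decoders mapping and compositor object are likewise assumptions introduced only for illustration.

```python
import json

def unpack(blob: bytes):
    """Recover the control-information header and the encoded payload."""
    hlen = int.from_bytes(blob[:4], "big")
    return json.loads(blob[4:4 + hlen].decode()), blob[4 + hlen:]

def decode_and_composite(blobs, decoders, compositor):
    components = []
    for blob in blobs:
        header, payload = unpack(blob)
        decode = decoders[header["engine"]]        # codec named in the header
        pixels = decode(payload, **header["codec_params"])
        components.append((header["compose_order"], header["offset_xy"], pixels))
    # Re-composite in z-order into at least an approximation of the frame [61].
    for _, offset, pixels in sorted(components, key=lambda c: c[0]):
        compositor.blit(pixels, offset)
    return compositor.frame()
```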
Fig. 11 is a block diagram of a computer system [600] suitable for implementing one or more embodiments of the present disclosure, including the host computing device [11], and the display control device [12]. As mentioned above, in various implementations, the host computing device [11] may include any computing device capable of generating display data, such as a mobile cellular phone, personal computer (PC), laptop, etc. adapted for wireless communication, and the display control device [12] may include a computing device, such as a wearable computing device, adapted for wireless communication with the host computing device [11]. Thus, it should be appreciated that the devices [11] and [12] may be implemented as the computer system [600] in a manner as follows.
The computer system [600] includes a bus [612] or other communication mechanism for communicating information data, signals, and information between various components of the computer system [600]. The components include an input/output (I/O) component [604] that processes a user (i.e., sender, recipient, service provider) action, such as selecting keys from a keypad/keyboard, selecting one or more buttons or links, etc., and sends a corresponding signal to the bus [612]. The I/O component [604] may also include an output component, such as a display [602] and a cursor control [608] (such as a keyboard, keypad, mouse, etc.). The display [602] may be configured to present a login page for logging into a user account or a checkout page for purchasing an item from a merchant. An optional audio input/output component [606] may also be included to allow a user to use voice for inputting information by converting audio signals. The audio I/O component [606] may allow the user to hear audio. A transceiver or network interface [620] transmits and receives signals between the computer system [600] and other devices, such as another user device, a merchant server, or a service provider server via network [622]. In one embodiment, the transmission is wireless, although other transmission mediums and methods may also be suitable. A processor [614], which can be a micro-controller, digital signal processor (DSP), or other processing component, processes these various signals, such as for display on the computer system [600] or transmission to other devices via a communication link [624]. The processor [614] may also control transmission of information, such as cookies or IP addresses, to other devices.
The components of the computer system [600] also include a system memory component [610] (e.g., RAM), a static storage component [616] (e.g., ROM), and/or a disk drive [618] (e.g., a solid-state drive, a hard drive). The computer system [600] performs specific operations by the processor [614] and other components by executing one or more sequences of instructions contained in the system memory component [610]. For example, the processor [614] can perform the display data encoding functionalities described herein.
Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to the processor [614] for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. In various implementations, non-volatile media includes optical or magnetic disks, volatile media includes dynamic memory, such as the system memory component [610], and transmission media includes coaxial cables, copper wire, and fiber-optics, including wires that comprise the bus [612]. In one embodiment, the logic is encoded in non-transitory computer readable medium. In one example, transmission media may take the form of acoustic or light waves, such as those generated during radio wave, optical, and infrared data communications.
Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, or any other medium from which a computer is adapted to read.
In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by the computer system [600]. In various other embodiments of the present disclosure, a plurality of computer systems [600] coupled by the communication link [624] to the network (e.g., such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another.
Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
Software in accordance with the present disclosure, such as program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
The various features and steps described herein may be implemented as systems comprising one or more memories storing various information described herein and one or more processors coupled to the one or more memories and a network, wherein the one or more processors are operable to perform steps as described herein, as non-transitory machine-readable medium comprising a plurality of machine-readable instructions which, when executed by one or more processors, are adapted to cause the one or more processors to perform a method comprising steps described herein, and methods performed by one or more devices such as a hardware processor, user device, server, and other devices described herein.
Aspects of the apparatus and methods described herein are further exemplified in the following numbered CLAUSES: CLAUSE 1. A method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; selecting an encoding engine from among the at least two different encoding engines to use for encoding at least part of the frame of image data, wherein the selecting is based on heuristics related to the image data to be encoded, and/or a performance of the host computing device, the data connection and/or the display control device; encoding the at least part of the frame of image data using the selected encoding engine; and sending the encoded at least part of the frame of image data for transmittal to the display control device over the data connection.
CLAUSE 2. A method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; encoding at least part of the frame of image data using the at least two different encoding engines to produce at least two versions of the at least part of the frame of image data encoded differently; selecting one of the two versions based on heuristics related to the encoded image data, and/or a performance of the host computing device, the data connection and/or the display control device; and sending the encoded at least part of the frame of image data of the selected version for transmittal to the display control device over the data connection.
CLAUSE 3. A method according to clause 1 wherein a whole of the frame of image data is encoded using the selected encoding engine.
CLAUSE 4. A method according to clause 2, wherein a whole of the frame of image data is encoded using the at least two different encoding engines.
CLAUSE 5. A method according to clause 1, wherein the frame of image data comprises a plurality of parts, and at least two different parts are encoded using the at least two different encoding engines.
CLAUSE 6. A method according to clause 2, wherein the frame of image data comprises a plurality of parts, and at least two different parts are encoded using both of the at least two different encoding engines.
CLAUSE 7. A method according to either clause 5 or clause 6, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that there is a central foveal area and an annular peripheral area.
CLAUSE 8. A method according to either clause 5 or clause 6, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that different areas have different types of image data.
CLAUSE 9. A method according to either clause 5 or clause 6, wherein the plurality of parts of the frame of image data comprise different planes of the frame, such that different planes have image data perceived at different depths by a user.
CLAUSE 10. A method according to any preceding clause, wherein the selecting is based on heuristics relating to a type of image data forming the at least part of the frame of image data and the selecting is based on the capabilities of the encoding engines for encoding different types of image data.
CLAUSE 11. A method according to clause 10, wherein if the type of image data is photographic, selecting is based on encoding most suited to photographic image data, and if the image data is textual, selecting is based on encoding most suited to textual image data.
CLAUSE 12. A method according to any preceding clause, wherein the selecting is based on heuristics including any one or more of: bandwidth of the data connection; current use and availability of resources on the host computing device; tolerance for latency in the host computing device, data connection and display control device; and current use and availability of resources on the display control device.
CLAUSE 13. A method according to any preceding clause, wherein control information is sent, together with the encoded at least part of the frame of image data, for transmittal to the display control device over the data connection.
CLAUSE 14. A method according to clause 13, wherein the control information includes information indicating which of the encoding engines was used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
CLAUSE 15. A method according to either clause 13 or clause 14, wherein the control information includes information indicating parameters used by the encoding engine used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
CLAUSE 16. A host computing device comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the host computing device to perform operations comprising the steps of any one of the preceding clauses.
CLAUSE 17. A system comprising a host computing device according to clause 16, a display control device, and a data connection therebetween.
CLAUSE 18. A system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the system to perform operations comprising the steps of any one of clauses 1 to 15.

Claims (20)

1. A method for encoding image data in a host computing device having at least two different encoding engines, wherein encoded image data is to be transmitted over a data connection to a display control device where it is decoded and sent for display on a display panel, the method comprising: receiving at least a frame of image data; encoding at least part of the frame of image data using the at least two different encoding engines to produce at least two versions of the at least part of the frame of image data encoded differently; selecting one of the two versions based on heuristics related to the encoded image data, and/or a performance of the data connection and/or the display control device; and sending the encoded at least part of the frame of image data of the selected version for transmittal to the display control device over the data connection.
2. A method according to claim 1, wherein a whole of the frame of image data is encoded using the at least two different encoding engines.
3. A method according to claim 1, wherein the frame of image data comprises a plurality of parts, and at least two different parts are encoded using both of the at least two different encoding engines.
4. A method according to claim 3, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that there is a central foveal area and an annular peripheral area.
5. A method according to claim 3, wherein the plurality of parts of the frame of image data comprise different areas of the frame, such that different areas have different types of image data.
6. A method according to claim 3, wherein the plurality of parts of the frame of image data comprise different planes of the frame, such that different planes have image data perceived at different depths by a user.
7. A method according to any preceding claim, wherein the selecting is based on heuristics relating to a type of image data forming the at least part of the frame of image data and the selecting is based on the capabilities of the encoding engines for encoding different types of image data.
8. A method according to claim 7, wherein if the type of image data is photographic, selecting is based on encoding most suited to photographic image data, and if the image data is textual, selecting is based on encoding most suited to textual image data.
9. A method according to any preceding claim, wherein the selecting is based on heuristics including any one or more of: bandwidth of the data connection; current use and availability of resources on the host computing device; tolerance for latency in the host computing device, data connection and display control device; and current use and availability of resources on the display control device.
10. A method according to claim 9, wherein the selection is performed based on the bandwidth available in the data connection so as to select a best-quality encoded frame that can be transmitted across the available bandwidth before it will be required for decoding.
11. A method according to claim 9 or 10, wherein the selection is performed based on the processing power available at the display control device so as to select the best-quality encoded frame that can be transmitted across the connection and decoded before it will be required for display.
12. A method according to any of the preceding claims, wherein the encoding engines encode a frame in portions such as tiles, tile groups or stripes, the method comprising: monitoring the bandwidth of the data connection throughout transmission and re-evaluating the frame on a portion-by-portion basis, and after selecting encoded versions of one or more portions generated by a first one of the encoding engines, selecting encoded versions of one or more further portions generated by a second one of the encoding engines based on a change in the bandwidth.
13. A method according to any preceding claim, wherein control information is sent, together with the encoded at least part of the frame of image data, for transmittal to the display control device over the data connection.
14. A method according to claim 13, wherein the control information includes information indicating which of the encoding engines was used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
15. A method according to claim 13 or 14 when dependent on claim 12, comprising adding control information to indicate the change of encoding engine used.
16. A method according to any of claims 13 to 15, wherein the control information includes information indicating parameters used by the encoding engine used for encoding the encoded at least part of the frame of image data sent for transmittal to the display control device over the data connection.
17. A method according to any of the preceding claims, wherein the encoding engines comprise one or more of: a CPU-based encoding engine, a GPU-based encoding engine and a hardware encoder.
18. A host computing device comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the host computing device to perform operations comprising the steps of any one of the preceding claims.
19. A system comprising a host computing device according to claim 18, a display control device, and a data connection therebetween.
20. A system comprising: a non-transitory memory storing instructions; and one or more hardware processors coupled to the non-transitory memory and configured to execute the instructions from the non-transitory memory to cause the system to perform operations comprising the steps of any one of claims 1 to 17.
GB2300088.8A 2019-02-28 2019-02-28 Image data encoding Active GB2611668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2300088.8A GB2611668B (en) 2019-02-28 2019-02-28 Image data encoding

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2300088.8A GB2611668B (en) 2019-02-28 2019-02-28 Image data encoding
GB1902715.0A GB2581822B (en) 2019-02-28 2019-02-28 Image data encoding

Publications (3)

Publication Number Publication Date
GB202300088D0 GB202300088D0 (en) 2023-02-15
GB2611668A true GB2611668A (en) 2023-04-12
GB2611668B GB2611668B (en) 2023-09-13

Family

ID=85571126

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2300088.8A Active GB2611668B (en) 2019-02-28 2019-02-28 Image data encoding

Country Status (1)

Country Link
GB (1) GB2611668B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020101367A1 (en) * 1999-01-29 2002-08-01 Interactive Silicon, Inc. System and method for generating optimally compressed data from a plurality of data compression/decompression engines implementing different data compression algorithms
US8036265B1 (en) * 2001-09-26 2011-10-11 Interact Devices System and method for communicating media signals


Also Published As

Publication number Publication date
GB2611668B (en) 2023-09-13
GB202300088D0 (en) 2023-02-15

Similar Documents

Publication Publication Date Title
TWI528787B (en) Techniques for managing video streaming
US10284753B1 (en) Virtual reality media content generation in multi-layer structure based on depth of field
US9239661B2 (en) Methods and apparatus for displaying images on a head mounted display
US11662975B2 (en) Method and apparatus for teleconference
JP7411791B2 (en) Overlay processing parameters for immersive teleconferencing and telepresence of remote terminals
CN113391734A (en) Image processing method, image display device, storage medium, and electronic device
CN111901414A (en) Realization method and realization system of secure desktop transmission protocol based on virtualization environment
KR20210096643A (en) Online Gaming Platform Voice Communication System
GB2611668A (en) Image data encoding
GB2581822A (en) Image data encoding
CN114268626A (en) Window processing system, method and device
US20140330957A1 (en) Widi cloud mode
CN115606170A (en) Multi-grouping for immersive teleconferencing and telepresence
JP7419529B2 (en) Immersive teleconference and telepresence interactive overlay processing for remote terminals
JP2020187482A (en) Information processing method
KR102405143B1 (en) System for cloud streaming service, method of image cloud streaming service using reduction of color bit and apparatus for the same
US20220391167A1 (en) Adaptive audio delivery and rendering
US20230164330A1 (en) Data codec method and apparatus
TWI539795B (en) Media encoding using changed regions
US20220308341A1 (en) Towards subsiding motion sickness for viewport sharing for teleconferencing and telepresence for remote terminals
CN110072108B (en) Image compression method and device
JP6412893B2 (en) Video distribution system, video transmission device, communication terminal, and program
WO2024063928A1 (en) Multi-layer foveated streaming
KR20230078649A (en) Machine learning techniques for generating and training high-resolution compressed data structures representing textures from low-resolution compressed data structures
CN115668369A (en) Audio processing method and device