GB2538797A - Managing display data - Google Patents

Managing display data

Info

Publication number
GB2538797A
GB2538797A
Authority
GB
United Kingdom
Prior art keywords
display
display data
data
display device
streams
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1509290.1A
Other versions
GB2538797B (en)
GB201509290D0 (en)
Inventor
Skinner Colin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DisplayLink UK Ltd
Original Assignee
DisplayLink UK Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DisplayLink UK Ltd filed Critical DisplayLink UK Ltd
Priority to GB1509290.1A
Publication of GB201509290D0
Publication of GB2538797A
Application granted
Publication of GB2538797B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/006Details of the interface to the display terminal
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/08Cursor circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/02Handling of images in compressed format, e.g. JPEG, MPEG
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0407Resolution change, inclusive of the use of different resolutions for different screen areas
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/04Changes in size, position or resolution of an image
    • G09G2340/0464Positioning
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2350/00Solving problems of bandwidth in display systems
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2352/00Parallel handling of streams of display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/18Use of a frame buffer in a display terminal, inclusive of the display panel
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/10Use of a protocol of communication by packets in interfaces along the display data pipeline
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/20Details of the management of multiple sources of image data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39Control of the bit-mapped memory
    • G09G5/399Control of the bit-mapped memory using two or more bit-mapped memories, the operations of which are switched in time, e.g. ping-pong buffers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/156Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks

Abstract

Managing display data received as one or more input streams from at least one source 14. A first output stream comprising first display data and a second output stream comprising second display data are produced from the one or more input streams of display data, the first and second output streams being associated with a display device 16 for display on that display device. Instructions indicating how the first and second display data are to be combined for display are output to the display device, and the first and second output streams of display data are output onto a multi-stream link for transmittal to the display device. The first and second display data may have different latencies based on, for example, feedback from a sensor 17 that senses which part of the display device a user is looking at, so that the area of the display that is being looked at has lower latency than the rest of the displayed image. The higher-latency data may be compressed prior to being output. Preferably, the first display data is received in a first input stream and the second display data is received in a second input stream.

Description

Intellectual Property Office Application No. GB1509290.1 RTM Date: 9 November 2015. The following terms are registered trade marks and should be read as such wherever they occur in this document: DisplayPort (Pages 1 and 2). Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo

Managing Display Data
Background
In desktop computing, it is now common to use more than one display device, such as a monitor, television screen or even a projector. Traditionally, a user would have a computer with a single display device attached, but it is now possible to attach more than one display device to the computer, which increases the usable area for the user. For example, International Patent Application Publication WO 2007/020408 discloses a display system which comprises a plurality of display devices, each respectively displaying an image, a data processing device connected to each display device and controlling the image displayed by each display device, and a user interface device connected to the data processing device.
Connecting multiple display devices to a computer is a proven method for improving productivity.
The connection of an additional display device to a computer presents a number of problems. In general, a computer will be provided with only one video output such as a VGA-out connection. One method by which a display device can be added to a computer is by adding an additional graphics card to the internal components of the computer. The additional graphics card will provide an additional video output which will allow the display device to be connected to the computer and driven by that computer.
However, this solution is relatively expensive and is not suitable for many non-technical users of computers.
An alternative method of connecting a display device is to connect it to a USB socket on the computer, as all modern computers are provided with multiple USB sockets. This provides a simple connection topology, but requires additional hardware and software to be present, since it is necessary to compress display data due to the relatively low bandwidth of a USB connection. However, compression and the associated processing add a delay to the transmission of display data to the display device. This is especially problematic in the case of the cursor, which in a conventional desktop arrangement is likely to be the user's main point of interaction with the computer. When the user moves a mouse, he or she expects to see an immediate reaction from the cursor, and low latency is therefore especially important in this case. In some cases, such as under the DisplayPort standard, the data may even be compressed a second time as part of conversion to the DisplayPort format, adding further latency to that mentioned above.
The DisplayPort standard further provides a method, known as Multiple Stream Transport (MST), which allows display data to be transmitted in multiple time-multiplexed streams down a single physical connection to a chain of display devices, such that one stream contains display data for one connected display device. In this way, only one stream is displayed on each display device.
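The MST arrangement described above can be pictured as round-robin interleaving of per-display packet streams onto one physical link, with a stream identifier used to separate them again. A minimal sketch (the packet format and names here are hypothetical, not the actual DisplayPort wire format):

```python
# Hypothetical sketch of MST-style time multiplexing: payloads from several
# per-display streams are interleaved round-robin onto one shared link, and
# de-interleaved at the far end using a stream identifier carried with each.
from itertools import zip_longest

def interleave(streams):
    """Merge {stream_id: [payload, ...]} into one ordered list of
    (stream_id, payload) packets, round-robin across the streams."""
    link = []
    for column in zip_longest(*streams.values()):
        for stream_id, payload in zip(streams.keys(), column):
            if payload is not None:
                link.append((stream_id, payload))
    return link

def deinterleave(link):
    """Recover the per-stream payload lists from the shared link."""
    out = {}
    for stream_id, payload in link:
        out.setdefault(stream_id, []).append(payload)
    return out

streams = {"display0": ["a0", "a1", "a2"], "display1": ["b0", "b1"]}
link = interleave(streams)
assert deinterleave(link) == streams
```

In conventional MST each recovered stream drives one whole display device; the invention below reuses the same transport so that several streams can target different areas of a single device.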
Overview

It is an object of the present invention, therefore, to provide a method of managing display data that overcomes or at least reduces the above-mentioned problems.

Accordingly, in a first aspect, the invention provides a method of managing display data, the method comprising: receiving one or more input streams of display data from at least one source; producing at least a first output stream comprising first display data and a second output stream comprising second display data from the one or more input streams of display data, the at least first and second output streams being associated with a first display device for display on the first display device; providing instructions indicating how the first and second display data in the first and second output streams are to be combined for display on the first display device; outputting the at least first and second output streams of display data onto a multi-stream link for transmittal via the multi-stream link to a controller for the first display device; and outputting the instructions for transmittal to the controller for the first display device.

The instructions may indicate that the first display data is to be displayed next to the second display data on the first display device, or that the first display data is to be displayed at least partially overlapping the second display data on the first display device. Preferably, the first display data has a lower latency than the second display data.
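The first aspect can be illustrated with a short sketch. All names and data shapes below are hypothetical stand-ins for the streams and instructions the method produces; this is not an implementation of the patented method, only an illustration of its shape:

```python
# Illustrative sketch of the first aspect: split incoming display data into
# a low-latency first stream (here, a cursor) and a main second stream, plus
# instructions telling the display device how to recombine them.

def produce_output_streams(frame, cursor, cursor_pos):
    """frame: 2D list of pixels; cursor: small 2D list; cursor_pos: (row, col).
    Returns (first_stream, second_stream, instructions)."""
    first_stream = {"data": cursor, "latency": "low"}     # e.g. cursor data
    second_stream = {"data": frame, "latency": "normal"}  # the main image
    instructions = {"mode": "overlay", "position": cursor_pos}
    return first_stream, second_stream, instructions

frame = [[0] * 4 for _ in range(4)]
cursor = [[9]]
first, second, instr = produce_output_streams(frame, cursor, (1, 2))
assert instr == {"mode": "overlay", "position": (1, 2)}
assert first["latency"] == "low" and second["data"] is frame
```

The `"mode"` field stands in for the instruction that the streams overlap; a side-by-side arrangement would simply carry a different mode and position.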
The second display data may be compressed prior to being output in the second output stream, and, if the first display data is also compressed, the second display data may be compressed more than the first display data.

Preferably, the first display data is received in a first input stream and the second display data is received in a second input stream. Alternatively, the display data received in one input stream may be used to form both the first display data and the second display data. The first display data is preferably formed from a part of the display data in the one input stream, wherein the part of the display data optionally forms an area of the display on the display device. The area is preferably smaller than the whole of the display on the display device, and may be an area of the display on the display device with which a user of the display device is interacting. The first display data preferably comprises cursor display data.
In one embodiment, the area of the display with which a user of the display device is interacting is determined by sensing which part of the display on the display device the user is viewing.
The above method is preferably performed in a display data manager, which may, for example, be a docking station.
In a second aspect, the invention provides a method of managing display data, the method comprising: receiving at least a first input stream of first display data and a second input stream of second display data on a multi-stream link, at least the first and second display data being destined for display on a first display device; receiving instructions indicating how the at least first and second display data are to be combined for display on the first display device; combining the at least first and second display data from the at least first and second streams according to the instructions into combined display data; and forwarding the combined display data for display on the first display device.

The instructions may indicate that the first display data is to be displayed next to the second display data on the first display device, or that the first display data is to be displayed at least partially overlapping the second display data on the first display device. Preferably, the first display data has a lower latency than the second display data.
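The receiver side of the second aspect can be sketched as a simple combine step. The instruction format here is hypothetical, chosen only to mirror the overlapping case described above:

```python
# Hypothetical receiver-side sketch of the second aspect: blend the
# low-latency first display data over the main second display data at the
# position given in the received instructions.

def combine(first_data, second_data, instructions):
    """Overlay first_data (a small 2D grid) onto a copy of second_data at
    instructions['position'] and return the combined frame."""
    combined = [row[:] for row in second_data]  # copy the main frame
    r0, c0 = instructions["position"]
    for dr, row in enumerate(first_data):
        for dc, px in enumerate(row):
            combined[r0 + dr][c0 + dc] = px
    return combined

main = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
overlay = [[7, 7]]
out = combine(overlay, main, {"position": (1, 0)})
assert out == [[0, 0, 0], [7, 7, 0], [0, 0, 0]]
assert main[1] == [0, 0, 0]  # the original main frame is untouched
```

The combined frame is what would then be forwarded for display on the first display device.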
In a preferred embodiment, the second display data is decompressed prior to being combined with the first display data.
The first display data preferably forms an area of the display on the display device. The area is preferably smaller than the whole of the display on the display device, and may be an area of the display on the display device with which a user of the display device is interacting. The first display data preferably comprises cursor display data.
In one embodiment, the area of the display with which a user of the display device is interacting is determined by sensing which part of the display on the display device the user is viewing.

According to a further aspect of the invention, there is provided a method for transmitting display data to a display device and displaying it, comprising:

1. a first device (a 'display control device'):
   a. receiving a stream of display data to be sent to one or more display devices (the 'main display data');
   b. receiving a second stream of display data (the 'accelerated display data');
   c. receiving a location relative to the main display data at which the accelerated display data is to be stored (the 'accelerated area');
   d. identifying a notional display at the accelerated area; and
   e. transmitting the streams of display data to the displays as appropriate for the connection, such that the accelerated display data is sent to the notional display as if it were a separate physical display device but addressed to the physical display device of which the notional display is part;
2. each connected display device:
   a. receiving two or more streams of display data;
   b. identifying the streams to be displayed on this physical display device;
   c. if multiple streams are to be displayed on this physical display device:
      i. decompressing any compressed streams;
      ii. blending the received streams as appropriate;
      iii. displaying the resulting pixel data; and
   d. otherwise, displaying the appropriate stream of display data in the conventional way.
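The notional-display addressing just described can be sketched with a hypothetical stream descriptor: the accelerated stream travels as if it targeted a separate display, but carries the identifier of the physical display it belongs to, so each device can pick out every stream addressed to it:

```python
# Sketch (hypothetical data model) of notional-display addressing: the
# accelerated stream is handled like a separate display's stream, but is
# addressed to the physical display device of which the notional display
# is part.

def route_streams(main, accelerated, accelerated_area, physical_id="display0"):
    """Return the list of stream descriptors placed on the link."""
    return [
        {"target": physical_id, "role": "main", "data": main},
        {"target": physical_id, "role": "notional",
         "area": accelerated_area, "data": accelerated},
    ]

def streams_for(link, physical_id):
    """Display-side step 2b: pick out every stream addressed to us."""
    return [s for s in link if s["target"] == physical_id]

link = route_streams("star", "cursor", (10, 20))
mine = streams_for(link, "display0")
assert len(mine) == 2
assert mine[1]["role"] == "notional" and mine[1]["area"] == (10, 20)
```

Because `mine` contains more than one stream, this device would take the blending path (step 2c) rather than displaying a single stream conventionally.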
This is beneficial because it will allow the accelerated display data to be treated separately from the main display data, for example by being sent to the display device as a raw stream.
This is the most beneficial use of the invention, as it will further mean that the accelerated display data will not be compressed and will therefore be of better quality than if it had been compressed. It also means that the accelerated display data can be updated entirely independently from the rest of the display data, which will improve latency when the other display data must be processed in some way or when it is not changing; it would not be necessary to update all the display data in order to move the cursor, for example. A cursor is likely to be moving even when the rest of the image displayed on the display device is static, for example when browsing a web page. Therefore, by having the cursor as the accelerated display data, re-processing and compressing the entire image will not be necessary.
A notional display is a construct comprising an area of a physical display device; when transmitted, the display data is still addressed to the physical display device in question.
However, in all other ways the display control device treats the display data as if it were being sent to a separate display device. This includes timing controls and the methods and timing used for transmission, such that the notional display will be correctly positioned and supplied with data at the correct rate. This is beneficial because it takes advantage of existing technology such as MST to reduce latency and processing of areas of interest on a single display device.
Preferably, it is possible for the notional display to be duplicated such that it can be part of multiple physical display devices. This is especially beneficial in a cloning situation, as it allows the same accelerated display data to be sent to multiple physical display devices to be displayed in the same location relative to the main display data associated with each physical display device.

Preferably, the accelerated display data comprises a cursor. This is the most beneficial embodiment of the invention, as the cursor is a user-interface device and therefore must have low latency for a satisfactory user experience. This is also more beneficial than many other possible embodiments, as a cursor is likely to have relatively small dimensions and comprise a relatively small amount of data, maximising the efficiency of the individual stream.
Advantageously, the method of the first aspect of the invention may be extended to allow more than one accelerated area to be received. This could be beneficial in, for example, an embodiment where two users were able to interact with a single computer and they each had individual cursors. Alternatively, other accelerated areas could be used for other areas of the display data that required low latency, alongside a cursor.
According to another aspect of the invention, there is provided a method for transmitting display data to a display device and displaying it, comprising:
1. a first device (a 'display control device'):
   a. receiving a stream of display data to be sent to one or more display devices (the 'main display data');
   b. identifying an area of the main display data at a location which is of particular interest (the 'accelerated area');
   c. isolating the display data at the accelerated area (the 'accelerated display data');
   d. identifying a notional display at the accelerated area;
   e. creating a separate stream of display data carrying the accelerated display data and associating it with the notional display; and
   f. transmitting the streams of display data to the displays as appropriate for the connection, such that the accelerated display data is sent to the notional display as if it were a separate physical display device but addressed to the physical display device of which the notional display is part;
2. each connected display device:
   a. receiving two or more streams of display data;
   b. identifying the streams to be displayed on this physical display device;
   c. if multiple streams are to be displayed on this physical display device:
      i. decompressing any compressed streams;
      ii. blending the received streams as appropriate;
      iii. displaying the resulting pixel data; and
   d. otherwise, displaying the appropriate stream of display data in the conventional way.
This method is beneficial because it allows part of the main display data to be accelerated without requiring it to be sent to the display control device as a separate stream.
The accelerated display data may be isolated from the main display data either by cutting it out or by copying it out. Cutting it out would mean copying it into separate storage and filling the space it had occupied in the frame with blank pixels, or pixels in a single colour which will be easy to compress and to blend into upon receipt of the resulting stream. It is most likely that this colour will be black, but it may not be, depending on the details of the implementation. Copying comprises simply copying the accelerated display data into separate storage and making no changes to the main display data. Copying is preferable, as it requires less processing because the gap does not need to be filled, and also because it will result in fewer visible artefacts after the accelerated display data has been blended back into the main display data.
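The cut and copy options just described can be sketched side by side. The fill colour and frame representation are hypothetical (a 2D grid of pixel values, with 0 standing in for black):

```python
# Sketch of the two isolation options: 'cut' copies the accelerated area
# out and fills the gap with a single easy-to-compress colour; 'copy'
# leaves the main display data untouched.

FILL = 0  # hypothetical fill colour (e.g. black)

def isolate(frame, area, mode="copy"):
    """area = (row, col, height, width). Returns (accelerated, frame)."""
    r, c, h, w = area
    accelerated = [row[c:c + w] for row in frame[r:r + h]]
    if mode == "cut":
        for row in frame[r:r + h]:
            row[c:c + w] = [FILL] * w
    return accelerated, frame

frame = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
acc, frame = isolate(frame, (0, 1, 2, 2), mode="cut")
assert acc == [[2, 3], [5, 6]]
assert frame == [[1, 0, 0], [4, 0, 0], [7, 8, 9]]  # gap filled with FILL
```

In `copy` mode the same call returns the accelerated area but leaves `frame` unchanged, matching the preferred behaviour described above.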
In a preferred embodiment, the accelerated area is the location on the main display data where the user's gaze is focussed. As such, preferably the method of identifying the accelerated area comprises:
1. capturing the direction of a user's gaze using cameras connected to all connected display devices;
2. calculating the location on the screen on which the user is focussed;
3. transmitting this location to the display control device; and
4. using the location to identify the accelerated area as hereinbefore described.

According to a still further aspect of the invention, there is provided a method for displaying display data received from multiple computing devices, commonly known as hosts, comprising:

1. dividing one or more display devices into a number of notional displays greater than the number of physical display devices;
2. receiving streams of display data from more than one host;
3. associating each of the multiple streams of display data with the appropriate notional display;
4. optionally, receiving instructions as to how a stream of display data should be manipulated prior to display on its associated notional display, and carrying out these instructions; and
5. transmitting the streams of display data to the appropriate displays.
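The gaze-based identification of the accelerated area (steps 1 to 4 above) amounts to turning a sensed focus point into a rectangle kept inside the screen. A minimal sketch, with all sizes and the coordinate convention chosen hypothetically:

```python
# Hypothetical sketch of gaze-based accelerated-area identification: centre
# a fixed-size rectangle on the sensed focus point, clamped to the screen.

def accelerated_area(gaze, screen, size):
    """gaze=(x, y) focus point, screen=(width, height), size=(w, h) of the
    accelerated area. Returns (x, y, w, h) kept inside the screen bounds."""
    gx, gy = gaze
    sw, sh = screen
    w, h = size
    x = min(max(gx - w // 2, 0), sw - w)
    y = min(max(gy - h // 2, 0), sh - h)
    return (x, y, w, h)

# Near a corner the area is clamped; in the middle it is centred on the gaze.
assert accelerated_area((100, 100), (1920, 1080), (200, 200)) == (0, 0, 200, 200)
assert accelerated_area((960, 540), (1920, 1080), (200, 200)) == (860, 440, 200, 200)
```

The resulting rectangle is what the display control device would use as the accelerated area for the notional display.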
This is beneficial because it will allow multiple hosts to share a single display using the same technique as is used in the first aspect for different parts of the same display data. The use of different streams serving individual notional displays is beneficial because it means that the data does not need to be composited into a single frame before being sent to the display device; it can just be directed onto notional displays as appropriate.
The instructions for how a stream of display data should be displayed on its associated notional display may include:

* Cropping
* Rotating
* Scaling
* Colour correction
* Dithering
* Format conversion
* Colour conversion

or any other appropriate required function. Any number of these functions may be carried out in any combination.
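Because any number of these functions may be applied in any combination, the instructions naturally form a pipeline. A sketch with a hypothetical instruction format, implementing only cropping and rotating as examples:

```python
# Sketch of applying per-stream manipulation instructions as a pipeline.
# The (op, args) instruction format is hypothetical; only 'crop' and
# 'rotate' are implemented here as representative examples.

def rotate90(frame):
    """Rotate a 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*frame[::-1])]

def crop(frame, r, c, h, w):
    return [row[c:c + w] for row in frame[r:r + h]]

OPS = {"rotate": lambda f, _: rotate90(f),
       "crop": lambda f, args: crop(f, *args)}

def apply_instructions(frame, instructions):
    for op, args in instructions:  # any number, in any combination
        frame = OPS[op](frame, args)
    return frame

frame = [[1, 2], [3, 4]]
out = apply_instructions(frame, [("rotate", None), ("crop", (0, 0, 1, 2))])
assert out == [[3, 1]]
```

Scaling, dithering and the colour operations would slot into `OPS` in the same way.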
According to a further aspect of the invention, there is provided a system arranged to carry out the above-described methods, comprising:

1. at least one host;
2. a display control device comprising:
   a. one or more inputs for display data;
   b. one or more buffers for storing input display data;
   c. optionally, an engine arranged to copy one or more areas of accelerated display data from input accelerated areas; and
   d. one or more outputs connected to one or more display devices, each arranged to carry one or more streams of display data;
3. one or more display devices, each comprising:
   a. one or more inputs for streams of display data;
   b. if appropriate, an engine arranged to separate multiple interleaved streams of display data;
   c. a blending engine arranged to combine separate streams to form a single frame for display; and
   d. a display panel arranged to display the resulting frame;
4. a connection between the display control device and at least one display device arranged to carry multiple streams of display data; and
5. connections between the at least one host and the display control device.
The connections between the host or hosts and the display control device and the connections between the display control device and the display device or devices may be wired or wireless and may be over a network, including the internet.
Optionally, the display control device may include engines arranged to decompress incoming data and to compress the main display data as it is transmitted to the display device. The outgoing compression engine could further be arranged to compress different streams of display data to different degrees such that, for example, the accelerated display data is compressed less than the main display data.
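The idea of compressing the streams to different degrees can be sketched using zlib compression levels as a stand-in for whatever codec the link actually uses (the document does not specify one):

```python
# Sketch of differential compression: the accelerated stream gets light
# compression for low latency, the main stream heavier compression to save
# link bandwidth. zlib is a stand-in codec, not the one used on the link.
import zlib

def compress_streams(main_data, accelerated_data):
    """Return both streams compressed to different degrees."""
    return {
        "accelerated": zlib.compress(accelerated_data, 1),  # fast, light
        "main": zlib.compress(main_data, 9),                # slow, heavy
    }

payload = b"star pixels " * 1000
out = compress_streams(payload, b"cursor pixels")
assert zlib.decompress(out["main"]) == payload
assert zlib.decompress(out["accelerated"]) == b"cursor pixels"
```

In the extreme case described earlier, the accelerated stream could simply be sent raw, i.e. with no compression step at all.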
Brief Description of the Drawings
Embodiments of the invention will now be more fully described, by way of example, with reference to the drawings, of which:

Figure 1a is a basic schematic of a conventional system;
Figure 1b is a frame of display data comprising an image and a cursor that may be displayed by the system of Figure 1a;
Figure 2 shows a basic schematic of a system according to a first embodiment of the invention;
Figure 3 is a detailed schematic of a display device used in the system of Figure 2;
Figure 4 is a detailed schematic of part of the system of Figure 2 in the case where the accelerated display data is provided as a separate stream;
Figure 5 is a detailed schematic of part of the system of Figure 2 in the case where the accelerated display data must be copied from the main display data; and
Figure 6 is a detailed schematic of an example embodiment of the system with multiple hosts.
Detailed Description of the Drawings
Figure 1a shows a conventional system comprising a host [8], a data manager [9] and a display device [10]. The host produces display data for display on the display device in a frame [11] such as that shown in Figure 1b, which will include some 'background' image data [12] (shown as a "star") - the main display data - and some foreground image data, such as a cursor [13]. Conventionally, the frame [11] is then rasterised by the data manager [9], which may be part of a host device such as the computer, and is then sent to the display device [10] in this form.
Figure 2 shows a schematic of a system according to one embodiment of the present invention. In this system, there may be several hosts [14a, 14b] (in this case two are shown), each of which provides display data to a data manager [15]. In this embodiment, one host produces the 'background' image data [12] (shown as a "star"), for which latency is less important and which is therefore of a lower priority, and another host produces foreground image data, such as the cursor [13], which is of higher priority and is likely to move more rapidly. Thus, in general, different parts of the complete frame may be produced by different hosts, with the cursor being part of display data produced by one host or being produced independently by one host. In this example, the cursor [13] is likely to be accelerated display data, although, as will be described further below, there may be other types of accelerated display data. The data manager [15] forms two or more streams of display data from the display data provided by the hosts and interleaves them into a single interleaved stream that is sent to a display device [16]. The data manager [15] also sends instructions to the display device on how the multiple interleaved streams should be combined for display. The system may also include a sensor [17], for example a camera, to sense which part of the display a user is looking at, that information being fed back via a sensing device [18] to the data manager [15], which may use the information to ensure that the area of the display being looked at has lower latency than the rest of the displayed image, as will be more fully described below.
Figure 3 shows an embodiment of the display device [16] which is able to receive multiple interleaved streams of display data [21] and blend them into a single frame for display. The display device [16] includes an input engine [22] which is arranged to receive an incoming interleaved stream of display data [21] and separate it into separate streams [23] as required.
It could do this by checking header data in packets of display data and arranging the packets into internal buffers prior to releasing the packets from the buffers as required by a blending engine [24].
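As a rough illustration of this demultiplexing step, the sketch below sorts packets into per-stream buffers according to a stream identifier in each packet header. The packet layout and the field names `stream_id` and `payload` are illustrative assumptions, not details taken from this description.

```python
def demultiplex(interleaved_packets):
    """Separate an interleaved stream into per-stream buffers.

    Each packet is assumed to carry a header field naming the stream
    it belongs to; packets are held in internal buffers until the
    blending engine requires them.
    """
    buffers = {}
    for packet in interleaved_packets:
        stream_id = packet["stream_id"]          # header data
        buffers.setdefault(stream_id, []).append(packet["payload"])
    return buffers

packets = [
    {"stream_id": "main", "payload": b"row0"},
    {"stream_id": "cursor", "payload": b"c0"},
    {"stream_id": "main", "payload": b"row1"},
]
streams = demultiplex(packets)
# streams["main"] holds the main display packets in arrival order,
# streams["cursor"] the accelerated (cursor) packets.
```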
The blending engine [24] takes the multiple streams [23] of display data separately and combines them according to position data provided in instructions from the data manager.
The instructions may be provided with the display data [21], preferably as part of the interleaving, or may be provided separately. The combined display data forms a single frame of pixel data which is suitable for display on a display panel [27]. The finished pixel data is stored in a frame buffer [25]. This may be large enough to hold a complete frame, or it may be a small flow control buffer able to hold only a limited amount of data at a time. There may also be more than one buffer [25] so that one buffer can be updated while another is read and its contents displayed. As such, two buffers [25] are shown here. The pixel data can then be read by a raster engine [26] and displayed on the display panel [27] in a conventional way.
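A minimal sketch of how a blending engine might combine separately received streams into one frame of pixel data, using per-stream position instructions. The layer structure below is an assumption for illustration, with later layers (such as a cursor) simply overwriting earlier ones.

```python
def blend(frame_w, frame_h, layers):
    """Compose multiple display-data streams into a single frame.

    Each layer carries its pixel rows and the (x, y) position given by
    the data manager's instructions; later layers are drawn on top.
    """
    frame = [["_"] * frame_w for _ in range(frame_h)]
    for layer in layers:
        x0, y0 = layer["pos"]
        for dy, row in enumerate(layer["pixels"]):
            for dx, px in enumerate(row):
                frame[y0 + dy][x0 + dx] = px
    return frame

frame = blend(4, 3, [
    {"pos": (0, 0), "pixels": [["B"] * 4 for _ in range(3)]},  # background
    {"pos": (1, 1), "pixels": [["C"]]},                        # cursor on top
])
```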
Figure 4 shows a more detailed view of a data manager [15] which contains an input engine [31], an input buffer [32], a processing engine [33], a cursor buffer [34], an output buffer [35], and an output engine [36]. It may, of course, also comprise further components, but these are not shown for clarity. The data manager [15] is connected to a single physical display device [16] such as that shown in Figure 3, in this embodiment via a single wired connection [37] which carries a signal [39] comprising the interleaved streams of display data [310, 311]. The main display data - in this example, the "star" [12] similar to that shown in Figure 1b - is produced by a host [14] and transmitted to the data manager [15] along with, in this embodiment, metadata comprising the location of the cursor [13]. The display data comprising the cursor icon itself will also be provided by the same host [14], but it is likely to be updated much less frequently and is here shown as a separate input. It is stored in a dedicated cursor buffer [34]. The main display data and metadata are received by the input engine [31], which copies the main display data into the input buffer [32] and transfers the metadata to the output engine [36] (shown by the dashed line).
The processing engine [33] carries out any processing required, such as decompression, and copies the processed display data to the output buffer [35]. No blending of the cursor [13] is necessary at this stage as the cursor [13] will be treated by the data manager [15] as a separate frame displayed on a different display device [38]. It will be blended at the display device [16]. This removes the need for some processing and will therefore improve latency and reduce power consumption by the processing engine [33].
As a further result of the removal of the need for blending, if the main display data [12] does not change from frame to frame, for example because it comprises a website or word-processing document being browsed by the user, no update is required to the display data in order to move the cursor [13]. No further display data need be sent to the input engine [31]; it will only receive metadata comprising the new location of the cursor [13]. No buffer updates are needed, removing slow memory interactions, and the processing engine [33] need not be used, leading to a further reduction in latency and power use. The output engine [36] can simply read the original display data from the output buffer [35] and produce a stream of display data [39] as described below.
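The metadata-only cursor update might be sketched as follows; the class and method names are invented for illustration. The point is that moving the cursor touches only a stored position, never the buffered frame.

```python
class OutputEngine:
    """Sketch of an output engine that reuses buffered display data."""

    def __init__(self, main_frame, cursor_pixels):
        self.output_buffer = main_frame      # processed main display data
        self.cursor_buffer = cursor_pixels   # rarely-updated cursor icon
        self.cursor_pos = (0, 0)

    def on_metadata(self, new_pos):
        # A cursor move arrives as metadata only: no buffer update and
        # no re-processing of the main display data is required.
        self.cursor_pos = new_pos

    def emit(self):
        # The original display data is simply re-read from the buffers.
        return {"main": self.output_buffer,
                "cursor": {"pos": self.cursor_pos,
                           "pixels": self.cursor_buffer}}

engine = OutputEngine(main_frame=[["B"] * 4], cursor_pixels=[["C"]])
engine.on_metadata((2, 0))   # user moves the cursor
stream = engine.emit()
```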
The output engine [36] creates a notional display [38] at the location sent to it by the input engine [31] but with the same address as the physical display device [16]. This means that, although there is only one physical display device [16], the output engine [36] behaves in all ways as if it were sending display data to two display devices, although the single physical display device [16] will receive both streams of display data [310, 311]. The output engine [36] then fetches pixel data from the buffers [35, 34] to produce an interleaved stream [39] directed to both displays [16, 38]. In this embodiment, it fetches pixel data from the output buffer [35] and compresses it, then fetches raw data from the cursor buffer [34] as appropriate such that the resulting interleaved stream [39] is written from left to right and top to bottom across all displays [16, 38].
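One way to picture this raster-order interleaving is the sketch below: for each scanline, the main-display packet is emitted first, followed by a cursor packet whenever the notional cursor display covers that line. The tuple format is an assumption for illustration.

```python
def interleave(main_rows, cursor_rows, cursor_top):
    """Emit (stream, scanline, row) packets from left to right and
    top to bottom across both the physical and notional displays."""
    out = []
    for y, row in enumerate(main_rows):
        out.append(("main", y, row))
        cy = y - cursor_top             # scanline within the cursor area
        if 0 <= cy < len(cursor_rows):
            out.append(("cursor", y, cursor_rows[cy]))
    return out

stream = interleave(["m0", "m1", "m2", "m3"], ["c0", "c1"], cursor_top=1)
# Cursor packets are interleaved only on the scanlines they cover.
```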
The output engine [36] may also compress one of the streams of display data [310, 311], likely to be the main display data [311] as the greatest benefit will be seen from compressing the larger area of display data. Latency of the accelerated display data [310], in this case the cursor [13], is reduced by the fact that it does not need to be compressed.
Decompression and blending are finally performed at the display device [16] as hereinbefore described. The completed frame is then displayed.
Figure 5 shows a similar embodiment of the data manager [15] in a case where the accelerated display data is part of the main display data and must be identified and copied.
As in the embodiment shown in Figure 4, this data manager includes an input engine [41], input buffer [42], processing engine [43], output buffer [45], accelerated buffer [44], and output engine [46]. The connection [37] to the display device [16] operates in the same way and the details of the interleaved stream [39] are not here shown. Likewise, the display device [16] is once again similar to that shown in Figure 3. The host produces a frame of main display data such as that shown in Figure 1b, but without the cursor [13]. It also detects the location [47] on the display device [16] on which a user is focussed. This could be done by, for example, multiple webcams, such as the camera [17] shown in Figure 3, attached to the display device, which detect the user's eyes and the direction in which they are looking. The cameras send this information to the sensing device [18], where it may be combined, possibly with other data, to determine the area on the display device where the user is focussed. This information is then sent to the data manager [15] as location metadata. This and the main display data are then received by the input engine [41], which puts the main display data in the input buffer [42] and forwards the location metadata on to the processing engine [43].
The processing engine [43] receives the location metadata and accesses the main display data in the input buffer [42]. It is able to locate the display data at the location [47] and copies this and, in this embodiment, the display data around it to form a rectangle of a preprogrammed size, which will also be the size of the notional display [48]. It places this in the accelerated buffer [44] to form the accelerated display data, then copies the main display data into the output buffer [45]. In this embodiment, no change is made to the main display data. The processing engine [43] then forwards the location metadata to the output engine [46].
The output engine [46] proceeds in a similar way to that described with respect to Figure 4. It need not be aware that the accelerated display data has been copied from the main display data as opposed to being provided as a separate stream. It creates a notional display [48] at the location [47] indicated by the received metadata and fetches pixel data from the buffers [45, 44] to produce an interleaved stream. In this embodiment, it is extremely beneficial for the output engine [46] to compress the main display data, as this reduces the overall bandwidth required while the area at which the user is looking remains at raw quality, resulting in a better user experience.
The interleaved streams can then be received by the display device [16] for decompression, blending and display as hereinbefore described.
Figure 6 shows a schematic of an embodiment of the invention that allows a user to connect multiple hosts. It operates in a similar way to the system shown in Figure 4 and comprises, in this example, four hosts [14], a data manager [15] and a single physical display device [16] which will once again be of the type shown in Figure 3. The data manager [15] includes an input buffer [52] and an output buffer [54], each of which is divided into a number of virtual buffers equal to the number of connected hosts [14], as is shown by the patterns of the hosts [14], input buffers [52], output buffers [54] and notional displays [56] in Figure 6. This division would be triggered by metadata received from the input engine [51] upon connection of the hosts [14], through connections that are not here shown.
The four hosts [14] are all connected to the data manager [15]. This connection may be wireless or wired, through connections to multiple input ports or through an adapter that allows all four to connect to a single input port. In any case, they transmit display data to the input engine [51]. The input engine [51] is aware of which host has supplied a given packet of display data and places it in the appropriate virtual input buffer [52]. For example, display data supplied by Host A [14A] (shown marked with dots) is placed in the first virtual input buffer [52A]. The input engine [51] also sends metadata to the processing engine [53] to notify it that the data is ready and of its location. The processing engine [53] takes display data from the input buffer [52] as it becomes available and processes it, for example by decompressing it. It then places the resulting pixel data in the appropriate virtual output buffer [54] according to the host [14] that produced it and therefore the notional display [56] to which it will be sent.
The hosts [14] may send further metadata to the data manager [15] regarding how display data should be cropped, resized, rotated etc. in order to fit on its associated notional display [56], since these may not be of regular and equal size as shown in Figure 6. This metadata is received by the input engine [51], which then passes it on to the processing engine [53], which performs the necessary operations prior to storing the data in the output buffers [54].
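A resize of this kind could be sketched as a nearest-neighbour scale, assuming the simple list-of-rows pixel representation used below; cropping and rotation would be analogous row and column manipulations.

```python
def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize so a host's frame fits the size of
    its associated notional display."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

scaled = resize_nearest([["a", "b"], ["c", "d"]], new_w=4, new_h=4)
# Each source pixel now covers a 2x2 block of the notional display.
```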
When the input engine [51] receives the display data from each host [14], it also receives a notification of the location and size of the notional display [56] to be associated with that host [14]. The locations and sizes of the notional displays [56] may be determined by the hosts [14] in a variety of ways, for example: * Matching software behaviour on the hosts, for example in a videoconferencing setting where all the hosts are running the same videoconferencing software, the software may be configured such that if the associated user is speaking the host will require a large notional display at the top of the screen and otherwise it will require a small notional display at the bottom of the screen; * Negotiation between the hosts such that they are all aware of the size and resolution of the display device and divide this space up between themselves according to heuristics, for example such that they each get an equal portion of space, arranged in the order in which they were connected; * Set availability on the data manager, which may, for example, be a docking station; for example, a maximum of four hosts can be connected and the docking station stores notional display configurations for each possible number of connected hosts. When a host is connected, the docking station informs it of the size and location of its notional display during initial connection handshaking and updates previously-connected hosts accordingly.
It should be understood that these heuristics are examples only and do not define or limit the scope of the claims.
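The third approach, stored configurations on a docking station, might be sketched as a lookup table keyed by the number of connected hosts. The rectangle values for a 1920 by 1080 panel below are invented purely for illustration.

```python
# Hypothetical stored layouts: one (x, y, width, height) rectangle per
# notional display, for each possible number of connected hosts.
LAYOUTS = {
    1: [(0, 0, 1920, 1080)],
    2: [(0, 0, 960, 1080), (960, 0, 960, 1080)],
    3: [(0, 0, 960, 540), (960, 0, 960, 540), (0, 540, 1920, 540)],
    4: [(0, 0, 960, 540), (960, 0, 960, 540),
        (0, 540, 960, 540), (960, 540, 960, 540)],
}

def assign_displays(connected_hosts):
    """Tell each host the size and location of its notional display,
    as would happen during initial connection handshaking."""
    layout = LAYOUTS[len(connected_hosts)]
    return dict(zip(connected_hosts, layout))

displays = assign_displays(["host_a", "host_b"])
# displays["host_b"] == (960, 0, 960, 1080)
```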
Upon receiving the locations and sizes of the required notional displays, the input engine [51] sends further metadata to the output engine [55] to notify it of these attributes and the output engine [55] creates the appropriate number of notional displays [56] as hereinbefore described.
If the locations and sizes given for the notional displays [56] will result in overlaps between two notional displays [56] such that two notional displays [56] are attempting to occupy the same area on the physical display device [16], the output engine will apply heuristics to determine which notional display [56] will be positioned 'behind' the other.
Example heuristics include: * The smaller notional display [56] is positioned in front of the larger.
* The host [14] connected first has priority and the notional display [56] associated with that host [14] will appear in front.
* Both notional displays [56] are reduced in size until they no longer overlap.
Other heuristics may occur to the reader and the above examples may be combined in any way appropriate to the specific embodiment.
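The first heuristic, smaller in front, amounts to drawing notional displays back-to-front in order of decreasing area, as in this sketch; the rectangle tuples and function names are assumptions.

```python
def overlaps(a, b):
    """True if two (x, y, width, height) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def z_order(displays):
    """Back-to-front drawing order: larger notional displays first,
    so the smaller of any overlapping pair appears in front."""
    return sorted(displays, key=lambda d: d[2] * d[3], reverse=True)

large, small = (0, 0, 100, 100), (10, 10, 20, 20)
order = z_order([small, large])
# The large display is drawn first; the small one ends up on top.
```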
The output engine [55] constantly fetches pixel data from the virtual output buffers [54] and creates an interleaved stream comprising a stream of display data for each notional display [56]. This is then sent to the display device [16] where the streams are blended as hereinbefore described. In the same way as the movement of the cursor [13] described with respect to Figure 4, if only the display data being produced by one host [14] has changed, there is no need for the data manager [15] to interfere with or re-process the display data associated with any of the other hosts [14] or notional displays [56]. The notional displays [56] can also be moved and re-configured as appropriate by sending new location metadata to the input engine [51], which will signal the output engine [55] appropriately.
Although only a few particular embodiments have been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims. For example, hardware aspects may be implemented as software where appropriate and vice versa. Furthermore, instructions to implement the method may be provided on a computer-readable medium. For example, although the input engine [22], the blending engine [24], the buffers [25] and the raster engine [26], which form a display data controller, are described as being within the display device [16], the display data controller could be a separate device located between the data manager and a conventional display device, conveniently co-located with the conventional display device.
GB1509290.1A 2015-05-29 2015-05-29 Managing display data Active GB2538797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1509290.1A GB2538797B (en) 2015-05-29 2015-05-29 Managing display data


Publications (3)

Publication Number Publication Date
GB201509290D0 GB201509290D0 (en) 2015-07-15
GB2538797A true GB2538797A (en) 2016-11-30
GB2538797B GB2538797B (en) 2019-09-11

Family

ID=53677440



Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3472806A4 (en) 2016-06-17 2020-02-26 Immersive Robotics Pty Ltd Image compression method and apparatus
AU2018372561B2 (en) 2017-11-21 2023-01-05 Immersive Robotics Pty Ltd Image compression for digital reality
AU2018373495B2 (en) 2017-11-21 2023-01-05 Immersive Robotics Pty Ltd Frequency component selection for image compression

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090121849A1 (en) * 2007-11-13 2009-05-14 John Whittaker Vehicular Computer System With Independent Multiplexed Video Capture Subsystem
US20100045791A1 (en) * 2008-08-20 2010-02-25 Honeywell International Inc. Infinite recursion of monitors in surveillance applications
US20110145879A1 (en) * 2009-12-14 2011-06-16 Qualcomm Incorporated Decomposed multi-stream (dms) techniques for video display systems
US20140355664A1 (en) * 2013-05-31 2014-12-04 Cambridge Silicon Radio Limited Optimizing video transfer


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11150857B2 (en) 2017-02-08 2021-10-19 Immersive Robotics Pty Ltd Antenna control for mobile device communication
US11429337B2 (en) 2017-02-08 2022-08-30 Immersive Robotics Pty Ltd Displaying content to users in a multiplayer venue
CN112965573A (en) * 2021-03-31 2021-06-15 重庆电子工程职业学院 Computer interface conversion device
CN112965573B (en) * 2021-03-31 2022-05-24 重庆电子工程职业学院 Computer interface conversion device

