WO2016016607A1 - Managing display data for display - Google Patents

Managing display data for display

Info

Publication number
WO2016016607A1
WO2016016607A1 (PCT/GB2015/052023)
Authority
WO
WIPO (PCT)
Prior art keywords
display data
pixels
changed
data according
managing
Application number
PCT/GB2015/052023
Other languages
French (fr)
Inventor
Paul James
Original Assignee
Displaylink (Uk) Limited
Application filed by Displaylink (Uk) Limited filed Critical Displaylink (Uk) Limited
Publication of WO2016016607A1 publication Critical patent/WO2016016607A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1415Digital output to display device ; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F3/1462Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay with means for detecting differences between the image stored in the host and the images displayed on the remote displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/10Special adaptations of display systems for operation with variable images
    • G09G2320/103Detection of image changes, e.g. determination of an index representative of the image change
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/16Determination of a pixel data signal depending on the signal applied in the previous frame

Definitions

  • the invention provides a method of managing display data to be sent from a host device for display on a remote display device, the method comprising: maintaining, in a first memory, two or more consecutive frames of display data to be displayed;
  • the method further comprises compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
  • the first memory and the second memory may comprise portions of a single memory or may be in separate memories.
  • the analysis area may comprise the complete frame or may comprise only a portion of the frame.
  • the analysis area may be determined according to whether information is available that one or more portions of the frame have been changed. For example, if information is available indicating that only a small portion of the current frame has been changed, then only that portion may be used as the analysis area. On the other hand, if no information is available, then the whole frame may be used as the analysis area.
  • the method further comprises determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area.
  • the analysis area may be determined based on the size of the changed portion. For example, if the size of the changed portion is more than, or even approaching, half the size of the frame, then it may be preferable to determine the analysis area as the whole frame, depending on the efficiency of the comparison step. If information is available that indicates that more than one portion of the current frame has changed, then the determination of whether to consider the whole frame as the analysis area may be based on an aggregate of the sizes of all the changed portions.
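As an illustration of this heuristic, the following sketch picks an analysis area from optional change hints supplied by the CPU. The function name, the half-frame threshold, and the bounding-box fallback are illustrative assumptions rather than part of the described method:

```python
def choose_analysis_area(frame_w, frame_h, changed_portions):
    """Pick the analysis area for the current frame.

    changed_portions is a list of (x, y, w, h) hints, or an empty list
    when no hints are available. Following the heuristic above, fall
    back to the whole frame when the aggregate changed area reaches
    half the frame size (an illustrative threshold).
    """
    frame_area = frame_w * frame_h
    if not changed_portions:
        return (0, 0, frame_w, frame_h)   # no hints: analyse everything
    aggregate = sum(w * h for (_, _, w, h) in changed_portions)
    if aggregate >= frame_area / 2:
        return (0, 0, frame_w, frame_h)   # changes too widespread
    # Otherwise analyse only the bounding box of the hinted portions.
    x0 = min(x for (x, _, _, _) in changed_portions)
    y0 = min(y for (_, y, _, _) in changed_portions)
    x1 = max(x + w for (x, _, w, _) in changed_portions)
    y1 = max(y + h for (_, y, _, h) in changed_portions)
    return (x0, y0, x1 - x0, y1 - y0)
```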
  • the group of one or more pixels may comprise a single pixel, or may comprise a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels.
  • Each region may be 32 x 32, 64 x 64 or any other appropriate number of pixels.
  • both the groups and the regions may be rectangular, rather than square shaped.
  • the steps of comparing, storing an indication of which groups have changed, dividing, determining, and storing the display data may be repeated, where the regions in which at least one group of one or more pixels has changed form the analysis area for the repeated steps, and the predetermined number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps.
  • Such repetition could be carried out several times, depending on the granularity required to reduce the number of pixels of the regions in which display data has changed.
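One way to sketch such a coarse-to-fine repetition, assuming square regions, a difference map held as a 2-D list of 0/1 values, and illustrative region sizes (none of these representation choices are mandated by the method):

```python
def refine_changed_regions(diff, region_sizes):
    """Coarse-to-fine search for changed regions of a difference map.

    diff is a 2-D list of 0/1 values (1 = pixel changed). region_sizes
    is a sequence of square region sizes, largest first, e.g. (64, 32).
    Each finer pass only re-examines the regions the previous, coarser
    pass flagged as changed, mirroring the repetition described above.
    Returns (x, y, size) tuples at the finest granularity.
    """
    h, w = len(diff), len(diff[0])

    def has_change(x0, y0, size):
        return any(diff[y][x]
                   for y in range(y0, min(y0 + size, h))
                   for x in range(x0, min(x0 + size, w)))

    # First pass: a grid of the coarsest regions over the whole map.
    coarse = region_sizes[0]
    candidates = [(x, y) for y in range(0, h, coarse)
                  for x in range(0, w, coarse)
                  if has_change(x, y, coarse)]
    prev = coarse
    # Each finer pass subdivides only the regions already flagged.
    for size in region_sizes[1:]:
        candidates = [(x, y)
                      for (cx, cy) in candidates
                      for y in range(cy, min(cy + prev, h), size)
                      for x in range(cx, min(cx + prev, w), size)
                      if has_change(x, y, size)]
        prev = size
    return [(x, y, prev) for (x, y) in candidates]
```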
  • the invention provides a method of managing display data to be sent from a host device for display on a remote display device, the method comprising:
  • the region is of a predetermined number of pixels and the region is determined by analysing the stored indication of which groups of one or more pixels have changed to determine a first group of one or more pixels that has changed in a row of groups of one or more pixels and designating a first boundary of the region so as to include the first group of one or more pixels.
  • the region may be of a size based on a determination of a last group of one or more pixels that has changed in a row of groups of one or more pixels and designating a second boundary of the region so as to include the last group of one or more pixels.
  • Other boundaries of the region may be determined analogously from other rows of groups of one or more pixels. In this way, a region of ad hoc size may be determined according to where there are groups of one or more pixels that have changed.
  • the method further comprises compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
  • the first memory and the second memory may comprise portions of a single memory or may be in separate memories.
  • the analysis area may comprise the complete frame or may comprise only a portion of the frame.
  • the analysis area may be determined according to whether information is available that one or more portions of the frame have been changed. For example, if information is available indicating that only a small portion of the current frame has been changed, then only that portion may be used as the analysis area. On the other hand, if no information is available, then the whole frame may be used as the analysis area.
  • the method further comprises determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area. If information indicating the size of the portion that has been changed is available, then the analysis area may be determined based on the size of the changed area. For example, if the size of the changed area is more than, or even approaching, half the size of the frame, then it may be preferable to determine the analysis area as the whole frame, depending on the efficiency of the comparison step. If information is available that indicates that more than one portion of the current frame has changed, then the determination of whether to consider the whole frame as the analysis area may be based on an aggregate of the sizes of all the changed portions.
  • the group of one or more pixels may comprise a single pixel, or may comprise a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels.
  • Each region may be of predetermined size of 32 x 32, 64 x 64 or any other appropriate number of pixels.
  • both the groups and the regions may be rectangular, rather than square shaped.
  • the steps of comparing, storing an indication of which groups have changed, determining, and storing the display data may be repeated, where the regions in which at least one group of one or more pixels has changed form the analysis area for the repeated steps, and the number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps.
  • Such repetition could be carried out several times, depending on the granularity required to reduce the number of pixels of the regions in which display data has changed.
  • Figure 1 shows a schematic diagram of components of a display system
  • Figure 2 shows a schematic diagram of elements of the host device used in the display system of Figure 1 ;
  • Figure 3 shows a schematic diagram of one embodiment of a frame to be displayed by the display system of Figure 1 ;
  • Figure 4 shows a schematic diagram of a differencing method that may be used by the host device of Figure 2;
  • Figure 5 shows a schematic diagram of how the differencing method of Figure 4 may be used on the frame of Figure 3.
  • An embodiment of a display system is shown in Figure 1.
  • the system comprises a host processing device 10, a display device 12 and user interface devices 14.
  • the user interface devices are a keyboard 14a and a mouse 14b.
  • the system shown in Figure 1 is a standard desktop computer with a display device 12, composed of discrete, locally located components, but it could equally be a device such as a laptop computer or a suitably enabled handheld device such as a mobile phone or PDA (personal digital assistant), in each case using an additional display.
  • the system may comprise part of a networked or mainframe computing system, in which case the processing device 10 may be located remotely from the user input devices 14 and the display device 12, or indeed may have its function distributed amongst separate devices.
  • the display device 12 shows images, and the display of the images is controlled by the processing device 10.
  • One or more applications are running on the processing device 10 and these are represented to the user by corresponding application windows, with which the user can interact in a conventional manner.
  • the user can control the movement of a cursor about the images shown on the display device 12 using the computer mouse 14b, again in a totally conventional manner.
  • the user can perform actions with respect to any running application via the user interface device 14 and these actions result in corresponding changes in the images displayed on the display device 12.
  • the operating system run by the processing device 10 uses virtual desktops to manage one or multiple display devices 12.
  • a physical display device 12 is represented by a frame buffer that contains everything currently shown on that display device 12.
  • the processing device 10 connects to the secondary display device 12 via a display control device 16.
  • the display control device 16 is connected to the processing device 10 via a standard USB connection, and appears to the processing device 10 as a USB connected device. Any communications between the processing device 10 and the display control device 16 are carried out under the control of a USB driver specifically for the display control device 16. Such devices allow the connection of the secondary display device 12 to the processing device 10 without the need for any hardware changes to the processing device 10.
  • the display control device 16 connects to the display device 12 via a standard VGA or HDMI connection, and the display device 12 is a conventional display device 12 which requires no adjustment to operate in the display system shown in Figure 1. As far as the display device 12 is concerned, it could be connected directly to the graphics card of a processing device; it is unaware that the graphical data displayed by the display device 12 has actually been first sent via a USB connection to an intermediate component, the display control device 16. Multiple additional display devices 12 can be connected to the processing device 10 in this way, as long as suitable USB slots are available on the processing device 10.
  • the display control device 16 is external to the processing device 10 and is not a graphics card. It is a dedicated piece of hardware that receives graphical data via the USB connection from the processing device 10 and transforms that graphics data into a VGA or HDMI format that will be understood by the display device 12.
  • USB and VGA are only examples of data standards that can be used to connect the additional display device 12 to the processing device 10.
  • the general principle is that a general-purpose data network (such as USB or Ethernet) connects the processing device 10 to the display control device 16 and a display-specific data standard (such as VGA, HDMI or DVI) is used on the connection from the display control device 16 to the display device 12.
  • the host processing device 10 includes a Graphics Processing Unit (GPU) 20 and a Central Processing Unit (CPU) 22. It will, of course, be appreciated that the host processing device 10 will have many other components, which are not illustrated here.
  • the GPU 20 is generally used to decode (if necessary) compressed images, for example those compressed by standard formats such as JPEG or MPEG, and render the complete display frame 25, including any mixture of images, texts and graphics, that is to be actually displayed into a graphics memory 21.
  • the graphics memory 21 will usually store data for two or more consecutive frames 25. The frame data is then sent to the CPU 22 where it will be dealt with according to the needs of the host device 10.
  • the CPU 22 will also have a memory 23, in which the frame data is stored before, during and after encoding and prior to transmittal to the display device 12.
  • the host device 10 will also have a communication interface 24 for connection to the display device 12.
  • Figure 3 illustrates schematically the frame 25.
  • although the complete rendered frame is stored in the graphics memory 21, from one frame (in time) to the next only a small portion may actually change.
  • the whole frame may include a picture 26 and a portion 27 of text which is being edited.
  • the picture is unchanged and only a small piece 28 of the portion 27 of text actually changes. Therefore, it would be advantageous to reduce the amount of data that needs to be sent to the display device 12, whether compressed or not. It would also reduce load on the CPU 22 if a smaller amount of data than the whole frame needed to be stored and compressed by the CPU 22.
  • a processing unit which may conveniently be the GPU 20, can determine which parts of the frame have actually changed so that only those parts are then stored by the CPU, compressed by the CPU (if necessary) and transmitted to the display device.
  • the GPU 20 (or other processing unit) therefore determines a region of the frame that is stored in the graphics memory 21 for analysis.
  • the analysis region may be the whole of the frame, or, if hints are available that less than the whole frame is being changed, the analysis region could be smaller than the whole frame.
  • although the CPU 22 may not know which piece of text actually changes from one frame to the next, it may know that the portion 27 of text is being edited (for example, because a text editing program is being executed), and the CPU may therefore be able to provide the GPU with this information. In this case, the GPU will know that it only needs to analyse the region 27.
  • Figure 4 shows schematically how the GPU analyses the data in the analysis region 27 from one frame to the next.
  • Figure 4(a) shows the analysis region 27 of the frame 25 at a first time with pixels 29 marked as "X" in three particular locations being activated.
  • Figure 4(b) shows the same analysis region 27 of the frame at a second, later time, with one of the previously activated pixels deactivated and one new pixel activated.
  • the GPU compares the pixels in the previous frame (Figure 4(a)) with those of the current frame (Figure 4(b)) and generates a difference map 30, as shown in Figure 4(c), in which the pixels that have changed are marked with a "*".
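This pixel-by-pixel comparison can be sketched as follows, with frames held as 2-D lists of pixel values and changed pixels marked with 1 rather than "*"; the representation is an assumption for illustration only:

```python
def difference_map(prev_frame, curr_frame):
    """Compare corresponding pixels of two frames of the analysis region
    and mark the ones that changed, as in Figure 4. Frames are equal-sized
    2-D lists of pixel values; the returned map holds 1 where a pixel
    changed and 0 where it did not."""
    return [[1 if p != c else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]
```

In practice the comparison may operate on groups of pixels (for example 2 x 2 tiles) rather than single pixels, as noted earlier; the single-pixel case is shown for brevity.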
  • Figure 5 shows several further steps in the analysis after the difference map has been determined.
  • Figure 5(a) shows a difference map of an analysis region 27 in which a triangle 32 has appeared in the middle of the analysis region 27.
  • the pixels forming the triangle 32 are indicated as "1" and the pixels that have not changed are indicated as "0".
  • the difference map is then divided into a grid of rectangles 34 of predetermined size and the rectangles having any changed pixels within them are determined. Alternatively, of course, the rectangles having no changed pixels could be determined. In either case, the result leads to a determination of which of the rectangles have changed pixels, and the data for those rectangles, as indicated in Figure 5(c), is then stored.
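A sketch of this division step, assuming the difference map is a 2-D list of 0/1 values and that each grid rectangle is identified by the pixel coordinates of its top-left corner (illustrative choices, not part of the method as described):

```python
def changed_rectangles(diff, rect_w, rect_h):
    """Divide a difference map into a grid of rect_w x rect_h rectangles
    and return the top-left coordinates of every rectangle containing at
    least one changed pixel (a 1), as in Figure 5(b)-(c)."""
    h, w = len(diff), len(diff[0])
    changed = []
    for gy in range(0, h, rect_h):
        for gx in range(0, w, rect_w):
            if any(diff[y][x]
                   for y in range(gy, min(gy + rect_h, h))
                   for x in range(gx, min(gx + rect_w, w))):
                changed.append((gx, gy))
    return changed
```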
  • the stored data for those rectangles 36 that have changed pixels is copied to another memory (or portion of memory), for example to the CPU memory 23 for compression, if necessary, and the compressed (or uncompressed) data is transmitted to the display device 12.
  • a single larger rectangle 40 may be used, which in the example of Figure 5(c) would also include the four rectangles 34 on either side of the central rectangles 36, to form a single rectangle made up of 4 x 3 of the rectangles 34.
  • the rectangle 40 may be determined by checking each row of the difference map until a changed pixel (a "1") is found. This could be considered the boundary of the changes for that row, with the opposite boundary being determined by the last changed pixel to be determined in each row. The boundary of the rectangle would then be determined according to the maximum row boundaries.
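This row-scanning determination of a single bounding rectangle might look as follows; returning inclusive pixel coordinates is an illustrative choice:

```python
def bounding_rectangle(diff):
    """Scan each row of the difference map for the first and last changed
    pixel and return one rectangle (x0, y0, x1, y1), inclusive, covering
    all changes, as described for rectangle 40. Returns None when no
    pixel changed."""
    x0 = x1 = y0 = y1 = None
    for y, row in enumerate(diff):
        cols = [x for x, v in enumerate(row) if v]
        if not cols:
            continue                       # no changes in this row
        if y0 is None:
            y0 = y                         # first row with a change
        y1 = y                             # last row seen with a change
        x0 = cols[0] if x0 is None else min(x0, cols[0])
        x1 = cols[-1] if x1 is None else max(x1, cols[-1])
    if y0 is None:
        return None
    return (x0, y0, x1, y1)
```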
  • the amount of data that needs to be copied to and stored in the memory 23 is therefore substantially reduced, so that the amount of data transmitted to the display device is also substantially reduced, even if not compressed.
  • whether the stored data to be transmitted will be compressed or not will depend on the amount of data to be transmitted, the available bandwidth, and the speed of the processor carrying out the compression, among other factors.
  • the differencing method described above may depend on the size of the analysis region or regions in comparison to the size of the whole frame. If, for example, the analysis region is comparable in size to the whole frame, then it may not make sense to carry out the differencing, since it may use up more resources than it saves. On the other hand, if no information is available from the CPU as to any analysis region smaller than the whole frame, then the differencing method could be used once, coarsely, with relatively large predetermined rectangle sizes, and then repeated once or several times, with smaller and smaller rectangles, to give a degree of granularity that is appropriate for the resources being used compared to the resources being saved.
  • the (estimated) time that is required to encode the data in those rectangles 36 that have changed pixels may be compared with the (estimated) time needed to encode all the data in the analysis region 27 identified by the CPU in order to decide whether to use the differencing method or not.
  • This may assume linearity of encoding time depending on the size of the rectangles 36 that have changed pixels and the analysis region 27.
  • estimating the times may depend on which differencing state the system is presently in. Rather than switching the differencing state immediately based on the comparison, it may be useful to add inertia to switching states so that the system has to be in a particular state for a minimum period of time.
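A minimal sketch of such state switching with inertia follows; the linear cost model, the per-pixel comparison overhead factor, and the frame-count threshold are assumptions for illustration, not values given in the description:

```python
class DifferencingSwitch:
    """Decide per frame whether to run the differencing method, with the
    inertia described above: a state change is only committed after the
    cost comparison has favoured the other state for min_frames
    consecutive frames."""

    def __init__(self, min_frames=30, differencing=True):
        self.differencing = differencing
        self.min_frames = min_frames
        self._streak = 0

    def update(self, changed_pixels, analysis_area_pixels):
        # Estimated encode times, assumed proportional to pixel counts;
        # differencing adds an assumed per-pixel comparison overhead.
        diff_cost = changed_pixels + 0.1 * analysis_area_pixels
        full_cost = analysis_area_pixels
        preferred = diff_cost < full_cost
        if preferred != self.differencing:
            self._streak += 1              # comparison favours the other state
            if self._streak >= self.min_frames:
                self.differencing = preferred
                self._streak = 0
        else:
            self._streak = 0               # comparison agrees with current state
        return self.differencing
```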

Abstract

A method of managing display data to be sent from a host device (10) for display on a remote display device (12) involves storing two or more consecutive frames of display data to be displayed, comparing a group of one or more pixels of display data in an analysis area (27) in the current frame with a corresponding group of a previous frame to determine which groups have changed, storing an indication of which groups of one or more pixels have changed, determining a region (34) that includes at least one group of one or more pixels (29) that has changed, storing the display data for the region (34) within which at least one group of one or more pixels (29) has changed, and transmitting to the remote display device the display data only for the region (34) within which at least one group of one or more pixels has changed.

Description

Managing Display Data for Display
The present invention relates to methods and apparatus for managing display data, and more particularly, though not exclusively, for managing display data to be sent from a host device for display at a display device.
In desktop computing, it is now common to use more than one display device. Traditionally, a user would have a computer with a single display device attached, but now it is possible to have more than one display device attached to the computer, which increases the usable area for the worker. For example, International Patent Application Publication WO 2007/020408 discloses a display system which comprises a plurality of display devices, each displaying respectively an image, a data processing device connected to each display device and controlling the image displayed by each display device, and a user interface device connected to the data processing device. Connecting multiple display devices to a computer is a proven method for improving productivity.
The connection of an additional display device to a computer presents a number of problems. In general, a computer will be provided with only one video output such as a VGA-out connection. One method by which a display device can be added to a computer is by adding an additional graphics card to the internal components of the computer. The additional graphics card will provide an additional video output which will allow the display device to be connected to the computer and driven by that computer.
However, this solution is relatively expensive and is not suitable for many nontechnical users of computers.
An alternative method of connecting a display device is to connect the display device to a USB socket on the computer, as all modern computers are provided with multiple USB sockets. This provides a simple connection topology, but requires additional hardware and software to be present, as in general, USB has a bandwidth that makes the provision of a good quality video output a non-trivial task.
As display technologies improve and the user's desire for display quality increases, the requirements placed upon the display control device that is receiving the incoming display data will correspondingly increase, as the amount of display data will increase proportionally. It is obviously desirable that any new display control devices can handle higher definition video inputs and higher resolution display devices.
Apart from display devices connected to a desktop computer via a USB connection, it is becoming more and more common for the display data to be generated by a host device, whether a desktop computer or a mobile computing device such as a laptop computer, netbook, tablet, or mobile phone, and for display devices to be connected to the host device. The display devices may be connected with a wired connection or wirelessly, either directly, or via a network, again, either wired or wireless, or, of course, a mixture of both.
Often, the host device will include an integral display, as well as sending display data to be displayed on a remote display device, which includes a display device connected directly via a general-purpose data transmission medium (such as a USB connection) to the host device. In such cases, the display data will be stored in memory in the host device for direct display on the integral display, and will also be compressed for transmission to the remote display device.
Display data can be still or moving image data, e.g. pictures or movies, graphical data, text or a mixture of these or other data to be displayed. Often, the display data is generated by a graphics processing unit (GPU), which is dedicated to decoding (if necessary) compressed images, for example those compressed by standard formats such as JPEG or MPEG, and rendering the complete display frames, including any mixture of images, texts and graphics, that is to be actually displayed into a graphics memory. The graphics memory will usually store two or more consecutive frames, which are then sent as complete frames to a frame buffer before being displayed.
If such frames are to be sent over a wired or wireless connection to a remote display device, it will be apparent that, although the display data could be sent frame by frame as stored in the graphics memory to a remote frame buffer to be displayed at the remote display device, because of bandwidth and other resource restrictions it is common for the display data to be compressed using an encoding algorithm prior to being transmitted. In order to be compressed, each frame is copied from the graphics memory to a separate memory (or a separate part of the same memory), where the display data in the frame can be operated on to be compressed; the compressed frame is then stored and transmitted. Although this technique reduces the amount of display data being transmitted, since the transmitted display data is compressed, it would be advantageous to provide further techniques to reduce the amount of data that needs to be transmitted.
Therefore, aspects and examples of the invention are set out in the claims and address at least a part of the above-described problem.
Examples of the invention may be implemented in software, middleware, firmware or hardware or any combination thereof. Embodiments of the invention comprise computer program products comprising program instructions to program a processor to perform one or more of the methods described herein. Such products may be provided on computer-readable storage media or in the form of a computer-readable signal for transmission over a network. Embodiments of the invention provide computer-readable storage media and computer-readable signals carrying data structures, media data files or databases according to any of those described herein.
Apparatus aspects may be applied to method aspects and vice versa. The skilled reader will appreciate that apparatus embodiments may be adapted to implement features of method embodiments and that one or more features of any of the embodiments described herein, whether defined in the body of the description or in the claims, may be independently combined with any of the other embodiments described herein.
In a first aspect the invention provides a method of managing display data to be sent from a host device for display on a remote display device, the method comprising: maintaining, in a first memory, two or more consecutive frames of display data to be displayed;
comparing a group of one or more pixels of display data in an analysis area in the current frame with a corresponding group of one or more pixels of display data in a corresponding analysis area of a previous frame to determine which groups have changed from the previous frame to the current frame;
storing an indication of which groups of one or more pixels have changed;
dividing the area to be analysed into a plurality of regions of a predetermined number of pixels;
determining, for each region, whether it includes at least one group of one or more pixels that has changed;
storing the display data for those regions within which at least one group of one or more pixels has changed in a second memory; and
transmitting the display data only for those regions within which at least one group of one or more pixels has changed to a remote display device.
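By way of illustration only, the steps of this aspect can be sketched as follows. This is a minimal sketch, not the claimed implementation: the function and parameter names are invented for this example, a group of one or more pixels is taken to be a single pixel, frames are plain 2-D arrays, and a Python dictionary stands in for the second memory.

```python
def regions_to_transmit(prev, curr, region):
    """Compare the previous and current frames, divide the analysis area
    into regions of region x region pixels, and return the display data
    only for regions containing at least one changed pixel."""
    h, w = len(curr), len(curr[0])
    out = {}  # stands in for the "second memory": region origin -> data
    for ry in range(0, h, region):
        for rx in range(0, w, region):
            rows = range(ry, min(ry + region, h))
            cols = range(rx, min(rx + region, w))
            # A region is stored only if at least one pixel in it changed.
            if any(prev[r][c] != curr[r][c] for r in rows for c in cols):
                out[(rx, ry)] = [[curr[r][c] for c in cols] for r in rows]
    return out
```

Only the returned regions would then be (optionally compressed and) transmitted to the remote display device.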
Preferably, the method further comprises compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
The first memory and the second memory may comprise portions of a single memory or may be in separate memories.
The analysis area may comprise the complete frame or may comprise only a portion of the frame. The analysis area may be determined according to whether information is available that one or more portions of the frame have been changed. For example, if information is available indicating that only a small portion of the current frame has been changed, then only that portion may be used as the analysis area. On the other hand, if no information is available, then the whole frame may be used as the analysis area.
In one embodiment, the method further comprises determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area.
If information indicating the size of the portion that has been changed is available, then the analysis area may be determined based on the size of the changed portion. For example, if the size of the changed portion is more than, or even approaching, half the size of the frame, then it may be preferable to determine the analysis area as the whole frame, depending on the efficiency of the comparison step. If information is available that indicates that more than one portion of the current frame has changed, then the determination of whether to consider the whole frame as the analysis area may be based on an aggregate of the sizes of all the changed portions.
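The size test described in this passage might be sketched as follows. The one-half threshold is taken from the example above; the function name, the hint representation as (width, height) pairs, and the string return values are illustrative assumptions.

```python
def choose_analysis_area(changed_portions, frame_area):
    """Pick the analysis area from CPU hints: if the aggregate size of
    the hinted changed portions exceeds half the frame, analyse the
    whole frame; with no hints, the whole frame must be analysed."""
    if not changed_portions:
        return "whole frame"
    # Aggregate the sizes of all hinted changed portions.
    total = sum(w * h for w, h in changed_portions)
    return "whole frame" if total > frame_area / 2 else "hinted portions"
```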
The group of one or more pixels may comprise a single pixel, or may comprise a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels. Each region may be 32 x 32, 64 x 64 or any other appropriate number of pixels. Furthermore, both the groups and the regions may be rectangular, rather than square shaped.
It is also possible for the steps of comparing, storing an indication of which groups have changed, dividing, determining, and storing the display data, to be repeated where the regions in which at least one group of one or more pixels within the region has changed form the analysis area for the repeated steps and the predetermined number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps. Such repetition could be carried out several times, depending on the granularity required to reduce the number of pixels of the regions in which display data has changed.
According to another aspect, the invention provides a method of managing display data to be sent from a host device for display on a remote display device, the method comprising:
maintaining two or more consecutive frames of display data to be displayed in a first memory;
comparing a group of one or more pixels of display data in an analysis area in the current frame with a corresponding group of one or more pixels of display data in a corresponding analysis area of a previous frame to determine which groups have changed from the previous frame to the current frame;
storing an indication of which groups of one or more pixels have changed;
determining a region of pixels that includes at least one group of one or more pixels that has changed;
storing the display data for the region in which at least one group of one or more pixels within the region has changed within a second memory; and
transmitting the display data only for the region in which at least one group of one or more pixels within the region has changed to a remote display device.
Preferably, the region is of a predetermined number of pixels and the region is determined by analysing the stored indication of which groups of one or more pixels have changed to determine a first group of one or more pixels that has changed in a row of groups of one or more pixels and designating a first boundary of the region so as to include the first group of one or more pixels.
Alternatively, the region may be of a size based on a determination of a last group of one or more pixels that has changed in a row of groups of one or more pixels and designating a second boundary of the region so as to include the last group of one or more pixels. Other boundaries of the region may be determined analogously from other rows of groups of one or more pixels. In this way, a region of ad hoc size may be determined according to where there are groups of one or more pixels that have changed.
Again, preferably, the method further comprises compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
The first memory and the second memory may comprise portions of a single memory or may be in separate memories.
The analysis area may comprise the complete frame or may comprise only a portion of the frame. The analysis area may be determined according to whether information is available that one or more portions of the frame have been changed. For example, if information is available indicating that only a small portion of the current frame has been changed, then only that portion may be used as the analysis area. On the other hand, if no information is available, then the whole frame may be used as the analysis area.
In one embodiment, the method further comprises determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area. If information indicating the size of the portion that has been changed is available, then the analysis area may be determined based on the size of the changed area. For example, if the size of the changed area is more than, or even approaching, half the size of the frame, then it may be preferable to determine the analysis area as the whole frame, depending on the efficiency of the comparison step. If information is available that indicates that more than one portion of the current frame has changed, then the determination of whether to consider the whole frame as the analysis area may be based on an aggregate of the sizes of all the changed portions.
The group of one or more pixels may comprise a single pixel, or may comprise a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels. Each region may be of predetermined size of 32 x 32, 64 x 64 or any other appropriate number of pixels. Furthermore, both the groups and the regions may be rectangular, rather than square shaped.
It is also possible for the steps of comparing, storing an indication of which groups have changed, determining, and storing the display data, to be repeated where the regions in which at least one group of one or more pixels within the region has changed form the analysis area for the repeated steps, and the number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps. Such repetition could be carried out several times, depending on the granularity required to reduce the number of pixels of the regions in which display data has changed.
Embodiments of the invention will now be described in greater detail, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a schematic diagram of components of a display system;
Figure 2 shows a schematic diagram of elements of the host device used in the display system of Figure 1;
Figure 3 shows a schematic diagram of one embodiment of a frame to be displayed by the display system of Figure 1 ;
Figure 4 shows a schematic diagram of a differencing method that may be used by the host device of Figure 2;
Figure 5 shows a schematic diagram of how the differencing method of Figure 4 may be used on the frame of Figure 3.
An embodiment of a display system is shown in Figure 1. The system comprises a host processing device 10, a display device 12 and user interface devices 14. The user interface devices are a keyboard 14a and a mouse 14b. The system shown in Figure 1 is a standard desktop computer with a display device 12, composed of discrete, locally located components, but it could equally be a device such as a laptop computer or a suitably enabled handheld device such as a mobile phone or PDA (personal digital assistant), in each case using an additional display. Similarly, the system may comprise part of a networked or mainframe computing system, in which case the processing device 10 may be located remotely from the user input devices 14 and the display device 12, or may indeed have its function distributed amongst separate devices.
The display device 12 shows images, and the display of the images is controlled by the processing device 10. One or more applications are running on the processing device 10 and these are represented to the user by corresponding application windows, with which the user can interact in a conventional manner. The user can control the movement of a cursor about the images shown on the display device 12 using the computer mouse 14b, again in a totally conventional manner. The user can perform actions with respect to any running application via the user interface device 14 and these actions result in corresponding changes in the images displayed on the display device 12.
The operating system run by the processing device 10 uses virtual desktops to manage one or multiple display devices 12. A physical display device 12 is represented by a frame buffer that contains everything currently shown on that display device 12. To allow a secondary display device 12 to be connected to a USB port on the processing device 10, rather than to a standard VGA port, the processing device 10 connects to the secondary display device 12 via a display control device 16. The display control device 16 is connected to the processing device 10 via a standard USB connection and appears to the processing device 10 as a USB-connected device. Any communications between the processing device 10 and the display control device 16 are carried out under the control of a USB driver specific to the display control device 16. Such devices allow the connection of the secondary display device 12 to the processing device 10 without the need for any hardware changes to the processing device 10.
The display control device 16 connects to the display device 12 via a standard VGA or HDMI connection, and the display device 12 is a conventional display device 12 which requires no adjustment to operate in the display system shown in Figure 1. As far as the display device 12 is concerned, it could be connected directly to the graphics card of a processing device; it is unaware that the graphical data displayed by the display device 12 has actually been first sent via a USB connection to an intermediate component, the display control device 16. Multiple additional display devices 12 can be connected to the processing device 10 in this way, as long as suitable USB slots are available on the processing device 10.
The display control device 16 is external to the processing device 10 and is not a graphics card. It is a dedicated piece of hardware that receives graphical data via the USB connection from the processing device 10 and transforms that graphics data into a VGA or HDMI format that will be understood by the display device 12. In topological terms USB and VGA are only examples of data standards that can be used to connect the additional display device 12 to the processing device 10. The general principle is that a general-purpose data network (such as USB or Ethernet) connects the processing device 10 to the display control device 16 and a display-specific data standard (such as VGA, HDMI or DVI) is used on the connection from the display control device 16 to the display device 12.
As shown in Figure 2, the host processing device 10 includes a Graphics Processing Unit (GPU) 20 and a Central Processing Unit (CPU) 22. It will, of course, be appreciated that the host processing device 10 will have many other components, which are not illustrated here. As mentioned above, the GPU 20 is generally used to decode (if necessary) compressed images, for example those compressed in standard formats such as JPEG or MPEG, and to render the complete display frame 25, including any mixture of images, text and graphics that is to be actually displayed, into a graphics memory 21. The graphics memory 21 will usually store data for two or more consecutive frames 25. The frame data is then sent to the CPU 22, where it will be dealt with according to the needs of the host device 10. This may include encoding (compression), as necessary, depending on whether it is to be transmitted over a bandwidth-limited connection to the display device 12. Accordingly, the CPU 22 will also have a memory 23, in which the frame data is stored before, during and after encoding and prior to transmittal to the display device 12. The host device 10 will also have a communication interface 24 for connection to the display device 12.
Figure 3 illustrates schematically the frame 25. In many cases, although the complete rendered frame is stored in the graphics memory 21, from one frame (in time) to the next, only a small portion may actually change. For example, the whole frame may include a picture 26 and a portion 27 of text which is being edited. In this example, the picture is unchanged and only a small piece 28 of the portion 27 of text actually changes. Therefore, it would be advantageous to reduce the amount of data that needs to be sent to the display device 12, whether compressed or not. It would also reduce load on the CPU 22 if a smaller amount of data than the whole frame needed to be stored and compressed by the CPU 22. Accordingly, it is envisioned that a processing unit, which may conveniently be the GPU 20, can determine which parts of the frame have actually changed so that only those parts are then stored by the CPU, compressed by the CPU (if necessary) and transmitted to the display device.
The GPU 20 (or other processing unit) therefore determines a region of the frame that is stored in the graphics memory 21 for analysis. The analysis region may be the whole of the frame, or, if hints are available that less than the whole frame is being changed, the analysis region could be smaller than the whole frame. For example, although the CPU 22 may not know which piece of text actually changes from one frame to the next, it may know that the portion 27 of text is being edited (for example because the text editing program is being executed) and the CPU may therefore be able to provide the GPU with such information. In this case, therefore, the GPU will know that it only needs to analyse the region 27.
Figure 4 shows schematically how the GPU analyses the data in the analysis region 27 from one frame to the next. Figure 4(a) shows the analysis region 27 of the frame 25 at a first time with pixels 29 marked as "X" in three particular locations being activated. Figure 4(b) shows the same analysis region 27 of the frame at a second, later time, with one of the previously activated pixels being deactivated and one, new pixel being activated. The GPU compares the pixels in the previous frame (Figure 4(a)) with that of the current frame (Figure 4(b)) and generates a difference map 30, as shown in Figure 4(c), in which the pixels that have changed are marked with a "*". As can be seen, only the two pixels that have become deactivated or activated from the previous frame to the current frame are so marked. All other pixels, whether activated or not, are not marked, as they have not changed. It will be apparent that the GPU can perform this analysis either on single pixels or groups or tiles of pixels, depending on its capabilities and on the "granularity" required.
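The comparison of Figure 4 can be sketched as follows, with 1 standing in for the "*" of the difference map (a sketch only; frames are plain 2-D arrays and the function name is invented for this example):

```python
def difference_map(prev, curr):
    """Build the Figure 4(c) difference map: 1 where a pixel's value
    differs between the previous and current frame, 0 elsewhere.
    Whether a pixel was activated or deactivated does not matter."""
    return [[1 if p != c else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

The same comparison can be applied to groups or tiles of pixels rather than single pixels, depending on the granularity required.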
Figure 5 shows several further steps in the analysis after the difference map has been determined. In this case, Figure 5(a) shows a difference map of an analysis region 27 in which a triangle 32 has appeared in the middle of the analysis region 27. As can be seen, the pixels forming the triangle 32 are indicated as "1" and the pixels that have not changed are indicated as "0". As shown in Figure 5(b), the difference map is then divided into a grid of rectangles 34 of predetermined size and the rectangles having any changed pixels within them are determined. Alternatively, of course, the rectangles having no changed pixels could be determined. In either case, the result leads to a determination of which of the rectangles have changed pixels, and the data for those rectangles, as indicated in Figure 5(c), is then stored. The stored data for those rectangles 36 that have changed pixels is copied to another memory (or portion of memory), for example to the CPU memory 23 for compression, if necessary, and the compressed (or uncompressed) data is transmitted to the display device 12.
Alternatively, instead of only those rectangles 36 that have changed pixels being copied, a single larger rectangle 40 may be used, which in the example of Figure 5(c) would also include the four rectangles 34 on either side of the central rectangles 36, to form a single rectangle made up of 4 x 3 of the rectangles 34.
In another embodiment, instead of dividing the difference map into the rectangles 34, the rectangle 40 may be determined by checking each row of the difference map until a changed pixel (a "1") is found. This could be considered the boundary of the changes for that row, with the opposite boundary being determined by the last changed pixel to be determined in each row. The boundary of the rectangle would then be determined according to the maximum row boundaries.
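The row-checking embodiment above might be sketched as follows. This is a sketch under assumptions: the difference map is a 2-D array of 0/1 values, and the function name and inclusive-coordinate return convention are invented for this example.

```python
def bounding_rectangle(diff_map):
    """Scan each row of the difference map for its first and last
    changed entries and take the extremes over all rows as the bounds
    of the single rectangle 40. Returns (x0, y0, x1, y1) inclusive,
    or None if nothing changed."""
    x0 = y0 = None
    x1 = y1 = 0
    for y, row in enumerate(diff_map):
        cols = [x for x, v in enumerate(row) if v]
        if not cols:
            continue
        if y0 is None:
            y0 = y          # first row containing a change
        y1 = y              # last row containing a change so far
        x0 = cols[0] if x0 is None else min(x0, cols[0])
        x1 = max(x1, cols[-1])
    return None if y0 is None else (x0, y0, x1, y1)
```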
As will be apparent, therefore, the amount of data that needs to be copied to and stored in the memory 23 is therefore substantially reduced, so that the amount of data transmitted to the display device is also substantially reduced, even if not compressed. In practice, whether the stored data to be transmitted will be compressed or not will depend on the amount of data to be transmitted, the available bandwidth, and the speed of the processor carrying out the compression, among other factors.
Indeed, whether the differencing method described above is used at all may depend on the size of the analysis region or regions in comparison to the size of the whole frame. If, for example, the analysis region is comparable in size to the whole frame, then it may not make sense to carry out the differencing since it may use up more resources than it saves. On the other hand, if no information is available from the CPU as to any analysis region smaller than the whole frame, then the differencing method could be used once, coarsely, with relatively large predetermined rectangle sizes and then repeated once or several times, with smaller and smaller rectangles, to give a degree of granularity that is appropriate for the resources being used compared to the resources being saved.
In determining whether to use the differencing method, for example, the (estimated) time that is required to encode the data in those rectangles 36 that have changed pixels may be compared with the (estimated) time needed to encode all the data in the analysis region 27 identified by the CPU in order to decide whether to use the differencing method or not. This may assume linearity of encoding time depending on the size of the rectangles 36 that have changed pixels and the analysis region 27. Furthermore, estimating the times may depend on which differencing state the system is presently in. Rather than switching the differencing state immediately based on the comparison, it may be useful to add inertia to switching states so that the system has to be in a particular state for a minimum period of time.
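The state inertia described here might look like the following. The linear per-pixel cost model is taken from the linearity assumption above; the class name, the minimum hold of 10 frames, and the cost constant are illustrative assumptions.

```python
class DifferencingGovernor:
    """Choose between differencing and full-area encoding by comparing
    estimated encode times, but hold each state for a minimum number of
    frames before allowing a switch (inertia)."""

    MIN_HOLD = 10  # assumed minimum frames in a state before switching

    def __init__(self, ns_per_pixel):
        self.ns_per_pixel = ns_per_pixel  # assumed linear encode cost
        self.state = "full"
        self.frames_in_state = 0

    def update(self, changed_pixels, analysis_pixels):
        # Estimated times, assuming linearity in the number of pixels.
        diff_time = changed_pixels * self.ns_per_pixel
        full_time = analysis_pixels * self.ns_per_pixel
        wanted = "differencing" if diff_time < full_time else "full"
        self.frames_in_state += 1
        # Switch only after the current state has been held long enough.
        if wanted != self.state and self.frames_in_state >= self.MIN_HOLD:
            self.state, self.frames_in_state = wanted, 0
        return self.state
```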
Other aspects that can be considered when determining whether to use the differencing method may include:
• Measure comparative size rather than comparative encoding time;
• Count the number of differences and use differencing if the number is below a certain threshold, perhaps by carrying out the first part of the above described algorithm and then checking before doing the second part;
• Look at the location of differences and use differencing to create multiple spaced rectangles that have changed pixels if the differences are widely located, perhaps by carrying out the first part of the above described algorithm and then checking before doing the second part;
• Look at the location of differences and use differencing to create a single rectangle that has changed pixels if the differences are located close together, perhaps by carrying out the first part of the above described algorithm and then checking before doing the second part.
Although the comparison of the analysis region of the previous and current frames has been described as being carried out on each pixel, it will be apparent that it could be carried out on groups, or tiles, of pixels, which may, for example be 4 x 4 pixels in size. The GPU may well carry out the comparison on more than one pixel or tile at a time, so that the comparison is carried out in parallel. Furthermore, it may be possible to carry out the comparison on averages of pixel values in groups or tiles, or on some other function of pixel values for the groups or tiles, such as the sums of the pixel values, or hashes of the individual pixel values or groups of pixels. A comparison of the pixel values for different colour channels may be made. Although several embodiments have been described in detail above, it will be appreciated that various changes, modifications and improvements can be made by a person skilled in the art without departing from the scope of the present invention as defined in the claims.
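The hash-based tile comparison mentioned above might be sketched as follows. This is a sketch under assumptions: tiles are square arrays of 8-bit pixel values, and the use of SHA-256 (and the function names) is an illustrative choice; any suitable digest or checksum could serve.

```python
import hashlib

def tile_hash(frame, x, y, size):
    """Digest of a size x size tile of 8-bit pixel values; comparing
    digests of corresponding tiles stands in for a pixel-by-pixel
    comparison."""
    raw = bytes(frame[r][c] for r in range(y, y + size)
                            for c in range(x, x + size))
    return hashlib.sha256(raw).digest()

def tile_changed(prev, curr, x, y, size):
    # Differing digests imply the tile's pixels have changed.
    return tile_hash(prev, x, y, size) != tile_hash(curr, x, y, size)
```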

Claims

1. A method of managing display data to be sent from a host device for display on a remote display device, the method comprising:
maintaining, in a first memory, two or more consecutive frames of display data to be displayed;
comparing a group of one or more pixels of display data in an analysis area in the current frame with a corresponding group of one or more pixels of display data in a corresponding analysis area of a previous frame to determine which groups have changed from the previous frame to the current frame;
storing an indication of which groups of one or more pixels have changed;
dividing the area to be analysed into a plurality of regions of a predetermined number of pixels;
determining, for each region, whether it includes at least one group of one or more pixels that has changed;
storing the display data for those regions within which at least one group of one or more pixels has changed in a second memory; and
transmitting the display data only for those regions within which at least one group of one or more pixels has changed to a remote display device.
2. A method of managing display data according to claim 1, further comprising compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
3. A method of managing display data according to either claim 1 or claim 2, wherein the first memory and the second memory comprise portions of a single memory.
4. A method of managing display data according to either claim 1 or claim 2, wherein the first memory and the second memory are in separate memories.
5. A method of managing display data according to any preceding claim, wherein the analysis area comprises a whole of the current frame.
6. A method of managing display data according to any one of claims 1 to 4, wherein the analysis area comprises a portion of the current frame.
7. A method of managing display data according to any preceding claim, further comprising determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area.
8. A method of managing display data according to any one of claims 1 to 6, wherein the analysis area is determined according to whether information is available that one or more portions of the frame have been changed.
9. A method of managing display data according to claim 8, wherein, if information is available indicating that only a portion of the current frame has been changed, then only that portion is used as the analysis area.
10. A method of managing display data according to claim 8, wherein, if no information is available, then the whole frame is used as the analysis area.
11. A method of managing display data according to either claim 8 or claim 9, wherein, if information indicating the size of the portion that has been changed is available, then the analysis area may be determined based on the size of the changed portion.
12. A method of managing display data according to claim 11, wherein, if the size of the changed portion is more than half the size of the frame, then the analysis area is determined as the whole frame.
13. A method of managing display data according to claim 11, wherein, if information is available that indicates that more than one portion of the current frame has changed, then the determination of the analysis area is based on an aggregate of the sizes of all the changed portions.
14. A method of managing display data according to any preceding claim, wherein each group of one or more pixels comprises either a single pixel, a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels.
15. A method of managing display data according to any preceding claim, wherein each region comprises 32 x 32, 64 x 64 or any other appropriate number of pixels.
16. A method of managing display data according to any one of claims 1 to 13, wherein one or more of the groups and the regions are rectangular in shape.
17. A method of managing display data according to any preceding claim, wherein the steps of comparing, storing an indication of which groups have changed, dividing, determining, and storing the display data, are repeated, wherein the regions in which at least one group of one or more pixels within the region has changed form the analysis area for the repeated steps, and the predetermined number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps.
18. A method of managing display data according to claim 17, wherein the repetition is carried out several times.
19. A method of managing display data to be sent from a host device for display on a remote display device, the method comprising:
maintaining two or more consecutive frames of display data to be displayed in a first memory;
comparing a group of one or more pixels of display data in an analysis area in the current frame with a corresponding group of one or more pixels of display data in a corresponding analysis area of a previous frame to determine which groups have changed from the previous frame to the current frame;
storing an indication of which groups of one or more pixels have changed;
determining a region of pixels that includes at least one group of one or more pixels that has changed;
storing the display data for the region in which at least one group of one or more pixels within the region has changed within a second memory; and
transmitting the display data only for the region in which at least one group of one or more pixels within the region has changed to a remote display device.
20. A method of managing display data according to claim 19, wherein the region is of a predetermined number of pixels, and determining the region comprises:
analysing the stored indication of which groups of one or more pixels have changed to determine a first group of one or more pixels that has changed in a row of groups of one or more pixels; and
designating a first boundary of the region so as to include the first group of one or more pixels.
21. A method of managing display data according to claim 19, wherein the region is of an ad hoc size and determining the region comprises:
analysing the stored indication of which groups of one or more pixels have changed to determine a first group of one or more pixels that has changed in a row of groups of one or more pixels;
designating a first boundary of the region so as to include the first group of one or more pixels;
determining a last group of one or more pixels that has changed in a row of groups of one or more pixels; and
designating a second boundary of the region so as to include the last group of one or more pixels.
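For illustration only, and not part of the claimed subject matter: a sketch of the ad hoc boundary determination of claim 21, operating on one row of per-group change flags (the flag-list representation and `group_width` parameter are assumptions for the example):

```python
def region_bounds(changed_row, group_width):
    """Given a row of per-group change flags, return the pixel boundaries
    of an ad hoc region spanning the first changed group through the last
    changed group, or None if nothing in the row has changed."""
    changed_indices = [i for i, flag in enumerate(changed_row) if flag]
    if not changed_indices:
        return None
    first, last = changed_indices[0], changed_indices[-1]
    # The first boundary includes the first changed group; the second
    # boundary includes the last changed group.
    return first * group_width, (last + 1) * group_width
```

Because both boundaries follow the changed groups, the region shrinks to fit the actual change, unlike the predetermined-size region of claim 20.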
22. A method of managing display data according to any one of claims 19 to 21, further comprising compressing the display data for those regions within which at least one group of one or more pixels has changed prior to transmitting the display data.
23. A method of managing display data according to any one of claims 19 to 22, wherein the first memory and the second memory comprise portions of a single memory.
24. A method of managing display data according to any one of claims 19 to 22, wherein the first memory and the second memory are separate memories.
25. A method of managing display data according to any one of claims 19 to 24, wherein the analysis area comprises the whole of the current frame.
26. A method of managing display data according to any one of claims 19 to 25, wherein the analysis area comprises a portion of the current frame.
27. A method of managing display data according to any one of claims 19 to 26, further comprising determining the analysis area of the current frame by receiving, from a central processing unit of the host device, information indicating that a portion of the current frame has been changed and using that portion as the analysis area.
28. A method of managing display data according to any one of claims 19 to 26, wherein the analysis area is determined according to whether information is available that one or more portions of the frame have been changed.
29. A method of managing display data according to claim 28, wherein, if information is available indicating that only a portion of the current frame has been changed, then only that portion is used as the analysis area.
30. A method of managing display data according to claim 28, wherein, if no information is available, then the whole frame is used as the analysis area.
31. A method of managing display data according to either claim 28 or claim 29, wherein, if information indicating the size of the portion that has been changed is available, then the analysis area may be determined based on the size of the changed portion.
32. A method of managing display data according to claim 31, wherein, if the size of the changed portion is more than half the size of the frame, then the analysis area is determined as the whole frame.
33. A method of managing display data according to claim 31, wherein, if information is available that indicates that more than one portion of the current frame has changed, then the determination of the analysis area is based on an aggregate of the sizes of all the changed portions.
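For illustration only, and not part of the claimed subject matter: the decision logic of claims 28 to 33 could be sketched as below, with sizes abstracted to pixel counts (the function name, the half-frame threshold's placement, and the pixel-count abstraction are all assumptions for the example):

```python
def choose_analysis_area(frame_size, changed_portion_sizes):
    """Select the size of the analysis area based on available change
    hints, following the fallback logic of claims 28-33."""
    if not changed_portion_sizes:
        # Claim 30: no information available, analyse the whole frame.
        return frame_size
    # Claim 33: aggregate the sizes of all reported changed portions.
    total = sum(changed_portion_sizes)
    if total > frame_size / 2:
        # Claim 32: more than half the frame changed, so analysing the
        # whole frame is no more expensive than tracking the portions.
        return frame_size
    # Claim 29: only the changed portion(s) form the analysis area.
    return total
```

The half-frame threshold reflects the trade-off in claim 32: once most of the frame has changed, restricting analysis to the hinted portions saves little.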
34. A method of managing display data according to any one of claims 19 to 33, wherein each group of one or more pixels comprises a single pixel or a tile of 2 x 2, 4 x 4 or any other appropriate number of pixels.
35. A method of managing display data according to any one of claims 19 to 34, wherein each region comprises 32 x 32, 64 x 64 or any other appropriate number of pixels.
36. A method of managing display data according to any one of claims 19 to 33, wherein one or more of the groups and the regions are rectangular in shape.
37. A method of managing display data according to any one of claims 19 to 36, wherein the steps of comparing, storing an indication of which groups have changed, dividing, determining, and storing the display data, are repeated, wherein the regions in which at least one group of one or more pixels within the region has changed form the analysis area for the repeated steps, and the predetermined number of pixels of the regions in the repetition is smaller than the number of pixels of the regions in the initial cycle of steps.
38. A method of managing display data according to claim 37, wherein the repetition is carried out several times.
39. A host device configured to perform a method according to any preceding claim.
40. A computer readable medium including executable instructions which, when executed in a processing system, cause the processing system to perform a method according to any one of claims 1 to 38.
41. A display system comprising:
a host device according to claim 39;
a display device for receiving the display data transmitted from the host device and configured to display the received display data.
42. A method for generating difference data comprising:
receiving display update information, the display update information specifying at least one changed display area;
receiving pixel data associated with the changed display area or areas;
performing a comparison on the pixel data using values from previous pixel data; and
generating new display update information, the new display update information specifying a changed display area of the same size as or smaller than the original changed display area, containing one or more regions of change.
43. A method according to claim 42, wherein the steps are performed repeatedly on progressively smaller changed display areas.
44. A method according to either claim 42 or claim 43, wherein the step of performing a comparison between the previous frame data and the current frame data is performed by parallel processing.
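For illustration only, and not part of the claimed subject matter: one way to realise the repeated refinement of claims 42 and 43 is a recursive quadrant split of the reported changed area, keeping only sub-areas whose pixels actually differ (the quadrant-splitting strategy and `min_size` cut-off are assumptions, not taken from the claims):

```python
def refine(prev, curr, x, y, w, h, min_size=2):
    """Recursively split a changed display area (x, y, w, h) into
    quadrants, keeping only sub-areas whose pixels actually differ,
    until areas reach min_size. Frames are 2D lists indexed [y][x]."""
    differs = any(prev[j][i] != curr[j][i]
                  for j in range(y, y + h) for i in range(x, x + w))
    if not differs:
        return []                      # no change anywhere in this area
    if w <= min_size or h <= min_size:
        return [(x, y, w, h)]          # smallest granularity reached
    hw, hh = w // 2, h // 2
    quadrants = [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                 (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]
    regions = []
    for qx, qy, qw, qh in quadrants:
        regions.extend(refine(prev, curr, qx, qy, qw, qh, min_size))
    return regions
```

Each level of recursion generates "new display update information" covering a same-size or smaller changed area, as claim 42 requires; the independence of the four quadrant comparisons is what makes the parallel processing of claim 44 natural.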
45. A device configured to carry out the method of any one of claims 42 to 44.
46. A device according to claim 45, wherein the device comprises a graphics coprocessor.
PCT/GB2015/052023 2014-07-31 2015-07-14 Managing display data for display WO2016016607A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1413622.0 2014-07-31
GB1413622.0A GB2528870A (en) 2014-07-31 2014-07-31 Managing display data for display

Publications (1)

Publication Number Publication Date
WO2016016607A1 true WO2016016607A1 (en) 2016-02-04

Family

ID=51587566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/052023 WO2016016607A1 (en) 2014-07-31 2015-07-14 Managing display data for display

Country Status (2)

Country Link
GB (1) GB2528870A (en)
WO (1) WO2016016607A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007057053A1 (en) * 2005-11-21 2007-05-24 Agilent Technologies, Inc. Conditional updating of image data in a memory buffer
EP2244183A2 (en) * 2009-04-23 2010-10-27 VMWare, Inc. Method and system for copying a framebuffer for transmission to a remote display
US20120268480A1 (en) * 2011-04-04 2012-10-25 Arm Limited Methods of and apparatus for displaying windows on a display

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US7684483B2 (en) * 2002-08-29 2010-03-23 Raritan Americas, Inc. Method and apparatus for digitizing and compressing remote video signals
US7667707B1 (en) * 2005-05-05 2010-02-23 Digital Display Innovations, Llc Computer system for supporting multiple remote displays
US20110157001A1 (en) * 2009-07-09 2011-06-30 Nokia Corporation Method and apparatus for display framebuffer processing
GB2486434B (en) * 2010-12-14 2014-05-07 Displaylink Uk Ltd Overdriving pixels in a display system


Cited By (6)

Publication number Priority date Publication date Assignee Title
US11151749B2 (en) 2016-06-17 2021-10-19 Immersive Robotics Pty Ltd. Image compression method and apparatus
GB2559550A (en) * 2017-02-03 2018-08-15 Realvnc Ltd Method and system for remote controlling and viewing a computing device
US11150857B2 (en) 2017-02-08 2021-10-19 Immersive Robotics Pty Ltd Antenna control for mobile device communication
US11429337B2 (en) 2017-02-08 2022-08-30 Immersive Robotics Pty Ltd Displaying content to users in a multiplayer venue
US11153604B2 (en) 2017-11-21 2021-10-19 Immersive Robotics Pty Ltd Image compression for digital reality
US11553187B2 (en) 2017-11-21 2023-01-10 Immersive Robotics Pty Ltd Frequency component selection for image compression

Also Published As

Publication number Publication date
GB2528870A (en) 2016-02-10
GB201413622D0 (en) 2014-09-17

Similar Documents

Publication Publication Date Title
JP4405419B2 (en) Screen transmitter
WO2016016607A1 (en) Managing display data for display
US8760366B2 (en) Method and system for remote computing
US8995763B2 (en) Systems and methods for determining compression methods to use for an image
US9947298B2 (en) Variable compression management of memory for storing display data
JP2007241736A (en) Server device and client device for remote desktop system
GB2484736A (en) Connecting a display device via USB interface
US20160125568A1 (en) Management of memory for storing display data
CN113368492A (en) Rendering method and device
US20130002521A1 (en) Screen relay device, screen relay system, and computer -readable storage medium
US20170371614A1 (en) Method, apparatus, and storage medium
KR102245137B1 (en) Apparatus and method for decompressing rendering data and recording medium thereof
US20160005379A1 (en) Image Generation
US20120218292A1 (en) System and method for multistage optimized jpeg output
TWI691200B (en) Systems and methods for deferred post-processes in video encoding
US20150281699A1 (en) Information processing device and method
US9571600B2 (en) Relay device, relay method and thin client system
CN107318021B (en) Data processing method and system for remote display
US9626330B2 (en) Information processing apparatus, and information processing method
US20150106733A1 (en) Terminal device, thin client system, display method, and recording medium
CN107318020B (en) Data processing method and system for remote display
US11557018B2 (en) Image processing apparatus and computer-readable recording medium storing screen transfer program
US9584752B2 (en) System, information processing apparatus, and image processing method
KR101473463B1 (en) System for providing terminal service by providing compressed display information in server based computing system of terminal environment and method thereof
JP5701964B2 (en) Screen relay device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15741258

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15741258

Country of ref document: EP

Kind code of ref document: A1