GB2559550A - Method and system for remote controlling and viewing a computing device - Google Patents
- Publication number
- GB2559550A GB2559550A GB1701813.6A GB201701813A GB2559550A GB 2559550 A GB2559550 A GB 2559550A GB 201701813 A GB201701813 A GB 201701813A GB 2559550 A GB2559550 A GB 2559550A
- Authority
- GB
- United Kingdom
- Prior art keywords
- display data
- computing device
- processor
- memory
- block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
- G06F3/1454—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
- G06F3/1462—Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay with means for detecting differences between the image stored in the host and the images displayed on the remote displays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/452—Remote windowing, e.g. X-Window System, desktop virtualisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Information Transfer Between Computers (AREA)
Abstract
Transmitting data for display on a remote computing device by comparing captured display data with previously captured display data stored in memory. After receiving a request for display data from the remote computing device, a second processor of the computing device captures display data that is displayed on the display and determines that display data in at least one region of the display has changed; the first processor accesses the memory to retrieve only display data corresponding to the at least one region; and the first processor encodes and transmits said retrieved display data to the remote computing device. Previous images may be stored in a screenshot buffer and compared block by block to determine which blocks differ; changed blocks are then copied into a reference buffer, and the number of differing blocks is returned to the first processor so that the changed blocks indicated in the map can be sent to the remote device, such as a Raspberry Pi (RTM), via a wired or wireless connection.
Description
(71) Applicant(s): RealVNC Limited (Incorporated in the United Kingdom), Betjeman House, 104 Hills Road, CAMBRIDGE, Cambridgeshire, CB2 1LQ, United Kingdom
(72) Inventor(s): Andrew Wedgbury
(74) Agent and/or Address for Service: Marks & Clerk LLP, 62/68 Hills Road, CAMBRIDGE, CB2 1LA, United Kingdom
(54) Title of the Invention: Method and system for remote controlling and viewing a computing device
Abstract Title: Transmitting display data for regions that change
(56) Documents Cited: WO 2016/016607 A1; WO 2005/114375 A1; US 2012/0075346 A1; US 2010/0111410 A1; US 2009/0079663 A1
(58) Field of Search: INT CL G06F; Other: Online: WPI, Epodoc
[Figure 1 (sheet 1/3): schematic block diagram of the remote control system 100]
[Figure 2 (sheet 2/3): flowchart of the process 200 performed by the CPU of the computing device]
[Figure 3 (sheet 3/3): flowchart of the process 300 performed by the GPU of the computing device]
Application No. GB1701813.6
Date: 1 August 2017
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document: Raspberry Pi
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
METHOD AND SYSTEM FOR
REMOTE CONTROLLING AND VIEWING A COMPUTING DEVICE
Technical Field
The present invention relates to a method and system for remote controlling and/or viewing a computing device.
Background
It is known to use a first computer device to view and control a second computer device using a Virtual Network Computing (VNC) Viewer application running on the first computer device (VNC Viewer) and a VNC Server application running on the second computer device (VNC Server). The contents of the display of the second computer device are duplicated on the first computer device, which is typically remote from the second computer device. The first computer device has an interface mechanism which allows the user to send user input events, such as pressing a physical key on the device, moving the mouse cursor or touching a touch screen input, to the second computer device being controlled. As will be appreciated, the form of data link and the nature of the computer devices can vary depending on the situation in which the system is used.
VNC uses the remote framebuffer (RFB) protocol, in which the VNC Viewer maintains a copy of the VNC Server's screen and requests updates from the VNC Server. The VNC Server responds to each update request with a series of encoded rectangular image regions, which the VNC Viewer applies to its copy of the screen in order to bring it up to date. Once the complete update has been received, the VNC Viewer then requests the next update, and so on.
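The request/response cycle described above can be sketched as follows. This is an illustrative simulation only: the class and method names are hypothetical and not part of the RFB protocol or any VNC library.

```python
# Illustrative sketch of the RFB update cycle: the viewer keeps a copy
# of the server's screen and requests updates; the server replies with
# a series of rectangular regions which the viewer applies to its copy.

class Server:
    def __init__(self, screen):
        # screen: dict mapping a rectangle origin (x, y) to its pixel data
        self.screen = screen

    def get_update(self):
        # Respond to an update request with rectangles
        # (here, simply every rectangle of the screen).
        return list(self.screen.items())

class Viewer:
    def __init__(self):
        self.copy = {}  # local copy of the server's screen

    def request_update(self, server):
        # Apply each received rectangle to the local copy, bringing it
        # up to date; the next update can then be requested.
        for origin, pixels in server.get_update():
            self.copy[origin] = pixels

server = Server({(0, 0): "AAAA", (0, 32): "BBBB"})
viewer = Viewer()
viewer.request_update(server)
```

After one cycle the viewer's copy matches the server's screen, mirroring the protocol behaviour described above.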
It is known that a VNC Server application may run on a processor of a computer device which has limited processor resources. For example, it is known that the Raspbian operating system for the Raspberry Pi computer developed by the Raspberry Pi Foundation is installed with VNC® Server from RealVNC®. Raspberry Pi is a trademark of the Raspberry Pi Foundation.
In this light, the present applicant has recognised that an improved method and system for remote controlling and/or viewing a computing device is required.
Summary
The Raspberry Pi devices are currently based on the Broadcom® BCM2835, BCM2836 or BCM2837 System-on-Chip (SoC), each of which contains an ARM® central processing unit (CPU), a graphics processing unit (GPU) (VideoCore IV scalar/vector processor) and various other components including hardware image and video codecs.
On the Raspberry Pi, it is possible to use the “dispmanx” APIs to capture the image data displayed on a display coupled to the Raspberry Pi at a low level, which includes directly-rendered content which is not captured through X11-based methods. This has the advantage of being able to work without X11 running, for example, when the system is displaying a text-mode console or non-X11 graphical application. The screen capture application programming interface (API) provided by dispmanx is limited to taking a snapshot of the screen into a screenshot buffer residing in GPU-accessible memory, but provides no indication as to what regions of image data displayed on the screen have changed (if any). Therefore, repeated snapshots need to be taken in order to provide continuous updates.
Accessing the screenshot buffer from the ARM processor (where the VNC Server application runs) is a relatively expensive operation, in terms of the length of time taken between making the read request and getting the result (i.e. waiting for the image data stored in the screenshot buffer to be made available in memory readable by the ARM processor). The encoding of the screenshot is also a relatively expensive operation in terms of the number of CPU instructions required to perform the encoding, during which time the CPU is busy. Finally, the sending of the encoded screenshot over a network is expensive due to the elapsed time it takes to send data over the network, which is orders of magnitude slower than moving data around between any sort of memory on the Raspberry Pi.
The simplest approach for the VNC Server device to respond to an update request received from a VNC Viewer device would be to take a snapshot of the screen, transfer this to CPU-accessible memory, and have the VNC Server application encode and transfer the entire screen image to the VNC Viewer device. The inventor has identified that this has the significant problem of continually sending large full-screen updates even when nothing is changing on the server's screen, wasting both CPU resources and network bandwidth.
It is possible to potentially reduce the amount of data encoded and transferred by detecting whether any change has occurred on the screen, and only sending the (entire) updated screen image if it has changed.
One way to approach this that has been identified by the inventor would be to calculate a hash of the screen image that has been copied into CPU-accessible memory, comparing this to a previously calculated hash, and thereby determining whether any part of the screen has changed.
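The hash comparison can be sketched as follows. This is an assumed implementation of the approach described above, not code from any VNC product; the function name and choice of SHA-256 are illustrative.

```python
# Sketch of whole-screen change detection by hashing: the screen image
# copied into CPU-accessible memory is hashed, and the hash is compared
# with that of the previous snapshot.
import hashlib

def screen_changed(snapshot: bytes, prev_hash: str):
    """Return (changed, new_hash) for the given snapshot bytes."""
    new_hash = hashlib.sha256(snapshot).hexdigest()
    return new_hash != prev_hash, new_hash

frame_a = bytes(64)               # a blank "screen"
frame_b = bytes([1]) + bytes(63)  # the same screen with one byte changed

changed_1, h = screen_changed(frame_a, "")  # no previous hash yet
changed_2, _ = screen_changed(frame_a, h)   # identical frame
changed_3, _ = screen_changed(frame_b, h)   # changed frame
```

Note that, as the text observes, this only tells the server *whether* the screen changed; a single changed pixel still forces a full-screen update.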
Another approach identified by the inventor would be to maintain a second copy of the screen image in CPU-accessible memory, and compare this to the new image to detect any changes. This may have the advantage of returning a result more quickly in the case where the screen is changing, since the comparison can stop as soon as a change is detected. The inventor has identified that this approach is still potentially quite wasteful, since even a minor change on the screen will result in the entire screen image being updated, but it performs well in the case of full-screen video, where typically the whole screen is continually changing.
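The early-exit property of the second-copy comparison can be sketched as follows; the logic is an assumed illustration of the approach described above, and the function name is hypothetical.

```python
# Sketch of change detection against a second copy of the screen image:
# walk both images together and stop at the first difference, so a
# changing screen returns a result quickly.

def any_change(new_image: bytes, prev_image: bytes) -> bool:
    # Compare byte by byte; return as soon as a difference is found.
    for a, b in zip(new_image, prev_image):
        if a != b:
            return True
    return False

prev = bytes(1024)
same = bytes(1024)
diff = bytes([7]) + bytes(1023)  # differs in the very first byte
```

When the screen is changing, the comparison above terminates on the first differing byte rather than scanning the whole image, which is the speed advantage the inventor identifies.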
Going further with this approach, the inventor has identified that it is possible to use the second copy of the screen image to determine specific regions of the screen that have changed, thereby reducing the amount of image data that needs to be encoded and transferred to the viewer. However, in practice, this approach has been found to put a significant load on the CPU. This is a particular problem on the lower-powered devices in the Raspberry Pi series, where very little CPU resource is left for the user to run applications.
Additionally, the inventor has identified that each of these approaches still involves the expensive step of transferring the whole screen image into CPU-accessible memory each time a snapshot is taken, even if it is later determined there is no change between screenshots.
In light of the above the inventor has recognised the need for an optimised method for direct screen capture on computer devices which have limited processor resources (such as the Raspberry Pi series of devices) in order to improve the performance and efficiency of the VNC Server application running on the VNC Server device.
According to one aspect of the present invention there is provided a method of transmitting display data that is displayed on a display of a computing device from the computing device to a remote computing device, the method comprising: receiving, at a first processor of the computing device, a request for display data from the remote computing device; in response to receiving said request, a second processor of the computing device capturing display data that is displayed on said display and storing the captured display data in a memory of the computing device; the second processor determining that display data displayed on at least one region of the display has changed based on comparing the captured display data with previously captured display data stored in said memory; the first processor accessing said memory to retrieve only display data corresponding to the at least one region; and the first processor encoding and transmitting said retrieved display data to the remote computing device.
The method may further comprise the first processor instructing the second processor to perform said capturing in response to receiving said request.
The storing step may comprise storing the captured display data in a screenshot buffer of said memory, the previously captured display data being stored in a reference buffer of said memory.
The comparing step may comprise for each of a plurality of blocks of the captured display data stored in the screenshot buffer: comparing the block of captured display data with its corresponding block of previously captured display data stored in the reference buffer; and determining that display data displayed on the at least one region of the display has changed based on at least one block of captured display data being different to its corresponding block of previously captured display data.
The retrieved display data may comprise the at least one block of captured display data being different to its corresponding block of previously captured display data.
The method may further comprise the second processor updating the reference buffer by, for each of the at least one block of captured display data that is different to its corresponding block of previously captured display data, copying the block of captured display data from the screenshot buffer into the reference buffer to replace its corresponding block of previously captured display data.
The method may further comprise the second processor counting a number of blocks of captured display data that differ to its corresponding block of previously captured display data, and returning said number to the first processor.
The method may further comprise the first processor accessing said memory to retrieve the display data corresponding to the at least one region in response to receiving said number.
The method may further comprise the second processor maintaining a map in said memory, and updating said map to indicate which of the plurality of blocks of the captured display data differ to its corresponding block of previously captured display data.
The screenshot buffer and map may be located in a portion of said memory shared between the first processor and the second processor, and the method may further comprise the first processor using the map to access the screenshot buffer and copy each of the at least one block of captured display data that is different to its corresponding block of previously captured display data, into a portion of memory allocated to the first processor.
The step of accessing said memory may comprise accessing said portion of memory allocated to the first processor to retrieve each of the at least one block of captured display data that is different to its corresponding block of previously captured display data; and the method may further comprise encoding and transmitting each of the at least one block of captured display data that is different to its corresponding block of previously captured display data.
The display may be an internal component of the computing device.
The computing device may be coupled to the display via a wired or wireless connection.
A central processing unit of the computing device may comprise said first processor, and the second processor may be a vector processor.
A graphics processing unit of the computing device may comprise said vector processor.
The computing device may be a Raspberry Pi device.
According to another aspect of the present invention there is provided a computing device that is configured to transmit display data that is displayed on a display of a computing device from the computing device to a remote computing device, the computing device comprising: a first processor; a second processor; and a memory; wherein the first processor is configured to receive a request for display data from the remote computing device; wherein the second processor is configured to capture display data that is displayed on said display, store the captured display data in said memory, compare the captured display data with previously captured display data stored in said memory, and determine that display data displayed on at least one region of the display has changed based on said comparison; and wherein the first processor is further configured to access said memory to retrieve only display data corresponding to the at least one region, and encode and transmit said retrieved display data to the remote computing device.
According to another aspect of the present invention there is provided a system comprising: a remote computing device having a processor and a display; a computing device that is configured to transmit display data that is displayed on a display of the computing device from the computing device to the remote computing device; and a data link linking said remote computing device and said computing device; wherein the computing device comprises: a first processor; a second processor; and a memory; wherein the first processor is configured to receive a request for display data from the remote computing device; wherein the second processor is configured to capture display data that is displayed on said display, store the captured display data in said memory, compare the captured display data with previously captured display data stored in said memory, and determine that display data displayed on at least one region of the display has changed based on said comparison; and wherein the first processor is further configured to access said memory to retrieve only display data corresponding to the at least one region, and encode and transmit said retrieved display data to the remote computing device.
The invention further provides processor control code to implement the above-described systems and methods, for example on a general purpose computer system or on a digital signal processor (DSP). The code may be provided on a carrier such as a disk, CD- or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (Firmware). Code (and/or data) to implement embodiments of the invention may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or assembly code. As the skilled person will appreciate, such code and/or data may be distributed between a plurality of coupled components in communication with one another (e.g. for execution by the first processor and second processor referred to above).
These and other aspects will be apparent from the embodiments described in the following. The scope of the present disclosure is not intended to be limited by this summary nor to implementations that necessarily solve any or all of the disadvantages noted.
Brief Description of the Drawings
For a better understanding of the present disclosure and to show how embodiments may be put into effect, reference is made to the accompanying drawings in which:
Figure 1 illustrates a schematic block diagram of a remote control system comprising a first computing device and a second computing device;
Fig. 2 is a flowchart illustrating a method performed by a central processing unit of the second computing device; and
Figure 3 is a flowchart illustrating a method performed by a graphics processing unit of the second computing device.
Detailed Description
Embodiments will now be described by way of example only.
Figure 1 shows the components of a remote control system 100 comprising a computing device 102 connected via a data link 106 to a remote computing device 104.
The computing device 102 comprises a central processing unit (CPU) 110, memory 120 in the form of random access memory (RAM), and a vector processor 130, for example in the form of a graphics processing unit (GPU). The vector processor 130 is a processor with vector manipulation instructions, capable of performing the same operation on multiple pieces of data simultaneously. As shown in Figure 1, the computing device 102 may be coupled to an external display 140 via a wired or wireless connection. In other embodiments, the display 140 is an internal component of the computing device 102. It will be appreciated that the computing device 102 may include various other components not shown in Figure 1 for reasons of clarity.
As explained in more detail below, a VNC Server application 112 is running on the CPU 110 to initiate a capture of an image displayed on the display 140 and send it via the data link 106 to the remote computing device 104; thus the computing device 102 may be termed a VNC Server. It will be appreciated that the image data displayed on the display will change over time as a user of the computing device operates the computing device 102.
The remote computing device 104 comprises a CPU 150 connected to the data link 106 and a display 160. It will be appreciated that the remote computing device 104 may include various other components not shown in Figure 1 for reasons of clarity. A corresponding VNC Viewer application 152 is running on CPU 150 to receive image data displayed on the display 140 via the data link 106 and output it on the remote computing device display 160; thus the remote computing device 104 may be termed a VNC Viewer.
The data link 106 may be a wired connection (e.g. Ethernet, USB connection) or a wireless connection (e.g. Wi-Fi®, Bluetooth®, Zigbee®, Cellular).
Referring back to the computing device 102, as shown in Figure 1 the RAM 120 comprises a screenshot buffer 122, a change map 124 and a reference buffer 126. The GPU 130 comprises a frame buffer 136 which is coupled to the display 140. When an image is to be output to the display 140, the image is rendered into the frame buffer 136 in the GPU 130. The image data in the frame buffer 136 is then sent to the display 140 for output. The GPU 130 further comprises a screen capture module 132 coupled to the frame buffer 136. The GPU 130 is configured to execute a GPU comparer routine 134. These components of the computing device 102 are described in more detail below.
Reference is now made to Figure 2 which is a flow chart for a process 200 performed by the VNC Server application 112 when executed on the CPU 110.
The functionality of the VNC Server application 112 described herein may be implemented in code (software) stored on a memory (of the computing device 102) comprising one or more storage media, and arranged for execution on the CPU 110 comprising one or more processing units. The code is configured so as when fetched from the memory and executed on the CPU 110 to perform operations in line with embodiments discussed herein. The code may be provided on a carrier such as a disk, CD or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (Firmware).
Embodiments are described below with reference to the computing device 102 being a Raspberry Pi computing device, however it will be appreciated that this is merely an example.
At step S202, the VNC Server application 112 receives a screen update request that has been transmitted by the remote computing device 104 over the data link 106. The screen update request is received at the computing device 102 at an interface and supplied to the CPU 110 on which the VNC Server application 112 is running. It will be appreciated that the type of interface will depend on the type of data link 106 that is used; such interfaces are well known to persons skilled in the art and are therefore not discussed in detail herein.
As shown in Figure 1 the CPU 110 and GPU 130 are able to communicate via a suitable connection.
In response to receiving the screen update request, at step S204 the VNC Server application 112 instructs the GPU 130 to capture a screenshot of the image data that is being displayed on display 140.
The communication between the VNC Server application 112 and the screen capture module 132 may be implemented using an API. For example, on a Raspberry Pi computing device, the VNC Server application 112 may instruct the GPU 130 to capture a screenshot of the image data that is being displayed on display 140 using a screen capture API provided by dispmanx. It will be appreciated that APIs other than the dispmanx API which provide screen capture functionality may also be used.
Upon receiving the instruction from the VNC Server application 112, the screen capture module 132 communicates with the frame buffer 136 to capture the image data (otherwise referred to herein as display data) that is displayed on display 140 and places the captured image data into the screenshot buffer 122 (replacing any image data currently stored in the screenshot buffer 122). That is, the screen capture module 132 obtains a screenshot from the frame buffer 136.
When a Raspberry Pi computing device is first powered on, boot code stored in ROM (not shown in Figure 1) is run which splits the RAM 120 between the CPU 110 and the GPU 130. In particular, the GPU 130 is allocated an address space of the RAM 120 and the CPU 110 has access to the address space left over by the GPU address space. That is, the CPU 110 is allocated CPU-accessible memory and the GPU 130 is allocated separate GPU-accessible memory.
The screenshot buffer 122, change map 124 and reference buffer 126 reside in the GPU-accessible memory portion of the RAM 120.
At step S206, the VNC Server application 112 sends a request to the GPU 130 to execute the GPU comparer routine 134. This may be done via the mailbox interface (a common interface used for passing messages between the ARM and VideoCore processors).
Reference is now made to Figure 3 which is a flow chart for a process 300 performed by the GPU comparer routine 134 when executed on the GPU 130.
The functionality of the GPU comparer routine 134 described herein may be implemented in code (software) stored on a memory (of the computing device 102) comprising one or more storage media, and arranged for execution on the GPU 130 comprising one or more processing units. The code is configured so as when fetched from the memory and executed on the GPU 130 to perform operations in line with embodiments discussed herein. The code may be provided on a carrier such as a disk, CD or DVD-ROM, programmed memory such as non-volatile memory (e.g. Flash) or read-only memory (Firmware).
The GPU comparer routine 134 starts at step S302 and resets a value of a change counter that is maintained by the GPU comparer routine 134. The change counter is set to zero at step S302.
The process 300 then proceeds to step S304 where the GPU comparer routine 134 loads a block of image data from the screenshot buffer 122.
It will be appreciated that in a scenario where the screen update request received at step S202 is not the first received screen update request in the VNC session, when the GPU comparer routine 134 starts at step S302, image data captured previously in the VNC session will be stored in the reference buffer 126.
At step S306, the GPU comparer routine 134 loads a block of image data from the reference buffer 126.
The block of image data loaded from the screenshot buffer 122 at step S304 and the block of image data loaded from the reference buffer 126 at step S306 correspond to image data displayed on the same region of the display 140 (but at different times). The block of image data refers to an n-pixel by m-pixel block of image data, which may for example be a 32x32 pixel block of image data (it will be appreciated that this block size is merely an example).
At step S308, the GPU comparer routine 134 computes a difference between the block of image data loaded from the screenshot buffer 122 and the block of image data loaded from the reference buffer 126, using the vector capabilities of the GPU 130 (this can be performed in just a few GPU instructions), to make a determination at step S310 as to whether the blocks differ or not.
If it is determined at step S310 that the blocks of image data differ, then this means that the block of image data loaded from the screenshot buffer 122 comprises image data of a region of the display 140 that has changed since the previous screen capture. In this scenario, the block of image data loaded from the screenshot buffer 122 is referred to herein as a “changed block”.
If it is determined at step S310 that the blocks of image data differ, then the process 300 proceeds to step S312 where the reference buffer 126 is updated with the block of image data loaded from the screenshot buffer 122 at step S304. Expressed another way, the block of image data loaded from the screenshot buffer 122 at step S304 is placed in the reference buffer 126 and replaces the block of image data loaded from the reference buffer 126 at step S306.
At step S316 the GPU comparer routine 134 increments (by one) the change counter that is maintained by the GPU comparer routine 134.
The change map 124, stored in the GPU-accessible memory portion of the RAM 120, is used to indicate which blocks of image data stored in the screenshot buffer 122 comprise image data that has changed since the previous screen capture (changed blocks), and which comprise image data that has not changed since the previous screen capture (unchanged blocks). The change map 124 can be considered as a 2D array of numbers, one for each block of image data, which stores a '0' if the block is unchanged or a nonzero value if it has changed. For example, the change map for a 1920x1080 resolution screen will be a 60x34 array (using blocks of 32x32 pixels).
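The 60x34 figure quoted above follows from dividing the screen resolution by the block size and rounding up, as the short check below illustrates.

```python
# Change map dimensions for a 1920x1080 screen with 32x32 pixel blocks.
# The vertical division does not come out evenly: 1080 / 32 = 33.75,
# so a 34th row of blocks is needed to cover the bottom 24 pixel rows.
import math

BLOCK = 32
width_blocks = math.ceil(1920 / BLOCK)
height_blocks = math.ceil(1080 / BLOCK)
```

This matches the 60x34 array size stated in the description.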
When the process 300 proceeds from step S316 to step S318, at step S318 the GPU comparer routine 134 updates the change map to indicate that the block of image data loaded from the screenshot buffer 122 at step S304 is a changed block.
Referring back to step S310, if it is determined at step S310 that the blocks of image data do not differ, then this means that the block of image data loaded from the screenshot buffer 122 comprises image data of a region of the display 140 that has not changed since the previous screen capture (i.e. is an “unchanged block”).
When the process 300 proceeds from step S310 to step S318, at step S318 the GPU comparer routine 134 updates the change map 124 to indicate that the block of image data loaded from the screenshot buffer 122 at step S304 is an unchanged block.
It will be appreciated that an image displayed on the display 140 is formed from sequential lines (rows) of blocks of image data.
At step S320, the GPU comparer routine 134 determines if the end of a line of blocks of image data has been reached. That is, whether the block of image data loaded from the screenshot buffer 122 and the block of image data loaded from the reference buffer 126 correspond to a region on the display 140 that is at the end of line of blocks of image data.
If the end of a line has not been reached then the process 300 proceeds back to step S304, where the next block of image data in the line is loaded from the screenshot buffer 122 and the corresponding next block of image data in the line is loaded from the reference buffer 126, and the process 300 described above repeats.
Upon determining at step S320 that the analysis has reached the end of a line of blocks of image data, the process 300 proceeds to step S322, where the GPU comparer routine 134 moves its analysis to the next line of blocks of image data. Provided that the end of the screenshot buffer 122 has not been reached (determined at step S324), the process loops back to step S304.
Once the GPU comparer routine 134 reaches the end of the screenshot buffer 122, the entire captured image data stored in the screenshot buffer 122 has been compared on a block-by-block basis to the image data stored in the reference buffer 126. Furthermore, the value (n) of the change counter (n ≥ 0) will indicate the number of changed blocks in the image data stored in the screenshot buffer 122.
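The comparison loop of steps S304 to S326 can be sketched in pure Python. This is only an illustration: the patent performs the comparison on the GPU using vector instructions, and the row-major layout of `screenshot`, `reference` and `change_map` as 2D lists of blocks is an assumption made here for clarity.

```python
def compare_buffers(screenshot, reference, change_map):
    """Block-by-block compare of two buffers laid out as 2D lists of
    equal-sized blocks (lines of blocks, as in steps S304-S324).
    Updates `reference` and `change_map` in place; returns the change count."""
    changed = 0
    for row in range(len(screenshot)):           # advance line by line (S322)
        for col in range(len(screenshot[row])):  # load both blocks (S304/S306)
            if screenshot[row][col] != reference[row][col]:   # compare (S308/S310)
                reference[row][col] = screenshot[row][col]    # update reference (S312)
                changed += 1                                  # change counter (S316)
                change_map[row][col] = 1                      # mark changed (S318)
            else:
                change_map[row][col] = 0                      # mark unchanged (S318)
    return changed                                            # returned at S326
```

For example, with one differing block out of four, the routine returns 1, flags only that block in the change map, and brings the reference buffer up to date.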
It will be appreciated from the above that the reference buffer 126 maintains the image data of the current state of the contents displayed on the display 140, and is updated as necessary by the process 300 during a VNC session as a user of the computing device 102 operates the device.
Once the GPU comparer routine 134 reaches the end of the screenshot buffer 122, the process 300 proceeds to step S326 where the GPU comparer routine 134 returns the change counter value to the VNC Server application 112.
We now refer back to Figure 2.
Upon receiving the change counter value from the GPU 130, the VNC Server application 112 determines at step S208 whether the image data that is displayed on display 140 has changed since the last received screen update request.
If the VNC Server application 112 determines at step S208 that the image data displayed on display 140 has not changed since the last received screen update request (change counter value, n=0) then the process 200 proceeds to step S218 where the process ends and the VNC Server application 112 takes no further action. The GPU comparer routine 134 allows the VNC Server application 112 to quickly determine when nothing needs to be done in the case where there have been no changes on screen.
If the VNC Server application 112 determines at step S208 that the image data displayed on display 140 has changed since the last received screen update request (change counter value, n>0) then the process 200 proceeds to step S210.
When a screen update request is first received in the VNC session, the VNC Server application 112 sends a call to the GPU 130 to set up a translation (i.e. a mapping) between the address space of the GPU-accessible memory (in RAM 120) and the address space of the CPU-accessible memory (in RAM 120). This call may be for example a mmap() system call, and results in the address space of the GPU-accessible memory where the screenshot buffer 122 and change map 124 are stored becoming a shared memory space. In particular, the call provides the VNC Server application 112 with an address space in the CPU-accessible memory which the VNC Server application 112 can read to access the screenshot buffer 122 and change map 124 that are stored in the GPU-accessible memory.
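The effect of this shared mapping can be illustrated with Python's standard `mmap` module. This is only an analogy: an anonymous mapping stands in for the GPU-accessible region here, whereas the real mmap() call maps memory exported by the GPU driver.

```python
import mmap

# Anonymous mapping standing in for the GPU-accessible region that holds
# the screenshot buffer and change map (a real server would mmap() the
# device memory exposed by the GPU driver instead).
shared = mmap.mmap(-1, 4096)

# "GPU side": the comparer writes a change-map entry into shared memory.
shared[0] = 1

# "CPU side": the server reads the same byte without copying the buffer.
assert shared[0] == 1
shared.close()
```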
At step S210, the VNC Server application 112 reads the address space of the CPU-accessible memory that has been mapped to the address space in the GPU-accessible memory where the change map 124 is stored, to read the change map and build a list of changed rectangles from the change map 124. The term “rectangle” is used herein to refer to n by m pixels of image data, where m and n could be equal (i.e. ‘rectangle’ includes the possibility of ‘square’) or non-equal.
At step S210, the VNC Server application 112 coalesces, wherever possible, blocks of size (m x n) into larger rectangles, which may for example be (3m x 2n), that are formed from multiple adjoining blocks corresponding to regions of the display 140 which are adjacent to each other. Whilst a rectangle may correspond to multiple blocks of image data, a rectangle may also correspond to a single block of image data.
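One simple way to perform this coalescing is sketched below. It is an assumption of this sketch (not stated in the patent) that rectangles are merged only when runs of changed blocks have identical horizontal extents on consecutive rows; coordinates are in block units.

```python
def coalesce(change_map):
    """Turn a 2D change map (0 = unchanged, nonzero = changed) into a list of
    rectangles (col, row, width, height) measured in blocks: horizontal runs
    of changed blocks are found first, then runs with identical extents on
    consecutive rows are merged vertically."""
    runs = []  # (col, row, width) for each horizontal run of changed blocks
    for row, line in enumerate(change_map):
        col = 0
        while col < len(line):
            if line[col]:
                start = col
                while col < len(line) and line[col]:
                    col += 1
                runs.append((start, row, col - start))
            else:
                col += 1
    rects = []
    growing = {}  # (col, width) -> index of a rectangle that may still grow down
    for col, row, width in runs:
        idx = growing.get((col, width))
        if idx is not None and rects[idx][1] + rects[idx][3] == row:
            c, r, w, h = rects[idx]
            rects[idx] = (c, r, w, h + 1)   # extend the rectangle by one row
        else:
            growing[(col, width)] = len(rects)
            rects.append((col, row, width, 1))
    return rects
```

For example, a 2x2 square of changed blocks plus one isolated changed block produces two rectangles rather than five single-block requests, which is the saving described for step S214 below.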
In the event that not all of the changed blocks stored in the screenshot buffer 122 have been sent to the remote computing device 104 (determined at step S212), at step S214 the VNC Server application 112 reads the address space of the CPU-accessible memory that has been mapped to the address space in the GPU-accessible memory where the screenshot buffer 122 is stored, and performs a copying operation to copy a changed rectangle (identified in the list of changed rectangles) from the GPU-accessible memory to the CPU-accessible memory.
Once the changed rectangle has been copied into the CPU-accessible memory, at step S216 the VNC Server application 112 encodes the image data of the changed rectangle and transmits the encoded changed rectangle to the remote computing device 104 via the data link 106.
The loop of steps S212, S214, and S216 repeats until all of the changed rectangles (identified in the list of changed rectangles) have been copied into CPU-accessible memory, and encoded and transmitted to the remote computing device 104.
Step S210 advantageously enables the VNC Server application 112 to make a single read request to memory 120 at step S214, e.g. for a rectangle of size (3m x 2n), rather than making six separate requests, one for each constituent block of image data.
Once the VNC Server application 112 determines at step S212 that all of the changed rectangles have been copied into CPU-accessible memory, and encoded and transmitted to the remote computing device 104, the process 200 proceeds to step S218 where the process ends and the VNC Server application 112 takes no further action.
The VNC Viewer application 152 running on the CPU 150 of the remote computing device 104 receives the changed rectangles and applies them to its copy of the screen in order to bring it up to date. Once the complete update has been received, the VNC Viewer application 152 requests the next update by transmitting a further screen update request; this repeats throughout the VNC session.
The first run of the GPU comparer routine 134 (during a VNC session) brings the reference buffer 126 up-to-date with the image data stored in the screenshot buffer 122 no matter what state it was previously in.
The VNC Server application 112 knows whether the screen update request received at step S202 is the first received screen update request in the VNC session or not (i.e. is a subsequently received screen update request). For the first received screen update request in a VNC session, the VNC Server application 112 ignores the number of changes reported by the GPU comparer routine 134 and treats all of the blocks of image data in the screenshot buffer 122 as changed blocks. Thus the VNC Server application 112 encodes and transfers the entire screen image (captured at step S204) to the remote computing device 104. For subsequently received screen update requests during the VNC session the VNC Server application 112 operates in accordance with the process 200 shown in Figure 2.
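This first-request behaviour can be summarised as a small dispatch function. The helper below is a hypothetical sketch for illustration, not part of the patent:

```python
def blocks_to_send(first_request, change_count, all_blocks, changed_blocks):
    """On the first screen update request of a session, every block is treated
    as changed regardless of the reported count; afterwards only the blocks
    flagged in the change map are sent (and none when the count is zero)."""
    if first_request:
        return all_blocks        # full frame: reference buffer state was unknown
    if change_count == 0:
        return []                # nothing changed; take no further action (S218)
    return changed_blocks        # incremental update driven by the change map
```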
It will be apparent from the above, that in the scenario where a screen update request is received and there have been changes to the image data displayed on the display 140 since the previous screen capture, the VNC Server application 112 uses the change map 124 to pick only the regions of the screenshot buffer 122 that have changed, encoding and transferring these to the viewer machine. By accessing the screenshot buffer 122 via shared memory and utilising the GPU-computed map of changed blocks, the VNC Server application 112 does not need to copy the entire screenshot buffer 122 into CPU-accessible memory.
Embodiments of the invention provide increased VNC session performance by providing more frequent screen updates by using a combination of the CPU 110 and the GPU 130 than would be possible by using the CPU 110 alone. As described above, copying from GPU-accessible memory into CPU-accessible memory is expensive. Thus the additional processing that is performed by the GPU 130 reduces the amount of data that has to be copied into CPU-accessible memory. Additionally, the number of instructions needed to compare a block is reduced due to the vector-based instructions available to the GPU 130, allowing the comparison to be completed faster than on the CPU 110 alone.
The CPU usage and memory access on the CPU 110 are significantly reduced, since the CPU 110 is not continually comparing screen buffers to find changes, leaving more CPU resources available for user applications. Furthermore, the amount of data that is encoded and transmitted to any connected viewers is significantly reduced because in embodiments of the invention only the minimum amount of data needs to be encoded and transmitted over the data link in order to keep the VNC Viewer application 152 updated, again resulting in more frequent screen updates.
Whilst embodiments have been described above with reference to the computing device 102 being a Raspberry Pi device it will be appreciated that this is just an example. Embodiments of the invention apply to any computing device comprising a CPU and vector processor which may be used in combination as described herein to provide a VNC session with improved performance.
Whilst embodiments have been described above with reference to the vector processor being a GPU, the functionality of the GPU 130 described herein can be implemented using any vector processor (that is not necessarily a GPU) with vector manipulation instructions, capable of performing the same operation on multiple pieces of data simultaneously.
While this invention has been particularly shown and described with reference to preferred embodiments, it will be understood by those skilled in the art that various changes in form and detail may be made without departing from the scope of the invention as defined by the appended claims.
Claims (19)
1. A method of transmitting display data that is displayed on a display of a computing device from the computing device to a remote computing device, the method comprising:
receiving, at a first processor of the computing device, a request for display data from the remote computing device;
in response to receiving said request, a second processor of the computing device capturing display data that is displayed on said display and storing the captured display data in a memory of the computing device;
the second processor determining that display data displayed on at least one region of the display has changed based on comparing the captured display data with previously captured display data stored in said memory;
the first processor accessing said memory to retrieve only display data corresponding to the at least one region; and the first processor encoding and transmitting said retrieved display data to the remote computing device.
2. A method according to claim 1, the method comprising the first processor instructing the second processor to perform said capturing in response to receiving said request.
3. A method according to claim 1 or 2, wherein said storing comprises storing the captured display data in a screenshot buffer of said memory, the previously captured display data being stored in a reference buffer of said memory.
4. A method according to claim 3, wherein said comparing comprises for each of a plurality of blocks of the captured display data stored in the screenshot buffer:
comparing the block of captured display data with its corresponding block of previously captured display data stored in the reference buffer; and determining that display data displayed on the at least one region of the display has changed based on at least one block of captured display data being different to its corresponding block of previously captured display data.
5. A method according to claim 4, wherein said retrieved display data comprises the at least one block of captured display data being different to its corresponding block of previously captured display data.
6. A method according to claim 4 or 5, further comprising:
the second processor updating the reference buffer by, for each of the at least one block of captured display data that is different to its corresponding block of previously captured display data, copying the block of captured display data from the screenshot buffer into the reference buffer to replace its corresponding block of previously captured display data.
7. A method according to any of claims 4 to 6, further comprising:
the second processor counting a number of blocks of captured display data that differ to its corresponding block of previously captured display data, and returning said number to the first processor.
8. A method according to claim 7, comprising the first processor accessing said memory to retrieve the display data corresponding to the at least one region in response to receiving said number.
9. A method according to any of claims 4 to 8, further comprising:
the second processor maintaining a map in said memory, and updating said map to indicate which of the plurality of blocks of the captured display data differ to its corresponding block of previously captured display data.
10. A method according to claim 9, wherein the screenshot buffer and map are located in a portion of said memory shared between the first processor and the second processor, the method comprising:
the first processor using the map to access the screenshot buffer and copy each of the at least one block of captured display data that is different to its corresponding block of previously captured display data, into a portion of memory allocated to the first processor.
11. A method according to claim 10, wherein said accessing said memory comprises accessing said portion of memory allocated to the first processor to retrieve each of the at least one block of captured display data that is different to its corresponding block of previously captured display data; and the method comprises:
encoding and transmitting each of the at least one block of captured display data that is different to its corresponding block of previously captured display data.
12. A method according to any preceding claim, wherein said display is an internal component of the computing device.
13. A method according to any of claims 1 to 11, wherein the computing device is coupled to the display via a wired or wireless connection.
14. A method according to any preceding claim, wherein a central processing unit of the computing device comprises said first processor, and the second processor is a vector processor.
15. A method according to claim 14, wherein a graphics processing unit of the computing device comprises said vector processor.
16. A method according to any preceding claim, wherein the computing device is a Raspberry Pi device.
17. A computing device that is configured to transmit display data that is displayed on a display of a computing device from the computing device to a remote computing device, the computing device comprising:
a first processor; a second processor; and a memory;
wherein the first processor is configured to receive a request for display data from the remote computing device;
wherein the second processor is configured to capture display data that is displayed on said display, store the captured display data in said memory, compare the captured display data with previously captured display data stored in said memory, and determine that display data displayed on at least one region of the display has changed based on said comparison; and wherein the first processor is further configured to access said memory to retrieve only display data corresponding to the at least one region, and encode and transmit said retrieved display data to the remote computing device.
18. A system comprising:
a remote computing device having a processor and a display; a computing device that is configured to transmit display data that is displayed on a display of the computing device from the computing device to the remote computing device; and a data link linking said remote computing device and said computing device; wherein the computing device comprises: a first processor;
a second processor; and a memory;
wherein the first processor is configured to receive a request for display data from the remote computing device;
wherein the second processor is configured to capture display data that is displayed on said display, store the captured display data in said memory, compare the captured display data with previously captured display data stored in said memory, and determine that display data displayed on at least one region of the display has changed based on said comparison; and wherein the first processor is further configured to access said memory to retrieve only display data corresponding to the at least one region, and encode and transmit said retrieved display data to the remote computing device.
19. A carrier carrying processing code for implementing the method of any one of claims 1 to 16 on the computing device.
Intellectual Property Office. Application No: GB 1701813.6. Examiner: Robert Shorthouse
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1701813.6A GB2559550A (en) | 2017-02-03 | 2017-02-03 | Method and system for remote controlling and viewing a computing device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1701813.6A GB2559550A (en) | 2017-02-03 | 2017-02-03 | Method and system for remote controlling and viewing a computing device |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201701813D0 GB201701813D0 (en) | 2017-03-22 |
GB2559550A true GB2559550A (en) | 2018-08-15 |
Family
ID=58462338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1701813.6A Withdrawn GB2559550A (en) | 2017-02-03 | 2017-02-03 | Method and system for remote controlling and viewing a computing device |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2559550A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005114375A1 (en) * | 2004-05-21 | 2005-12-01 | Computer Associates Think, Inc. | Systems and methods for tracking screen updates |
US20090079663A1 (en) * | 2007-09-20 | 2009-03-26 | Kuo-Lung Chang | Locating and displaying method upon a specific video region of a computer screen |
US20100111410A1 (en) * | 2008-10-30 | 2010-05-06 | Microsoft Corporation | Remote computing platforms providing high-fidelity display and interactivity for clients |
US20120075346A1 (en) * | 2010-09-29 | 2012-03-29 | Microsoft Corporation | Low Complexity Method For Motion Compensation Of DWT Based Systems |
WO2016016607A1 (en) * | 2014-07-31 | 2016-02-04 | Displaylink (Uk) Limited | Managing display data for display |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11050743B2 (en) | 2019-01-29 | 2021-06-29 | Citrix Systems, Inc. | Systems and methods of enabling fast user access to remote desktops |
CN117573380A (en) * | 2024-01-16 | 2024-02-20 | 北京趋动智能科技有限公司 | Virtual address allocation method and device |
CN117573380B (en) * | 2024-01-16 | 2024-05-28 | 北京趋动智能科技有限公司 | Virtual address allocation method and device |
Also Published As
Publication number | Publication date |
---|---|
GB201701813D0 (en) | 2017-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12118642B2 (en) | Graphics rendering method and apparatus | |
US10085056B2 (en) | Method and system for improving application sharing by dynamic partitioning | |
CN111078147B (en) | Processing method, device and equipment for cache data and storage medium | |
US8797233B2 (en) | Systems, methods, and devices for dynamic management of data streams updating displays | |
CN101918921B (en) | Methods and systems for remoting three dimensional graphics | |
KR101367718B1 (en) | Method and apparatus for providing mobile device interoperability | |
US20220365796A1 (en) | Streaming per-pixel transparency information using transparency-agnostic video codecs | |
TWI495330B (en) | System and method for efficiently streaming digital video | |
JP2015534160A (en) | Client-side image rendering in client-server image browsing architecture | |
JP2009187379A (en) | Virtual computer server unit, updating image detection method, and program | |
WO2016118346A1 (en) | User mode driver extension and preprocessing | |
US20160261671A1 (en) | Local Operation of Remotely Executed Applications | |
CN113368492A (en) | Rendering method and device | |
US10225570B2 (en) | Split framebuffer encoding | |
US20130002521A1 (en) | Screen relay device, screen relay system, and computer -readable storage medium | |
CN102959955A (en) | Sharing an image | |
JP2024504572A (en) | Image processing method, device, computer device and computer program | |
US10296713B2 (en) | Method and system for reviewing medical study data | |
GB2559550A (en) | Method and system for remote controlling and viewing a computing device | |
CN104144212A (en) | Virtual desktop image transmission method, device and system | |
US20150121376A1 (en) | Managing data transfer | |
CN115378937B (en) | Distributed concurrency method, device, equipment and readable storage medium for tasks | |
CN112218003B (en) | Desktop image acquisition method and device and electronic equipment | |
JP6922344B2 (en) | Information processing equipment, information processing system, and information processing method | |
Matsui et al. | Virtual desktop display acceleration technology: RVEC |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |