WO2010114512A1 - System and method of transmitting display data to a remote display - Google Patents

System and method of transmitting display data to a remote display

Info

Publication number
WO2010114512A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
display
video
video data
overlay
Prior art date
Application number
PCT/US2009/038735
Other languages
French (fr)
Inventor
Bernard J. Thompson
Andrew J. Fisher
Timothy H. Glauert
Original Assignee
Displaylink Corporation
Displaylink (Uk) Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Displaylink Corporation, Displaylink (Uk) Limited filed Critical Displaylink Corporation
Priority to PCT/US2009/038735 priority Critical patent/WO2010114512A1/en
Publication of WO2010114512A1 publication Critical patent/WO2010114512A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1431Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using a single graphics controller
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • H04L65/764Media network packet handling at the destination 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4122Peripherals receiving signals from specially adapted client devices additional display device, e.g. video projector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/43615Interfacing a Home Network, e.g. for connecting the client to a plurality of peripherals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1446Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display display composed of modules, e.g. video walls
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2300/00Aspects of the constitution of display devices
    • G09G2300/02Composition of display devices
    • G09G2300/026Video wall, i.e. juxtaposition of a plurality of screens to create a display screen of bigger dimensions
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2310/00Command of the display device
    • G09G2310/04Partial updating of the display screen
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/02Handling of images in compressed format, e.g. JPEG, MPEG
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2352/00Parallel handling of streams of display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/18Use of a frame buffer in a display terminal, inclusive of the display panel
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/02Networking aspects
    • G09G2370/027Arrangements and methods specific for the display of internet documents
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/612Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast

Abstract

A display system comprises a host device (10) comprising a frame buffer (18), a video streaming component (20) and a display driver (22), a general purpose data network (14), and a remote display device (12), connected to the host device via the general purpose network, and comprising a frame buffer (24), a video decoder (26), an overlay buffer (28), a memory controller (30) and a display (32). The frame buffer of the host device is arranged to generate display data including an overlay portion, the video streaming component is arranged to generate video data corresponding to the overlay portion and the display driver is arranged to generate memory address data for the display data and to transmit the display data, video data and memory address data to the remote display device over the general purpose network. The frame buffer of the remote display device is arranged to store the display data according to the memory address data, the video decoder is arranged to decode the video data, the overlay buffer is arranged to store the decoded video data, the memory controller is arranged to construct an image frame from the stored display data and decoded video data, and the display is arranged to display the image frame.

Description

SYSTEM AND METHOD OF TRANSMITTING DISPLAY DATA TO A REMOTE DISPLAY
DESCRIPTION
This invention relates to a method and system for transmitting display data.
Computer networks have long allowed independent computers to communicate, share information, and, to an extent, share resources. However, the limitations of network technology have meant that each computer has had to remain fully functional as an independent device. As high bandwidth networks become more prevalent and the power of the computers connected to them continues to increase, there is a growing interest in attaching relatively simple devices to these networks and managing such devices remotely. Devices which, in the past, would have needed a greater degree of autonomy and local processing power because of the limitations of the network can now be assumed to be reliably connected by a fast link to at least one computer with a powerful processor. They can therefore be much simpler, and can be managed by a computer almost as if they were its own locally connected peripherals. This can greatly reduce the cost of the devices and the complexity of managing a network. This particularly applies to network connected terminals and other graphical display devices. In relation to the transmission of display data in such a network, a number of problems are raised which need to be solved.
For example, International Patent Application Publication WO 2006/061582 describes an apparatus and method for transmitting, over a general purpose data network, graphical data to a display device having a memory. The apparatus has a graphics component for generating graphical data in an appropriate format for direct transmission to corresponding addresses in the display device memory. Graphical data transmitted from the network interface specifies an address in the memory of that display device upon which an action is required. The apparatus is thus more efficient than prior remote graphics systems. Direct transmission of graphical data to a memory address uses less network capacity as a single address value can generally be packed more tightly than a pair of coordinates. The system simplifies the requirements placed on display devices. Since the data is not transmitted as geometric coordinates, there is no need for the display device to perform complicated arithmetic operations to convert incoming geometries to memory addresses. The network is preferably a general purpose data network and may be wireless.
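By way of illustration only, the following Python sketch compares the two encodings discussed above; the 1024-pixel line width, 16-bit colour depth and field sizes are assumptions made for this example and are not taken from the publication.

```python
# Illustrative sketch only: a pixel update sent as an (x, y) coordinate pair
# versus the same update sent as a single frame-buffer address.
# The resolution, colour depth and field widths are assumed values.
import struct

WIDTH = 1024          # assumed width of the remote display in pixels
BYTES_PER_PIXEL = 2   # assumed 16-bit colour

def encode_by_coordinates(x: int, y: int, colour: int) -> bytes:
    # Two 16-bit coordinates plus a 16-bit colour: 6 bytes per pixel update.
    return struct.pack(">HHH", x, y, colour)

def encode_by_address(x: int, y: int, colour: int) -> bytes:
    # A 24-bit frame-buffer address plus the colour: 5 bytes, and the display
    # never has to convert geometry into a memory address itself.
    address = (y * WIDTH + x) * BYTES_PER_PIXEL
    return struct.pack(">I", address)[1:] + struct.pack(">H", colour)

print(len(encode_by_coordinates(100, 200, 0xF800)))  # 6 bytes
print(len(encode_by_address(100, 200, 0xF800)))      # 5 bytes
```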
Using only the protocol of Patent Application Publication WO 2006/061582, video quality and performance may not be optimal under certain circumstances. Video dramatically increases host processing because the host must not only decode the video, but also compare each frame to the preceding frame to compute the difference, essentially re-compressing the video. Once re-compressed, the host sends the video within the existing protocol's connection. Video playback using the protocol described in this Patent Application Publication can be computationally intensive and/or demanding of connection speed, which can cause the method to encounter limits when, for example, moving from YouTube-quality video playback, up to DVD quality, then up to Blu-ray HD quality.
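By way of illustration only, the following Python sketch shows the per-frame comparison described in this paragraph; the flat-list frame representation is an assumption made for the example.

```python
# Illustrative sketch only: the per-frame work forced onto the host when video
# is carried over the address-based protocol alone. Each decoded frame is
# compared with the previous one so that only changed pixels are re-sent,
# which amounts to re-compressing the video on the host.
from typing import List, Tuple

Frame = List[int]  # a decoded frame as a flat list of pixel values (assumed layout)

def changed_pixels(previous: Frame, current: Frame) -> List[Tuple[int, int]]:
    """Return (frame-buffer index, new value) for every pixel that differs."""
    return [(i, c) for i, (p, c) in enumerate(zip(previous, current)) if p != c]

prev = [0x0000] * 16
curr = [0x0000] * 15 + [0xF800]
print(changed_pixels(prev, curr))  # [(15, 63488)] - repeated for every video frame
```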
It is therefore an object of the invention to improve upon the known art. According to a first aspect of the present invention, there is provided a method of transmitting display data from a host device to a remote display device over a general purpose network, comprising generating display data including an overlay portion, generating video data corresponding to the overlay portion, generating memory address data for the display data, transmitting the display data, video data and memory address data from the host device to the remote display device over the general purpose network, storing the display data according to the memory address data, decoding the video data, storing the decoded video data, and displaying an image frame from the stored display data and decoded video data.
According to a second aspect of the present invention, there is provided a display system comprising a host device comprising a frame buffer, a video streaming component and a display driver, a general purpose data network, and a remote display device, connected to the host device via the general purpose network, and comprising a frame buffer, a video decoder, an overlay buffer, a memory controller and a display, wherein the frame buffer of the host device is arranged to generate display data including an overlay portion, the video streaming component is arranged to generate video data corresponding to the overlay portion, the display driver is arranged to generate memory address data for the display data and to transmit the display data, video data and memory address data to the remote display device over the general purpose network, the frame buffer of the remote display device is arranged to store the display data according to the memory address data, the video decoder is arranged to decode the video data, the overlay buffer is arranged to store the decoded video data, and the display is arranged to display an image frame from the stored display data and decoded video data.
Owing to the invention, it is possible to provide an overlay extension to an address based graphics protocol. Patent Application Publication WO 2006/061582 listed above describes a method for transmitting computer generated graphics data such as windows, buttons, lines, etc. from a host to a network-based display with its own frame buffer. The system and techniques described in the subject application extend that process to include a means for transmitting motion video, such as a movie, in a parallel stream, and overlaying it onto the screen. The extended protocol, however, continues to use the host for 2D/3D rendering, while passing through video data to be decompressed and decoded by the remote display. The invention delivers higher quality video than is currently possible to a remote display using a simple and lightweight system that can nevertheless function over a general purpose network.
Advantageously, the display driver is further arranged to create a first network connection for transmitting the display data and memory address data and to create a second network connection for transmitting the video data. This can better support DVD quality video since the host device sends the video to the display in its highly compressed format over an independent connection. In this way, the extended protocol takes advantage of the enormous amount of computing power that goes into creating highly compressed video formats such as MPEG-4/H.264. Using a second connection provides more flexibility for prioritizing one type of traffic over another. For example, the second connection could optionally use network protocols that guarantee the bandwidth required to play the video, such as USB's isochronous protocol. Thus, according to various aspects of this invention, firstly, video is sent separately from the rest of the computer screen, and, secondly, the host device specifies the overlay region in the display frame buffer into which the video should be displayed.
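By way of illustration only, the following Python sketch shows one simple way a separate video connection allows traffic to be prioritized; the per-tick byte budget and queue structure are assumptions for the example and only loosely imitate a bandwidth guarantee such as an isochronous transfer.

```python
# Illustrative sketch only: with graphics and video on separate connections the
# sender can reserve bandwidth for the video stream before servicing ordinary
# screen updates. The budget value and queues are assumptions for this example.
from collections import deque

video_queue = deque()      # compressed video packets for the second connection
graphics_queue = deque()   # address-based display updates for the first connection

VIDEO_BUDGET_PER_TICK = 64_000  # assumed bytes reserved for video each scheduling tick

def service_one_tick(send) -> None:
    sent = 0
    while video_queue and sent < VIDEO_BUDGET_PER_TICK:
        packet = video_queue.popleft()
        send("video", packet)          # guaranteed share, akin to isochronous transfer
        sent += len(packet)
    while graphics_queue:
        send("graphics", graphics_queue.popleft())  # best-effort screen updates

video_queue.append(b"\x00" * 1000)
graphics_queue.append(b"\x01" * 10)
service_one_tick(lambda channel, data: print(channel, len(data)))
```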
The methodology combines two approaches which are separately optimal for their own type of data: the address-based graphics protocol for 2D/3D screen data, and video data as a parallel stream without modifying its encoded format. The parallel stream allows great flexibility in controlling video performance, such as using isochronous protocols. Further, by removing the video from the main protocol, the video can be sent in its native format, thus leveraging much more highly compressed video formats, as well as keeping the door open to future advances in formats and protocols. When using an address based protocol method it is useful to implement a rendering technique where first an approximation of the colour of changed pixels is sent, later followed by an update that includes a more accurate representation. This makes it appear to the user that screen updates happen more quickly (lower latency). This "temporal compression" is very applicable to 2D/3D data, but less so to motion video data, because pixels change more often (on average), leaving less time to "catch up" (or, perhaps arguably, no need to catch up). However, by allowing video playback to occur as a separate stream in an overlay window, the benefits of "temporal compression" for the 2D/3D graphics outside of the overlay window can be retained, while inside the window entirely different, more video-appropriate, techniques can be utilized. Prime among such techniques would be enabling higher quality video playback and audio synchronization with buffering, time synchronization, and temporal shift. For example, when a user hits play on a video, several seconds of video and audio (perhaps in their original highly compressed format) are first buffered to the remote display device, and only when sufficient data is queued does the remote display show the video overlay, with video that is initially temporally shifted several seconds later, relative to 2D/3D updates that may have occurred outside the overlay window. The user gets the best of both worlds: 2D/3D graphics which are updated with low latency, and video playback with the highest possible quality, with the lowest possible cost in terms of host processing power and bandwidth.
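By way of illustration only, the following Python sketch shows the two-pass "temporal compression" idea for the 2D/3D region; the colour quantisation used for the first pass is an assumption made for the example.

```python
# Illustrative sketch only: the two-pass "temporal compression" rendering
# described above. An approximate colour is sent immediately and an exact
# update follows later; the quantisation (keeping only the top bits of a
# 16-bit colour) is an assumed scheme for this example.
def approximate(colour: int) -> int:
    return colour & 0xF000              # coarse first-pass value

def updates_for_pixel(address: int, colour: int):
    yield ("approx", address, approximate(colour))  # sent at once, low latency
    yield ("exact", address, colour)                # sent when bandwidth allows

# Inside the overlay window this scheme is not used: the video arrives on its
# own connection, is decoded as-is, and may be buffered for a few seconds
# before the overlay is shown, as described in the paragraph above.
for update in updates_for_pixel(0x1A2B, 0xABCD):
    print(update)
```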
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:-
Figure 1 is a schematic diagram of a display system,
Figure 2 is a schematic diagram of a second embodiment of the display system,
Figure 3 is a schematic diagram of a display device,
Figure 4 is a further, more detailed, schematic diagram of the display system,
Figure 5 is a schematic diagram showing data flow in the display system, and
Figure 6 is a flowchart of a method of operating the display system.
An example of a display system is shown in Figure 1. The system of Figure 1 shows a laptop 10 and additional secondary displays 12. The improved address based protocol of the present invention is highly effective for use in a number of applications, such as the one shown in the Figure. For example, the protocol can be used in the process of adding multiple screens 12 to a computer 10 for the purpose of providing an expanded desktop. The address based protocol of the present invention provides a more efficient method of transmitting the graphical data in this process than was previously available. Figure 1 illustrates a first network topology of this process. A data processing device is illustrated as a laptop computer 10. The data processing device 10 has its own conventional display device but is also connected to a number of secondary display devices 12. As shown, each secondary display device 12 has its own dedicated connection 14, such as a USB connection, to the host 10. Alternatively, the secondary display devices 12 can be simply plugged into the same network as the machine 10, or into another network to which the laptop 10 has access, and an association is made in software between the secondary display devices 12 and the particular computer 10.
Software or hardware on the data processing device 10 may make the extra secondary display devices 12 appear to be part of the same workspace shown on the main screen, typically by emulating a graphics card or driver software, so that programs running on the data processing device 10 are unaware that their output is being displayed on a secondary display device 12. In a typical scenario, windows on the conventional screen of the computer 10 can be moved across to a secondary display device 12 simply by dragging them off one side of the main display. A simple user interface would generally be provided to enable users to control which secondary display devices 12 are part of this extended workspace, the geometric relationship between them and any conventional displays, and other aspects of the system. A further use of the improved address based protocol of the present invention is in the process of adding multiple screens 12 which are not intended to be part of the workspace of a computer 10. For example, a secondary display device 12 which displays a slide show in a shop window is only visible from the outside of the building. These displays 12 may also be at a greater distance from the data processing device 10 than would be easily possible with conventional display-driving mechanisms, such as those using the VGA standard. For whatever reason, interacting with the secondary display device 12 as if it were simply part of the main display may not be ideal.
In these cases, software is written or modified to be compatible with the secondary display devices 12 and to drive one or more of them explicitly. A typical use might be the control of multiple displays 12 on a railway platform for informational and/or advertising purposes. The host machine 10 may also have some displays 12 running conventional desktop applications, but this is not necessary, and indeed it may not normally have a 'user' at all in the conventional sense. Secondary display devices 12 may also be driven by consumer electronics devices such as central heating controllers, games machines or voicemail systems. Again, the use of the improved address based protocol of the present invention increases the efficiency of the system.
Figure 2 shows such a network topology in which a single data processing device 10 is connected over a general purpose data network 14 to a plurality of secondary display devices 12. The illustrated data processing device 10 does not have its own conventional display device. Display data for the secondary display devices 12 is sent from the computer 10 over the network 14 to the individual display devices 12. These secondary display devices 12 will either have a lightweight processing component connected between the network 14 and the display 12, or each secondary display device 12 will have a small amount of processing and memory built in. This is required to handle the received display data.
Figure 3 illustrates the concept of an overlay. The display device 12 is showing an image, such as a webpage, which includes within the image an overlay 16 which is a video portion. An example of such a webpage is the service offered by www.youtube.com, whereby a video can be watched within a web browser. The video is delivered in the overlay region 16 of the screen 12 and is typically of a low quality, in this case primarily due to storage and bandwidth constraints on the website. However, in the case of a secondary (remote) display device 12, the delivery of sufficient quality video is limited by the bandwidth of the connection to the display device 12.
The host device 10 must specify any overlay regions 16. There are a number of methods for specifying the overlay regions 16. One common method is to use chroma keying. To create a chroma key, the host device 10 flags the individual pixels in the overlay region 16 with a special value called the chroma key. Meanwhile, the remote display 12 has a controller which is aware of the chroma key. The controller scans a frame buffer, reading the colour values for each pixel, and updates the physical screen. When the controller encounters a chroma keyed pixel, the controller knows that there is an overlay region, and instead reads the colour value from the corresponding pixel in the overlay buffer. Thus, the physical screen 12 shows the video in the correct position, though the video data and the rest of the display data are stored in separate sections of memory.
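By way of illustration only, the following Python sketch shows the chroma-key scan-out just described; the 16-bit pixel format and the key value are assumptions for the example.

```python
# Illustrative sketch only: scan-out with a chroma key. Wherever the controller
# meets the reserved key value in the frame buffer it reads the pixel from the
# overlay buffer instead. The pixel format and key value are assumed.
CHROMA_KEY = 0xF81F

def compose_scanline(frame_buffer, overlay_buffer, width):
    """Yield the pixels actually driven to the physical screen for one line."""
    for x in range(width):
        pixel = frame_buffer[x]
        yield overlay_buffer[x] if pixel == CHROMA_KEY else pixel

graphics = [0x0000, CHROMA_KEY, CHROMA_KEY, 0xFFFF]  # frame buffer, keyed region in middle
video = [0x1111, 0x2222, 0x3333, 0x4444]             # decoded video, same scanline
print([hex(p) for p in compose_scanline(graphics, video, 4)])
# ['0x0', '0x2222', '0x3333', '0xffff'] - the video shows through the keyed pixels
```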
As mentioned above, chroma keying is one of many different methods. As an example, another method is not to store the overlay 16 in a separate memory region. Instead, a video decoder of the remote display device 12 writes the decoded video directly to the correct region of the frame buffer. This design requires many different hardware components to read and write to the frame buffer simultaneously, thus dramatically increasing the complexity and cost of the hardware.
A more detailed view of the display system is contained in Figure 4, which shows the system for transmitting display data from the host device 10 to the remote display device 12 over a general purpose network 14. The host device comprises a frame buffer 18, a video streaming component 20 and a display driver 22. The remote display device 12, which is connected to the host device 10 via the general purpose network 14, comprises a frame buffer 24, a video decoder 26, an overlay buffer 28, a memory controller 30 and a physical display 32. The network 14 supports multiple network connections between the host device 10 and the remote display device 12.
In order to provide the video overlay, the frame buffer 18 of the host system 10 is arranged to generate the display data including an overlay portion 16, the video streaming component 20 is arranged to generate video data corresponding to the overlay portion 16, and the display driver 22 is arranged to generate memory address data for the display data and to transmit the display data, video data and memory address data to the remote display device 12 over the general purpose network 14. The address data specifies where the display data should be written in the frame buffer 24 of the remote display device 12, rather than where on the screen 32 the display data is located.
The frame buffer 24 of the remote display device 12 is arranged to store the display data according to the memory address data, the video decoder 26 is arranged to decode the video data and the overlay buffer 28 is arranged to store this decoded video data. The memory controller 30 is arranged to construct an image frame from the stored display data and decoded video data, and the display 32 is arranged to display the image frame. In this way, the two channels between the host device 10 and the remote display 12 are utilised to send the display data from frame buffer 18 to frame buffer 24 and to send the video data from the streaming component 20 to the video decoder 26.
Figure 5 shows the data flow between the host device 10 and the remote display device 12. Optional components are shown in dashed lines. The display data 34 and memory address data 36 for the display data are generated at the host device 10, optionally together with an overlay specification 38 defining the overlay in the display data, and these are sent via a first network connection 40 to the frame buffer 24 of the remote display device 12. The video data 42 is sent via a second network connection 44 to the video decoder 26 of the remote display device 12. If the ultimate image being created contains more than one video component, then further video data 46 can be transmitted to the video decoder 26 via a third network connection 48. Video data 46 may also include very useful motion video-specific data, such as time stamps and video synchronized audio data.
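By way of illustration only, the following Python sketch gives one possible wire layout for the overlay specification 38 and for a video packet carrying a time stamp and synchronised audio; the field names, sizes and byte order are assumptions, as the publication does not define a packet format.

```python
# Illustrative sketch only: a possible encoding of the overlay specification
# (38) and of a video packet (42/46) with motion-video-specific extras.
# All field names, widths and the byte order are assumed for this example.
import struct

def overlay_specification(x, y, width, height) -> bytes:
    # The rectangle of the remote screen that the decoded video will occupy.
    return struct.pack(">HHHH", x, y, width, height)

def video_packet(timestamp_ms: int, video_payload: bytes, audio: bytes = b"") -> bytes:
    # Compressed video in its native format plus a time stamp and audio chunk.
    header = struct.pack(">IHH", timestamp_ms, len(video_payload), len(audio))
    return header + video_payload + audio

spec = overlay_specification(100, 80, 640, 360)
packet = video_packet(40, b"\x00\x01\x02", b"\xaa\xbb")
print(len(spec), len(packet))  # 8 13
```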
When a video begins to play, the display driver 22 on the host device 10 first determines the region of the screen that is to contain the video. This region is the overlay 16. The display driver 22 can then send a specification 38 of the overlay 16 to the remote display 12. The remote display 12 allocates a section of memory to act as a frame buffer 28 just for the decoded video. Then, the display driver 22 creates a second network connection 44 to the remote display 12 and starts sending the video over it. The remote display 12 routes the video stream 42 to the video decoder 26, which may be implemented in either hardware or software, and which in turn writes decoded frames of video to the overlay buffer 28. The remote display 12 then combines the overlay buffer 28 with the frame buffer 24 to render the final image on the screen 32.
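By way of illustration only, the following Python sketch walks through the start-of-playback sequence just described; the class and method names are invented for the example, since the publication names the steps but not an API.

```python
# Illustrative sketch only: the start-of-playback sequence described above,
# with invented class and method names standing in for the real components.
class RemoteDisplayStub:
    def send_overlay_spec(self, region):      # overlay specification 38
        print("overlay spec:", region)
    def open_video_connection(self):          # second network connection 44
        return self
    def send(self, chunk):                    # video stream 42, routed to the decoder
        print("video chunk:", len(chunk), "bytes")

def start_video_playback(display, compressed_chunks, region):
    display.send_overlay_spec(region)         # 1. where the overlay 16 sits on screen
    channel = display.open_video_connection() # 2. separate connection for the video
    for chunk in compressed_chunks:
        channel.send(chunk)                   # 3. video sent in its native format; the
                                              #    remote decodes it into the overlay
                                              #    buffer and composites at scan-out

start_video_playback(RemoteDisplayStub(),
                     [b"\x00" * 100, b"\x01" * 80],
                     (100, 80, 640, 360))
```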
As described above, most graphic operations are sent to the remote display 12 via a network connection 40. The extended protocol sends the video data 42 over a separate connection 44. This leads to a number of design options, all of which are equally applicable. Implementations could use any number of additional network connections for the video, as described above, one for each video being simultaneously played. Alternatively, an implementation might send all videos 42 and 46 over the same network connection 44 (still, however, separate from the main connection 40). There is no requirement that the network connections 40 and 44 use the same type of connection. Generally, the main graphics connection 40 is a lossless connection. The video channel 44 could also be lossless, or use a method that allows loss. For example, using the TCP/IP protocol suite, the main connection 40 could be made via TCP, and the video 44 via UDP. Alternatively, one could use TCP for both. Similarly, there is no requirement about the higher level protocols over which the video data 42 is sent. The video data 42 could be sent over existing streaming application protocols, or simply raw data could be sent. A wireless network is a prime example of where the 2D/3D graphics data might best be sent over a connection that simulates a reliable "wired-like" connection over the air, while the video data 42 is sent with a higher-throughput, lower-latency, but much less reliable protocol, which is more natural for wireless transmission.
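By way of illustration only, the following Python sketch opens the two connections in the TCP-plus-UDP split mentioned above; the remote display address and port numbers are placeholders.

```python
# Illustrative sketch only: a lossless TCP connection for the address-based
# graphics data (40) and a lossy UDP socket for the compressed video (42/44).
# The remote display address and ports are placeholders for this example.
import socket

REMOTE_DISPLAY = ("192.0.2.10", 5900)  # placeholder address of the remote display
VIDEO_PORT = 5901                      # placeholder port for the video stream

def open_connections():
    graphics = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # reliable, ordered
    graphics.connect(REMOTE_DISPLAY)
    video = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)      # loss is tolerated
    return graphics, video

def send_display_update(graphics: socket.socket, update: bytes) -> None:
    graphics.sendall(update)                                      # main connection 40

def send_video_packet(video: socket.socket, packet: bytes) -> None:
    video.sendto(packet, (REMOTE_DISPLAY[0], VIDEO_PORT))         # video connection 44
```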
The method of transmitting the display data 34 from the host device 10 to the remote display device 12 over the general purpose network 14 is summarised in Figure 6. The method comprises, firstly, step S1 of generating the display data 34 including the overlay portion 16, generating the video data 42 corresponding to the overlay portion 16 and generating the memory address data 36 for the display data 34. This is all carried out at the host device 10. The display driver generates the address data 36 which will be used to write the display data 34 into the memory 24 of the remote display device 12. The next step is step S2 of transmitting the display data 34, video data 42 and memory address data 36 from the host device 10 to the remote display device 12 over the general purpose network 14. This is carried out by the display driver 22 of the host device 10. The next step is step S3 of storing the received display data 34 according to the memory address data 36 in the frame buffer 24 of the remote display device 12. The next step is step S4 of decoding the video data 42 in the video decoder 26 and storing the decoded video data 42 in the overlay buffer 28.
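By way of illustration only, the following Python sketch shows the remote-side handling of steps S3 and S4; the buffer sizes and the 16-bit pixel format are assumptions made for the example.

```python
# Illustrative sketch only: steps S3 and S4 on the remote display device.
# Display data is written into the frame buffer at the transmitted memory
# address; a decoded video frame replaces the contents of the overlay buffer.
# Buffer sizes and the 16-bit pixel format are assumed for this example.
frame_buffer = bytearray(1024 * 768 * 2)    # assumed 1024x768, 16 bits per pixel
overlay_buffer = bytearray(640 * 360 * 2)   # allocated when the overlay spec 38 arrives

def store_display_data(address: int, payload: bytes) -> None:
    # Step S3: no geometry arithmetic on the display side, just a memory write.
    frame_buffer[address:address + len(payload)] = payload

def store_decoded_video(decoded_frame: bytes) -> None:
    # Step S4: the decoder output for one frame fills the overlay buffer.
    overlay_buffer[:len(decoded_frame)] = decoded_frame

store_display_data(0x4000, b"\x1f\x00" * 8)           # 8 pixels written by address
store_decoded_video(b"\x07\xe0" * (640 * 360))        # one decoded video frame
```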
Step S5 comprises constructing the final image frame from the stored display data 34 and decoded video data 42, and finally in step S6 displaying the image frame. Step S5 is optional in the sense that some graphics systems have hardware overlay support which will refresh the display 12 (an LCD or CRT) directly from the two separate (graphics, video) buffers 24 and 28, doing two reads when a chroma key pixel is hit, rather than ever having to construct any combined frame in memory. The remote display device 12 may be integral with the actual physical display 32, or may be a separate component which is connected between the network 14 and the actual physical display 32. If the remote display device 12 is separate from the physical display 32, then steps S3 to S5 are carried out by the remote display device 12 and the actual displaying step S6 is carried out by the physical display 32. In this case the remote display device 12 can be considered to be a lightweight processing component that is driving the physical display 32.

Claims

1. A method of transmitting display data from a host device to a remote display device over a general purpose network, comprising:
generating display data including an overlay portion,
generating video data corresponding to the overlay portion,
generating memory address data for the display data,
transmitting the display data, video data and memory address data from the host device to the remote display device over the general purpose network,
storing the display data according to the memory address data,
decoding the video data,
storing the decoded video data, and
displaying an image frame from the stored display data and decoded video data.
2. A method according to claim 1, and further comprising transmitting an overlay specification from the host device to the remote display device.
3. A method according to claim 1 or 2, and further comprising creating a first network connection for transmitting the display data and memory address data and creating a second network connection for transmitting the video data.
4. A method according to claim 1, 2 or 3, and further comprising generating further video data and creating a third network connection for transmitting the further video data.
5. A method according to any preceding claim, and further comprising writing the decoded video data to the stored display data.
6. A display system comprising:
a host device comprising a frame buffer, a video streaming component and a display driver,
a general purpose data network, and
a remote display device, connected to the host device via the general purpose network, and comprising a frame buffer, a video decoder, an overlay buffer, a memory controller and a display,
wherein the frame buffer of the host device is arranged to generate display data including an overlay portion, the video streaming component is arranged to generate video data corresponding to the overlay portion, the display driver is arranged to generate memory address data for the display data and to transmit the display data, video data and memory address data to the remote display device over the general purpose network, the frame buffer of the remote display device is arranged to store the display data according to the memory address data, the video decoder is arranged to decode the video data, the overlay buffer is arranged to store the decoded video data, and the display is arranged to display an image frame from the stored display data and decoded video data.
7. A system according to claim 6, wherein the display driver is further arranged to transmit an overlay specification to the remote display device over the general purpose network.
8. A system according to claim 6 or 7, wherein the display driver is further arranged to create a first network connection for transmitting the display data and memory address data and to create a second network connection for transmitting the video data.
9. A system according to claim 6, 7 or 8, wherein the video streaming component is further arranged to generate further video data and the display driver is further arranged to create a third network connection for transmitting the further video data.
10. A system according to any one of claims 6 to 9, wherein the memory controller is further arranged to write the decoded video data to the frame buffer of the remote display device.
PCT/US2009/038735 2009-03-30 2009-03-30 System and method of transmitting display data to a remote display WO2010114512A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2009/038735 WO2010114512A1 (en) 2009-03-30 2009-03-30 System and method of transmitting display data to a remote display

Publications (1)

Publication Number Publication Date
WO2010114512A1 (en) 2010-10-07

Family

ID=40606024

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/038735 WO2010114512A1 (en) 2009-03-30 2009-03-30 System and method of transmitting display data to a remote display

Country Status (1)

Country Link
WO (1) WO2010114512A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060069797A1 (en) * 2004-09-10 2006-03-30 Microsoft Corporation Systems and methods for multimedia remoting over terminal server connections
WO2006061582A2 (en) * 2004-12-07 2006-06-15 Newnham Research Limited Address based graphics protocol
US20060282855A1 (en) * 2005-05-05 2006-12-14 Digital Display Innovations, Llc Multiple remote display system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012170118A1 (en) * 2011-06-08 2012-12-13 Cisco Technology, Inc. Virtual meeting video sharing
US8621352B2 (en) 2011-06-08 2013-12-31 Cisco Technology, Inc. Virtual meeting video sharing
CN103718152A (en) * 2011-06-08 2014-04-09 思科技术公司 Virtual meeting video sharing
US9571534B2 (en) 2011-06-08 2017-02-14 Cisco Technology, Inc. Virtual meeting video sharing
US9253490B2 (en) 2013-05-31 2016-02-02 Qualcomm Technologies International, Ltd. Optimizing video transfer
CN103986935A (en) * 2014-04-30 2014-08-13 华为技术有限公司 Encoding method, encoder and screen sharing device and system
CN104837048A (en) * 2015-05-08 2015-08-12 深圳市飞图视讯有限公司 Screen mirror implementation method and system
US11151749B2 (en) 2016-06-17 2021-10-19 Immersive Robotics Pty Ltd. Image compression method and apparatus
US11150857B2 (en) 2017-02-08 2021-10-19 Immersive Robotics Pty Ltd Antenna control for mobile device communication
US11429337B2 (en) 2017-02-08 2022-08-30 Immersive Robotics Pty Ltd Displaying content to users in a multiplayer venue
US11153604B2 (en) 2017-11-21 2021-10-19 Immersive Robotics Pty Ltd Image compression for digital reality
US11553187B2 (en) 2017-11-21 2023-01-10 Immersive Robotics Pty Ltd Frequency component selection for image compression

Similar Documents

Publication Publication Date Title
WO2010114512A1 (en) System and method of transmitting display data to a remote display
US9619916B2 (en) Method for transmitting digital scene description data and transmitter and receiver scene processing device
US7667707B1 (en) Computer system for supporting multiple remote displays
JP5129151B2 (en) Multi-user display proxy server
US20060282855A1 (en) Multiple remote display system
US20130147787A1 (en) Systems and Methods for Transmitting Visual Content
US20100165079A1 (en) Frame processing device, television receiving apparatus and frame processing method
US20140281896A1 (en) Screencasting for multi-screen applications
US11217201B2 (en) Video frame interfaces for logically-defined pixels
KR20030081463A (en) Image display system
JP4623860B2 (en) Method and system for using a single OSD pixmap across multiple video raster sizes by chaining OSD headers
US7724279B2 (en) O/S application based multiple device access windowing display
CN110187858B (en) Image display method and system
CN111741343B (en) Video processing method and device and electronic equipment
US20140297720A1 (en) Client apparatus, server apparatus, multimedia redirection system, and method thereof
JP2001265313A (en) Device and method for processing signal, and computer- readable storage medium
KR101871403B1 (en) Media control device application executing method and system in media displaying device using presentation virtualization
CN110581960B (en) Video processing method, device, system, storage medium and processor
WO2008018860A1 (en) Multiple remote display system
CN109788340B (en) Content providing apparatus, control method of content providing apparatus, and recording medium thereof
WO2012171156A1 (en) Wireless video streaming using usb connectivity of hd displays
NL1032712C2 (en) DEVICE FOR CARRYING OUT MULTIMEDIA DATA AND A METHOD FOR CARRYING OUT IT.
CN115174991B (en) Display equipment and video playing method
KR102568415B1 (en) HMD-based PC game expansion system
KR100654834B1 (en) Host device, display device and display system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09789547

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09789547

Country of ref document: EP

Kind code of ref document: A1