US20160112259A1 - Home Cloud with Virtualized Input and Output Roaming over Network - Google Patents
- Publication number
- US20160112259A1 (application US 14/985,945)
- Authority
- US
- United States
- Prior art keywords
- data
- virtualized
- input
- recited
- client device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2807—Exchanging configuration information on appliance services in a home automation network
- H04L12/2809—Exchanging configuration information on appliance services in a home automation network indicating that an appliance service is present in a home automation network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/1066—Session management
- H04L65/1069—Session establishment or de-establishment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/04—Protocols specially adapted for terminals or networks with limited capabilities; specially adapted for terminal portability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00127—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
- H04N1/00204—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server
- H04N1/00244—Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture with a digital computer or a digital computer system, e.g. an internet server with a server, e.g. an internet server
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2807—Exchanging configuration information on appliance services in a home automation network
- H04L12/281—Exchanging configuration information on appliance services in a home automation network indicating a format for calling an appliance service function in a home automation network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/283—Processing of data at an internetworking point of a home automation network
- H04L12/2832—Interconnection of the control functionalities between home networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L2012/2847—Home automation networks characterised by the type of home appliance used
- H04L2012/2849—Audio/video appliances
Definitions
- NUI natural user interfaces
- computing capabilities and natural user interface (NUI) functions are available on a wide variety of computing devices, which include televisions, game consoles, desktop computers, tablets, and smart phones, to name a few.
- these computing capabilities and NUI functions are still, however, confined to respective devices, platforms or applications, and cannot be shared with other devices, platforms or applications due to diverse and incompatible input and output data formats of different computing devices, for example.
- a first device may virtualize data (such as input and output data) thereof and redirect part or all of the virtualized data to one or more other devices to leverage resources of the first device for the one or more other devices.
- the first device may detect a presence of a second device in a proximity of the first device. In response to detecting the presence of the second device, the first device may establish a network or data connection with the second device. Additionally, the first device may further determine functional capabilities (such as data processing capabilities, display capabilities, etc.) of the second device.
- the first device may negotiate with the second device with respect to one or more responsibilities. For example, the first device may negotiate with the second device regarding an extent of data transformation that is to be performed by the first device for the second device and an extent of data transformation that is to be performed by the second device for the first device based on the functional capabilities of the second device and functional capabilities of the first device.
- the first device may virtualize its data (input and/or output data) into virtualized data.
- the first device may transform the virtualized data into a virtualized data stream (e.g., in a virtualization interconnection layer that is on top of a data network layer) based on a result of the negotiation between the first device and the second device.
- the first device may send the virtualized data stream to the second device, thus leveraging resources of the first device for the second device.
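- As a rough illustration of the flow summarized above (capability-based negotiation, virtualization, and streaming), the following Python sketch uses hypothetical names such as HomeCloudDevice, Capabilities and negotiate_split; it is not taken from the patent, and device discovery is assumed to have already happened.

```python
# Illustrative sketch only: the class and method names below are assumptions.
from dataclasses import dataclass, field

@dataclass
class Capabilities:
    cpu_score: int                      # relative processing power
    display: tuple                      # (width, height) of the display
    codecs: set = field(default_factory=set)

@dataclass
class HomeCloudDevice:
    name: str
    caps: Capabilities

    def negotiate_split(self, other: "HomeCloudDevice") -> str:
        """Decide which side performs most of the data transformation,
        based on the relative functional capabilities of the two devices."""
        return "sender" if self.caps.cpu_score >= other.caps.cpu_score else "receiver"

    def virtualize(self, raw_frame: bytes) -> bytes:
        """Virtualize local output (e.g., a rendered screen image),
        with or without presenting it locally."""
        return raw_frame

    def send_stream(self, other: "HomeCloudDevice", raw_frame: bytes) -> None:
        split = self.negotiate_split(other)          # negotiated once per connection
        data = self.virtualize(raw_frame)
        if split == "sender":
            data = data[::1]                         # placeholder for pre-processing/encoding
        print(f"{self.name} -> {other.name}: {len(data)} bytes, transform at {split}")

phone = HomeCloudDevice("phone", Capabilities(2, (1080, 1920)))
tv = HomeCloudDevice("tv", Capabilities(5, (3840, 2160), {"h264"}))
phone.send_stream(tv, b"\x00" * 1024)
```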
- FIG. 1 illustrates an example environment including a home cloud computing system.
- FIG. 2 illustrates the example virtualization system of FIG. 1 in more detail.
- FIG. 3 illustrates data flow between two example client devices in the example home cloud computing system.
- FIG. 4 illustrates an example method of data virtualization and resource leveraging.
- the home cloud computing system employs a virtualization system to leverage resources of a plurality of devices by virtualizing respective inputs and outputs, and enables communication and interpretation of the inputs and the outputs among and across the plurality of devices regardless of input and/or output capabilities, types, formats and/or complexities of the plurality of devices. Furthermore, the virtualization system allows composing all or parts of different inputs and/or outputs from different devices to form a combined input or output that may be manipulated and/or presented in one or more devices.
- the virtualization system may be included or installed in a device that interacts with a user, i.e., a device that receives inputs from the user and/or presents outputs to the user.
- the virtualization system enables the device to share or redirect resources thereof, such as data processing and display capabilities, without specifying a particular device beforehand.
- the virtualization system also allows the device to adapt to a particular device during and/or after establishing a network connection with that particular device.
- the virtualization system leverages resources by virtualizing input and/or output data of the device.
- Examples of virtualization may include, but are not limited to, generating output data (e.g., screen data) with or without presenting the output data locally at the device, capturing input data through a user interface of the device with or without processing the input data and/or providing a response with respect to the input data locally, etc.
- the virtualization system may redirect part or all of the input data and/or the output data of the device to one or more other devices through a network for data presentation and/or processing.
- the virtualization system may pre-process the data of the device prior to sending the data to the one or more other devices.
- a virtualization system of a first device may have negotiated with a second device (or a virtualization system of the second device) an extent of data transformation (for example, data pre-processing, encoding, etc.) that the first device may perform for the second device and/or an extent of data transformation that the second device may perform for the first device when establishing a network or data connection between the first device and the second device.
- the first device may pre-process and encode the data in accordance with a result of the negotiation, and send the pre-processed/encoded data to the second device thereafter.
- the described home cloud computing system enables interaction among a plurality of devices that may have diverse input and output capabilities, types and formats, and leverages resources of the plurality of devices through input and output virtualization.
- the virtualization system virtualizes input and output data of a device, establishes or facilitates a network connection with another device for the device, negotiates respective degrees of data transformation to be done by each device, transforms the data of the device, and sends the data to the other device and/or receives data from the other device.
- these functions may be performed by multiple separate systems or services.
- a virtualization service may virtualize data of the device, while a separate service may establish a network connection with the other device and negotiate respective degrees of data transformation to be performed by each device, and yet another service may transform the data of the device and send the data to the other device and/or receive data from the other device.
- the virtualization system may be implemented as software and/or hardware installed in a device, in other embodiments, the virtualization system may be implemented as a separate entity or device that is peripheral or attached to the device. Furthermore, in some embodiments, the virtualization system may be implemented as software and/or hardware included in one or more other devices forming a network through which each device of the plurality of devices connects to one another (e.g., a local server or router, etc.). Additionally or alternatively, the virtualization system may be implemented as a service provided in one or more servers over the network and/or in a cloud computing architecture.
- the application describes multiple and varied implementations and embodiments.
- the following section describes an example environment that is suitable for practicing various implementations.
- the application describes example systems, devices, and processes for implementing a home cloud computing system.
- FIG. 1 illustrates an exemplary environment 100 that implements a home cloud computing system.
- the environment 100 may include a plurality of client devices 102-1, 102-2, 102-3, 102-4, . . . , 102-N (which are collectively referred to as client devices 102) and a network 104.
- each of the plurality of client devices 102 may be installed or attached with a virtualization system 106 .
- the plurality of client devices 102 may communicate data with one another through respective virtualization systems 106 via the network 104 .
- although each client device 102 is described as having a respective virtualization system 106 installed or attached thereto, in other embodiments, some or all of the functions of the virtualization system 106 may be included in one or more entities other than the client devices 102.
- some or all of the functions of the virtualization system 106 may be included and distributed in the client device 102 and a separate device that is peripheral to the client device 102 (e.g., a peripheral device, a set-top box, etc.).
- some or all of the functions of the virtualization system 106 may be included and distributed among the client device 102 and one or more servers 108 that are connected to the network 104 .
- the client device 102 may include part of the functions of the virtualization system 106 while other functions of the virtualization system 106 may be included in one or more other servers 108 .
- the virtualization system 106 may be included in one or more third-party servers, e.g., other servers 108 , that may or may not be a part of a cloud computing system or architecture.
- the client device 102 may be implemented as any of a variety of conventional computing devices including, for example, a mainframe computer, a server, a notebook or portable computer, a handheld device, a netbook, an Internet appliance, a tablet or slate computer, a mobile device (e.g., a mobile phone, a personal digital assistant, a smart phone, etc.), a game console, a set-top box, etc. or a combination thereof.
- the client device 102 may be implemented as any of a variety of conventional consumer devices including, for example, a television, a digital picture frame, an audio player, a video player, an eReader, a digital camera, etc. or a combination thereof.
- the virtualization system 106 may be configured to provide networking and computing capabilities for these consumer devices having no or limited networking and computing capabilities.
- the network 104 may be a wireless or a wired network, or a combination thereof.
- the network 104 may be a collection of individual networks interconnected with each other and functioning as a single large network (e.g., the Internet or an intranet). Examples of such individual networks include, but are not limited to, telephone networks, cable networks, Local Area Networks (LANs), Wide Area Networks (WANs), and Metropolitan Area Networks (MANs). Further, the individual networks may be wireless or wired networks, or a combination thereof.
- the client device 102 may include one or more processors 110 coupled to memory 112 .
- the memory 112 includes one or more applications or services 114 (e.g., web applications or services, video applications or services, etc.) and other program data 116 .
- the memory 112 may be coupled to, associated with, and/or accessible to other devices, such as network servers, routers, and/or the other servers 108 .
- the client device 102 may include an input interface 118 (such as a touch screen, a touch pad, a remote controller, a mouse, a keyboard, a camera, a microphone, etc.) and an output interface 120 (e.g., a screen, a loudspeaker, etc.).
- a user 122 may use the plurality of client devices 102 for home entertainment.
- the user 122 may read a web page on a mobile phone (e.g., client device 102 - 1 ) and note a video on the web page.
- the user 122 may want to watch the video on a television (e.g., client device 102 - 2 ) and use the mobile phone as a remote controller of the television.
- the user 122 may further want to read textual information of the web page using a tablet (e.g., client device 102 - 3 ).
- the user 122 may perform these operations using the plurality of client devices 102 that include the virtualization systems 106 .
- FIG. 2 illustrates the virtualization system 106 in more detail.
- the virtualization system 106 includes, but is not limited to, one or more processors 202 , a network interface 204 , memory 206 , and an input/output interface 208 .
- the processor(s) 202 is configured to execute instructions received from the network interface 204 , received from the input/output interface 208 , and/or stored in the memory 206 .
- the memory 206 may include computer-readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or non-volatile memory, such as read only memory (ROM) or flash RAM.
- the memory 206 is an example of computer-readable media.
- Computer-readable media includes at least two types of computer-readable media, namely computer storage media and communications media.
- Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
- Computer storage media includes, but is not limited to, phase change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
- communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism.
- computer storage media does not include communication media.
- the memory 206 may include program modules 210 and program data 212 .
- the virtualization system 106 may include a virtualization module 214 .
- the virtualization module 214 captures inputs and/or outputs of associated client device 102 and/or separates the inputs and/or outputs from the client device 102 .
- Examples of inputs may include, but are not limited to, data captured from a user interface, such as mouse movement and click, keyboard's keystrokes, images from touch surfaces, visual and depth information from cameras or motion sensors, control signals from game controllers, data from accelerometers and/or gyros, data from contextual or ambient sensors, such as location information from GPS (Global Positioning System), WiFi, and other positioning sensors, orientation data from a compass sensor, data from a light sensor, and/or any other possible sensors attached to the client device 102 .
- Examples of the outputs may include, for example, screen contents, audio contents, feedback of haptic devices, and any other results that may be presented to the user 122 .
- the virtualization module 214 may generate data virtually inside the client device 102 to obtain virtualized data (e.g., virtualized input (VI) or virtualized output (VO)). For example, the virtualization module 214 may virtually render a screen image with or without displaying the screen image in a local display of the client device 102 and/or pre-process or pre-interpret real input raw data into a high level input signal (e.g., interpreting an image from a touch surface to a high level touch gesture, etc.).
- the virtualization module 214 may generate data virtually inside the client device 102 by intercepting physical data directly from a physical device (such as a camera or motion sensor attached to the client device 102 ) that generates the physical data through a piece of software (e.g., an application programming interface (API) for screen capture, etc.) or a piece of hardware (such as a splitter connected to a display port of the client device 102 , for example).
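- A minimal sketch of the two kinds of virtualization mentioned above, i.e., pre-interpreting raw touch samples into a high-level gesture (a virtualized input) and capturing a rendered frame without necessarily showing it locally (a virtualized output); the function and class names are illustrative assumptions, not the patent's API.

```python
# Illustrative sketch; names are assumptions, not the patent's API.
from dataclasses import dataclass

@dataclass
class VirtualizedInput:
    kind: str          # e.g., "tap" or "swipe"
    payload: dict

@dataclass
class VirtualizedOutput:
    kind: str          # e.g., "screen"
    frame: bytes

def interpret_touch(samples):
    """Pre-interpret raw (x, y, t) touch samples into a coarse gesture."""
    if len(samples) < 2:
        return VirtualizedInput("tap", {"at": samples[0][:2]})
    dx = samples[-1][0] - samples[0][0]
    direction = "right" if dx > 0 else "left"
    return VirtualizedInput("swipe", {"direction": direction, "pixels": abs(dx)})

def capture_screen(render_fn) -> VirtualizedOutput:
    """Render a frame virtually; it need not reach the local display."""
    return VirtualizedOutput("screen", render_fn())

vi = interpret_touch([(10, 200, 0.00), (180, 205, 0.12)])
vo = capture_screen(lambda: b"\x00" * (640 * 480))
print(vi.kind, vi.payload, len(vo.frame))
```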
- the virtualization system 106 may further include a virtualization interconnection layer 216 .
- the virtualization interconnection layer 216 may be added and attached to the client device 102 on top of existing data network layers and may serve as a higher-level abstraction layer to interconnect the client device 102 with other client devices 102 .
- the virtualization interconnection layer 216 facilitates data roaming or migration, i.e., moving the virtualized data from the client device 102 that generates the data to one or more other client devices 102 .
- the virtualization interconnection layer 216 may be made to be independent of the virtualization module 214 (or the virtualization process that is done by the virtualization module 214 ).
- the virtualization system 106 may enable this separation of the virtualization interconnection layer 216 from the virtualization module 214 (or process) by defining a programming model 218 and corresponding application programming interface (API) that is independent of what is performed by the virtualization module 214 (or process).
- the virtualization system 106 may employ a driver model that may rely on respective software virtualization drivers for corresponding input and output devices.
- the virtualization interconnection layer 216 may be implemented in dedicated, cost-effective hardware or in a piece of software that provides minimal functions for the data roaming without involving sophisticated computation. Furthermore, this separation decouples input and output functions from the main computing tasks of the applications 114 associated with the client device 102, and thus relieves developers of the applications 114 and/or the client device from considering which one or more output devices the outputs may go to and/or which one or more input devices the inputs may come from. Moreover, the separation allows the virtualization interconnection layer 216 to interpret or translate data differently for different client devices 102, for example, enabling translation of gestures captured by cameras or motion sensors into mouse signals that are applicable to computer applications.
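- The driver-style programming model described above can be pictured as follows: applications write to an abstract output interface, and interchangeable virtualization drivers decide whether the data goes to the local display or roams to another device. This is a sketch under assumed names (OutputDriver, RoamingDriver), not the patent's API.

```python
# Sketch of a driver-style model; interface names are assumptions.
from abc import ABC, abstractmethod

class OutputDriver(ABC):
    @abstractmethod
    def present(self, frame: bytes) -> None: ...

class LocalDisplayDriver(OutputDriver):
    def present(self, frame: bytes) -> None:
        print(f"local display: {len(frame)} bytes")

class RoamingDriver(OutputDriver):
    """Forwards virtualized output over the interconnection layer
    instead of the local screen."""
    def __init__(self, send):
        self.send = send

    def present(self, frame: bytes) -> None:
        self.send(frame)

def application_render(driver: OutputDriver) -> None:
    # The application never names a concrete output device; the driver decides.
    driver.present(b"\xff" * 64)

application_render(LocalDisplayDriver())
application_render(RoamingDriver(lambda data: print(f"roamed: {len(data)} bytes")))
```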
- the virtualization interconnection layer 216 of the client device 102 may include a sending virtualization interconnection node 220 (which may be referred to as SVIN) and/or a receiving virtualization interconnection node 222 (which may be referred to as RVIN).
- for example, in an event that virtualized data flows in only one direction, such as from a computer to a television, the computer may only need to have the sending virtualization interconnection node 220 while the receiving virtualization interconnection node 222 may only be needed on the television or on a computing device that is attached to the television. For ease of description, the virtualization interconnection layer 216 associated with the client device 102 is described hereinafter as having both the sending virtualization interconnection node 220 and the receiving virtualization interconnection node 222.
- the sending virtualization interconnection node 220 may be interfaced with the virtualization module 214 , and receive virtualized data (i.e., virtualized input (VI) and/or virtualized output (VO)) from the virtualization module 214 .
- the sending virtualization interconnection node 220 may pre-process and/or encode the virtualized data, and transmit the processed/encoded virtualized data to one or more other client devices 102 via the network 104 .
- the sending virtualization interconnection node 220 may include a pre-processing module 224 , an encoding module 226 and a sending module 228 .
- the receiving virtualization interconnection node 222 may receive virtualized data from one or more other client devices 102 via the network 104, and decode, reconstruct or process the received virtualized data into a form that is acceptable to the device to which the receiving virtualization interconnection node 222 is attached, i.e., the client device 102 in this example.
- the receiving virtualization interconnection node 222 may include a receiving module 230 , a decoding module 232 and a post-processing module 234 .
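- The two pipelines named above (pre-process, encode and send on the sending node; receive, decode and post-process on the receiving node) can be sketched as below, with zlib standing in for the patent's codecs and a plain list standing in for the network; this shows structure only, not the actual modules.

```python
# Structural sketch only; zlib and the list "channel" are stand-ins.
import zlib

class SendingNode:
    def pre_process(self, data: bytes) -> bytes:
        return data.strip(b"\x00")            # e.g., drop blank margins

    def encode(self, data: bytes) -> bytes:
        return zlib.compress(data)

    def send(self, data: bytes, channel: list) -> None:
        channel.append(data)                  # stand-in for the data network layer

class ReceivingNode:
    def receive(self, channel: list) -> bytes:
        return channel.pop(0)

    def decode(self, data: bytes) -> bytes:
        return zlib.decompress(data)

    def post_process(self, data: bytes) -> bytes:
        return data                           # e.g., filtering or scaling

channel: list = []
svin, rvin = SendingNode(), ReceivingNode()
svin.send(svin.encode(svin.pre_process(b"\x00\x00screen-frame\x00")), channel)
print(rvin.post_process(rvin.decode(rvin.receive(channel))))
```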
- FIG. 3 illustrates data flow between two client devices 102 through respective virtualization interconnection layers 216 .
- the sending virtualization interconnection node 220 may pre-process and prepare the virtualized data for further encoding using the pre-processing module 224 .
- the pre-processing module 224 may perform one or more operations on the virtualized data. Examples of the one or more operations include, but are not limited to, analyzing, converting, recognizing, enhancing, filtering, extracting, combining, synchronizing, interpolating, subsampling, cleaning and denoising the virtualized data.
- the pre-processing module 224 may analyze output content of the virtualized data, such as classifying screen contents into different types (e.g., text regions, image regions, video regions, object regions, etc.). Additionally or alternatively, the pre-processing module 224 may convert raw input data into a format that is easier to encode and transmit, such as converting raw images from sensors of a touch surface into high-level gestures, or converting raw depth information from camera or motion sensors into high-level body gestures (such as direction and amount of movement, etc.), etc.
- an extent of data processing performed by the pre-processing module 224 may be determined by one or more factors.
- the one or more factors may include, but are not limited to, processing power, memory, storage capacity, network bandwidth, flexibility, delay criterion, etc., associated with the client device 102 and/or the virtualization system 106 of the client device 102.
- the one or more factors may include relative processing power, memory, storage capacity, network bandwidth, flexibility, delay criterion, etc., between the client device 102 (and/or the virtualization system 106 of the client device 102 ) and another client device 102 (and/or a virtualization system 106 of the other client device 102 ) to which the virtualized data is to be sent.
- the pre-processing module 224 may pass the virtualized data transparently to the encoding module 226, relying on the receiving virtualization interconnection node 222 of the receiving client device 102 to process the virtualized data. Specifically, what is to be pre-processed and/or the extent to which the pre-processing is to be done by the pre-processing module 224 depend on one or more factors as described above and on other system considerations including, for example, a balance between different factors to achieve an overall optimized system performance.
- upon pre-processing the virtualized data, the pre-processing module 224 sends the pre-processed virtualized data to the encoding module 226 for further processing.
- the encoding module 226 may further compress and/or encode the pre-processed virtualized data into a compressed virtualized data stream.
- the encoding module 226 may apply a generic lossless compression algorithm to compress the pre-processed virtualized data to a predetermined level, for example, for sending to the receiving client device 102 .
- the encoding module 226 may apply a content-aware encoding (and/or compression) scheme, i.e., applying different encoding and/or compression algorithms for different types or formats of the virtualized data. Furthermore, in some embodiments, the encoding module 226 may apply different and adaptive encoding and/or compression algorithms based on characteristics of the data at different regions and times even with the same data type. Examples of applications of different encoding or compression algorithms for different types or formats of virtualized data will be described in more detail hereinafter.
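- A small sketch of content-aware encoding in the spirit of the item above: each classified region picks a different (stand-in) compression path. The region types follow the text; the codec choices are illustrative only.

```python
# Illustrative content-aware encoding; zlib levels stand in for real codecs.
import zlib

def encode_region(kind: str, data: bytes) -> bytes:
    if kind == "text":
        return zlib.compress(data, level=9)   # lossless, text compresses well
    if kind == "image":
        return zlib.compress(data, level=6)   # stand-in for an image codec
    if kind == "video":
        return data                           # pass through an existing bitstream
    return zlib.compress(data)                # generic fallback

regions = [("text", b"hello world" * 20),
           ("image", bytes(range(256)) * 4),
           ("video", b"<already-compressed-bitstream>")]
stream = b"".join(encode_region(kind, data) for kind, data in regions)
print(len(stream), "bytes in the virtualized data stream")
```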
- the sending module 228 may transmit or put the compressed virtualized data into the network 104 for other client devices 102 connected to the network 104 to share and utilize. Furthermore, the sending module 228 may serve as a system coordinator to coordinate operations and/or behaviors of the other modules (e.g., the pre-processing module 224 and the encoding module 226 ) in the virtualization interconnection layer 216 of the client device 102 . Additionally or alternatively, the sending module 228 may negotiate with the virtualization interconnection layer 216 of another client device 102 .
- Examples of negotiation may include, but are not limited to, an extent of data transformation (or processing) that the other client device 102 may perform for the client device 102 of the sending module 228 and/or an extent of data transformation (or processing) that the client device 102 of the sending module 228 may perform for the other client device 102 , etc.
- the sending module 228 may monitor network conditions of the network 104 including, for example, bandwidth and delay.
- the sending module 228 may adaptively guide the pre-processing module 224 and the encoding module 226 to adapt to changes in the network conditions and/or changes in capabilities (such as memory, storage, workload, etc.) of the client device 102 and/or one or more client devices 102 that receive the virtualized data stream.
- the sending module 228 may encrypt sensitive data included in the compressed virtualized data stream prior to sending the data stream to the network 104 .
- the sending module 228 may further broadcast a presence of the client device 102 in a proximity or neighborhood of the client device 102 . Additionally or alternatively, the sending module 228 may broadcast the presence of the client device 102 via the network 104 . For example, the sending module 228 may send a broadcast message using a network protocol such as Simple Service Discovery Protocol (SSDP) to the network 104 , allowing other client devices 102 to discover the presence of the client device 102 . Additionally, the sending module 228 may further broadcast one or more capabilities (e.g., abilities to display video and image, play audio, receive user input through a user interface such as a touchscreen, etc.), data qualities (such as display resolution, audio quality, etc.), acceptable data formats and/or types, etc. The sending module 228 may broadcast or advertise the virtualization interconnection layer 216 and/or corresponding client device 102 as a service to other client devices 102 of the same home cloud system or network 104 .
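- The presence broadcast described above could look roughly like the following SSDP-style NOTIFY message; the service type and the capability header are hypothetical, and no discovery responses are handled.

```python
# Sketch of an SSDP-style presence announcement; header values are assumptions.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
notify = (
    "NOTIFY * HTTP/1.1\r\n"
    f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
    "NT: urn:example:service:home-cloud-vin:1\r\n"      # hypothetical service type
    "NTS: ssdp:alive\r\n"
    "USN: uuid:client-device-102-1\r\n"
    # Capability hints carried as an extra header (illustrative only):
    "X-CAPS: display=1920x1080;audio=yes;touch=yes\r\n"
    "\r\n"
).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.sendto(notify, (SSDP_ADDR, SSDP_PORT))
sock.close()
```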
- the sending module 228 may detect or discover one or more client devices 102 by receiving a broadcast message from the one or more client devices 102 in a proximity or neighborhood of the client device 102 and/or through the network 104 .
- the sending module 228 may adaptively or proactively discover or recommend one or more client devices 102 that may provide a better experience to the user 122 with respect to the content that the user 122 is consuming.
- the sending module 228 of the client device 102 may detect or determine whether one or more other client devices 102 may be used for providing a better display resolution or size for the video, for example.
- the sending module 228 may provide a prompt to the user 122 that he/she may switch to watch the video on another client device 102 found.
- the sending module 228 may further be configured to authenticate and verify one or more client devices 102 that attempt to connect to the client device 102 of the sending module 228 before finally establishing the connections with the one or more client devices 102 .
- the sending module 228 may encrypt sensitive data present in the virtualized data stream to prevent other unrelated or unauthenticated devices from intercepting or tampering with the sensitive data.
- the sending module 228 may support error resilience in case of a loss of data (or data packets in the virtualized data stream) during data transmission in the network 104.
- the sending module 228 may switch to real time and low delay transmission of the virtualized data stream from the sending client device 102 to the receiving client device 102 .
- the sending module 228 may treat a low delay (within a predetermined time threshold) as a criterion for smooth and natural user interaction for the user 122. This is because the virtualized data (VI and VO, for example) needs to go through a network layer where a transmission delay may be introduced in addition to delays due to data pre-processing, encoding, decoding and/or post-processing, etc.
- a streaming sending buffer 236 and a streaming receiving buffer 238 may be deployed with the sending module 228 and the receiving module 230 , respectively.
- the sending module 228 may guide the encoding module 226 to adaptively implement a rate control algorithm according to available channel bandwidth in the network 104. Additionally or alternatively, in some embodiments, the sending module 228 may identify which region and/or object in the virtualized data stream may be the focus of attention of the user 122 (e.g., a button pressed in an application, etc.) and prioritize a sending order of regions and/or objects in the virtualized data stream, so that regions and/or objects having a higher priority are updated or sent faster or more frequently than those having a lower priority. Additionally or alternatively, the virtualized data may be combined with locally rendered data that represents the (most) actively changed portions, so that delay for this type of content may no longer be an issue.
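- The prioritized sending order mentioned above can be sketched with a simple priority queue that favors regions near the user's focus of attention; the distance-based scoring rule here is made up for illustration.

```python
# Illustrative prioritization of regions/objects; the scoring rule is assumed.
import heapq

def send_order(regions, focus):
    """regions: list of (name, (x, y)); focus: (x, y) of the user's attention."""
    fx, fy = focus
    queue = []
    for name, (x, y) in regions:
        distance = abs(x - fx) + abs(y - fy)   # closer to the focus = higher priority
        heapq.heappush(queue, (distance, name))
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

regions = [("background", (0, 0)), ("pressed_button", (410, 300)),
           ("video_panel", (600, 350)), ("status_bar", (10, 1050))]
print(send_order(regions, focus=(400, 310)))
# -> ['pressed_button', 'video_panel', 'background', 'status_bar']
```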
- the sending module 228 may further detect and signal to other modules (e.g., the pre-processing module 224 and the encoding module 226 of its associated client device 102 and the other modules of the client device 102 that receives the virtualized data stream, etc.) that a channel error, such as a packet loss or error, has occurred to allow the other modules to take proper actions to recover or compensate for the error at the sending client device 102 or the receiving client device 102 .
- the home cloud system may not know in advance what kinds of client devices 102 are connected thereto, and/or in what kind of network conditions the home cloud system is connected.
- the sending module 228 may facilitate adaptation of one or more services of the client device 102 for one or more other client devices 102 that are connected to the network 104 and the network conditions.
- the sending module 228 may perform negotiation with another client device 102 at the time when a connection is established with the other client device 102 .
- the sending module 228 may perform adaptation with the other client device 102 continuously or regularly when transmitting the virtualized data stream to the other client device 102 .
- the sending module 228 of the client device 102 and the receiving module 230 of another client device 102 may exchange information regarding capabilities (such as screen resolutions, pre-processing/post-processing capabilities, encoding/decoding capabilities, acceptable data types or formats, input or output capabilities, etc.), etc.
- the sending module 228 may coordinate with other modules (e.g., the pre-processing module 224 and the encoding module 226 ) in the virtualization interconnection layer 216 to select a suitable configuration to meet the needs of the other client device 102 .
- the sending module 228 of the client device 102 may constantly monitor network conditions (such as network bandwidth, network delay, rate of packet loss, etc.) and data transmission health conditions, etc., within a predetermined time interval. Additionally or alternatively, the sending module 228 may detect changes that may occur in one or more client devices 102 to which the client device 102 of the sending module 228 is connected. Based on the above collected information, the sending module 228 may direct other modules (e.g., the pre-processing module 224 and the encoding module 226 ) to change one or more operations (e.g., performing additional pre-processing operations, etc.) to adapt to the current situation. For example, if a bandwidth change is detected, the sending module 228 may inform the encoding module 226 to change a bit rate of the virtualized data stream to adapt to this bandwidth change.
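- A compact sketch of the connection-time negotiation and the ongoing adaptation described in the last few items: both sides agree on a common codec, a target resolution and which side pre-processes, and the sender later lowers its bit rate when it observes a bandwidth drop. The capability fields and thresholds are illustrative assumptions.

```python
# Illustrative negotiation/adaptation; field names and thresholds are assumed.
def negotiate(sender_caps: dict, receiver_caps: dict) -> dict:
    common_codecs = sender_caps["codecs"] & receiver_caps["codecs"]
    return {
        "codec": sorted(common_codecs)[0] if common_codecs else "raw",
        "resolution": min(sender_caps["resolution"], receiver_caps["resolution"]),
        "pre_process_at_sender": sender_caps["cpu"] >= receiver_caps["cpu"],
    }

def adapt_bitrate(current_kbps: int, measured_bandwidth_kbps: int) -> int:
    # Leave ~20% headroom; never exceed the previously agreed rate.
    return min(current_kbps, int(measured_bandwidth_kbps * 0.8))

session = negotiate(
    {"codecs": {"screen-codec", "zip"}, "resolution": (1280, 720), "cpu": 4},
    {"codecs": {"screen-codec"}, "resolution": (3840, 2160), "cpu": 8},
)
print(session)                    # codec, resolution and transform split agreed once
print(adapt_bitrate(8000, 5000))  # sender drops to ~4000 kbps after a bandwidth dip
```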
- the client device 102 may receive a virtualized data stream from another client device 102 through the virtualization system 106 via the network 104 .
- the client device 102 may receive the virtualized data stream from another client device 102 through the receiving module 230 of the receiving virtualization interconnection node 222 of the virtualization system 106 .
- the receiving module 230 may perform one or more operations similar to or complementary to the sending module 228 .
- the receiving module 230 may identify or verify identity of the other client device 102 and/or the virtualized data stream, and determine the authenticity of the other client device 102 or a source of the virtualized data stream.
- the receiving module 230 may decrypt a certain part of data included in the received virtualized data stream if the certain part of data is encrypted. Additionally or alternatively, the receiving module 230 may facilitate adaptation of the home cloud computing system in response to changes (e.g., changes in network conditions, capabilities of one or more client devices 102 , etc.) by exchanging information regarding capabilities of the client device 102 and other client devices 102 with the sending modules 228 of the other client devices 102 .
- the receiving module 230 may forward the received virtualized data stream to the decoding module 232 for decoding and/or decompression.
- the decoding module 232 may decode and/or decompress the received virtualized data stream in an opposite way that the encoding module 226 of the other client device 102 has done on the received virtualized data stream.
- the two client devices 102 may have negotiated and established respective responsibilities for data to be transmitted therebetween when a connection between the client devices 102 is established, and may renegotiate after the connection when a change occurs (e.g., changes in capabilities of the client devices, network conditions, etc.).
- the decoding module 232 may know what encoding and/or compression algorithm the encoding module 226 of the other client device 102 has applied on the received virtualized data stream, and therefore select an appropriate (or an agreed-upon) decoding and/or decompression algorithm to decode and/or decompress the received virtualized data stream.
- the post-processing module 234 of the receiving virtualization interconnection node 222 of the client device 102 may process the decoded/decompressed virtualized data stream.
- the post-processing module 234 may perform the same or similar operations that the pre-processing module 224 may perform.
- the post-processing module 234 may perform other operations including, but not limited to, post-filtering to remove compression artifacts due to lossy compression, for example, error resilience/hiding when data losses or errors occur, interpolation/extrapolation/recovery of data that may have been intentionally dropped at the encoding module 226 of the other client device 102 due to, for example, poor network condition or low processing power, etc.
- the number and/or types of operations to be performed by the post-processing module 234 associated with the client device 102 may depend on the number and/or types of operations that have been done by the pre-processing module 224 associated with the other client device 102 .
- This dependence may be one piece of information established within the negotiation between the two client devices 102 as described in the foregoing embodiments. For example, in some embodiments, so long as an output from the post-processing module 234 is an acceptable input to the client device 102, how to partition workload or operations between the post-processing module 234 associated with the client device 102 and the pre-processing module 224 associated with the other client device 102 to achieve a modality transformation (or conversion) is flexible.
- a partition between workloads (or operations) of two client devices 102 may depend on factors, such as compression efficiency, bandwidth, processing capability, power consumption, etc., associated with the two client devices 102 .
- a modality transform refers to a transformation from an input of a pre-processing module 224 associated with one client device 102 to an output of a post-processing module 234 associated with another client device 102.
- the virtualization system 106 may further include other program data 240 .
- the other program data 240 may include log data that records information of one or more other client devices 102 to which the virtualization system 106 of the client device 102 has connected.
- the record information may include, for example, device identification of the one or more previously connected client devices 102 , capabilities (input, output, computing, storage, etc.) of the one or more previously connected client devices 102 , etc.
- the log data 240 may include information about user preference, for example, which one or more client devices 102 the user 122 most likely or often uses when watching a video, which one or more client devices 102 the user 122 most likely or often uses when reading a text (e.g., a web page), etc.
- the log data may include information about which one or more client devices 102 are most often or likely used by the user 122 during a particular time period.
- the virtualization system 106 may employ this information to establish connections among these client devices 102 beforehand while leaving other client devices 102 in a waiting or disconnected state, to avoid the client device 102 establishing too many network connections with other client devices 102 that are not needed and hence to save the resources of the client device 102. Additionally or alternatively, the virtualization system 106 may use the log data to re-establish and/or authenticate future connections and data communication with other previously connected client devices 102.
- the home cloud computing system may further provide a unified programming model.
- the programming model may allow developers of software applications (e.g., the applications 114 , etc.) and/or the client devices 102 to build applications that may be adapted to other output (e.g., display) and/or input (e.g., user interface (UI)) devices (e.g., personal computers, televisions, slate computers, mobile devices, etc.), which separate functionality of user interaction from main computing functions.
- the programming model may automatically or semi-automatically (i.e., with human intervention or input) determine a way for user interaction (both inputs and outputs) adaptation.
- inputs and outputs of associated client device 102 may be arbitrarily redirected to another client device 102 as if the inputs and the outputs are physically attached or connected to the other client device 102 .
- existing applications in the client device 102 may not need to be updated for this redirection and the virtualization interconnection layer 216 may be responsible for converting one mode (or type, format, version, etc.) of virtualized input or output to another mode (or type, format, version, etc.) that may be acceptable to the other client device 102 .
- a touch input on a slate computer may be converted into a mouse input for a desktop computer, while a screen output of a desktop computer may be scaled to be displayed in a screen of a mobile device, etc.
- new applications may have more flexibility to aggregate new input and/or output capabilities that may or may not be physically attached or connected to a computing or client device 102 .
- the home cloud computing system may leverage natural UI devices, such as touch, Kinect®, or a camera, on one or more client devices 102 to control behaviors and/or operations of one or more other client devices 102 and/or new applications on the one or more other client devices 102.
- the home cloud computing system may leverage rendering, processing and/or other computing operations or resources on multiple client devices 102 , and combine inputs and/or outputs of these client devices with input and/or output of another client device 102 to form a computing platform for special applications.
- for example, a delay sensitive object (e.g., a graphical UI button, a fillable form, a pull-down menu, a foreground active game character, etc.) may be rendered on a client device 102 locally, while computationally intensive but delay tolerant tasks may be offloaded to other computing or client devices 102.
- the other client devices 102 may then pass back (only) outputs or results of these computationally intensive but delay tolerant tasks to the client device 102 in which the delay sensitive object is locally rendered to form a final result for display to the user 122 .
- No additional hardware and/or software other than the virtualization interconnection layer 216 may be needed for these operations.
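- The split between locally rendered, delay-sensitive objects and offloaded, delay-tolerant tasks can be pictured as below; a thread pool merely simulates the remote device, and the rendering functions are placeholders.

```python
# Illustrative local/offload split; the thread pool simulates a remote device.
from concurrent.futures import ThreadPoolExecutor
import time

def render_button_locally() -> str:
    return "button:pressed"                  # immediate, no network round trip

def heavy_scene_render(scene_id: int) -> str:
    time.sleep(0.05)                         # stands in for remote computation + transfer
    return f"scene-{scene_id}:rendered"

with ThreadPoolExecutor(max_workers=1) as remote_device:
    pending = remote_device.submit(heavy_scene_render, 7)   # offload early
    local_part = render_button_locally()                    # respond to the user at once
    final_frame = f"{local_part} + {pending.result()}"      # compose when the result returns
print(final_frame)
```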
- the foregoing embodiments may be readily applicable to cloud computing and mobile computing. For example, if the user 122 moves outside his/her home, the user 122 may still enjoy the benefits of the home cloud computing system with his/her mobile device and cloud devices as the plurality of client devices 102 of the home cloud computing system.
- the encoding module 226 may apply different encoding or compression algorithms for different types or formats of data in the virtualized data forwarded by the pre-processing module 224 .
- three encoding/compression algorithms for different types of data, namely frame-based screen data encoding, object-based screen data encoding and UI control data encoding, are described herein. It should be noted, however, that these encoding/compression algorithms are described herein for illustration purposes. The present disclosure is not limited to these encoding/compression algorithms and these types of data.
- the virtualized (captured) screen from the client device 102 may be represented as a sequence of display images organized in a temporal order.
- the virtualized screen may include pictorial, graphical and textual contents that may be different from normal/natural video.
- the encoding module 226 may employ a specific screen codec for frame-based screen coding.
- An example of the specific screen codec is described in the U.S. Pat. No. 8,180,165, issued on May 15, 2012, titled “Accelerated Screen Codec”, the entirety of which is incorporated by reference herein. Additionally, in some screen virtualization scenarios, the encoding module 226 may obtain additional information besides a time series of frame-based screen images.
- the encoding module 226 may obtain original compressed bitstreams of image or video objects.
- the encoding module 226 may simply redirect the original compressed bitstreams of the screen (image or video) objects to a receiving client device 102 that receives the virtualized data stream, instead of compressing the sequence of frame-based screen images.
- the virtualized screen may be a composition of the sequence of frame-based images and/or image and video objects.
- in an event that only a part of the virtualized screen is to be displayed at the receiving client device 102, the encoding module 226 may encode or compress only the part to be displayed and send the encoded part to the receiving client device 102 through the sending module 228 in order to save both computing power and network bandwidth.
- the encoding module 226 may decompose virtualized screen data into objects for predetermined screen types, i.e., applying object-based screen data encoding. For example, for virtualized display of some specific applications such as a web browser application, an entire web page may be rendered as an image with object and metadata information at the sending client device 102 .
- the encoding module 226 may encode web regions that are visible on a display of the receiving client device 102 .
- the decomposed screen data may include, for example, object metadata, redirectable media objects, unredirectable media objects and background image data, etc.
- the object metadata includes object information, such as hyperlinks, input text boxes, etc.
- the encoding module 226 may encode object type, position and/or shape, for example, using a conventional lossless coding scheme such as the ZIP coding scheme.
- the redirectable media objects correspond to media objects that may be handled by the receiving client device 102 (or the virtualization system 106 of the receiving client device 102 ).
- the redirectable media objects may include, for example, GIF animations, video objects with specific formats, etc.
- the unredirectable media objects correspond to media objects that may be extracted from a web browser engine but may not be directly handled (or decoded) by the receiving client device 102 (or the virtualization system 106 of the receiving client device 102).
- the encoding module 226 may transcode these media objects into a media format that is accepted or supported by the receiving client device 102 .
- for example, if the receiving client device 102 supports the WMV (Windows Media Video) format but not the Flash format, the encoding module 226 may transcode the Flash formatted video into the WMV formatted video before sending the video to the receiving client device 102.
- the background image data correspond to the remaining content on the rendered web page that may be taken as a time series of background images.
- the background image data may include dynamically changed regions, e.g., an unextractable video or animation.
- the encoding module 226 may apply a specific screen codec (e.g., the screen codec that is described in the above-referenced US patent) to encode or compress the background image data.
- a buffer may be deployed at or with the encoding module 226 of the sending client device 102 and another buffer at or with the decoding module 232 of the receiving client device 102 to cache or store the background image data.
- the encoding module 226 may compute differences between the visible regions of the current web page and corresponding regions that are cached or stored in the buffer, for example, in terms of temporal and/or spatial predictions. The encoding module 226 may encode only the differences which may then be sent to the receiving client device 102 .
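- Pulling the last few items together, an object-based encoding pass over a rendered web page might look like the sketch below: object metadata is losslessly packed, redirectable bitstreams pass through, unredirectable objects are tagged for transcoding, and only the background difference against a cached copy is compressed. zlib and the XOR diff are stand-ins for the patent's codecs and prediction, and the object categories are simplified.

```python
# Illustrative object-based page encoding; codecs and the diff are stand-ins.
import json
import zlib

def encode_page(objects, cached_background: bytes, background: bytes):
    parts = []
    for obj in objects:
        if obj["kind"] == "metadata":        # hyperlinks, text boxes: lossless (ZIP-like)
            parts.append(("meta", zlib.compress(json.dumps(obj["data"]).encode())))
        elif obj["kind"] == "redirectable":  # receiver can decode the original bitstream
            parts.append(("redirect", obj["bitstream"]))
        else:                                # unredirectable: transcode to a supported format
            parts.append(("transcoded", b"WMV:" + obj["bitstream"]))
    # Background: send only the difference against the cached copy.
    diff = bytes(a ^ b for a, b in zip(background, cached_background))
    parts.append(("background_diff", zlib.compress(diff)))
    return parts

page_objects = [
    {"kind": "metadata", "data": {"link": "/news", "box": [0, 0, 100, 20]}},
    {"kind": "redirectable", "bitstream": b"GIF89a..."},
    {"kind": "unredirectable", "bitstream": b"FLV..."},
]
stream = encode_page(page_objects, cached_background=b"\x00" * 64, background=b"\x01" * 64)
print([(tag, len(data)) for tag, data in stream])
```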
- the encoding module 226 may encode the redirectable media objects as background image data using the same screen codec as described in the referenced US patent. For example, if only a small part of a high-resolution picture in a web page, or a downsized portion thereof, is to be shown in a display of the receiving client device 102, the encoding module 226 may re-encode the picture as a part of the background image data and redirect a full resolution image to the receiving client device 102 (or the virtualization system 106 of the receiving client device 102). Depending on specific situations of the home cloud computing system and/or the current screen content, the encoding module 226 may flexibly adapt to different situations to select and provide an efficient composition and compression scheme to improve the performance of the home cloud computing system.
- the object-based web page image coding scheme as described above is extendable to encode virtualized data streams from other applications if, for example, similar metadata and data objects can be extracted from their virtualized data and associated side information.
- new features including, for example, data object redirection, mixed composition, UI customization, enabled object caching and selective transmission, etc., may be obtained.
- the data object redirection refers to capturing certain virtualized data objects in a compressed format, directly passing the captured objects to the virtualization interconnection layer 216 of the receiving client device 102, and decoding or rendering the captured objects in the receiving client device 102 based on its native acceptable data format.
- the mixed composition feature corresponds to mixing locally captured or rendered data objects in a client device 102 with virtualized data objects from another client device 102 to form or compose a final data input or output for one or more client devices 102 (e.g., the client device 102 where the data objects are locally captured or rendered). This may avoid a round trip delay to respond to certain user interactions otherwise needed in a completely virtualized (remote) solution, e.g., text input box, graphical control button, pull down menu, etc. Furthermore, the mixed composition feature provides a flexible solution for both local and remote client devices 102 to collaborate for delivery of better user experience through mixed inputs or mixed outputs.
- the UI customization corresponds to flexible combination of virtualized UI control modalities, e.g., game controller, keyboard, mouse, Kinect®, touch surface, voice, audio and video, etc., to fit specific user scenarios.
- UI control data objects (e.g., extracted gesture data from Kinect® data) may be delivered to the virtualization interconnection layer 216 of the receiving client device 102, either alone or along with virtualized raw or compressed data (e.g., virtualized raw or compressed Kinect® data).
- the object-based screen data coding may enable prioritized processing, encoding and transmission of regions or objects (such as UI control, for example) that have stringent latency requirements from the sending client device 102 to the receiving client device 102 .
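- A non-limiting sketch of such prioritized transmission is given below, using a simple priority queue in which lower numbers are sent first; the priority ordering is an illustrative assumption:

```python
# Illustrative sketch: prioritized transmission of screen regions/objects so
# that latency-critical UI-control regions are handed to the sending module
# before bulk background data.
import heapq

PRIORITY = {"ui_control": 0, "video": 1, "image": 2, "background": 3}  # assumed ordering

def order_for_sending(objects):
    """objects: list of dicts like {"kind": "ui_control", "payload": b"..."}.
    Returns them in the order they should be handed to the sending module 228."""
    heap = [(PRIORITY.get(o["kind"], 9), i, o) for i, o in enumerate(objects)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]
```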
- repeated objects may be cached for reuse and/or manipulation in the receiving client device 102 without re-transmission.
- the encoding module 226 may selectively encode (only) relevant data objects for sending to the receiving client device 102 through the sending module 228 .
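- A minimal sketch of object caching with selective transmission is shown below; the hash-based reference scheme is an illustrative assumption, not a mechanism recited in the disclosure:

```python
# Illustrative sketch: repeated objects are referenced by a content hash
# instead of being re-sent, so the receiving client device can reuse its
# cached copy without re-transmission.
import hashlib

class ObjectSender:
    def __init__(self):
        self.sent_hashes = set()  # hashes of objects the receiver already holds

    def prepare(self, data_objects: dict) -> dict:
        """data_objects maps object_id -> bytes. Full bytes are included only
        for objects not yet cached at the receiver."""
        payload = {}
        for obj_id, blob in data_objects.items():
            digest = hashlib.sha256(blob).hexdigest()
            if digest in self.sent_hashes:
                payload[obj_id] = {"ref": digest}            # reuse cached copy
            else:
                payload[obj_id] = {"ref": digest, "data": blob}
                self.sent_hashes.add(digest)
        return payload
```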
- UI and sensor input data may be generated at the client devices 102. These UI and sensor data may be transferred from one client device 102 to another client device 102, and vice versa, through respective virtualization interconnection layers 216 to bring new UI controls without locally attaching the UI controls to the client devices 102.
- For example, touch control data of a client device 102 (e.g., a mobile phone) may be delivered to another client device 102 (e.g., a tablet) to control an application in the other client device 102. Similarly, gesture control data generated from a Kinect® camera that is attached to a client device 102 (e.g., an Xbox®) may also be delivered to another client device 102 (e.g., a desktop computer) to control an application in the other client device 102.
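- A minimal sketch of such UI control roaming is given below, modeling the virtualization interconnection layer 216 as a plain TCP connection carrying JSON-encoded high-level events; the message format and address are assumptions:

```python
# Illustrative sketch (assumed message format): forwarding a high-level UI
# control event from one client device to another over the network.
import json
import socket

def send_ui_event(host: str, port: int, event: dict) -> None:
    """event example: {"type": "touch", "gesture": "swipe_left"}"""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(event).encode("utf-8") + b"\n")

# Usage (assumed receiver address):
# send_ui_event("192.168.1.20", 5000, {"type": "touch", "gesture": "swipe_left"})
```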
- the UI/sensor input data may be divided into a plurality of categories.
- a first category may include low-rate raw input data whose raw data rate, as generated by a corresponding UI device or sensor, is less than a first predetermined rate threshold. These raw data may be directly interpreted by a client device 102.
- Examples of UI devices or sensors may include, for example, a keyboard, a mouse, a joystick, a game controller, etc. Although the bit rate for this type of raw data is low, this type of data is normally sensitive to errors or losses which may cause the client device 102 to behave differently.
- the encoding module 226 may transmit the raw data uncompressed if a corresponding bit rate is less than a predetermined threshold, or use a simple or generic lossless compression algorithm (such as a ZIP or LZW algorithm) to compress the raw data before sending if an extra bit rate saving is desirable at a system level.
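- By way of illustration, this first-category policy might be sketched as follows; the rate threshold value is an assumed placeholder rather than a value from the disclosure:

```python
# Illustrative sketch of the first-category policy: low-rate raw UI input
# (keyboard, mouse, game controller) is sent as-is, or losslessly compressed
# with a generic algorithm when additional bit-rate savings are desired.
import zlib

RATE_THRESHOLD_BPS = 64_000  # assumed first predetermined rate threshold

def encode_low_rate_input(raw: bytes, measured_bps: float, want_savings: bool) -> bytes:
    if measured_bps < RATE_THRESHOLD_BPS and not want_savings:
        return raw                      # uncompressed, loss-sensitive data kept intact
    return zlib.compress(raw, level=9)  # generic lossless (DEFLATE/ZIP-style) compression
```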
- a second category may include moderate-to-high-rate (i.e., data rate is between the first rate threshold and a second rate threshold, for example) raw input data which may become low-rate processed data after pre-processing by the pre-processing module 224 at the sending client device 102 .
- Examples of this second type of raw data include, but are not limited to, two-dimensional images from a touch surface, audio input data, visual data from a camera, visual and depth information from a Kinect® sensor, time series data from an accelerometer and/or a gyroscope, and data from a contextual or ambient sensor, such as satellite data received by GPS, orientation data from a compass sensor, data from a light sensor, and other possible sensors attached to the client device 102.
- these raw data may be first pre-processed by the pre-processing module 224 or the client device 102 to which corresponding UI/sensor devices are attached, through hardware or software solutions.
- the pre-processing functions may be collocated with the UI/sensor devices for improved efficiency.
- the processed data rate may be significantly reduced, for example, from raw 2D images to high level touch gestures, from raw Kinect® visual and depth data to body gestures, from audio input to recognized texts or commands, from visual data to identified object/extracted features, from received satellite data to location coordinates, etc. This kind of processed data becomes high-level UI inputs which have low data rate but are more sensitive to errors or losses.
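- By way of illustration only, pre-processing raw touch samples into a high-level gesture might resemble the following sketch; the movement thresholds and gesture names are assumptions:

```python
# Illustrative sketch of second-category pre-processing at the sending device:
# reducing moderate-rate raw touch samples to a low-rate high-level gesture
# before transmission.
def classify_touch_gesture(points):
    """points: list of (x, y) samples from a touch surface, in temporal order."""
    if len(points) < 2:
        return {"gesture": "tap", "at": points[0] if points else None}
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) < 10 and abs(dy) < 10:
        return {"gesture": "tap", "at": (x1, y1)}
    if abs(dx) >= abs(dy):
        return {"gesture": "swipe_right" if dx > 0 else "swipe_left"}
    return {"gesture": "swipe_down" if dy > 0 else "swipe_up"}
```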
- the encoding module 226 may transmit the processed data uncompressed if a corresponding bit rate is less than a predetermined threshold, or use a simple or generic lossless compression algorithm (such as a ZIP or LZW algorithm) to compress the processed data before sending if an extra bit rate saving is desirable at a system level.
- a third category may include raw input data that have a moderate to high data rate generated by NUI or sensor devices, where the sending client device 102 does not have sufficient pre-processing capabilities or the raw data needs to be sent to the receiving client device 102 for further data processing or manipulation.
- the encoding module 226 needs to compress this kind of raw input data to a data rate much lower than its original raw data rate so that it fits into the network 104 and keeps the home cloud computing system feasible.
- the encoding module 226 may select a compression algorithm for this kind of raw input data based on one or more other factors such as a low latency criterion for enabling a natural and smooth user experience.
- the encoding module 226 may select a compression algorithm that may be specific to the data type of the raw input data.
- the encoding module 226 may select a lossy compression algorithm in order to achieve a desired data compression ratio for this kind of raw input data, while keeping the high-level UI controls extracted at the receiving client device 102 as close as possible to those extracted locally at the sending client device 102.
- the encoding module 226 may select and employ compression algorithms or schemes that are tuned to respective data characteristics of the raw input data having a low latency criterion.
- the encoding module 226 may apply existing standard coding schemes to compress them, e.g., MP3 for audio data, H.264 for video data, JPEG for image data, etc.
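- A simple sketch of such type-specific codec selection is shown below; the mapping is illustrative and is not bound to any particular encoder library:

```python
# Illustrative sketch: selecting a type-specific lossy codec for third-category
# raw input data. The codec identifiers are labels to be handed to an actual
# encoder; the depth-data entry is an assumption for illustration.
CODEC_BY_TYPE = {
    "audio": "mp3",    # standard audio coding
    "video": "h264",   # standard video coding
    "image": "jpeg",   # standard image coding
    "depth": "h264",   # assumed: depth maps packed into a video stream
}

def select_codec(data_type: str, low_latency: bool) -> dict:
    codec = CODEC_BY_TYPE.get(data_type, "zlib")  # fall back to generic lossless
    # A low-latency criterion might, for example, disable B-frames for video.
    return {"codec": codec, "low_latency": low_latency}
```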
- FIG. 4 is a flow chart depicting an example method 400 of data virtualization and resource leveraging.
- the method of FIG. 4 may, but need not, be implemented in the environment of FIG. 1 and using the system of FIG. 2 .
- method 400 is described with reference to FIGS. 1 and 2 .
- the method 400 may alternatively be implemented in other environments and/or using other systems.
- Method 400 is described in the general context of computer-executable instructions.
- computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types.
- the method can also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network.
- computer-executable instructions may be located in local and/or remote computer storage media, including memory storage devices.
- the exemplary method is illustrated as a collection of blocks in a logical flow graph representing a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof.
- the order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or alternate methods. Additionally, individual blocks may be omitted from the method without departing from the spirit and scope of the subject matter described herein.
- the blocks represent computer instructions that, when executed by one or more processors, perform the recited operations.
- some or all of the blocks may represent application specific integrated circuits (ASICs) or other physical components that perform the recited operations.
- a first device 102 (equipped with a virtualization system 106 ) detects a presence of a second device 102 in a proximity or neighborhood of the first device.
- the virtualization system 106 of the first device 102 establishes or initiates a network connection with the virtualization system 106 of the second device 102 .
- the virtualization system 106 of the first device 102 determines or receives information of functional capabilities of the second device 102 .
- the virtualization system 106 of the first device 102 negotiates with the virtualization system 106 of the second device 102 with respect to one or more responsibilities on data communicated therebetween. For example, the virtualization system 106 of the first device 102 negotiates with the virtualization system 106 of the second device 102 on an extent of data transformation to be performed by the first device 102 for the second device 102 and an extent of data transformation to be performed by the second device for the first device 102 based on the functional capabilities of the second device 102 and functional capabilities of the first device 102 .
- the virtualization system 106 of the first device 102 virtualizes data of the first device 102 as virtualized data.
- the virtualization system 106 of the first device 102 virtualizes the data of the first device 102 by generating output data at the first device 102 without locally presenting the output data in a display of the first device 102 and/or capturing input data through a user interface of the first device 102 without locally processing the input data at the first device 102 .
- the virtualization system 106 of the first device 102 transforms the virtualized data into a virtualized data stream in a virtualization interconnection layer based on a result of the negotiation between the first device 102 and the second device 102 .
- the virtualization system 106 of the first device 102 may pre-process the virtualized data to a predetermined extent that has been agreed upon between the first device 102 and the second device 102 based on the result of the negotiation. Additionally or alternatively, the virtualization system 106 of the first device 102 may encode the pre-processed virtualized data using different algorithms for different types of data included in the virtualized data.
- the virtualization system 106 of the first device 102 sends the compressed or encoded virtualized data stream from the first device 102 to the second device 102 to leverage resources of the first device 102 and the second device 102 .
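- The sequence of blocks of method 400 may be summarized by the following non-limiting sketch, in which the helper objects and method names are hypothetical stand-ins for the virtualization system 106 rather than an interface recited in the disclosure:

```python
# Illustrative sketch of the method 400 flow from FIG. 4, expressed as code.
def run_method_400(first_device, second_device):
    # Detect the presence of a second device in proximity to the first device.
    if not first_device.detect_presence(second_device):
        return
    # Establish a network connection between the two virtualization systems.
    session = first_device.connect(second_device)
    # Determine or receive the functional capabilities of the second device.
    remote_caps = session.exchange_capabilities()
    # Negotiate responsibilities, i.e., the extent of data transformation on each side.
    plan = session.negotiate(local_caps=first_device.capabilities, remote_caps=remote_caps)
    # Virtualize input/output data of the first device.
    virtualized = first_device.virtualize()
    # Pre-process and encode the virtualized data per the negotiated plan.
    stream = first_device.pre_process(virtualized, plan)
    stream = first_device.encode(stream, plan)
    # Send the virtualized data stream to leverage the devices' resources.
    session.send(stream)
```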
- a sending client device 102 may virtualize data thereof and send the virtualized data to the server 108 via the network 104 .
- the server 108 may pre-process and encode the virtualized data based on a data type or format that is acceptable to a receiving client device 102 (a client device 102 that receives the virtualized data), and send the processed and encoded virtualized data to the receiving client device 102 .
- the sending client device 102 and/or the server 108 may constantly monitor network conditions and/or computing conditions of the receiving client device 102 , and perform adaptation to data pre-processing and data transmission accordingly in response to detecting or determining that a change in the network conditions and/or computing conditions of the receiving client device 102 occurs.
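- Such adaptation might be sketched as follows; the safety factors and the encoder interface are illustrative assumptions rather than values or APIs from the disclosure:

```python
# Illustrative sketch: adapting the encoder's target bit rate when the sender
# (or server 108) detects a change in network or receiver conditions.
def adapt_bitrate(encoder, measured_bandwidth_bps: float, receiver_busy: bool) -> None:
    target = int(measured_bandwidth_bps * 0.8)   # leave headroom for other traffic (assumed)
    if receiver_busy:
        target = int(target * 0.5)               # back off if the receiver is loaded (assumed)
    encoder.set_target_bitrate(target)           # hypothetical encoder interface
```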
- any of the acts of any of the methods described herein may be implemented at least partially by a processor or other electronic device based on instructions stored on one or more computer-readable media.
- any of the acts of any of the methods described herein may be implemented under control of one or more processors configured with executable instructions that may be stored on one or more computer-readable media such as one or more computer storage media.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Automation & Control Theory (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- General Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Human Computer Interaction (AREA)
- Information Transfer Between Computers (AREA)
Abstract
A home cloud computing system employs a virtualization system to virtualize data of a device and adaptively transform type or format of the virtualized data for one or more other devices, thus leveraging resources of the device for the one or more other devices. Through data virtualization and adaptive transformation, devices of heterogeneous types are seamlessly connected to one another and can act as input or output devices for each other to create a home cloud network of devices.
Description
- This application is a continuation of, and claims priority to, co-pending, commonly-owned U.S. patent application Ser. No. 13/663,720, entitled “Home Cloud with Virtualized Input and Output Roaming Over Network”, filed on Oct. 30, 2012 which application is incorporated herein in its entirety by reference.
- The development of new computing technologies brings unprecedented experience to people's daily lives. Among the newly developed technologies, new human-computer interaction (HCI) technologies have become a driving force and a determining factor of success for today's home entertainment. Different from conventional user interfaces which include physical keyboards and mice, more natural user interfaces (NUI), such as user touch, gesture, voice, etc., are proposed and developed for user interaction with a variety of computing devices which include televisions, game consoles, desktop computers, tablets, smart phones, to name a few. Although significant developments have been made so far, computing capabilities and NUI functions are still confined to respective devices, platforms or applications, and cannot be shared with other devices, platforms or applications due to diverse and incompatible input and output data formats of different computing devices, for example.
- This summary introduces simplified concepts of home cloud computing, which are further described below in the Detailed Description. This summary is not intended to identify essential features of the claimed subject matter, nor is it intended for use in limiting the scope of the claimed subject matter.
- This application describes example embodiments of home cloud computing. In one embodiment, a first device may virtualize data (such as input and output data) thereof and redirect part or all of the virtualized data to one or more other devices to leverage resources of the first device for the one or more other devices.
- In one embodiment, the first device may detect a presence of a second device in a proximity of the first device. In response to detecting the presence of the second device, the first device may establish a network or data connection with the second device. Additionally, the first device may further determine functional capabilities (such as data processing capabilities, display capabilities, etc.) of the second device.
- In some embodiments, the first device may negotiate with the second device with respect to one or more responsibilities. For example, the first device may negotiate with the second device regarding an extent of data transformation that is to be performed by the first device for the second device and an extent of data transformation that is to be performed by the second device for the first device based on the functional capabilities of the second device and functional capabilities of the first device.
- The first device may virtualize its data (input and/or output data) into virtualized data. Upon virtualizing the data, the first device may transform the virtualized data into a virtualized data stream (e.g., in a virtualization interconnection layer that is on top of a data network layer) based on a result of the negotiation between the first device and the second device. The first device may send the virtualized data stream to the second device, thus leveraging resources of the first device for the second device.
- The detailed description is set forth with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
- FIG. 1 illustrates an example environment including a home cloud computing system.
- FIG. 2 illustrates the example virtualization system of FIG. 1 in more detail.
- FIG. 3 illustrates data flow between two example client devices in the example home cloud computing system.
- FIG. 4 illustrates an example method of data virtualization and resource leveraging.
- Overview
- As noted above, existing computing capabilities and NUI functions are confined to respective devices, platforms or applications, and cannot be shared with other devices, platforms or applications due to, for example, diverse and incompatible input and output data formats and complexities of different computing devices.
- This disclosure describes a home cloud computing system. The home cloud computing system employs a virtualization system to leverage resources of a plurality of devices by virtualizing respective inputs and outputs, and enables communication and interpretation of the inputs and the outputs among and across the plurality of devices regardless of input and/or output capabilities, types, formats and/or complexities of the plurality of devices. Furthermore, the virtualization system allows composing all or parts of different inputs and/or outputs from different devices to form a combined input or output that may be manipulated and/or presented in one or more devices.
- In one embodiment, the virtualization system may be included or installed in a device that interacts with a user, i.e., a device that receives inputs from the user and/or presents outputs to the user. The virtualization system enables the device to share or redirect resources thereof, such as data processing and display capabilities, without specifying a particular device beforehand. The virtualization system also allows the device to adapt to a particular device during and/or after establishing a network connection with that particular device. The virtualization system leverages resources by virtualizing input and/or output data of the device. Examples of virtualization may include, but are not limited to, generating output data (e.g., screen data) with or without presenting the output data locally at the device, capturing input data through a user interface of the device with or without processing the input data and/or providing a response with respect to the input data locally, etc.
- After virtualizing the data of the device, in some embodiments, the virtualization system may redirect part or all of the input data and/or the output data of the device to one or more other devices through a network for data presentation and/or processing.
- In other embodiments, due to the heterogeneous nature of data types, formats and/or complexities of different devices, for example, the virtualization system may pre-process the data of the device prior to sending the data to the one or more other devices. For instance, a virtualization system of a first device may have negotiated with a second device (or a virtualization system of the second device) an extent of data transformation (for example, data pre-processing, encoding, etc.) that the first device may perform for the second device and/or an extent of data transformation that the second device may perform for the first device when establishing a network or data connection between the first device and the second device. The first device may pre-process and encode the data in accordance with a result of the negotiation, and send the pre-processed/encoded data to the second device thereafter.
- The described home cloud computing system enables interaction among a plurality of devices that may have diverse input and output capabilities, types and formats, and leverages resources of the plurality of devices through input and output virtualization.
- In some of the examples described herein, the virtualization system virtualizes input and output data of a device, establishes or facilitates a network connection with another device for the device, negotiates respective degrees of data transformation to be done by each device, transforms the data of the device, and sends the data to the other device and/or receives data from the other device. However, in other embodiments, these functions may be performed by multiple separate systems or services. For example, in one embodiment, a virtualization service may virtualize data of the device, while a separate service may establish a network connection with the other device and negotiate respective degrees of data transformation to be performed by each device, and yet another service may transform the data of the device and send the data to the other device and/or receive data from the other device.
- Furthermore, although in the examples described herein, the virtualization system may be implemented as software and/or hardware installed in a device, in other embodiments, the virtualization system may be implemented as a separate entity or device that is peripheral or attached to the device. Furthermore, in some embodiments, the virtualization system may be implemented as software and/or hardware included in one or more other devices forming a network through which each device of the plurality of devices connects to one another (e.g., a local server or router, etc.). Additionally or alternatively, the virtualization system may be implemented as a service provided in one or more servers over the network and/or in a cloud computing architecture.
- The application describes multiple and varied implementations and embodiments. The following section describes an example environment that is suitable for practicing various implementations. Next, the application describes example systems, devices, and processes for implementing a home cloud computing system.
- Exemplary Environment
- FIG. 1 illustrates an exemplary environment 100 that implements a home cloud computing system. The environment 100 may include a plurality of client devices 102-1, 102-2, 102-3, 102-4, . . . , 102-N (which are collectively referred to as client devices 102) and a network 104. In this example, each of the plurality of client devices 102 may be installed or attached with a virtualization system 106. The plurality of client devices 102 may communicate data with one another through respective virtualization systems 106 via the network 104.
- Although in this example, each client device 102 is described to install or attach with a
respective virtualization system 106, in some embodiments, some or all of the functions of thevirtualization system 106 may be included in one or more entities other than the client devices 102. For example, some or all of the functions of thevirtualization system 106 may be included and distributed in the client device 102 and a separate device that is peripheral to the client device 102 (e.g., a peripheral device, a set-top box, etc.). Additionally or alternatively, some or all of the functions of thevirtualization system 106 may be included and distributed among the client device 102 and one ormore servers 108 that are connected to thenetwork 104. For example, the client device 102 may include part of the functions of thevirtualization system 106 while other functions of thevirtualization system 106 may be included in one or moreother servers 108. Furthermore, in some embodiments, thevirtualization system 106 may be included in one or more third-party servers, e.g.,other servers 108, that may or may not be a part of a cloud computing system or architecture. - The client device 102 may be implemented as any of a variety of conventional computing devices including, for example, a mainframe computer, a server, a notebook or portable computer, a handheld device, a netbook, an Internet appliance, a tablet or slate computer, a mobile device (e.g., a mobile phone, a personal digital assistant, a smart phone, etc.), a game console, a set-top box, etc. or a combination thereof.
- Additionally or alternatively, the client device 102 may be implemented as any of a variety of conventional consumer devices including, for example, a television, a digital picture frame, an audio player, a video player, an eReader, a digital camera, etc. or a combination thereof. In one embodiment, the
virtualization system 106 may be configured to provide networking and computing capabilities for these consumer devices having no or limited networking and computing capabilities. - The
network 104 may be a wireless or a wired network, or a combination thereof. Thenetwork 104 may be a collection of individual networks interconnected with each other and functioning as a single large network (e.g., the Internet or an intranet). Examples of such individual networks include, but are not limited to, telephone networks, cable networks, Local Area Networks (LANs), Wide Area Networks (WANs), and Metropolitan Area Networks (MANS). Further, the individual networks may be wireless or wired networks, or a combination thereof. - In one embodiment, the client device 102 may include one or
more processors 110 coupled tomemory 112. Thememory 112 includes one or more applications or services 114 (e.g., web applications or services, video applications or services, etc.) andother program data 116. Thememory 112 may be coupled to, associated with, and/or accessible to other devices, such as network servers, routers, and/or theother servers 108. Additionally or alternatively, in some embodiments, the client device 102 may include an input interface 118 (such as a touch screen, a touch pad, a remote controller, a mouse, a keyboard, a camera, a microphone, etc.) and an output interface 120 (e.g., a screen, a loudspeaker, etc.). - In one embodiment, a user 122 may use the plurality of client devices 102 for home entertainment. The user 122 may read a web page on a mobile phone (e.g., client device 102-1) and note a video on the web page. The user 122 may want to watch the video on a television (e.g., client device 102-2) and use the mobile phone as a remote controller of the television. The user 122 may further want to read textual information of the web page using a tablet (e.g., client device 102-3). The user 122 may perform these operations using the plurality of client devices 102 that include the
virtualization systems 106.
- FIG. 2 illustrates the virtualization system 106 in more detail. In one embodiment, the virtualization system 106 includes, but is not limited to, one or more processors 202, a network interface 204, memory 206, and an input/output interface 208. The processor(s) 202 is configured to execute instructions received from the network interface 204, received from the input/output interface 208, and/or stored in the memory 206.
- The
memory 206 may include computer-readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or non-volatile memory, such as read only memory (ROM) or flash RAM. Thememory 206 is an example of computer-readable media. Computer-readable media includes at least two types of computer-readable media, namely computer storage media and communications media. - Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, phase change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disk read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device.
- In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media.
- The
memory 206 may includeprogram modules 210 andprogram data 212. In one embodiment, thevirtualization system 106 may include avirtualization module 214. Thevirtualization module 214 captures inputs and/or outputs of associated client device 102 and/or separates the inputs and/or outputs from the client device 102. Examples of inputs may include, but are not limited to, data captured from a user interface, such as mouse movement and click, keyboard's keystrokes, images from touch surfaces, visual and depth information from cameras or motion sensors, control signals from game controllers, data from accelerometers and/or gyros, data from contextual or ambient sensors, such as location information from GPS (Global Positioning System), WiFi, and other positioning sensors, orientation data from a compass sensor, data from a light sensor, and/or any other possible sensors attached to the client device 102. Examples of the outputs may include, for example, screen contents, audio contents, feedback of haptic devices, and any other results that may be presented to the user 122. - In one embodiment, the
virtualization module 214 may generate data virtually inside the client device 102 to obtain virtualized data (e.g., virtualized input (VI) or virtualized output (VO)). For example, thevirtualization module 214 may virtually render a screen image with or without displaying the screen image in a local display of the client device 102 and/or pre-process or pre-interpret real input raw data into a high level input signal (e.g., interpreting an image from a touch surface to a high level touch gesture, etc.). Additionally or alternatively, thevirtualization module 214 may generate data virtually inside the client device 102 by intercepting physical data directly from a physical device (such as a camera or motion sensor attached to the client device 102) that generates the physical data through a piece of software (e.g., an application programming interface (API) for screen capture, etc.) or a piece of hardware (such as a splitter connected to a display port of the client device 102, for example). - In some embodiments, the
virtualization system 106 may further include avirtualization interconnection layer 216. Thevirtualization interconnection layer 216 may be added and attached to the client device 102 on top of existing data network layers and may serve as a higher-level abstraction layer to interconnect the client device 102 with other client devices 102. Thevirtualization interconnection layer 216 facilitates data roaming or migration, i.e., moving the virtualized data from the client device 102 that generates the data to one or more other client devices 102. - Due to a large number of different varieties of input and output devices and data types or formats, in one embodiment, the
virtualization interconnection layer 216 may be made to be independent of the virtualization module 214 (or the virtualization process that is done by the virtualization module 214). Thevirtualization system 106 may enable this separation of thevirtualization interconnection layer 216 from the virtualization module 214 (or process) by defining aprogramming model 218 and corresponding application programming interface (API) that is independent of what is performed by the virtualization module 214 (or process). For example, thevirtualization system 106 may employ a driver model that may rely on respective software virtualization drivers for corresponding input and output devices. - By separating the
virtualization interconnection layer 216 from the virtualization process, thevirtualization interconnection layer 216 may be implemented in a dedicated cost effective hardware or a piece of software that provides minimal functions for the data roaming without involving sophisticated computation. Furthermore, this separation enables separating input and output functions from main computing tasks of theapplications 114 associated with the client device 102, and thus releases developers of theapplications 114 and/or the client device from considering which one or more output devices that the outputs may go to and/or which one or more input devices that the inputs may come from. Moreover, the separation allows thevirtualization interconnection layer 216 to interpret or translate data differently for different client devices 102, for example, enabling translation of gestures captured by cameras or motion sensors into mouse signals that are applicable to computer applications. - Depending on application scenarios, the
virtualization interconnection layer 216 of the client device 102 may include a sending virtualization interconnection node 220 (which may be referred to as SVIN) and/or a receiving virtualization interconnection node 222 (which may be referred to as RVIN). For example, if only a screen content of a computer is needed to be projected to a television, the computer may only need to have the sendingvirtualization interconnection node 220 while the television may only need the receivingvirtualization interconnection node 222 on the television or a computing device that is attached to the television, for example. Without loss of generality, thevirtualization interconnection layer 216 associated with the client device 102 is described hereinafter to have both the sendingvirtualization interconnection node 220 and the receivingvirtualization interconnection node 222. - In one embodiment, the sending
virtualization interconnection node 220 may be interfaced with thevirtualization module 214, and receive virtualized data (i.e., virtualized input (VI) and/or virtualized output (VO)) from thevirtualization module 214. Upon receiving the virtualized data from thevirtualization module 214, the sendingvirtualization interconnection node 220 may pre-process and/or encode the virtualized data, and transmit the processed/encoded virtualized data to one or more other client devices 102 via thenetwork 104. In one embodiment, the sendingvirtualization interconnection node 220 may include apre-processing module 224, anencoding module 226 and a sendingmodule 228. - On the other hand, the receiving
virtualization interconnection node 222 may receive virtualized data from one or more other client devices 102 via thenetwork 104, and decode, reconstruct or process the received virtualized data into a form that is acceptable by a device to which the receivingvirtualization interconnection node 222, i.e., the client device 102 in this example. In one embodiment, the receivingvirtualization interconnection node 222 may include areceiving module 230, adecoding module 232 and apost-processing module 234.FIG. 3 illustrates data flow between two client devices 102 through respective virtualization interconnection layers 216. - Upon receiving the virtualized data from the
virtualization module 214, the sending virtualization interconnection node 220 (or the pre-processing module 224) may pre-process and prepare the virtualized data for further encoding using thepre-processing module 224. Thepre-processing module 224 may perform one or more operations on the virtualized data. Examples of the one or more operations include, but are not limited to, analyzing, converting, recognizing, enhancing, filtering, extracting, combining, synchronizing, interpolating, subsampling, cleaning and denoising the virtualized data. For example, thepre-processing module 224 may analyze output content of the virtualized data, such as classifying screen contents into different types (e.g., text regions, image regions, video regions, object regions, etc.). Additionally or alternatively, thepre-processing module 224 may convert raw input data into a format more easily to encode and transmit, such as converting raw images from sensors of a touch surface into high-level gestures, or converting raw depth information from camera or motion sensors into high-level body gestures (such as direction and amount of movement, etc.), etc. - In one embodiment, an extent of data processing performed by the
pre-processing module 224 may be determined by one or more factors. For example, the one or more factors may include, but are not limited to, processing power, memory, storage capacity, network bandwidth, flexibility, delay criterion, etc., associated with the client device 102 and/or thevirtualization system 106 of thevirtualization system 106. Additionally or alternatively, the one or more factors may include relative processing power, memory, storage capacity, network bandwidth, flexibility, delay criterion, etc., between the client device 102 (and/or thevirtualization system 106 of the client device 102) and another client device 102 (and/or avirtualization system 106 of the other client device 102) to which the virtualized data is to be sent. For example, if the sending virtualization interconnection node 220 (either embedded or standalone) has very (relatively) limited processing power, thepre-processing module 224 may pass the virtualized data transparently to theencoding module 226, depending on the receivingvirtualization interconnection node 222 of the receiving client device 102 to process the virtualized data. Specifically, what to be pre-processed and/or an extent of which the pre-processing to be done by thepre-processing module 224 depend(s) on one or more factors as described above and other system considerations including, for example, a balance between different factors to achieve an overall optimized system performance. - Upon pre-processing the virtualized data, the
pre-processing module 224 sends the pre-processed virtualized data to theencoding module 226 for further processing. In one embodiment, theencoding module 226 may further compress and/or encode the pre-processed virtualized data into a compressed virtualized data stream. For example, theencoding module 226 may apply a generic lossless compression algorithm to compress the pre-processed virtualized data to a predetermined level, for example, for sending to the receiving client device 102. In some embodiments, in order to balance a resultant quality and/or data rate of the compressed/encoded virtualized data, theencoding module 226 may apply a content-aware encoding (and/or compression) scheme, i.e., applying different encoding and/or compression algorithms for different types or formats of the virtualized data. Furthermore, in some embodiments, theencoding module 226 may apply different and adaptive encoding and/or compression algorithms based on characteristics of the data at different regions and times even with the same data type. Examples of applications of different encoding or compression algorithms for different types or formats of virtualized data will be described in more detail hereinafter. - After encoding and/or compressing the pre-processed virtualized data, the sending
module 228 may transmit or put the compressed virtualized data into thenetwork 104 for other client devices 102 connected to thenetwork 104 to share and utilize. Furthermore, the sendingmodule 228 may serve as a system coordinator to coordinate operations and/or behaviors of the other modules (e.g., thepre-processing module 224 and the encoding module 226) in thevirtualization interconnection layer 216 of the client device 102. Additionally or alternatively, the sendingmodule 228 may negotiate with thevirtualization interconnection layer 216 of another client device 102. Examples of negotiation may include, but are not limited to, an extent of data transformation (or processing) that the other client device 102 may perform for the client device 102 of the sendingmodule 228 and/or an extent of data transformation (or processing) that the client device 102 of the sendingmodule 228 may perform for the other client device 102, etc. - Additionally or alternatively, the sending
module 228 may monitor network conditions of thenetwork 104 including, for example, bandwidth and delay. The sendingmodule 228 may adaptively guide thepre-processing module 224 and theencoding module 226 to adapt to changes in the network conditions and/or changes in capabilities (such as memory, storage, workload, etc.) of the client device 102 and/or one or more client devices 102 that receive the virtualized data stream. In some embodiments, the sendingmodule 228 may encrypt sensitive data included in the compressed virtualized data stream prior to sending the data stream to thenetwork 104. - In one embodiment, the sending
module 228 may further broadcast a presence of the client device 102 in a proximity or neighborhood of the client device 102. Additionally or alternatively, the sendingmodule 228 may broadcast the presence of the client device 102 via thenetwork 104. For example, the sendingmodule 228 may send a broadcast message using a network protocol such as Simple Service Discovery Protocol (SSDP) to thenetwork 104, allowing other client devices 102 to discover the presence of the client device 102. Additionally, the sendingmodule 228 may further broadcast one or more capabilities (e.g., abilities to display video and image, play audio, receive user input through a user interface such as a touchscreen, etc.), data qualities (such as display resolution, audio quality, etc.), acceptable data formats and/or types, etc. The sendingmodule 228 may broadcast or advertise thevirtualization interconnection layer 216 and/or corresponding client device 102 as a service to other client devices 102 of the same home cloud system ornetwork 104. - Additionally or alternatively, the sending
module 228 may detect or discover one or more client devices 102 by receiving a broadcast message from the one or more client devices 102 in a proximity or neighborhood of the client device 102 and/or through thenetwork 104. In some embodiments, depending on content on the client device 102 that the user 122 is currently or actively interacting with, the sendingmodule 228 may adaptively or proactively discover or recommend one or more client devices 102 that may provide a better experience to the user 122 with respect to the content that the user 122 is consuming. For example, if the user 122 is watching a video on the client device 102, the sendingmodule 228 of the client device 102 may detect or determine whether one or more other client devices 102 may be used for providing a better display resolution or size for the video, for example. The sendingmodule 228 may provide a prompt to the user 122 that he/she may switch to watch the video on another client device 102 found. - Additionally or alternatively, in some embodiments, the sending
module 228 may further be configured to authenticate and verify one or more client devices 102 that attempt to connect to the client device 102 of the sendingmodule 228 before finally establishing the connections with the one or more client devices 102. Furthermore, in one embodiment, the sendingmodule 228 may encrypt sensitive data present in the virtualized data stream to avoid other unrelated or unauthenticated devices from intercepting or tampering the sensitive data. Moreover, the sendingmodule 228 may support error resilience due to a loss of data (or data packets in the virtualized data stream) during data transmission in thenetwork 104. - Upon discovering, authenticating and/or connecting the one or more other client devices 102, the sending
module 228 may switch to real time and low delay transmission of the virtualized data stream from the sending client device 102 to the receiving client device 102. In some embodiments, the sendingmodule 228 may render a low delay (having a predetermined time threshold) as a criterion for smooth and natural user interaction for the user 122. This is because the virtualized data (VI and VO, for example) needs to go through a network layer where a transmission delay may be introduced in addition to delays due to data pre-processing, encoding, decoding and/or post-processing, etc. In order to facilitate provision of a low delay, in one embodiment, astreaming sending buffer 236 and astreaming receiving buffer 238 may be deployed with the sendingmodule 228 and the receivingmodule 230, respectively. - Additionally or alternatively, the sending
module 228 may guide theencoding module 226 to adaptively implement a rate control algorithm according to available channel bandwidth in thenetwork 104. Additionally or alternatively, in some embodiments, the sendingmodule 228 may identify which region and/or object in the virtualized data stream may be the focus of attention of the user 122 (e.g., a button pressed in an application, etc.) and prioritize a sending order of regions and/or objects in the virtualized data stream, having some regions and/or objects having a higher priority to be updated or sent faster or more frequently than others having a lower priority. Additionally or alternatively, the virtualized data and locally rendered data which represents (most) actively changed portions may be combined so that delay for this type of content may no longer be an issue. - In one embodiment, the sending
module 228 may further detect and signal to other modules (e.g., thepre-processing module 224 and theencoding module 226 of its associated client device 102 and the other modules of the client device 102 that receives the virtualized data stream, etc.) that a channel error, such as a packet loss or error, has occurred to allow the other modules to take proper actions to recover or compensate for the error at the sending client device 102 or the receiving client device 102. - Since the home cloud system adopts a network based service model, in some embodiments, the home cloud system may not know in advance what kinds of client devices 102 are connected thereto, and/or in what kind of network conditions the home cloud system is connected. In one embodiment, the sending
module 228 may facilitate adaptation of one or more services of the client device 102 for one or more other client devices 102 that are connected to thenetwork 104 and the network conditions. By way of example and not limitation, the sendingmodule 228 may perform negotiation with another client device 102 at the time when a connection is established with the other client device 102. Additionally or alternatively, the sendingmodule 228 may perform adaptation with the other client device 102 continuously or regularly when transmitting the virtualized data stream to the other client device 102. - For example, at the time of establishing a connection, the sending
module 228 of the client device 102 and the receivingmodule 230 of another client device 102 may exchange information regarding capabilities (such as screen resolutions, pre-processing/post-processing capabilities, encoding/decoding capabilities, acceptable data types or formats, input or output capabilities, etc.), etc. The sendingmodule 228 may coordinate with other modules (e.g., thepre-processing module 224 and the encoding module 226) in thevirtualization interconnection layer 216 to select a suitable configuration to meet the needs of the other client device 102. - After establishing the connection, the sending
module 228 of the client device 102 may constantly monitor network conditions (such as network bandwidth, network delay, rate of packet loss, etc.) and data transmission health conditions, etc., within a predetermined time interval. Additionally or alternatively, the sendingmodule 228 may detect changes that may occur in one or more client devices 102 to which the client device 102 of the sendingmodule 228 is connected. Based on the above collected information, the sendingmodule 228 may direct other modules (e.g., thepre-processing module 224 and the encoding module 226) to change one or more operations (e.g., performing additional pre-processing operations, etc.) to adapt to the current situation. For example, if a bandwidth change is detected, the sendingmodule 228 may inform theencoding module 226 to change a bit rate of the virtualized data stream to adapt to this bandwidth change. - In some embodiments, the client device 102 may receive a virtualized data stream from another client device 102 through the
virtualization system 106 via thenetwork 104. For example, the client device 102 may receive the virtualized data stream from another client device 102 through the receivingmodule 230 of the receivingvirtualization interconnection node 222 of thevirtualization system 106. In one embodiment, the receivingmodule 230 may perform one or more operations similar to or complementary to the sendingmodule 228. For example, the receivingmodule 230 may identify or verify identity of the other client device 102 and/or the virtualized data stream, and determine the authenticity of the other client device 102 or a source of the virtualized data stream. - Additionally or alternatively, the receiving
module 230 may decrypt a certain part of data included in the received virtualized data stream if the certain part of data is encrypted. Additionally or alternatively, the receivingmodule 230 may facilitate adaptation of the home cloud computing system in response to changes (e.g., changes in network conditions, capabilities of one or more client devices 102, etc.) by exchanging information regarding capabilities of the client device 102 and other client devices 102 with the sendingmodules 228 of the other client devices 102. - In one embodiment, after receiving the virtualized data stream and performing one or more operations (such as identity verification, decryption, etc.) on the received virtualized data stream, the receiving
module 230 may forward the received virtualized data stream to thedecoding module 232 for decoding and/or decompression. In one embodiment, thedecoding module 232 may decode and/or decompress the received virtualized data stream in an opposite way that theencoding module 226 of the other client device 102 has done on the received virtualized data stream. For example, as described in the foregoing embodiments, the two client devices 102 (or thevirtualization system 106 of the two client devices 102) may have negotiated and established respective responsibilities for data to be transmitted therebetween when a connection between the client devices 102 is established and may renegotiate again after the connection when a change (e.g., changes in capabilities of the client devices, network conditions, etc.). Based on a result of the negotiation, thedecoding module 232 may know what encoding and/or compression algorithm theencoding module 226 of the other client device 102 has applied on the received virtualized data stream, and therefore select an appropriate (or an agreed-upon) decoding and/or decompression algorithm to decode and/or decompress the received virtualized data stream. - Upon decoding and/or decompressing the received virtualized data stream, the
post-processing module 234 of the receivingvirtualization interconnection node 222 of the client device 102 may process the decoded/decompressed virtualized data stream. In one embodiment, thepost-processing module 234 may perform the same or similar operations that thepre-processing module 224 may perform. Additionally or alternatively, in some embodiments, thepost-processing module 234 may perform other operations including, but not limited to, post-filtering to remove compression artifacts due to lossy compression, for example, error resilience/hiding when data losses or errors occur, interpolation/extrapolation/recovery of data that may have been intentionally dropped at theencoding module 226 of the other client device 102 due to, for example, poor network condition or low processing power, etc. - In one embodiment, the number and/or types of operations to be performed by the
post-processing module 234 associated with the client device 102 may depend on the number and/or types of operations that have been done by thepre-processing module 224 associated with the other client device 102. This dependence may be one of the information to be established within the negotiation between the two client devices 102 as described in the foregoing embodiments. For example, in some embodiments, so long as an output from thepost-processing module 234 is an acceptable input to the client device 102, how to partition workload or operations between thepost-processing module 234 associated with the client device 102 and thepre-processing module 224 associated with the other client device 102 to achieve a modality transformation (or conversion) is flexible. In some embodiments, a partition between workloads (or operations) of two client devices 102 (or virtualization systems associated with the two client devices) may depend on factors, such as compression efficiency, bandwidth, processing capability, power consumption, etc., associated with the two client devices 102. A modality transform refers to a transformation from an input to apre-processing module 224 associated with one client device 102 to an output to apost-processing module 234 associated with another client device 102. - In one embodiment, the
virtualization system 106 may further includeother program data 240. Theother program data 240 may include log data that records information of one or more other client devices 102 to which thevirtualization system 106 of the client device 102 have connected. The record information may include, for example, device identification of the one or more previously connected client devices 102, capabilities (input, output, computing, storage, etc.) of the one or more previously connected client devices 102, etc. Additionally or alternatively, thelog data 240 may include information about user preference, for example, which one or more client devices 102 the user 122 most likely or often uses when watching a video, which one or more client devices 102 the user 122 most likely or often uses when reading a text (e.g., a web page), etc. Additionally or alternatively, the log data may include information about which one or more client devices 102 are most often or likely used by the user 122 during a particular time period. - In one embodiment, the
virtualization system 106 may employ this information to establish connections among these client devices 102 beforehand while leaving other client devices 102 in a waiting or disconnected state for connection to avoid the client device 102 from establishing too many network connections with other client devices 102 that are not needed and hence saving the resources of the client device 102 from wasting. Additionally or alternatively, thevirtualization system 106 may use the log data to re-establish and/or authenticate future connections and data communication with other previously connected client devices 102. - In one embodiment, the home cloud computing system may further provide a unified programming model. The programming model may allow developers of software applications (e.g., the
applications 114, etc.) and/or the client devices 102 to build applications that may be adapted to other output (e.g., display) and/or input (e.g., user interface (UI)) devices (e.g., personal computers, televisions, slate computers, mobile devices, etc.), which separate functionality of user interaction from main computing functions. The programming model may automatically or semi-automatically (i.e., with human intervention or input) determine a way for user interaction (both inputs and outputs) adaptation. - Given the
virtualization interconnection layer 216, inputs and outputs of associated client device 102 may be arbitrarily redirected to another client device 102 as if the inputs and the outputs are physically attached or connected to the other client device 102. Given the programming model and thevirtualization interconnection layer 216, existing applications in the client device 102 may not need to be updated for this redirection and thevirtualization interconnection layer 216 may be responsible for converting one mode (or type, format, version, etc.) of virtualized input or output to another mode (or type, format, version, etc.) that may be acceptable to the other client device 102. For example, a touch input on a slate computer may be converted into a mouse input for a desktop computer, while a screen output of a desktop computer may be scaled to be displayed in a screen of a mobile device, etc. - Furthermore, given the programming model, new applications may have more flexibility to aggregate new input and/or output capabilities that may or may not be physically attached or connected to a computing or client device 102. The home cloud computing system leverage natural UI devices such as touch, Kinect®, camera on one or more client devices 102 to control behaviors and/or operations of one or more other client devices 102 and/or new applications on the one or more other client devices 102. Additionally or alternatively, the home cloud computing system may leverage rendering, processing and/or other computing operations or resources on multiple client devices 102, and combine inputs and/or outputs of these client devices with input and/or output of another client device 102 to form a computing platform for special applications. For example, a delay sensitive object (e.g., a graphical UI button, a fillable form, a pull-down menu, a foreground active game character, etc.) may be rendered on a client device 102 locally, while computationally intensive but delay tolerant tasks may be offloaded to other computing or client devices 102. The other client devices 102 may then pass back (only) outputs or results of these computationally intensive but delay tolerant tasks to the client device 102 in which the delay sensitive object is locally rendered to form a final result for display to the user 122. No additional hardware and/or software other than the
virtualization interconnection layer 216 may be needed for these operations. - Although the foregoing embodiments describe a home computing environment, they may be readily applicable to cloud computing and mobile computing. For example, if the user 122 moves outside his/her home, the user 122 may still enjoy the benefits of the home cloud computing system with his/her mobile device and cloud devices serving as the plurality of client devices 102 of the home cloud computing system.
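By way of a non-limiting illustration of the input-mode conversion described above (e.g., a touch input converted into a mouse input), the following sketch shows one way a virtualization interconnection layer might remap a virtualized touch event onto a receiving device's display. The event classes, field names and resolutions are hypothetical assumptions used only for illustration and do not represent the actual virtualization interconnection layer 216.

```python
# Hypothetical sketch: converting a virtualized touch event from a slate
# computer into a mouse event usable by a desktop computer. Field names,
# resolutions, and the event classes are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float          # normalized 0..1 horizontal position on the touch surface
    y: float          # normalized 0..1 vertical position on the touch surface
    pressed: bool     # finger down / up

@dataclass
class MouseEvent:
    x: int            # pixel position on the receiving display
    y: int
    button_down: bool

def convert_touch_to_mouse(touch: TouchEvent, display_w: int, display_h: int) -> MouseEvent:
    """Map a normalized touch coordinate onto the receiving device's display
    and translate the touch state into a left-button state."""
    return MouseEvent(
        x=int(touch.x * display_w),
        y=int(touch.y * display_h),
        button_down=touch.pressed,
    )

# Example: a touch in the middle of a slate screen becomes a mouse press
# at the center of a 1920x1080 desktop display.
event = convert_touch_to_mouse(TouchEvent(x=0.5, y=0.5, pressed=True), 1920, 1080)
print(event)  # MouseEvent(x=960, y=540, button_down=True)
```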
- As described in the foregoing embodiments, the
encoding module 226 may apply different encoding or compression algorithms for different types or formats of data in the virtualized data forwarded by the pre-processing module 224. By way of example and not limitation, three encoding/compression algorithms for different types of data, namely, frame-based screen data encoding, object-based screen data encoding and UI control data encoding, are described herein. It should be noted, however, that these encoding/compression algorithms are described herein for illustration purposes. The present disclosure is not limited to these encoding/compression algorithms and these types of data. - In one embodiment, the virtualized (captured) screen from the client device 102 may be represented as a sequence of display images organized in a temporal order. The virtualized screen may include pictorial, graphical and textual contents that may be different from normal/natural video. In one embodiment, the
encoding module 226 may employ a specific screen codec for frame-based screen coding. An example of the specific screen codec is described in U.S. Pat. No. 8,180,165, issued on May 15, 2012, titled “Accelerated Screen Codec”, the entirety of which is incorporated by reference herein. Additionally, in some screen virtualization scenarios, the encoding module 226 may obtain additional information besides a time series of frame-based screen images. For example, given OS (operating system) or application support, such as objects in a web browser application that may be obtained through parsing a web page, the encoding module 226 may obtain original compressed bitstreams of image or video objects. In this scenario, the encoding module 226 may simply redirect the original compressed bitstreams of the screen (image or video) objects to a receiving client device 102 that receives the virtualized data stream, instead of compressing the sequence of frame-based screen images. In this case, the virtualized screen may be a composition of the sequence of frame-based images and/or image and video objects. Furthermore, in the event that the receiving client device 102 has a display that is able to display only part of the virtualized screen, the encoding module 226 may encode or compress only the part to be displayed and send the encoded part to the receiving client device 102 through the sending module 228 in order to save both computing power and network bandwidth. - Additionally or alternatively, in some embodiments, the
encoding module 226 may decompose virtualized screen data into objects for predetermined screen types, i.e., applying object-based screen data encoding. For example, for a virtualized display of some specific applications such as a web browser application, an entire web page may be rendered as an image with object and metadata information at the sending client device 102. The encoding module 226 may encode web regions that are visible on a display of the receiving client device 102. In one embodiment, the decomposed screen data may include, for example, object metadata, redirectable media objects, unredirectable media objects and background image data, etc. The object metadata includes object information, such as a hyperlink, an input text box, etc. The encoding module 226 may encode object type, position and/or shape, for example, using a conventional lossless coding scheme such as the ZIP coding scheme. - The redirectable media objects correspond to media objects that may be handled by the receiving client device 102 (or the
virtualization system 106 of the receiving client device 102). The redirectable media objects may include, for example, GIF animations, video objects with specific formats, etc. By redirecting these redirectable media objects to the receiving client device 102, repeated encoding and transmission of the redirectable media objects may be avoided, thus improving bandwidth usage and reducing transmission delay. - The unredirectable media objects correspond to media objects that may be extracted from a web browser engine but may not be handled (or decoded) by the receiving client device 102 (or the
virtualization system 106 of the receiving client device 102). In this case, the encoding module 226 may transcode these media objects into a media format that is accepted or supported by the receiving client device 102. For example, if the receiving client device 102 supports the WMV (Windows Media Video) format but not the Flash format, the encoding module 226 may transcode the Flash formatted video into WMV formatted video before sending the video to the receiving client device 102. - The background image data correspond to the remaining content on the rendered web page that may be taken as a time series of background images. The background image data may include dynamically changed regions, e.g., an unextractable video or animation. The
encoding module 226 may apply a specific screen codec (e.g., the screen codec that is described in the above-referenced US patent) to encode or compress the background image data. Furthermore, in order to improve a compression ratio, a buffer may be deployed at or with the encoding module 226 of the sending client device 102 and another buffer at or with the decoding module 232 of the receiving client device 102 to cache or store the background image data. For encoding visible regions of a current web page that are to be sent to the receiving client device 102, the encoding module 226 may compute differences between the visible regions of the current web page and corresponding regions that are cached or stored in the buffer, for example, in terms of temporal and/or spatial predictions. The encoding module 226 may encode only the differences, which may then be sent to the receiving client device 102. - In some embodiments, the
encoding module 226 may encode the redirectable media objects as background image data using the same screen codec as described in the referenced US patent. For example, if only a small part of a picture having a high resolution in a web page, or a downsized portion thereof, is to be shown in a display of the receiving client device 102, the encoding module 226 may re-encode the picture as a part of the background image data instead of redirecting a full resolution image to the receiving client device 102 (or the virtualization system 106 of the receiving client device 102). Depending on specific situations of the home cloud computing system and/or current screen content, the encoding module 226 may flexibly adapt to different situations to select and provide an efficient composition and compression scheme to improve the performance of the home cloud computing system. - In general, the object-based web page image coding scheme as described above is extendable to encode virtualized data streams from other applications if, for example, similar metadata and data objects can be extracted from their virtualized data and associated side information. Using the object-based screen data coding, new features including, for example, data object redirection, mixed composition, UI customization, enabled object caching and selective transmission, etc., may be obtained. In one embodiment, the data object redirection refers to capturing certain virtualized data objects in a compressed format, directly passing the captured objects to the
virtualization interconnection layer 216 of the receiving client device 102, and decoding or rendering the captured objects in the receiving client device 102 based on its native acceptable data format. - The mixed composition feature corresponds to mixing locally captured or rendered data objects in a client device 102 with virtualized data objects from another client device 102 to form or compose a final data input or output for one or more client devices 102 (e.g., the client device 102 where the data objects are locally captured or rendered). This may avoid the round trip delay otherwise needed in a completely virtualized (remote) solution to respond to certain user interactions, e.g., a text input box, a graphical control button, a pull-down menu, etc. Furthermore, the mixed composition feature provides a flexible solution for both local and remote client devices 102 to collaborate for delivery of a better user experience through mixed inputs or mixed outputs.
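As a non-limiting illustration of the object-based screen data handling described above, the sketch below routes each decomposed object to one of the strategies discussed: lossless coding of object metadata, redirection of an already-compressed bitstream that the receiver can decode, transcoding of an unsupported format, and cache-based handling of background regions. The class, field and helper names are hypothetical assumptions rather than the actual encoding module 226.

```python
# Hypothetical sketch of per-object handling in object-based screen encoding.
# Object kinds, format sets, and return values are illustrative assumptions.
import zlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreenObject:
    kind: str                          # "metadata", "media", or "background"
    payload: bytes                     # compressed bitstream or raw region data
    media_format: Optional[str] = None # e.g., "GIF", "Flash", "H.264"

def handle_object(obj: ScreenObject, receiver_formats: set, cache: dict, region_id: str = "") -> dict:
    if obj.kind == "metadata":
        # Object type/position/shape: compress losslessly (ZIP-style).
        return {"action": "lossless", "data": zlib.compress(obj.payload)}
    if obj.kind == "media" and obj.media_format in receiver_formats:
        # Redirectable: forward the original compressed bitstream untouched.
        return {"action": "redirect", "data": obj.payload}
    if obj.kind == "media":
        # Unredirectable: transcode into a format the receiver supports.
        target = sorted(receiver_formats)[0]
        return {"action": "transcode", "target": target, "data": obj.payload}
    # Background region: send nothing if the cached copy is unchanged; otherwise
    # send the updated region (a crude stand-in for temporal prediction) and
    # refresh the cache that mirrors the receiver-side decoder buffer.
    changed = cache.get(region_id) != obj.payload
    cache[region_id] = obj.payload
    return {"action": "background", "data": obj.payload if changed else b""}

cache = {}
gif = ScreenObject(kind="media", payload=b"GIF-bitstream", media_format="GIF")
print(handle_object(gif, receiver_formats={"GIF", "H.264"}, cache=cache)["action"])  # redirect
```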
- The UI customization corresponds to a flexible combination of virtualized UI control modalities, e.g., game controller, keyboard, mouse, Kinect®, touch surface, voice, audio and video, etc., to fit specific user scenarios. Moreover, within each modality, UI control data objects (e.g., gesture data extracted from Kinect® data) may be extracted from part or all of the virtualized data, and sent to the
virtualization interconnection layer 216 of the receiving client device 102 either alone or along with virtualized raw or compressed data (e.g., virtualized raw or compressed Kinect® data). Furthermore, the object-based screen data coding may enable prioritized processing, encoding and transmission of regions or objects (such as UI control, for example) that have stringent latency requirements from the sending client device 102 to the receiving client device 102. - Additionally, by using the buffer at or with the
encoding module 226 and the decoding module 232, repeated objects may be cached for reuse and/or manipulation in the receiving client device 102 without re-transmission. Furthermore, the encoding module 226 may selectively encode (only) relevant data objects for sending to the receiving client device 102 through the sending module 228. - In some embodiments, because virtualization of input data is placed in the
virtualization interconnection layer 216 of the virtualization system 106, different UI modalities on the plurality of client devices 102 may be shared and leveraged for an enhanced user experience. For human-computer interaction, a variety of types of UI and sensor input data may be generated. These UI and sensor data may be transferred from one client device 102 to another client device 102 and vice versa through respective virtualization interconnection layers 216 to bring new UI controls without locally attaching the UI controls to the client devices 102. For example, touch control data of a client device 102 (e.g., a mobile phone) may be transferred to another client device 102 (e.g., a tablet) to control the latter. On the other hand, gesture control data generated from a Kinect® camera that is attached to a client device 102 (e.g., Xbox®) may also be delivered to another client device 102 (e.g., a desktop computer) to control an application in the other client device 102. - Depending on a transfer rate for transferring UI/sensor input data from one client device 102 to another client device 102, the UI/sensor input data may be divided into a plurality of categories. By way of example and not limitation, a first category may include low-rate raw input data whose raw data rate, as generated by a corresponding UI device or sensor, is less than a first predetermined rate threshold. These raw data may be directly interpreted by a client device 102. Examples of UI devices or sensors may include, for example, a keyboard, a mouse, a joystick, a game controller, etc. Although the bit rate for this type of raw data is low, this type of data is normally sensitive to errors or losses, which may cause the client device 102 to behave differently. For this type of data, the
encoding module 226 may transmit the raw data uncompressed if a corresponding bit rate is less than a predetermined threshold, or use a simple or generic lossless compression algorithm (such as a ZIP or LZW algorithm) to compress the raw data before sending if an extra bit rate saving is desirable at a system level. - A second category may include moderate-to-high-rate (i.e., a data rate between the first rate threshold and a second rate threshold, for example) raw input data, which may become low-rate processed data after pre-processing by the
pre-processing module 224 at the sending client device 102. Examples of this second type of raw data include, but are not limited to, two-dimensional images from a touch surface, audio input data, visual data from a camera, visual and depth information from a Kinect® sensor, time series of data from an accelerometer and/or a gyroscope, and data from a contextual or ambient sensor, such as satellite data received by GPS, orientation data from a compass sensor, data from a light sensor, and other possible sensors attached to the client device 102. - Although their raw data rate is moderate to high, in many cases, these raw data may be first pre-processed by the
pre-processing module 224 or the client device 102 to which corresponding UI/sensor devices are attached, through hardware or software solutions. In some cases, the pre-processing functions may be collocated with the UI/sensor devices for improved efficiency. After the pre-processing, the processed data rate may be significantly reduced, for example, from raw 2D images to high-level touch gestures, from raw Kinect® visual and depth data to body gestures, from audio input to recognized texts or commands, from visual data to identified objects/extracted features, from received satellite data to location coordinates, etc. This kind of processed data becomes high-level UI inputs which have a low data rate but are more sensitive to errors or losses. In one embodiment, the encoding module 226 may transmit the processed data uncompressed if a corresponding bit rate is less than a predetermined threshold, or use a simple or generic lossless compression algorithm (such as a ZIP or LZW algorithm) to compress the processed data before sending if an extra bit rate saving is desirable at a system level. - A third category may include raw input data that have a moderate to high data rate generated by NUI (natural user interface) or sensor devices, where the sending client device 102 does not have enough pre-processing capabilities or the raw data need to be sent to the receiving client device 102 for further data processing or manipulation. In one embodiment, the
encoding module 226 may need to compress this type or kind of raw input data to a much lower data rate than its original raw data rate to fit into the network 104 and make the home cloud computing system feasible. Furthermore, the encoding module 226 may select a compression algorithm for this kind of raw input data based on one or more other factors such as a low latency criterion for enabling a natural and smooth user experience. Depending on the nature and/or characteristics of the raw input data, the encoding module 226 may select a compression algorithm that may be specific to the data type of the raw input data. In some embodiments, the encoding module 226 may select a lossy compression algorithm in order to achieve a desired data compression ratio for this kind of raw input data, while keeping high-level UI controls that are extracted at the receiving client device 102 as close as possible to those that would be extracted locally at the sending client device 102. For example, the encoding module 226 may select and employ compression algorithms or schemes that are tuned to respective data characteristics of the raw input data having a low latency criterion. For example, for data types such as voice, audio, image and video, the encoding module 226 may apply existing standard coding schemes to compress them, e.g., MP3 for audio data, H.264 for video data, JPEG for image data, etc.
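The three rate categories described above lend themselves to a simple dispatch rule. The following sketch illustrates how a sending device might pick a transmission strategy from the raw data rate and data type; the thresholds, type names and codec labels are hypothetical stand-ins rather than values used by the actual encoding module 226.

```python
# Hypothetical sketch of choosing a compression strategy for virtualized
# UI/sensor input data. Thresholds and codec names are illustrative only.
LOW_RATE_BPS = 64_000          # assumed first rate threshold
HIGH_RATE_BPS = 4_000_000      # assumed second rate threshold

LOSSY_CODECS = {"audio": "MP3", "video": "H.264", "image": "JPEG"}

def choose_strategy(raw_rate_bps: int, data_type: str, can_preprocess: bool) -> str:
    if raw_rate_bps < LOW_RATE_BPS:
        # Category 1: low-rate raw input (keyboard, mouse, game controller).
        # Send as-is, or losslessly compress if extra bit rate saving is wanted.
        return "uncompressed or lossless (e.g., ZIP/LZW)"
    if raw_rate_bps <= HIGH_RATE_BPS and can_preprocess:
        # Category 2: pre-process locally into low-rate high-level inputs
        # (e.g., gestures, recognized commands), then treat like category 1.
        return "pre-process locally, then uncompressed or lossless"
    # Category 3: moderate-to-high-rate raw data with no local pre-processing;
    # apply a lossy, low-latency codec matched to the data type.
    return f"lossy codec: {LOSSY_CODECS.get(data_type, 'type-specific codec')}"

print(choose_strategy(1_000, "control", can_preprocess=True))
print(choose_strategy(2_000_000, "touch", can_preprocess=True))
print(choose_strategy(30_000_000, "video", can_preprocess=False))
```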
FIG. 4 is a flow chart depicting an example method 400 of data virtualization and resource leveraging. The method of FIG. 4 may, but need not, be implemented in the environment of FIG. 1 and using the system of FIG. 2. For ease of explanation, method 400 is described with reference to FIGS. 1 and 2. However, the method 400 may alternatively be implemented in other environments and/or using other systems. -
Method 400 is described in the general context of computer-executable instructions. Generally, computer-executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, and the like that perform particular functions or implement particular abstract data types. The method can also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communication network. In a distributed computing environment, computer-executable instructions may be located in local and/or remote computer storage media, including memory storage devices. - The exemplary method is illustrated as a collection of blocks in a logical flow graph representing a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method, or alternate methods. Additionally, individual blocks may be omitted from the method without departing from the spirit and scope of the subject matter described herein. In the context of software, the blocks represent computer instructions that, when executed by one or more processors, perform the recited operations. In the context of hardware, some or all of the blocks may represent application specific integrated circuits (ASICs) or other physical components that perform the recited operations.
- Referring back to
FIG. 4, at block 402, a first device 102 (equipped with a virtualization system 106) detects a presence of a second device 102 in a proximity or neighborhood of the first device. - At
block 404, the virtualization system 106 of the first device 102 establishes or initiates a network connection with the virtualization system 106 of the second device 102. - At
block 406, the virtualization system 106 of the first device 102 determines or receives information of functional capabilities of the second device 102. - At
block 408, the virtualization system 106 of the first device 102 negotiates with the virtualization system 106 of the second device 102 with respect to one or more responsibilities on data communicated therebetween. For example, the virtualization system 106 of the first device 102 negotiates with the virtualization system 106 of the second device 102 on an extent of data transformation to be performed by the first device 102 for the second device 102 and an extent of data transformation to be performed by the second device 102 for the first device 102 based on the functional capabilities of the second device 102 and the functional capabilities of the first device 102. - At
block 410, the virtualization system 106 of the first device 102 virtualizes data of the first device 102 as virtualized data. In one embodiment, the virtualization system 106 of the first device 102 virtualizes the data of the first device 102 by generating output data at the first device 102 without locally presenting the output data in a display of the first device 102 and/or capturing input data through a user interface of the first device 102 without locally processing the input data at the first device 102. - At
block 412, the virtualization system 106 of the first device 102 transforms the virtualized data into a virtualized data stream in a virtualization interconnection layer based on a result of the negotiation between the first device 102 and the second device 102. In one embodiment, the virtualization system 106 of the first device 102 may pre-process the virtualized data to a predetermined extent that has been agreed upon between the first device 102 and the second device 102 based on the result of the negotiation. Additionally or alternatively, the virtualization system 106 of the first device 102 may encode the pre-processed virtualized data using different algorithms for different types of data included in the virtualized data. - At
block 414, the virtualization system 106 of the first device 102 sends the compressed or encoded virtualized data stream from the first device 102 to the second device 102 to leverage resources of the first device 102 and the second device 102. - Although the above acts are described to be performed by the
virtualization system 106, one or more acts that are performed by the virtualization system 106 may be performed by the client device 102 or other software or hardware of the client device 102 and/or any other computing device (e.g., the server 108). For example, a sending client device 102 may virtualize data thereof and send the virtualized data to the server 108 via the network 104. The server 108 may pre-process and encode the virtualized data based on a data type or format that is acceptable to a receiving client device 102 (a client device 102 that receives the virtualized data), and send the processed and encoded virtualized data to the receiving client device 102. The sending client device 102 and/or the server 108 may constantly monitor network conditions and/or computing conditions of the receiving client device 102, and adapt data pre-processing and data transmission accordingly in response to detecting or determining that a change in the network conditions and/or computing conditions of the receiving client device 102 has occurred. - Any of the acts of any of the methods described herein may be implemented at least partially by a processor or other electronic device based on instructions stored on one or more computer-readable media. By way of example and not limitation, any of the acts of any of the methods described herein may be implemented under control of one or more processors configured with executable instructions that may be stored on one or more computer-readable media such as one or more computer storage media.
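To make the sequence of blocks 402-414 concrete, the sketch below strings the operations together in order. The helper functions and data shapes are hypothetical placeholders that merely mirror the order of operations described above; they are not an implementation of the virtualization system 106.

```python
# Hypothetical sketch of the overall flow of method 400 (blocks 402-414).
# Helper functions and data shapes are illustrative assumptions only.

def detect_presence() -> bool:                      # block 402
    return True                                     # stub: a second device is found nearby

def establish_connection() -> dict:                 # block 404
    return {"peer": "second_device"}

def query_capabilities(conn: dict) -> dict:         # block 406
    return {"display": (1280, 720), "formats": {"H.264", "JPEG"}}

def negotiate(local: dict, remote: dict) -> dict:   # block 408
    # Decide which side transforms what, based on both devices' capabilities.
    return {"sender_transcodes": "H.264" in remote["formats"]}

def virtualize() -> dict:                           # block 410
    # Capture input without local processing; generate output without local display.
    return {"input": b"touch-events", "output": b"screen-frames"}

def transform(virtualized: dict, agreement: dict) -> bytes:   # block 412
    # Pre-process and encode to the extent agreed upon during negotiation.
    prefix = b"h264:" if agreement["sender_transcodes"] else b"raw:"
    return prefix + virtualized["output"]

def send(conn: dict, stream: bytes) -> None:        # block 414
    print(f"sending {len(stream)} bytes to {conn['peer']}")

local_caps = {"display": (1920, 1080), "formats": {"H.264", "JPEG", "MP3"}}
if detect_presence():
    conn = establish_connection()
    remote_caps = query_capabilities(conn)
    agreement = negotiate(local_caps, remote_caps)
    send(conn, transform(virtualize(), agreement))
```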
- Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed subject matter.
Claims (21)
1-20. (canceled)
21. A method comprising:
detecting, by a first device, a presence of a second device within a distance from the first device;
establishing, by the first device, a network connection between the first device and the second device, the establishing comprising:
determining a first functional capability of the second device; and
negotiating between the first device and the second device a first extent of data transformation to be performed by the first device for the second device and a second extent of data transformation to be performed by the second device for the first device based at least partly on at least one of the first functional capability of the second device or a second functional capability of the first device;
virtualizing data of the first device into virtualized data, the virtualizing including:
capturing input data through a user interface of the first device without locally processing the input data at the first device; and
generating output data at the first device without presenting the output data on a display of the first device;
transforming the virtualized data into a virtualized data stream based on a result of the negotiating; and
sending the virtualized data stream from the first device to the second device.
22. The method as recited in claim 21 , further comprising at least one of:
broadcasting first information of one or more first services that the first device is able to provide; or
receiving second information of one or more second services that the second device is able to provide.
23. The method as recited in claim 21 , wherein the transforming comprises pre-processing the virtualized data to generate pre-processed virtualized data to an extent agreed upon between the first device and the second device based on the result of the negotiating.
24. The method as recited in claim 23 , wherein the transforming comprises encoding the pre-processed virtualized data using at least one of a first algorithm for a first type of data included in the virtualized data or a second algorithm for a second type of data included in the virtualized data.
25. The method as recited in claim 21 , further comprising:
receiving second virtualized data from a third device;
extracting second input data from the second virtualized data of the third device; and
combining the second input data with the input data to form a combined input.
26. The method as recited in claim 25 , further comprising at least one of:
presenting an output on the display of the first device based at least partly on the combined input; or
forwarding the combined input to a fourth device.
27. The method as recited in claim 21 , further comprising:
receiving second virtualized data from a third device;
extracting second output data from the second virtualized data of the third device;
combining the second output data with the output data to form a combined output; and
presenting the combined output on the display of the first device.
28. The method as recited in claim 21 , further comprising:
detecting a change in at least one of the first functional capability of the first device or the second functional capability of the second device; and
based at least partly on the change, re-negotiating the first extent of data transformation to be performed by the first device for the second device and the second extent of data transformation to be performed by the second device for the first device.
29. A method comprising:
detecting, by a first device, that a second device is within a distance from the first device;
establishing, by the first device, a network connection between the first device and the second device, the establishing comprising:
determining a first functional capability of the second device; and
determining a first extent of data transformation to be performed by the first device for the second device and a second extent of data transformation to be performed by the second device for the first device based at least partly on at least one of the first functional capability of the second device or a second functional capability of the first device;
virtualizing data of the first device into virtualized data, the virtualizing including:
capturing input data through a user interface of the first device without locally processing the input data at the first device; and
generating output data at the first device without presenting the output data on a display of the first device; and
transforming the virtualized data into a virtualized data stream based on a result of the negotiating.
30. The method as recited in claim 29 , further comprising sending the virtualized data stream from the first device to the second device.
31. The method as recited in claim 29 , wherein the transforming comprises pre-processing the virtualized data to generate pre-processed virtualized data to an extent agreed upon between the first device and the second device based at least partly on a result of a negotiation between the first device and the second device.
32. The method as recited in claim 29 , further comprising:
detecting a change in at least one of the first functional capability of the first device or the second functional capability of the second device; and
based at least partly on the change, negotiating at least one of the first extent of data transformation to be performed by the first device for the second device or the second extent of data transformation to be performed by the second device for the first device.
33. A method comprising:
capturing data of a first device from a user interface associated with the first device;
virtualizing the data of the first device into virtualized data;
transforming the virtualized data into a virtualized data stream for a second device, the transforming comprising adapting the data of the first device to a format that has been agreed upon between the first device and the second device;
determining that the second device is within a distance from the first device;
establishing, by the first device, a network connection between the first device and the second device;
determining one or more capabilities of the second device to at least one of receive, process, or display the virtualized data stream, the one or more capabilities including at least one of input capability, data processing power, or display resolution; and
sending, by the first device, the virtualized data stream to the second device.
34. The method as recited in claim 33 , wherein the data of the first device is captured and virtualized into the virtualized data without processing the data at the first device.
35. The method as recited in claim 33 , wherein the data of the first device comprises first input data received from the user interface associated with the first device, and further comprising:
receiving a second virtualized data stream from the second device, the second virtualized data stream comprising second input data received from a second user interface associated with the second device; and
combining the first input data and the second input data into a combined input.
36. The method as recited in claim 33 , further comprising adding a virtualization interconnection layer on top of a data network layer associated with the first device, the virtualization interconnection layer enabling virtualization of the data of the first device and adaptation of a new device to connect to the first device for at least one of receiving second data of the new device or sending the data of the first device.
37. The method as recited in claim 33 , further comprising decoupling the data of the first device from the first device, and wherein the sending comprises sending the virtualized data stream to the second device for at least one of presentation or processing at the second device.
38. The method as recited in claim 33 , further comprising:
detecting a change in at least one of a first functional capability of the first device or a second functional capability of the second device; and
based at least partly on the change, negotiating a first extent of data transformation to be performed by the first device for the second device and a second extent of data transformation to be performed by the second device for the first device.
39. The method as recited in claim 33 , wherein the transforming comprises encoding the virtualized data using at least one of a first algorithm for a first type of data included in the virtualized data or a second algorithm for a second type of data included in the virtualized data.
40. The method as recited in claim 33 , wherein the virtualizing comprises generating the data of the first device without presenting the data locally at the first device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/985,945 US20160112259A1 (en) | 2012-10-30 | 2015-12-31 | Home Cloud with Virtualized Input and Output Roaming over Network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/663,720 US9264478B2 (en) | 2012-10-30 | 2012-10-30 | Home cloud with virtualized input and output roaming over network |
US14/985,945 US20160112259A1 (en) | 2012-10-30 | 2015-12-31 | Home Cloud with Virtualized Input and Output Roaming over Network |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/663,720 Continuation US9264478B2 (en) | 2012-10-30 | 2012-10-30 | Home cloud with virtualized input and output roaming over network |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160112259A1 true US20160112259A1 (en) | 2016-04-21 |
Family
ID=49552426
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/663,720 Active 2033-11-05 US9264478B2 (en) | 2012-10-30 | 2012-10-30 | Home cloud with virtualized input and output roaming over network |
US14/985,945 Abandoned US20160112259A1 (en) | 2012-10-30 | 2015-12-31 | Home Cloud with Virtualized Input and Output Roaming over Network |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/663,720 Active 2033-11-05 US9264478B2 (en) | 2012-10-30 | 2012-10-30 | Home cloud with virtualized input and output roaming over network |
Country Status (2)
Country | Link |
---|---|
US (2) | US9264478B2 (en) |
WO (1) | WO2014070561A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9721036B2 (en) | 2012-08-14 | 2017-08-01 | Microsoft Technology Licensing, Llc | Cooperative web browsing using multiple devices |
US10595059B2 (en) * | 2011-11-06 | 2020-03-17 | Akamai Technologies, Inc. | Segmented parallel encoding with frame-aware, variable-size chunking |
Families Citing this family (84)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8601056B2 (en) * | 2010-03-09 | 2013-12-03 | Avistar Communications Corporation | Scalable high-performance interactive real-time media architectures for virtual desktop environments |
US9736065B2 (en) | 2011-06-24 | 2017-08-15 | Cisco Technology, Inc. | Level of hierarchy in MST for traffic localization and load balancing |
US8908698B2 (en) | 2012-01-13 | 2014-12-09 | Cisco Technology, Inc. | System and method for managing site-to-site VPNs of a cloud managed network |
US9264478B2 (en) * | 2012-10-30 | 2016-02-16 | Microsoft Technology Licensing, Llc | Home cloud with virtualized input and output roaming over network |
US10367914B2 (en) | 2016-01-12 | 2019-07-30 | Cisco Technology, Inc. | Attaching service level agreements to application containers and enabling service assurance |
US9043439B2 (en) | 2013-03-14 | 2015-05-26 | Cisco Technology, Inc. | Method for streaming packet captures from network access devices to a cloud server over HTTP |
KR102067276B1 (en) * | 2013-05-30 | 2020-02-11 | 삼성전자주식회사 | Apparatus and method for executing application |
US9232433B2 (en) * | 2013-12-20 | 2016-01-05 | Cisco Technology, Inc. | Dynamic coding for network traffic by fog computing node |
KR102131644B1 (en) * | 2014-01-06 | 2020-07-08 | 삼성전자주식회사 | Electronic apparatus and operating method of web-platform |
US9791485B2 (en) | 2014-03-10 | 2017-10-17 | Silver Spring Networks, Inc. | Determining electric grid topology via a zero crossing technique |
US9755858B2 (en) | 2014-04-15 | 2017-09-05 | Cisco Technology, Inc. | Programmable infrastructure gateway for enabling hybrid cloud services in a network environment |
US9473365B2 (en) | 2014-05-08 | 2016-10-18 | Cisco Technology, Inc. | Collaborative inter-service scheduling of logical resources in cloud platforms |
US11722365B2 (en) | 2014-05-13 | 2023-08-08 | Senseware, Inc. | System, method and apparatus for configuring a node in a sensor network |
US9551594B1 (en) * | 2014-05-13 | 2017-01-24 | Senseware, Inc. | Sensor deployment mechanism at a monitored location |
US10122605B2 (en) | 2014-07-09 | 2018-11-06 | Cisco Technology, Inc | Annotation of network activity through different phases of execution |
EP3174293A4 (en) * | 2014-07-22 | 2017-07-05 | Panasonic Intellectual Property Management Co., Ltd. | Encoding method, decoding method, encoding apparatus and decoding apparatus |
DE112014006847T5 (en) * | 2014-07-31 | 2017-04-13 | Mitsubishi Electric Corporation | Control and home system |
US10628186B2 (en) * | 2014-09-08 | 2020-04-21 | Wirepath Home Systems, Llc | Method for electronic device virtualization and management |
US9825878B2 (en) | 2014-09-26 | 2017-11-21 | Cisco Technology, Inc. | Distributed application framework for prioritizing network traffic using application priority awareness |
US9821222B1 (en) | 2014-11-14 | 2017-11-21 | Amazon Technologies, Inc. | Coordination of content presentation operations |
US9839843B1 (en) * | 2014-11-14 | 2017-12-12 | Amazon Technologies, Inc. | Coordination of content presentation operations |
KR102264050B1 (en) | 2014-11-28 | 2021-06-11 | 삼성전자주식회사 | Method and Apparatus for Sharing Function Between Electronic Devices |
US10050862B2 (en) | 2015-02-09 | 2018-08-14 | Cisco Technology, Inc. | Distributed application framework that uses network and application awareness for placing data |
US9558367B2 (en) * | 2015-02-18 | 2017-01-31 | Yahoo!, Inc. | Virtualization input component |
US10037617B2 (en) | 2015-02-27 | 2018-07-31 | Cisco Technology, Inc. | Enhanced user interface systems including dynamic context selection for cloud-based networks |
US10708342B2 (en) | 2015-02-27 | 2020-07-07 | Cisco Technology, Inc. | Dynamic troubleshooting workspaces for cloud and network management systems |
US10382534B1 (en) | 2015-04-04 | 2019-08-13 | Cisco Technology, Inc. | Selective load balancing of network traffic |
US10476982B2 (en) | 2015-05-15 | 2019-11-12 | Cisco Technology, Inc. | Multi-datacenter message queue |
US9628379B2 (en) | 2015-06-01 | 2017-04-18 | Cisco Technology, Inc. | Large scale residential cloud based application centric infrastructures |
US10034201B2 (en) | 2015-07-09 | 2018-07-24 | Cisco Technology, Inc. | Stateless load-balancing across multiple tunnels |
JP6540340B2 (en) * | 2015-07-31 | 2019-07-10 | 富士通株式会社 | Function call information collecting method and function call information collecting program |
US10057078B2 (en) * | 2015-08-21 | 2018-08-21 | Samsung Electronics Company, Ltd. | User-configurable interactive region monitoring |
CN105357259A (en) * | 2015-09-29 | 2016-02-24 | 青岛海尔智能家电科技有限公司 | Method and device for automatically setting equipment linkage rule and associated equipment |
US10067780B2 (en) | 2015-10-06 | 2018-09-04 | Cisco Technology, Inc. | Performance-based public cloud selection for a hybrid cloud environment |
US11005682B2 (en) | 2015-10-06 | 2021-05-11 | Cisco Technology, Inc. | Policy-driven switch overlay bypass in a hybrid cloud network environment |
US10462136B2 (en) | 2015-10-13 | 2019-10-29 | Cisco Technology, Inc. | Hybrid cloud security groups |
US9923993B2 (en) * | 2015-11-02 | 2018-03-20 | Rockwell Automation Technologies, Inc. | Self-describing diagnostic data for presentation on mobile devices |
US10523657B2 (en) | 2015-11-16 | 2019-12-31 | Cisco Technology, Inc. | Endpoint privacy preservation with cloud conferencing |
US10205677B2 (en) | 2015-11-24 | 2019-02-12 | Cisco Technology, Inc. | Cloud resource placement optimization and migration execution in federated clouds |
US10082955B2 (en) | 2015-12-03 | 2018-09-25 | International Business Machines Corporation | Automated home memory cloud with key authenticator |
US10084703B2 (en) | 2015-12-04 | 2018-09-25 | Cisco Technology, Inc. | Infrastructure-exclusive service forwarding |
US20170272365A1 (en) * | 2016-03-15 | 2017-09-21 | Hon Hai Precision Industry Co., Ltd | Method and appratus for controlling network traffic |
CA2961221A1 (en) | 2016-04-11 | 2017-10-11 | Tti (Macao Commercial Offshore) Limited | Modular garage door opener |
US10129177B2 (en) | 2016-05-23 | 2018-11-13 | Cisco Technology, Inc. | Inter-cloud broker for hybrid cloud networks |
US10560369B2 (en) * | 2016-06-23 | 2020-02-11 | Wipro Limited | Methods and systems for detecting and transferring defect information during manufacturing processes |
US10659283B2 (en) | 2016-07-08 | 2020-05-19 | Cisco Technology, Inc. | Reducing ARP/ND flooding in cloud environment |
US10432532B2 (en) | 2016-07-12 | 2019-10-01 | Cisco Technology, Inc. | Dynamically pinning micro-service to uplink port |
US10263898B2 (en) | 2016-07-20 | 2019-04-16 | Cisco Technology, Inc. | System and method for implementing universal cloud classification (UCC) as a service (UCCaaS) |
US10382597B2 (en) | 2016-07-20 | 2019-08-13 | Cisco Technology, Inc. | System and method for transport-layer level identification and isolation of container traffic |
US10387099B2 (en) * | 2016-07-28 | 2019-08-20 | Intelligent Waves Llc | System, method and computer program product for generating remote views in a virtual mobile device platform using efficient color space conversion and frame encoding |
US10142346B2 (en) | 2016-07-28 | 2018-11-27 | Cisco Technology, Inc. | Extension of a private cloud end-point group to a public cloud |
US10567344B2 (en) | 2016-08-23 | 2020-02-18 | Cisco Technology, Inc. | Automatic firewall configuration based on aggregated cloud managed information |
US10523592B2 (en) | 2016-10-10 | 2019-12-31 | Cisco Technology, Inc. | Orchestration system for migrating user data and services based on user information |
US10834231B2 (en) * | 2016-10-11 | 2020-11-10 | Synergex Group | Methods, systems, and media for pairing devices to complete a task using an application request |
US11044162B2 (en) | 2016-12-06 | 2021-06-22 | Cisco Technology, Inc. | Orchestration of cloud and fog interactions |
US11916994B1 (en) * | 2016-12-15 | 2024-02-27 | Blue Yonder Group, Inc. | Extending RESTful web service resources in a JAVA-component-driven-architecture application |
US10326817B2 (en) | 2016-12-20 | 2019-06-18 | Cisco Technology, Inc. | System and method for quality-aware recording in large scale collaborate clouds |
US10334029B2 (en) | 2017-01-10 | 2019-06-25 | Cisco Technology, Inc. | Forming neighborhood groups from disperse cloud providers |
US10552191B2 (en) | 2017-01-26 | 2020-02-04 | Cisco Technology, Inc. | Distributed hybrid cloud orchestration model |
US10320683B2 (en) | 2017-01-30 | 2019-06-11 | Cisco Technology, Inc. | Reliable load-balancer using segment routing and real-time application monitoring |
US10671571B2 (en) | 2017-01-31 | 2020-06-02 | Cisco Technology, Inc. | Fast network performance in containerized environments for network function virtualization |
US11005731B2 (en) | 2017-04-05 | 2021-05-11 | Cisco Technology, Inc. | Estimating model parameters for automatic deployment of scalable micro services |
US10382274B2 (en) | 2017-06-26 | 2019-08-13 | Cisco Technology, Inc. | System and method for wide area zero-configuration network auto configuration |
US10439877B2 (en) | 2017-06-26 | 2019-10-08 | Cisco Technology, Inc. | Systems and methods for enabling wide area multicast domain name system |
US10425288B2 (en) | 2017-07-21 | 2019-09-24 | Cisco Technology, Inc. | Container telemetry in data center environments with blade servers and switches |
US10892940B2 (en) | 2017-07-21 | 2021-01-12 | Cisco Technology, Inc. | Scalable statistics and analytics mechanisms in cloud networking |
US10601693B2 (en) | 2017-07-24 | 2020-03-24 | Cisco Technology, Inc. | System and method for providing scalable flow monitoring in a data center fabric |
US10541866B2 (en) | 2017-07-25 | 2020-01-21 | Cisco Technology, Inc. | Detecting and resolving multicast traffic performance issues |
EP3656110B1 (en) | 2017-09-29 | 2023-09-20 | Apple Inc. | Multi-device communication management |
US10353800B2 (en) | 2017-10-18 | 2019-07-16 | Cisco Technology, Inc. | System and method for graph based monitoring and management of distributed systems |
US11481362B2 (en) | 2017-11-13 | 2022-10-25 | Cisco Technology, Inc. | Using persistent memory to enable restartability of bulk load transactions in cloud databases |
US10705882B2 (en) | 2017-12-21 | 2020-07-07 | Cisco Technology, Inc. | System and method for resource placement across clouds for data intensive workloads |
US11595474B2 (en) | 2017-12-28 | 2023-02-28 | Cisco Technology, Inc. | Accelerating data replication using multicast and non-volatile memory enabled nodes |
US10511534B2 (en) | 2018-04-06 | 2019-12-17 | Cisco Technology, Inc. | Stateless distributed load-balancing |
US10728361B2 (en) | 2018-05-29 | 2020-07-28 | Cisco Technology, Inc. | System for association of customer information across subscribers |
EP3804343A1 (en) * | 2018-06-06 | 2021-04-14 | Seventh Sense Artificial Intelligence Private Limited | A network switching appliance, process and system for performing visual analytics for a streamng video |
US10904322B2 (en) | 2018-06-15 | 2021-01-26 | Cisco Technology, Inc. | Systems and methods for scaling down cloud-based servers handling secure connections |
US10764266B2 (en) | 2018-06-19 | 2020-09-01 | Cisco Technology, Inc. | Distributed authentication and authorization for rapid scaling of containerized services |
US11019083B2 (en) | 2018-06-20 | 2021-05-25 | Cisco Technology, Inc. | System for coordinating distributed website analysis |
US10819571B2 (en) | 2018-06-29 | 2020-10-27 | Cisco Technology, Inc. | Network traffic optimization using in-situ notification system |
US10904342B2 (en) | 2018-07-30 | 2021-01-26 | Cisco Technology, Inc. | Container networking using communication tunnels |
EP3948473B1 (en) * | 2019-03-29 | 2024-04-17 | Datakwip Holdings, LLC | Facility analytics |
CN111078755B (en) * | 2019-12-19 | 2023-07-28 | 远景智能国际私人投资有限公司 | Time sequence data storage query method and device, server and storage medium |
US12099473B1 (en) * | 2020-12-14 | 2024-09-24 | Cigna Intellectual Property, Inc. | Systems and methods for centralized logging for enhanced scalability and security of web services |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7167257B2 (en) * | 2000-01-27 | 2007-01-23 | Canon Kabushiki Kaisha | Method and apparatus for controlling image output on media of different output devices |
US20130194510A1 (en) * | 2010-03-22 | 2013-08-01 | Amimon Ltd | Methods circuits devices and systems for wireless transmission of mobile communication device display information |
US20130222419A1 (en) * | 2012-02-24 | 2013-08-29 | Jonathan Rosenberg | Video Calling |
US20140122729A1 (en) * | 2012-10-30 | 2014-05-01 | Microsoft Corporation | Home cloud with virtualized input and output roaming over network |
US20140244804A1 (en) * | 2012-09-28 | 2014-08-28 | Zhiwei Ying | Processing video data in a cloud |
US9232433B2 (en) * | 2013-12-20 | 2016-01-05 | Cisco Technology, Inc. | Dynamic coding for network traffic by fog computing node |
Family Cites Families (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6246688B1 (en) * | 1999-01-29 | 2001-06-12 | International Business Machines Corp. | Method and system for using a cellular phone as a network gateway in an automotive network |
US6651252B1 (en) | 1999-10-27 | 2003-11-18 | Diva Systems Corporation | Method and apparatus for transmitting video and graphics in a compressed form |
US7340763B1 (en) | 1999-10-26 | 2008-03-04 | Harris Scott C | Internet browsing from a television |
US7210099B2 (en) | 2000-06-12 | 2007-04-24 | Softview Llc | Resolution independent vector display of internet content |
US7275037B2 (en) * | 2001-01-25 | 2007-09-25 | Ericsson Ab | System and method for generating a service level agreement template |
JP4629929B2 (en) * | 2001-08-23 | 2011-02-09 | 株式会社リコー | Digital camera system and control method thereof |
US7574474B2 (en) * | 2001-09-14 | 2009-08-11 | Xerox Corporation | System and method for sharing and controlling multiple audio and video streams |
EP1330098A1 (en) | 2002-01-21 | 2003-07-23 | BRITISH TELECOMMUNICATIONS public limited company | Method and communication system for data web session transfer |
US20030195963A1 (en) | 2002-04-10 | 2003-10-16 | Yu Song | Session preservation and migration among different browsers on different devices |
US7987491B2 (en) | 2002-05-10 | 2011-07-26 | Richard Reisman | Method and apparatus for browsing using alternative linkbases |
NO319854B1 (en) | 2003-04-04 | 2005-09-26 | Telenor Asa | Procedure and system for handling web sessions |
JP2005328285A (en) * | 2004-05-13 | 2005-11-24 | Sanyo Electric Co Ltd | Portable telephone |
CN100375070C (en) * | 2004-12-31 | 2008-03-12 | 联想(北京)有限公司 | Video frequency data acquisition method employing mobile phone with camera as computer camera |
US20060189349A1 (en) * | 2005-02-24 | 2006-08-24 | Memory Matrix, Inc. | Systems and methods for automatic uploading of cell phone images |
CA2513014A1 (en) | 2005-07-22 | 2007-01-22 | Research In Motion Limited | A method of controlling delivery of multi-part content from an origin server to a mobile device browser via a proxy server |
JP4847161B2 (en) * | 2006-03-02 | 2011-12-28 | キヤノン株式会社 | Image transmitting apparatus and imaging apparatus |
US7590998B2 (en) | 2006-07-27 | 2009-09-15 | Sharp Laboratories Of America, Inc. | Television system having internet web browsing capability |
US8024400B2 (en) | 2007-09-26 | 2011-09-20 | Oomble, Inc. | Method and system for transferring content from the web to mobile devices |
KR101399553B1 (en) * | 2007-03-09 | 2014-05-27 | 삼성전자주식회사 | Apparatus and method for transmit multimedia stream |
KR101392318B1 (en) | 2007-04-11 | 2014-05-07 | 엘지전자 주식회사 | Mobile communication terminal and webpage controlling method thereof |
US8161177B2 (en) * | 2007-06-27 | 2012-04-17 | International Business Machines Corporation | Formulating multimedia content of an on-line interview |
JP4989350B2 (en) * | 2007-08-06 | 2012-08-01 | キヤノン株式会社 | Adapter and control method thereof |
US9413828B2 (en) * | 2008-07-01 | 2016-08-09 | Hewlett Packard Enterprise Development Lp | Virtualizing a video controller |
US20100050089A1 (en) | 2008-08-20 | 2010-02-25 | Company 100, Inc. | Web browser system of mobile communication terminal, using proxy server |
US8350744B2 (en) | 2008-12-03 | 2013-01-08 | At&T Intellectual Property I, L.P. | Virtual universal remote control |
KR20100065744A (en) | 2008-12-08 | 2010-06-17 | 엔에이치엔(주) | Method and apparatus for transcoding web page to be suitable for mobile device |
US8180165B2 (en) | 2008-12-19 | 2012-05-15 | Microsoft Corp. | Accelerated screen codec |
US9800837B2 (en) | 2008-12-31 | 2017-10-24 | Echostar Technologies L.L.C. | Virtual control device |
US20120210205A1 (en) | 2011-02-11 | 2012-08-16 | Greg Sherwood | System and method for using an application on a mobile device to transfer internet media content |
US9195775B2 (en) | 2009-06-26 | 2015-11-24 | Iii Holdings 2, Llc | System and method for managing and/or rendering internet multimedia content in a network |
JP5452107B2 (en) * | 2009-07-14 | 2014-03-26 | オリンパス株式会社 | Communication terminal and communication method thereof |
CA2824754A1 (en) | 2009-09-26 | 2011-03-31 | Disternet Technology Inc. | System and method for micro-cloud computing |
JP5586925B2 (en) * | 2009-11-26 | 2014-09-10 | キヤノン株式会社 | Imaging apparatus, control method thereof, and program |
WO2011078879A1 (en) | 2009-12-02 | 2011-06-30 | Packet Video Corporation | System and method for transferring media content from a mobile device to a home network |
US20110219105A1 (en) | 2010-03-04 | 2011-09-08 | Panasonic Corporation | System and method for application session continuity |
US8554938B2 (en) | 2010-08-31 | 2013-10-08 | Millind Mittal | Web browser proxy-client video system and method |
US9110976B2 (en) * | 2010-10-15 | 2015-08-18 | International Business Machines Corporation | Supporting compliance in a cloud environment |
CN102572094B (en) * | 2011-09-20 | 2015-11-25 | 广州飒特红外股份有限公司 | Mobile phone is utilized to control the system and method for thermal infrared imager |
US9721036B2 (en) | 2012-08-14 | 2017-08-01 | Microsoft Technology Licensing, Llc | Cooperative web browsing using multiple devices |
- 2012-10-30: US US13/663,720 patent/US9264478B2/en, status: Active
- 2013-10-24: WO PCT/US2013/066473 patent/WO2014070561A1/en, status: Application Filing
- 2015-12-31: US US14/985,945 patent/US20160112259A1/en, status: Abandoned
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7167257B2 (en) * | 2000-01-27 | 2007-01-23 | Canon Kabushiki Kaisha | Method and apparatus for controlling image output on media of different output devices |
US20130194510A1 (en) * | 2010-03-22 | 2013-08-01 | Amimon Ltd | Methods circuits devices and systems for wireless transmission of mobile communication device display information |
US20130222419A1 (en) * | 2012-02-24 | 2013-08-29 | Jonathan Rosenberg | Video Calling |
US20140244804A1 (en) * | 2012-09-28 | 2014-08-28 | Zhiwei Ying | Processing video data in a cloud |
US20140122729A1 (en) * | 2012-10-30 | 2014-05-01 | Microsoft Corporation | Home cloud with virtualized input and output roaming over network |
US9264478B2 (en) * | 2012-10-30 | 2016-02-16 | Microsoft Technology Licensing, Llc | Home cloud with virtualized input and output roaming over network |
US9232433B2 (en) * | 2013-12-20 | 2016-01-05 | Cisco Technology, Inc. | Dynamic coding for network traffic by fog computing node |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10595059B2 (en) * | 2011-11-06 | 2020-03-17 | Akamai Technologies, Inc. | Segmented parallel encoding with frame-aware, variable-size chunking |
US9721036B2 (en) | 2012-08-14 | 2017-08-01 | Microsoft Technology Licensing, Llc | Cooperative web browsing using multiple devices |
US10970355B2 (en) | 2012-08-14 | 2021-04-06 | Microsoft Technology Licensing, Llc | Cooperative web browsing using multiple devices |
Also Published As
Publication number | Publication date |
---|---|
US9264478B2 (en) | 2016-02-16 |
US20140122729A1 (en) | 2014-05-01 |
WO2014070561A1 (en) | 2014-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9264478B2 (en) | Home cloud with virtualized input and output roaming over network | |
Lu et al. | Virtualized screen: A third element for cloud–mobile convergence | |
US20170277808A1 (en) | Cooperative Web Browsing Using Multiple Devices | |
US8438492B2 (en) | Apparatus and method for providing user interface service in a multimedia system | |
CN111221491A (en) | Interaction control method and device, electronic equipment and storage medium | |
JP6322834B2 (en) | Video chat data processing | |
KR101646958B1 (en) | Media encoding using changed regions | |
US8892633B2 (en) | Apparatus and method for transmitting and receiving a user interface in a communication system | |
US20130324099A1 (en) | System and Method for Running Mobile Devices in the Cloud | |
WO2019164753A1 (en) | Efficient streaming video for static video content | |
CN104685873B (en) | Encoding controller and coding control method | |
CN102770827B (en) | Method for showing multimedia content on the screen of terminal | |
KR102199270B1 (en) | System for cloud streaming service, method of cloud streaming service based on still image and apparatus for the same | |
CN113826074B (en) | Adaptive real-time communication plug-in for virtual desktop infrastructure solutions | |
JP2008040347A (en) | Image display device, image display method, and image display program | |
CN107005731B (en) | Image cloud end streaming media service method, server and system using application codes | |
KR20200115314A (en) | User interface screen recovery method in cloud streaming service and apparatus therefor | |
Tamm et al. | Plugin free remote visualization in the browser | |
KR20160087226A (en) | System for cloud streaming service, method of image cloud streaming service considering terminal performance and apparatus for the same | |
JP2010119030A (en) | Communication device, communication method, and communication program | |
WO2022252842A1 (en) | Media file transmission method and apparatus | |
KR20160044732A (en) | System for cloud streaming service, method of cloud streaming service based on still image and apparatus for the same | |
KR102225610B1 (en) | System for cloud streaming service, method of message-based image cloud streaming service and apparatus for the same | |
CN116781918A (en) | Data processing method and device for web page real-time communication and display equipment | |
KR20210027342A (en) | System for cloud streaming service, method of message-based image cloud streaming service and apparatus for the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: MICROSOFT CORPORATION; REEL/FRAME: 037391/0045; Effective date: 20141014 |
| AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: HON, HSIAO-WUEN; LI, SHIPENG; LU, YAN; AND OTHERS; SIGNING DATES FROM 20121007 TO 20121013; REEL/FRAME: 037390/0982 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |