US20070174429A1 - Methods and servers for establishing a connection between a client system and a virtual machine hosting a requested computing environment


Info

Publication number
US20070174429A1
US20070174429A1 (application US11/552,315)
Authority
US
United States
Prior art keywords
machine
client system
client
computing environment
virtual machine
Prior art date
Legal status
Abandoned
Application number
US11/552,315
Inventor
Richard James Mazzaferri
David Neil Robinson
Current Assignee
Citrix Systems Inc
Original Assignee
Citrix Systems Inc
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed (global patent litigation data by Darts-ip, https://patents.darts-ip.com/?family=40572801, licensed under a Creative Commons Attribution 4.0 International License)
Application filed by Citrix Systems Inc
Priority to US11/552,315
Assigned to CITRIX SYSTEMS, INC. Assignment of assignors' interest. Assignors: MAZZAFERRI, RICHARD JAMES; ROBINSON, DAVID NEIL
Priority to PCT/US2007/060963 (WO2007087558A2)
Priority to EP07762438A (EP1977317A1)
Priority to CN2007800104850A (CN101410803B)
Priority to CA002637980A (CA2637980A1)
Priority to EP11161963A (EP2369479A3)
Priority to AU2007208093A (AU2007208093A1)
Priority to EP11161966A (EP2375328A3)
Priority to BRPI0707220-1A (BRPI0707220A2)
Publication of US20070174429A1
Priority to IL192910A
Legal status: Abandoned (current)

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74Browsing; Visualisation therefor
    • G06F16/748Hypervideo
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/52Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow
    • G06F21/53Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems during program execution, e.g. stack integrity ; Preventing unwanted data erasure; Buffer overflow by executing in a restricted environment, e.g. sandbox or secure virtual machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/629Protecting access to data via a platform, e.g. using keys or access control rules to features or functions of an application
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1415Digital output to display device ; Cooperation and interconnection of the display device with other functional units with means for detecting differences between the image stored in the host and the images displayed on the displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display
    • G06F3/1438Digital output to display device ; Cooperation and interconnection of the display device with other functional units controlling a plurality of local displays, e.g. CRT and flat panel display using more than one graphics controller
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F3/1462Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay with means for detecting differences between the image stored in the host and the images displayed on the remote displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44Arrangements for executing specific programs
    • G06F9/455Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F9/45533Hypervisors; Virtual machine monitors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5055Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering software capabilities, i.e. software resources associated or available to the machine
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5077Logical partitioning of resources; Management or configuration of virtualized resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083Techniques for rebalancing the load in a distributed system
    • G06F9/5088Techniques for rebalancing the load in a distributed system involving task migration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/006Details of the interface to the display terminal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/02Network architectures or network communication protocols for network security for separating internal from external traffic, e.g. firewalls
    • H04L63/0227Filtering policies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/04Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/102Entity profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/105Multiple levels of security
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/08Protocols specially adapted for terminal emulation, e.g. Telnet
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/14Session management
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/14Session management
    • H04L67/141Setup of application sessions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/2866Architectures; Arrangements
    • H04L67/30Profiles
    • H04L67/303Terminal profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/51Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/563Data redirection of data network streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/564Enhancement of application control based on intercepted application data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/59Providing operational support to end devices by off-loading in the network or by emulation, e.g. when they are unavailable
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/541Client-server
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2221/00Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/21Indexing scheme relating to G06F21/00 and subgroups addressing additional information or applications relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F2221/2149Restricted operating environment
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2354/00Aspects of interface with display user
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/16Use of wireless transmission of display information
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/22Detection of presence or absence of input display information or of connection or disconnection of a corresponding information source
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/06Network architectures or network communication protocols for network security for supporting key management in a packet data network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/24Negotiation of communication capabilities

Definitions

  • the invention generally relates to providing access to computing environments. More particularly, the invention relates to methods and systems for establishing a connection between a client system and a virtual machine hosting a requested computing environment.
  • Contemporary computer networks consist of a number of computer systems communicating with other computer systems via communication links.
  • Some of these computer systems are client machines and other systems are server machines.
  • a server machine may host a variety of application programs that can be accessed and executed by client machines.
  • When a client machine launches an application program, the execution of that application program can occur at either the client machine or the server machine, depending upon the computing model followed by the computer network.
  • the server machine executes a virtual machine, which executes the application program and provides output data to the client machine.
  • client machines may be unaware of the application programs and resources available for use on the server machines. In fact, client machines may not even be aware of each available server machine on the network. Additionally, in environments in which a virtual machine provides access to a resource for the client machine, the virtual machine may be relocated from one server machine to another server. In other environments in which a virtual machine provides access to a resource for the client machine, the client machine may not know that a virtual machine provides access to the application program. To find available application programs on a particular server machine, a user of the client machine may need to find and gain access to that server machine and perform a directory listing of the files existing on that server machine. Even then, this listing might not indicate to the user those applications which the user is authorized to use.
  • a method for identifying and providing access to virtualized resources available to a user of the client machine, including application programs, desktop environments, and other computing environments provided via virtual machines executing on server machines, would be desirable.
  • An array of inexpensive physical machines may be partitioned into multiple virtual machines, creating a virtual PC for each user.
  • the physical machines may be servers such as rack-mount servers, blade servers, or stand-alone servers.
  • the physical machines may also be workstations or workstation blades or personal computers.
  • a policy-based dynamic deployment system provisions the virtual machines and associates each virtual machine with an execution machine (i.e., a physical machine) and a user.
  • Centralized hosting provides the manageability of server-based computing while the dedicated environment provides the flexibility and compatibility with applications that a desktop PC enables.
  • because the system is implemented in software, rather than being dependent on hardware, the system has a much lower total cost of ownership.
  • the hardware lifecycle may be extended by increasing the amount of hardware resources assigned to virtual machines as computational demands increase over time. Additionally, the use of virtualization eases the difficulty in dealing with multiple OS images.
  • machines are configured to run multiple copies of one or more operating systems (e.g. different versions/releases of WINDOWS from Microsoft Corporation).
  • Users transmit requests for access to computing resources to the deployment system, which may use a configuration policy to decide how (with what physical and/or virtual resources) and where (on which physical machine in the machine farm and on which virtual machine) to provide access to the requested computing resource.
  • the virtual machine can be created on demand, and the requested software resource may be downloaded and installed in the virtual machine as required.
  • the virtual machine may be pre-configured with a plurality of software and/or virtual hardware resources to provide a particular computing environment to the user.
  • the user request is directed to the selected, configured virtual machine and a remote display connection is established between the virtual machine and a remote display client on the user's access device, which will be referred to generally as a “client machine.”
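
For illustration only, the request flow just described might be sketched as follows: a deployment system receives a user's request for a computing environment image, reuses or provisions a virtual machine for it, and returns the details the remote display client needs to connect. All class names, the address, and the port number are assumptions made for this sketch, not details taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class VirtualMachine:
    vm_id: str
    image: str          # computing environment image hosted by the VM
    address: str        # address the remote display client will connect to
    port: int = 3389    # assumed RDP-style port; an ICA listener would differ

class DeploymentSystem:
    """Sketch of a deployment system mapping user requests to virtual machines."""
    def __init__(self):
        self._vms: dict[tuple[str, str], VirtualMachine] = {}

    def handle_request(self, user: str, image: str) -> dict:
        key = (user, image)
        vm = self._vms.get(key)
        if vm is None:
            # On-demand provisioning: create/configure a VM for this user and image.
            vm = VirtualMachine(vm_id=f"{user}-{image}", image=image,
                                address="10.0.0.17")
            self._vms[key] = vm
        # The client uses these details to open a presentation-layer session.
        return {"protocol": "rdp", "address": vm.address, "port": vm.port}

broker = DeploymentSystem()
print(broker.handle_request("alice", "office-desktop"))
```
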
  • Devices such as CD-ROM drives, floppy drives, USB drives, and other similar devices that are connected to the client machine are made remotely accessible to the virtual machine, thereby allowing the use of these devices in a manner similar to a standard desktop computer.
  • a deployment system may manage a pool of virtual machines (a machine farm) to which new virtual machines can be added on demand.
  • a plurality of software modules including a session management component and a virtual machine management component may provide management functionality.
  • Executing virtual machines may be migrated from one physical machine to another, under control of the deployment system, to provide load balancing or to facilitate hardware maintenance.
  • Inactive virtual machines may be suspended to free physical computing resources.
  • Active virtual machines may be migrated from one physical machine to another to consolidate them onto a smaller number of physical machines, allowing the unused physical machines to be shut down to save power during off-peak periods, or to free the physical resource to be re-assigned for a different purpose (e.g., processing web requests).
  • Suspended virtual machines may be resumed prior to users requiring access. This can be done manually or automatically via policies or preferences or through a learning process by monitoring a user's behavior over time.
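
A minimal sketch of such a suspend/resume policy, assuming an in-memory model of virtual machines and a learned "usual login hour" per user (both assumptions for illustration, not part of the specification):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ManagedVM:
    owner: str
    state: str = "running"          # "running" or "suspended"
    last_activity: datetime = datetime.min
    usual_login_hour: int = 9       # assumed to be learned from observed behaviour

def apply_lifecycle_policy(vms: list, now: datetime,
                           idle_limit: timedelta = timedelta(hours=2)) -> None:
    """Suspend idle VMs to free physical resources; resume a VM shortly
    before its owner's usual login time."""
    for vm in vms:
        if vm.state == "running" and now - vm.last_activity > idle_limit:
            vm.state = "suspended"
        elif vm.state == "suspended" and now.hour == vm.usual_login_hour - 1:
            vm.state = "running"     # pre-resume ahead of expected use

vms = [ManagedVM("bob", last_activity=datetime(2024, 1, 1, 3, 0))]
apply_lifecycle_policy(vms, now=datetime(2024, 1, 1, 5, 0))
print(vms[0].state)                  # -> suspended
```
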
  • Performance requirements of the requested resource may be considered when allocating computing resources to virtual machines.
  • a financial analysis package may require twice as many CPU resources as a generic productivity application, such as those included in MICROSOFT OFFICE, manufactured by Microsoft Corporation of Redmond, Wash.
  • a virtual machine providing the financial analysis package may execute on a physical machine determined to have sufficient spare computational capacity, or existing virtual machines may be relocated to other available physical machines to ensure sufficient available capacity on a particular physical machine.
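
One way to picture capacity-aware placement is the sketch below, assuming illustrative relative CPU weights (a financial-analysis package weighted at twice a generic productivity application, as in the example above) and a simple "most spare capacity" selection rule; the patent does not prescribe these weights or this heuristic.

```python
# Sketch of capacity-aware placement. Weights and machine names are assumptions.
CPU_WEIGHT = {"generic-office": 1, "financial-analysis": 2}

def place(resource: str, hosts: dict[str, int]) -> str | None:
    """hosts maps host name -> spare CPU units; returns the chosen host."""
    need = CPU_WEIGHT.get(resource, 1)
    fitting = {h: spare for h, spare in hosts.items() if spare >= need}
    if not fitting:
        return None                      # a migration/rebalance would be needed
    chosen = max(fitting, key=fitting.get)
    hosts[chosen] -= need                # reserve capacity on the chosen host
    return chosen

hosts = {"rack-a": 1, "rack-b": 3}
print(place("financial-analysis", hosts))   # -> rack-b
print(hosts)                                 # rack-b's spare capacity reduced by 2
```
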
  • Each user is provided a separate virtual machine environment, which provides increased flexibility in that each user may run any version or configuration of an operating system independently of other users and also allows users to run potentially dangerous or destabilizing applications with little risk of affecting other users. This is particularly useful for developers/testers/information technology personnel who frequently need to re-install and modify the operating system and run potentially destabilizing applications.
  • Since sharing of computing resources and CPU scheduling occurs outside of the virtual machine environment, users can run computing-resource-intensive applications with no risk of affecting other users. Virtual machines also provide increased security isolation between users. Because each user is running a separate copy of the OS, there is much less chance of security breaches and virus infections crossing the boundaries between users than in the shared-OS case.
  • a solution is also provided for problems that arise from a situation where, in a hardware-based system of machines, the hardware is mixed, whether due to an initial purchasing decision or due to the acquisition of different types of physical machines over time. Even if initially all of the hardware was uniform, purchasing additional hardware to replace failing modules and increasing the capacity typically leads to non-uniform hardware throughout a machine farm. Even if all hardware is purchased from the same vendor, it is likely that the hardware purchased later will use different chipsets and components, and will require different drivers.
  • Non-uniform hardware has traditionally translated into the need to maintain multiple versions of the operating system images (which means higher costs) and limits flexibility of moving users between machines—because the operating system image may be incompatible—which also translates into higher cost. Virtual machines allow efficient use of the same operating system image even in a hardware farm that includes heterogeneous machines. The use of the same operating system image helps to significantly reduce the management cost.
  • Adding remote display capability (e.g., presentation layer protocols such as ICA, RDP, or X11) to virtualization techniques allows virtualization to be used for interactive computing.
  • Hosting multiple virtual machines on an execution machine allows better utilization of the available physical computing resources (e.g., space, power, processing power, processing capacity, RAM, and bandwidth), thereby lowering costs.
  • the use of virtualization also allows hardware to be updated and maintained independently of OS version and specific device drivers hosted in the operating systems or virtual machines. Additionally, virtual machines enhance system security by isolating computing environments from each other.
  • a method for providing access to a computing environment includes the step of receiving a request from a client system for an enumeration of available computing environments. Collected data regarding available computing environments are accessed. Accessed data indicating to a client system each computing environment available to a user of the client system are transmitted to the client system. A request to access one of the computing environments is received from the client system. A connection is established between the client system and a virtual machine hosting the requested computing environment.
  • the accessed data transmitted to the client system are displayable at the client system as icons in a graphical user interface window representing computing environments available to a user of the client system.
  • the accessed data transmitted to the client system are displayable at the client system as icons in a graphical user interface window representing computing environments unavailable to a user of the client system.
  • the connection between the client system and the virtual machine is established using a presentation layer protocol.
  • user credentials are received from the client system.
  • the accessed data are transmitted to the client system responsive to receiving the user credentials.
  • the user of the client system is authenticated based on the received user credentials and access is provided to a selected one of the available computing environment images without requiring further input of user credentials by a user of the client system.
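
A single-sign-on behaviour of this kind is often realized with a short-lived launch ticket: credentials are validated once, and the client later presents the ticket instead of re-entering credentials. The sketch below assumes such a ticketing scheme purely for illustration; it is not the mechanism mandated by the specification, and the password check is a placeholder.

```python
import secrets
import time

TICKETS: dict[str, tuple[str, float]] = {}      # ticket -> (user, expiry time)

def authenticate(user: str, password: str) -> bool:
    return password == "correct-horse"           # placeholder check for the sketch

def issue_ticket(user: str, ttl: float = 60.0) -> str:
    ticket = secrets.token_urlsafe(16)
    TICKETS[ticket] = (user, time.time() + ttl)
    return ticket

def redeem_ticket(ticket: str) -> str | None:
    entry = TICKETS.pop(ticket, None)
    if entry and entry[1] > time.time():
        return entry[0]                          # connect this user without re-prompting
    return None

if authenticate("alice", "correct-horse"):
    t = issue_ticket("alice")
    print(redeem_ticket(t))                      # -> alice
```
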
  • information is gathered about the client system and a data set is generated from the gathered information.
  • the accessed data are transmitted to the client system indicating, responsive to the generated data set, each computing environment available to the client system.
  • the accessed data are transmitted to the client system indicating, responsive to an application of a policy to the generated data set, each computing environment available to the client system.
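
The enumeration step can be pictured as filtering a catalogue of computing environments by the user's entitlements and by a policy applied to the data set gathered about the client system. The environment names, group entitlements, and bandwidth rule below are illustrative assumptions.

```python
# Sketch of policy-filtered enumeration of available computing environments.
ENVIRONMENTS = [
    {"name": "engineering-desktop", "groups": {"engineering"}, "min_bandwidth": 512},
    {"name": "finance-desktop", "groups": {"finance"}, "min_bandwidth": 1024},
    {"name": "kiosk-browser", "groups": {"engineering", "finance"}, "min_bandwidth": 128},
]

def enumerate_environments(user_groups: set[str], client_info: dict) -> list[str]:
    """Return the names of environments available to this user on this client."""
    available = []
    for env in ENVIRONMENTS:
        if not (env["groups"] & user_groups):
            continue                                   # user not entitled
        if client_info.get("bandwidth_kbps", 0) < env["min_bandwidth"]:
            continue                                   # client fails the policy check
        available.append(env["name"])
    return available

print(enumerate_environments({"engineering"}, {"bandwidth_kbps": 600}))
# -> ['engineering-desktop', 'kiosk-browser']
```
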
  • a web server receives a request from a client system for an enumeration of available computing environments.
  • a page template is retrieved from a persistent storage, the web server creates a page describing a display of computing environment images available to the client system, and the created page is transmitted to the client system.
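
A minimal sketch of this web-server step, assuming a simple HTML page template; the markup and URL scheme are assumptions made for the example only.

```python
from string import Template

# Stored page template; in practice this would be retrieved from persistent storage.
PAGE_TEMPLATE = Template(
    "<html><body><h1>Your computing environments</h1><ul>$items</ul></body></html>"
)

def build_page(available: list[str]) -> str:
    """Fill the template with one launch link per available environment."""
    items = "".join(f"<li><a href='/launch/{name}'>{name}</a></li>"
                    for name in available)
    return PAGE_TEMPLATE.substitute(items=items)

print(build_page(["engineering-desktop", "kiosk-browser"]))
```
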
  • a server, in a network including a client system and a plurality of servers storing computing environments, includes a broker module, a transmitter, a receiver, and a transceiver.
  • the broker module accesses collected data regarding computing environments and determines, for each computing environment, whether that computing environment image is available to a client system.
  • the transmitter sends accessed data to the client system indicating to the client system each computing environment determined to be available to the client system.
  • the receiver receives a request to access one of the available computing environments.
  • the transceiver provides a connection between the client system and a virtual machine providing the requested computing environment.
  • the receiver receives user credentials from the client system.
  • the server further comprises a database storing the collected data.
  • the broker module determines for each computing environment whether that computing environment image is available to a client system based on the user credentials and the collected data.
  • the server further comprises an output display creation engine creating output displays indicating each computing environment available to the client system.
  • the output display creation engine creates a web page describing a display of the computing environments available to a client system, the web page created responsive to the collected information and a web page template.
  • the transceiver provides a connection between the client system and a virtual machine providing the requested computing environment by establishing a presentation layer protocol connection.
  • FIG. 1 is a block diagram of one embodiment of an environment in which a client machine accesses a computing resource provided by a remote machine;
  • FIGS. 1A and 1B are block diagrams depicting embodiments of typical computers useful in embodiments with remote machines or client machines;
  • FIG. 2A is a block diagram of a system for providing access to a resource
  • FIG. 2B is a block diagram of one embodiment of a system in which a client machine can initiate execution of an application program for determining the resource neighborhood of that client machine;
  • FIG. 2C is a block diagram of an embodiment in which a client machine uses a web browser application to determine its resource neighborhood;
  • FIGS. 3A, 3B, and 3C are block diagrams of embodiments of systems of communication among a client machine and multiple remote machines;
  • FIG. 3D is a block diagram of one embodiment of a system in which a client machine can access a resource from a resource neighborhood web page displayed at that client machine;
  • FIG. 3E is a block diagram of one embodiment of a system in which a remote machine acts as an intermediary for a machine farm;
  • FIG. 4 is a block diagram of one embodiment of a resource neighborhood application in which a client machine is in communication with one of the remote machines;
  • FIG. 5 is a block diagram of a computing embodiment in which a client machine is in communication with a remote machine having an installed resource neighborhood application program of the invention
  • FIG. 6A is a screen shot of an embodiment of a display of a client machine after a resource neighborhood application program is executed
  • FIG. 6B is a screen shot of another embodiment of a display screen of a client machine after the resource neighborhood application program is executed;
  • FIG. 7A is a block diagram of an embodiment of a network providing policy-based access to application programs for a machine
  • FIG. 7B is a block diagram depicting a more detailed embodiment of a policy engine
  • FIG. 8 is a flowchart depicting one embodiment of a process for providing access to a resource
  • FIG. 9 is a flow diagram depicting one embodiment of a process for electing a management node
  • FIG. 10 is a flow diagram depicting one embodiment of a process to update information collected by the management node
  • FIG. 11 is a block diagram depicting an embodiment of a machine farm including first and second network management processes
  • FIG. 12 is a block diagram depicting one embodiment of a virtual machine management component
  • FIG. 13 is a block diagram depicting one embodiment of a session management component
  • FIG. 14 is a block diagram depicting one embodiment of a system in which a drive associated with the client machine 10 is made available to a computing environment;
  • FIG. 15A is a block diagram depicting one embodiment of a client machine supporting multiple client machine display devices
  • FIG. 15B is a block diagram depicting one embodiment of a system for supporting multiple client machine display devices
  • FIG. 15C is a block diagram depicting one embodiment of a session login mechanism providing support for multiple client machine display devices
  • FIG. 16A is a flow diagram depicting one embodiment of the steps to be taken to provide a desired display layout to a client machine having multiple display devices;
  • FIG. 16B is a flow diagram depicting one embodiment of a process to modify a window message
  • FIG. 16C is a flow diagram depicting one embodiment of the steps taken to associate a display layout with a client machine
  • FIG. 16D is a flow diagram depicting one embodiment of the steps taken to change a desired display layout for a client machine
  • FIG. 17 is a block diagram depicting one embodiment of a system in which a remote machine authenticates the user of a client machine
  • FIG. 18 is a flow diagram depicting one embodiment of the steps taken to access a plurality of files comprising an application program
  • FIG. 19 is a block diagram depicting one embodiment of a client machine 10 including an application streaming client, a streaming service and an isolation environment;
  • FIG. 20 is a flow diagram depicting one embodiment of steps taken by a client machine to execute an application
  • FIG. 21 is a block diagram depicting one embodiment of a plurality of application files
  • FIG. 22A is a flow diagram depicting one embodiment of the steps taken to enable transparent distributed program execution on a remote machine through the selection of graphical indicia representative of a data file located on the client machine;
  • FIG. 22B is a flow diagram depicting one embodiment of the steps taken by a remote machine to enable transparent distributed program execution on a remote machine through the selection of graphical indicia representative of a data file located on the client machine;
  • FIG. 23 is a flow diagram depicting another embodiment of the steps taken to enable transparent distributed program execution on a client machine through the selection of graphical indicia representative of a data file located on a remote machine;
  • FIG. 24 is a flow diagram depicting one embodiment of the steps taken to negotiate the protocol for a connection between a client machine and a remote machine;
  • FIG. 25 is a block diagram depicting an embodiment of a remote machine and a client machine establishing a protocol stack for communication
  • FIG. 26 is a block diagram depicting one embodiment of a client machine architecture
  • FIG. 27 is a block diagram depicting one embodiment of communication between a client machine and a machine farm
  • FIG. 28 is a block diagram depicting one embodiment of a client machine architecture
  • FIG. 29 is a flow diagram depicting one embodiment of the steps taken to display application output in a web page
  • FIG. 30 is a flow diagram depicting one embodiment of the steps taken to link to a virtual machine identified by a hyperlink configuration file
  • FIG. 31 is a block diagram depicting an embodiment of a system architecture in which a multiplexer is used to transmit data to more than one client machine;
  • FIG. 32 is a block diagram depicting another embodiment of a system architecture in which a multiplexer is used to transmit data to more than one client machine;
  • FIG. 33 is a block diagram depicting one embodiment of an architecture for displaying application output in a web page
  • FIG. 34 is a block diagram depicting another embodiment of an architecture for displaying application output in a web page
  • FIG. 35 is a block diagram depicting another embodiment of an architecture for displaying application output in a web page
  • FIG. 36 is a block diagram depicting another embodiment of an architecture for displaying application output in a web page
  • FIG. 37 is a block diagram depicting one embodiment of a client machine receiving window attribute data via a virtual channel
  • FIG. 38 is a block diagram depicting a client machine connected to more than one remote machine
  • FIG. 39 is a flow diagram depicting one embodiment of the steps taken to detect and transmit server-initiated display changes
  • FIG. 40 is a flow diagram depicting one embodiment of the steps taken to detect and transmit client-initiated display changes
  • FIG. 41 is a flow diagram depicting one embodiment for enabling transmission of seamless windows between a client machine and a remote machine
  • FIG. 42 is a block diagram depicting one embodiment of an agent
  • FIG. 43 is a block diagram depicting one embodiment of a system for enabling seamless windowing mode between a client machine and remote computing environments
  • FIG. 44 is a flow diagram depicting one embodiment of the steps taken in a method of receiving window attribute data and graphical data associated with remote windows from virtualized operating systems and from native operating systems;
  • FIG. 45 is a block diagram of a system for providing a client with a reliable connection to a host service according to an embodiment of the invention.
  • FIG. 46 is a block diagram of a system for providing a client with a reliable connection to a host service according to another embodiment of the invention.
  • FIG. 47 depicts communications occurring over a network according to an embodiment of the invention.
  • FIG. 48 depicts communications occurring over a network according to another embodiment of the invention.
  • FIG. 49 depicts a process for encapsulating a plurality of secondary protocols within a first protocol for communication over a network according to an embodiment of the invention
  • FIG. 50 is a block diagram of an embodiment of a computer system to maintain authentication credentials in accordance with the invention.
  • FIG. 51 is a flow diagram of the steps followed in an embodiment of the computer system of FIG. 50 to maintain authentication credentials during a first communication session in accordance with the invention
  • FIG. 52 is a flow diagram of the steps followed in an embodiment of the computer system of FIG. 50 to maintain authentication credentials during a second communication session following the termination of the first communication session of FIG. 53A in accordance with the invention
  • FIG. 53 is a block diagram of an embodiment of a computer system to maintain authentication credentials in accordance with another embodiment of the invention.
  • FIG. 54 is a flow diagram of the steps followed in an embodiment of the computer system of FIG. 53 to maintain authentication credentials during a first communication session in accordance with the invention
  • FIG. 55 is a flow diagram of the steps followed in an embodiment of the computer system of FIG. 53 to maintain authentication credentials during a second communication session following the termination of the first communication session of FIG. 53 in accordance with the invention
  • FIG. 56 is a flow diagram of the steps followed in an embodiment of the computer system of FIG. 53 to maintain authentication credentials during a second communication session following the termination of a second communication channel of the first communication session of FIG. 53 in accordance with the invention
  • FIG. 57 is a block diagram of a system to maintain authentication credentials and provide a client with a reliable connection to a host service according to an embodiment of the invention
  • FIG. 58 is a block diagram of a system to maintain authentication credentials and provide a client with a reliable connection to a host service according to another embodiment of the invention.
  • FIG. 59 is a block diagram of a system to maintain authentication credentials and provide a client with a reliable connection to a host service according to another embodiment of the invention.
  • FIG. 60 is a block diagram of a system to maintain authentication credentials and provide a client with a reliable connection to a host service according to another embodiment of the invention.
  • FIG. 61 is a block diagram of a system for providing a client with a reliable connection to a host service and further including components for reconnecting the client to a host service according to an embodiment of the invention
  • FIG. 62 is a block diagram of an embodiment of a system for providing a client with a reliable connection to a host service and further including components for reconnecting the client to a host service;
  • FIG. 63 is a block diagram of an embodiment of FIG. 61 further including components for initially connecting the client to a host service;
  • FIG. 64 is a block diagram of the system of FIG. 62 further including components for initially connecting the client to a host service and to maintain authentication credential according to an embodiment of the invention
  • FIG. 65 is a flow diagram of a method for network communications according to an embodiment of the invention.
  • FIG. 66 is a flow diagram of a method for reconnecting the client to the host services
  • FIGS. 67-69 are flow diagrams of a method for connecting a client to a plurality of host services according to an embodiment of the invention.
  • FIG. 70 is a flow diagram of a method for providing a client with a reliable connection to host services and for reconnecting the client to the host services according to an embodiment of the invention
  • FIGS. 71-72 are flow diagrams of a method for reconnecting a client to host services according to an embodiment of the invention.
  • FIG. 73 is a conceptual block diagram of an embodiment of client software and server software
  • FIG. 74 is a flow chart of an embodiment of a method for monitoring network performance
  • FIG. 75 is a flow chart of an embodiment of a method of operation of the server software
  • FIG. 76 is a flow chart of an embodiment of a method of generating sub-metrics by the client
  • FIG. 77 is a flow chart of an embodiment of a method of generating sub-metrics by the client
  • FIG. 78 is a flow chart of an embodiment of a method of generating sub-metrics by the server
  • FIG. 79 is a schematic diagram depicting a networked client-server computing system
  • FIG. 80 is a flow chart depicting a method for connecting a client machine to disconnected application sessions
  • FIG. 81 is a flow chart depicting one embodiment of a method for connecting the client machine to active application sessions
  • FIG. 82 is a schematic diagram depicting one embodiment of a client machine in communication with several remote machines
  • FIG. 83 is a flow diagram depicting one embodiment of steps taken in a method to connect a user of a client machine to a computing environment
  • FIG. 84 is a flow diagram depicting an embodiment of steps taken in a method to connect a user of a client machine to a computing environment in response to selection of a graphical user interface element;
  • FIG. 85 is a block diagram depicting one embodiment of a remote machine able to connect the client machine to an application session
  • FIG. 86 is a block diagram of an embodiment of a system for connecting a client machine to an application session responsive to application of a policy
  • FIG. 87 is a flow diagram depicting the steps taken in one method to connect a client machine to an application session responsive to application of a policy
  • FIG. 88 is a block diagram depicting one embodiment of a system for providing, by a virtual machine, access to a computing environment
  • FIG. 89A is a block diagram depicting one embodiment of a storage device and a computing device
  • FIG. 89B is a flow diagram depicting one embodiment of the steps taken in a method for providing access to a computing environment on a computing device via a storage device;
  • FIG. 90A is a block diagram depicting one embodiment of a mobile computing device
  • FIG. 90B is a flow diagram depicting one embodiment of the steps taken in a method for providing a portable computing environment by a mobile computing device
  • FIG. 91A is a block diagram of one embodiment of a mobile computing device and a computing device
  • FIG. 91B is a flow diagram depicting one embodiment of the steps taken in a method for providing access to a computing environment on a computing device via a mobile computing device;
  • FIG. 92A is a block diagram depicting one embodiment of a mobile computing device and a computing device comprising a computing environment selector;
  • FIG. 92B is a flow diagram depicting an embodiment of the steps taken in a method for establishing a computing environment on a computing device via a mobile computing device;
  • FIG. 93A is a block diagram depicting one embodiment of a mobile computing device connecting to a docking station
  • FIG. 93B is a block diagram depicting one embodiment of a docking station connecting a mobile computing device and a computing device;
  • FIG. 93C is a block diagram depicting one embodiment of a mobile computing device and computing device having a docking mechanism
  • FIG. 93D is a flow diagram depicting one embodiment of the steps taken in a method of providing to a mobile computing device one or more hardware resources;
  • FIG. 94A is a block diagram depicting one embodiment of a mobile computing device having a plurality of processors
  • FIG. 94B is a flow diagram depicting one embodiment of the steps taken in a method for switching, by a mobile computing device, between use of multiple processors;
  • FIG. 95 is a block diagram depicting one embodiment of a system for providing to a first client agent, via a second client agent on a first remote machine, output data generated by a resource executing in a virtual machine provided by a second remote machine;
  • FIG. 96 is a block diagram depicting an embodiment of a system for providing to a first client agent, via a second client agent on a first remote machine, output data generated by a resource executing in a virtual machine provided by a second remote machine;
  • FIG. 97 is a block diagram depicting one embodiment of a system for identifying, by a coordinator machine, a worker machine providing, via a virtual machine, access to a computing environment.
  • Referring to FIG. 1, a block diagram of one embodiment of an environment in which a client machine 10, 10′ accesses a computing resource provided by a remote machine 30, 30′, 30′′, 30′′′ is shown.
  • a remote machine 30, such as remote machine 30, 30′, 30′′, or 30′′′ (hereafter referred to generally as remote machine 30), accepts connections from a user of a client machine 10.
  • the system may provide multiple ones of any or each of those components.
  • the system may include multiple, logically-grouped remote machines 30, one or more of which is available to provide a client machine 10, 10′ access to computing resources.
  • the logical group of remote machines may be referred to as a “server farm” or “machine farm,” indicated in FIG. 1A as machine farm 38.
  • the remote machines 30 may be geographically dispersed.
  • the group of remote machines 30 logically grouped as a machine farm 38 may be interconnected using a wide-area network (WAN) connection, a metropolitan-area network (MAN) connection, a local-area network (LAN) connection, a storage-area network (SAN) connection, or a public network such as the Internet.
  • a machine farm 38 may include remote machines 30 physically located in geographically diverse locations around the world, including different continents, regions of a continent, countries, regions of a country, states, regions of a state, cities, regions of a city, campuses, regions of a campus, or rooms. Data transmission speeds between remote machines 30 in the machine farm 38 can be increased if the remote machines 30 are connected using a local-area network (LAN) connection or some form of direct connection.
  • a machine farm 38 may be administered as a single entity.
  • a centralized service may provide management for machine farm 38 .
  • one or more remote machines 30 elect a particular remote machine 30 to provide management functionality for the farm.
  • the elected remote machine 30 may be referred to as a management server, management node, or management process.
  • the management node 30 may gather and store information about a plurality of remote machines 30 , respond to requests for access to resources hosted by remote machines 30 , and enable the establishment of connections between client machines 10 and remote machines 30 .
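
One common way to realize such an election is for every reachable member to deterministically pick the same machine, for example the live member with the lowest identifier. The sketch below assumes that approach; the specification does not prescribe a particular election algorithm.

```python
# Illustrative election sketch: choose the live member with the lowest identifier.
def elect_management_node(members: dict[str, bool]) -> str | None:
    """members maps machine id -> reachable?; returns the elected id, if any."""
    live = sorted(m for m, reachable in members.items() if reachable)
    return live[0] if live else None

farm = {"remote-30": True, "remote-30b": True, "remote-30c": False}
print(elect_management_node(farm))   # every live member computes the same answer
```
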
  • an administrator designates one or more remote machines 30 to provide management functionality for machine farm 38 .
  • management of the machine farm 38 may be de-centralized.
  • one or more remote machines 30 comprise components, subsystems and modules to support one or more management services for the machine farm 38 .
  • one or more remote machines 30 provide functionality for management of dynamic data, including techniques for handling failover, data replication, and increasing the robustness of the machine farm 38 .
  • one or more remote machines 30 include communications capabilities to enable the one or more remote machines 30 to interact with one another to share responsibility for management tasks.
  • Each remote machine 30 may communicate with a persistent store and, in some embodiments, with a dynamic store.
  • Persistent store may be physically implemented on a disk, disk farm, a redundant array of independent disks (RAID), writeable compact disc, or any other device that allows data to be read and written and that maintains written data if power is removed from the storage device.
  • a single physical device may provide storage for a plurality of persistent stores, i.e., a single physical device may be used to provide the persistent store for more than one machine farm 38 .
  • the persistent store maintains static data associated with each remote machine 30 in machine farm 38 and global data used by all remote machines 30 within the machine farm 38 .
  • the persistent store may maintain the server data in a Lightweight Directory Access Protocol (LDAP) data model.
  • the persistent store stores server data in an ODBC-compliant database.
  • static data refers to data that do not change frequently, i.e., data that change only on an hourly, daily, or weekly basis, or data that never change.
  • the data stored by the persistent store may be replicated for reliability purposes physically or logically.
  • physical redundancy may be provided using a set of redundant, mirrored disks, each providing a copy of the data.
  • the database itself may be replicated using standard database techniques to provide multiple copies of the database.
  • both physical and logical replication may be used concurrently.
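As a rough illustration only, the following Python sketch models a persistent store for static, rarely changing server data; it uses sqlite3 as a stand-in for the ODBC-compliant database mentioned above, and the table layout and field names are assumptions rather than anything specified in this document.

```python
import sqlite3

def create_persistent_store(path=":memory:"):
    # Stand-in persistent store: static server data that changes rarely.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS servers (name TEXT PRIMARY KEY, os TEXT, farm TEXT)"
    )
    return conn

store = create_persistent_store()
with store:
    # Writes are infrequent; reads may be served to any machine in the farm.
    store.execute("INSERT OR REPLACE INTO servers VALUES (?, ?, ?)",
                  ("remote-machine-30", "WINDOWS NT", "machine farm 38"))
print(store.execute("SELECT * FROM servers").fetchall())
```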
  • the remote machines 30 store “static” data, i.e., data that persist across client sessions, in the persistent store. Writing to the persistent store can take relatively long periods of time. To minimize accesses to the persistent store, the remote machines 30 may develop a logical, common database (i.e., the dynamic store) that is accessible by all of the remote machines 30 in the machine farm 38 for accessing and storing some types of data.
  • the dynamic store may be physically implemented in the local memory of a single or multiple remote machines 30 in the machine farm 38 .
  • the local memory can be random access memory, disk, disk farm, a redundant array of independent disks (RAID), or any other memory device that allows data to be read and written.
  • data stored in the dynamic store are data that are typically queried or changed frequently during runtime.
  • Examples of such data are the current workload level for each of the remote machines 30 in the machine farm 38 , the status of the remote machines 30 in the machine farm 38 , client session data, the number of virtual machines supported by a remote machine 30 , the identity of the operating systems supported by a remote machine 30 , and licensing information.
  • the dynamic store comprises one or more tables, each of which stores records of attribute-value pairs. Any number of tables may exist, but each table stores records of only one type. Tables are, in some embodiments, identified by name. Thus, in this embodiment, two remote machines 30 that use the same name to open a table refer to the same logical table.
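To make the table structure concrete, here is a minimal Python sketch of a dynamic store whose named tables hold records of attribute-value pairs; all class, method, and table names are hypothetical and not drawn from the document.

```python
class DynamicStore:
    """Sketch of a dynamic store organized as named tables of attribute-value records."""

    def __init__(self):
        self._tables = {}          # table name -> list of records

    def open_table(self, name):
        # Two machines opening a table by the same name refer to the same logical table.
        return self._tables.setdefault(name, [])

    def insert(self, table_name, record):
        # Each record is a dictionary of attribute-value pairs.
        self.open_table(table_name).append(dict(record))

    def query(self, table_name, **criteria):
        # Return records whose attributes match all given criteria.
        return [r for r in self.open_table(table_name)
                if all(r.get(k) == v for k, v in criteria.items())]

store = DynamicStore()
store.insert("server_load", {"machine": "30", "load": 0.42})
store.insert("server_load", {"machine": "30-prime", "load": 0.17})
print(store.query("server_load", machine="30"))
```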
  • the dynamic store (i.e., the collection of all record tables) can be embodied in various ways.
  • the dynamic store is centralized; that is, all runtime data are stored in the memory of one remote machine 30 in the machine farm 38 .
  • That server operates in a manner similar to the management node described above, that is, all other remote machines 30 in the machine farm 38 communicate with the server acting as the centralized data store when seeking access to that runtime data.
  • each remote machine 30 in the machine farm 38 keeps a full copy of the dynamic store.
  • each remote machine 30 communicates with every other remote machine 30 to keep its copy of the dynamic store up to date.
  • each remote machine 30 maintains its own runtime data and communicates with every other remote machine 30 when seeking to obtain runtime data from them.
  • a remote machine 30 attempting to find an application program requested by the client machine 10 may communicate directly with every other remote machine 30 in the machine farm 38 to find one or more servers hosting the requested application.
  • a collector point is a server that collects run-time data.
  • Each collector point stores runtime data collected from certain other remote machines 30 in the machine farm 38 .
  • Each remote machine 30 in the machine farm 38 is capable of operating as, and consequently is capable of being designated as, a collector point.
  • each collector point stores a copy of the entire dynamic store.
  • each collector point stores a portion of the dynamic store, i.e., it maintains runtime data of a particular data type.
  • the type of data stored by a remote machine 30 may be predetermined according to one or more criteria. For example, remote machines 30 may store different types of data based on their boot order. Alternatively, the type of data stored by a remote machine 30 may be configured by an administrator using administration tool 140 . In these embodiments, the dynamic store is distributed among two or more remote machines 30 in the machine farm 38 .
  • Remote machines 30 not designated as collector points know which remote machines 30 in a machine farm 38 are designated as collector points.
  • a remote machine 30 not designated as a collector point communicates with a particular collector point when delivering and requesting runtime data. Consequently, collector points lighten network traffic because each remote machine 30 in the machine farm 38 communicates with a single collector point remote machine 30 , rather than with every other remote machine 30 , when seeking to access the runtime data.
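A minimal sketch, under assumed class and method names, of the collector-point arrangement described above: each remote machine delivers its runtime data to the single collector point it is assigned to, rather than communicating with every other machine in the farm.

```python
class CollectorPoint:
    """Holds runtime data reported by the machines assigned to it."""

    def __init__(self):
        self._runtime_data = {}    # machine id -> data dict

    def deliver(self, machine_id, data):
        self._runtime_data.setdefault(machine_id, {}).update(data)

    def request(self, machine_id):
        return self._runtime_data.get(machine_id, {})

class RemoteMachine:
    def __init__(self, machine_id, collector):
        self.machine_id = machine_id
        self.collector = collector     # the single collector point this machine talks to

    def publish_load(self, load):
        # Deliver runtime data to the collector point only, which keeps
        # farm-wide traffic lower than all-to-all communication.
        self.collector.deliver(self.machine_id, {"load": load})

collector = CollectorPoint()
worker = RemoteMachine("30", collector)
worker.publish_load(0.35)
print(collector.request("30"))
```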
  • the machine farm 38 can be heterogeneous, that is, one or more of the remote machines 30 can operate according to one type of operating system platform (e.g., WINDOWS NT, manufactured by Microsoft Corp. of Redmond, Wash.), while one or more of the other remote machines 30 can operate according to another type of operating system platform (e.g., Unix or Linux). Additionally, a heterogeneous machine farm 38 may include one or more remote machines 30 operating according to a type of operating system, while one or more other remote machines 30 execute one or more types of hypervisors rather than operating systems. In these embodiments, hypervisors may be used to emulate virtual hardware, partition physical hardware, virtualize physical hardware, and execute virtual machines that provide access to computing environments.
  • Hypervisors may include those manufactured by VMWare, Inc., of Palo Alto, Calif.; the Xen hypervisor, an open source product whose development is overseen by XenSource, Inc., of Palo Alto; the VirtualServer or virtual PC hypervisors provided by Microsoft or others.
  • a hypervisor executes on a machine executing an operating system.
  • a machine executing an operating system and a hypervisor may be said to have a host operating system (the operating system executing on the machine), and a guest operating system (an operating system executing within a computing resource partition provided by the hypervisor).
  • a hypervisor interacts directly with hardware on a machine, instead of executing on a host operating system.
  • the hypervisor may be said to be executing on “bare metal,” referring to the hardware comprising the machine.
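Purely as an illustrative data record, the following sketch captures the distinction drawn above between a hosted hypervisor (which runs on a host operating system and supports one or more guest operating systems) and a hypervisor executing on bare metal; the field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HypervisorDeployment:
    # "hosted": the hypervisor executes on a host operating system;
    # "bare metal": the hypervisor interacts directly with the hardware.
    machine: str
    mode: str                          # "hosted" or "bare metal"
    host_os: Optional[str] = None      # present only for hosted deployments
    guest_oses: Tuple[str, ...] = ()   # operating systems executing in virtual machines

hosted = HypervisorDeployment("remote-machine-30", "hosted",
                              host_os="Linux", guest_oses=("WINDOWS 2000",))
bare_metal = HypervisorDeployment("remote-machine-30-prime", "bare metal",
                                  guest_oses=("Unix",))
print(hosted)
print(bare_metal)
```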
  • Remote machines 30 may be servers, file servers, application servers, appliances, network appliances, gateways, application gateways, gateway servers, virtualization servers, deployment servers, or firewalls.
  • the remote machine 30 may be an SSL VPN server.
  • the remote machine 30 may be an application acceleration appliance.
  • the remote machine 30 may provide functionality including firewall functionality, application firewall functionality, or load balancing functionality.
  • the remote machine 30 comprises an appliance such as one of the line of appliances manufactured by the Citrix Application Networking Group, of San Jose, Calif., or Silver Peak Systems, Inc., of Mountain View, Calif., or of Riverbed Technology, Inc., of San Francisco, Calif., or of F5 Networks, Inc., of Seattle, Wash., or of Juniper Networks, Inc., of Sunnyvale, Calif.
  • a remote machine 30 comprises a remote authentication dial-in user service, referred to as a RADIUS server.
  • remote machines 30 may have the capacity to function as a master network information node monitoring resource usage of other machines in the farm 38 .
  • a remote machine 30 may provide an Active Directory.
  • Remote machines 30 may be referred to as execution machines, intermediate machines, broker machines, intermediate broker machines, or worker machines.
  • remote machines 30 in the machine farm 38 may be stored in high-density racking systems, along with associated storage systems, and located in an enterprise data center.
  • consolidating the machines in this way may improve system manageability, data security, the physical security of the system, and system performance by locating machines and high performance storage systems on localized high performance networks. Centralizing the machines and storage systems and coupling them with advanced system management tools allows more efficient use of machine resources.
  • the client machines 10 may also be referred to as endpoints, client nodes, clients, or local machines.
  • the client machines 10 have the capacity to function as both client machines seeking access to resources and as remote machines 30 providing access to remotely hosted resources for other client machines 10 .
  • remote machines 30 may request access to remotely-hosted resources.
  • the remote machines 30 may be referred to as client machines 10 .
  • the client machine 10 communicates directly with one of the remote machines 30 in a machine farm 38.
  • the client machine 10 executes an application to communicate with the remote machine 30 in a machine farm 38 .
  • the client machine 10 communicates with one of the remote machines 30 via a gateway, such as an application gateway.
  • the client machine 10 communicates with the remote machine 30 in the machine farm 38 over a communications link 150 . Over the communications link 150 , the client machine 10 can, for example, request access to or execution of various resources provided by remote machines 30 , such as applications, computing environments, virtual machines, or hypervisors hosted by or executing on the remote machines 30 , 30 ′, 30 ′′, and 30 ′′′ in the machine farm 38 .
  • the client machine 10 , 10 ′ receives for display output of the results of execution of the resource or output of interaction between the client machine 10 and the applications or computing environments provided by the remote machines 30 .
  • the client machine 10 can receive the output of applications executing in one or more virtual machines on a remote machine 30 , 30 ′, 30 ′′, and 30 ′′′ in the machine farm 38 .
  • the communications link 150 may be synchronous or asynchronous and may be a LAN connection, MAN connection, or a WAN connection. Additionally, communications link 150 may be a wireless link, such as an infrared channel or satellite band.
  • the communications link 150 may use a transport layer protocol such as TCP/IP or any application layer protocol, such as the Hypertext Transfer Protocol (HTTP), Extensible Markup Language (XML), Independent Computing Architecture Protocol (ICA) manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla., or the Remote Desktop Protocol manufactured by the Microsoft Corporation of Redmond, Wash.
  • the communications link 150 uses a Wi-Fi protocol.
  • the communications link 150 uses a mobile internet protocol.
  • the communications link 150 may provide communications functionality through a variety of connections including standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 kb, X.25, SNA, DECNET), broadband connections (ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET), and wireless connections or any combination thereof. Connections can be established using a variety of communication protocols (e.g., TCP/IP, IPX, SPX, NetBIOS, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), RS232, IEEE 802.11, IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, CDMA, GSM, WiMax and direct asynchronous connections).
  • the remote machine 30 and the client machine 10 communicate via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Fla.
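By way of example only, the sketch below opens a TCP connection and wraps it in TLS with Python's standard ssl module, analogous to tunneling client-to-remote-machine traffic over SSL/TLS as described above; the host name and port are placeholders, not values taken from the document.

```python
import socket
import ssl

def open_secure_link(host, port=443):
    # Establish a TCP connection and wrap it in TLS, analogous to
    # tunneling client/remote-machine traffic over SSL or TLS.
    context = ssl.create_default_context()
    raw = socket.create_connection((host, port), timeout=5)
    return context.wrap_socket(raw, server_hostname=host)

# Example usage (placeholder host):
# link = open_secure_link("remote-machine.example.com")
# link.sendall(b"request")
```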
  • the computer system 100 may include a network interface comprising a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computer system 100 to any type of network capable of communication and performing the operations described herein.
  • the computer system 100 may support installation devices, such as a floppy disk drive for receiving floppy disks such as 3.5-inch, 5.25-inch disks or ZIP disks, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, network interface card, tape drives of various formats, USB device, hard-drive or any other device suitable for installing software, programs, data or files, such as any software, or portion thereof.
  • the computer system 100 may also include a storage device of any type and form for storing an operating system and other related software, and for storing application software programs.
  • the storage device includes one or more hard disk drives or redundant arrays of independent disks.
  • the storage device comprises any type and form of portable storage medium or device, such as a compact flash card, a micro hard drive or pocket drive, embedded flash storage, or USB storage drive.
  • Portable storage devices may be generally referred to by a variety of names, including but not limited to, finger drive, flash disk, flash drive, flash memory drive, jump drive, jump stick, keychain drive, keydrive, memory key, mobile drive, pen drive, thumb drive, thumb key, vault drive, USB drive, or USB stick.
  • any of the installation devices or mediums could also provide a storage medium or device.
  • the client machine 10 includes a client agent which may be, for example, implemented as a software program and/or as a hardware device, such as, for example, an ASIC or an FPGA.
  • a client agent with a user interface is a Web Browser (e.g., INTERNET EXPLORER manufactured by Microsoft Corp. of Redmond, Wash. or SAFARI, manufactured by Apple Computer of Cupertino, Calif.).
  • the client agent can use any type of protocol, such as a remote display protocol, and it can be, for example, an HTTP client agent, an FTP client agent, an Oscar client agent, a Telnet client agent, an Independent Computing Architecture (ICA) client agent manufactured by Citrix Systems, Inc.
  • the client agent is configured to connect to the remote machine 30 .
  • the client machine 10 includes a plurality of client agents, each of which may communicate with a remote machine 30 , respectively.
  • the remote machines 30 and the client machines 10 are provided as computers or computer servers of the sort manufactured by Apple Computer, Inc., of Cupertino, Calif., International Business Machines of White Plains, N.Y., Hewlett-Packard Corporation of Palo Alto, Calif., or the Dell Corporation of Round Rock, Tex.
  • the remote machines 30 may be blade servers, servers, workstation blades or personal computers executing hypervisors emulating hardware required for virtual machines providing access to computing environments.
  • a single physical machine may provide multiple computing environments.
  • FIGS. 1A and 1B depict block diagrams of typical computer architectures useful in those embodiments as the remote machine 30 , or the client machine 10 .
  • each computer 100 includes a central processing unit 102 , and a main memory unit 104 .
  • Each computer 100 may also include other optional elements, such as one or more input/output devices 130 a - 130 n (generally referred to using reference numeral 130 ), and a cache memory 140 in communication with the central processing unit 102 .
  • the central processing unit 102 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 104 .
  • the central processing unit is provided by a microprocessor unit, such as those manufactured by Intel Corporation of Mountain View, Calif.; those manufactured by Motorola Corporation of Schaumburg, Ill.; those manufactured by International Business Machines of White Plains, N.Y.; or those manufactured by Advanced Micro Devices of Sunnyvale, Calif.
  • Main memory unit 104 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 102 , such as Static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Dynamic random access memory (DRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Enhanced DRAM (EDRAM), synchronous DRAM (SDRAM), JEDEC SRAM, PC100 SDRAM, Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Direct Rambus DRAM (DRDRAM), or Ferroelectric RAM (FRAM).
  • In the embodiment shown in FIG. 1A, the processor 102 communicates with main memory 104 via a system bus 120 (described in more detail below).
  • FIG. 1B depicts an embodiment of a computer system 100 in which the processor communicates directly with main memory 104 via a memory port.
  • the main memory 104 may be DRDRAM.
  • FIG. 1A and FIG. 1B depict embodiments in which the main processor 102 communicates directly with cache memory 140 via a secondary bus, sometimes referred to as a “backside” bus.
  • the main processor 102 communicates with cache memory 140 using the system bus 120 .
  • Cache memory 140 typically has a faster response time than main memory 104 and is typically provided by SRAM, BSRAM, or EDRAM.
  • the processor 102 communicates with various I/O devices 130 via a local system bus 120 .
  • Various buses may be used to connect the central processing unit 102 to the I/O devices 130 , including a VESA VL bus, an ISA bus, an EISA bus, a MicroChannel Architecture (MCA) bus, a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus.
  • the processor 102 may use an Advanced Graphics Port (AGP) to communicate with the display.
  • FIG. 1B depicts an embodiment of a computer system 100 in which the main processor 102 communicates directly with I/O device 130 b via HyperTransport, Rapid I/O, or InfiniBand.
  • FIG. 1B also depicts an embodiment in which local busses and direct communication are mixed: the processor 102 communicates with I/O device 130 a using a local interconnect bus while communicating with I/O device 130 b directly.
  • I/O devices 130 may be present in the computer system 100 .
  • Input devices include keyboards, mice, trackpads, trackballs, microphones, and drawing tablets.
  • Output devices include video displays, speakers, inkjet printers, laser printers, and dye-sublimation printers.
  • An I/O device may also provide mass storage for the computer system 100 such as a hard disk drive, a floppy disk drive for receiving floppy disks such as 3.5-inch, 5.25-inch disks or ZIP disks, a CD-ROM drive, a CD-R/RW drive, a DVD-ROM drive, DVD-RW drive, DVD+RW drive, tape drives of various formats, and USB storage devices such as the USB Flash Drive line of devices manufactured by Twintech Industry, Inc. of Los Alamitos, Calif., and the iPod Shuffle line of devices manufactured by Apple Computer, Inc., of Cupertino, Calif.
  • the client machine 10 may comprise or be connected to multiple display devices, which each may be of the same or different type and/or form.
  • any of the I/O devices 130 a - 130 n may comprise a display device or any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices by the client machine 10 .
  • the client machine 10 may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display devices.
  • a video adapter may comprise multiple connectors to interface to multiple display devices.
  • the client machine 10 may include multiple video adapters, with each video adapter connected to one or more of the display devices. In some embodiments, any portion of the operating system of the client machine 10 may be configured for using multiple displays. In other embodiments, one or more of the display devices may be provided by one or more other computing devices, such as remote machine 30 connected to the client machine 10 , for example, via a network. These embodiments may include any type of software designed and constructed to use another computer's display device as a second display device for the client machine 10 .
  • a client machine 10 may be configured to have multiple display devices.
  • an I/O device 130 may be a bridge between the system bus 120 and an external communication bus, such as a USB bus, an Apple Desktop Bus, an RS-232 serial connection, a SCSI bus, a FireWire bus, a FireWire 800 bus, an Ethernet bus, an AppleTalk bus, a Gigabit Ethernet bus, an Asynchronous Transfer Mode bus, a HIPPI bus, a Super HIPPI bus, a SerialPlus bus, a SCI/LAMP bus, a FibreChannel bus, or a Serial Attached small computer system interface bus.
  • General-purpose computers of the sort depicted in FIG. 1A and FIG. 1B typically operate under the control of operating systems which control scheduling of tasks and access to system resources.
  • the computers operate under control of hypervisors, which represent virtualized views of physical hardware as one or more virtual machines.
  • Operating systems may execute in these virtual machines to control the virtual machine in a manner analogous to the way a native operating system controls a physical machine.
  • Typical operating systems include: the MICROSOFT WINDOWS family of operating systems, manufactured by Microsoft Corp.
  • the client machines 10 and 20 may be any personal computer (e.g., a Macintosh computer or a computer based on processors manufactured by Intel Corporation of Mountain View, Calif.), Windows-based terminal, Network Computer, wireless device, information appliance, RISC Power PC, X-device, workstation, mini computer, main frame computer, personal digital assistant, television set-top box, living room media center, gaming console, mobile gaming device, NetPC's, thin client, or other computing device that has a windows-based desktop and sufficient persistent storage for executing a small, display presentation program.
  • the display presentation program uses commands and data sent to it across communication channels to render a graphical display.
  • Windows-oriented platforms supported by the client machines 10 and 20 can include, without limitation, WINDOWS 3.x, WINDOWS 95, WINDOWS 98, WINDOWS NT 3.51, WINDOWS NT 4.0, WINDOWS 2000, Windows 2003, WINDOWS CE, Windows XP, Windows Vista, MAC/OS, Java, Linux, and UNIX.
  • the client machines 10 can include a visual display device (e.g., a computer monitor), a data entry device (e.g., a keyboard), persistent or volatile storage (e.g., computer memory) for storing downloaded application programs, a processor, and a mouse. Execution of a small, display presentation program allows the client machines 10 to participate in a distributed computer system model (i.e., a server-based computing model).
  • the general-purpose computers of the sort depicted in FIG. 1A and FIG. 1B may have different processors, operating systems, and input devices consistent with the device and in accordance with embodiments further described herein.
  • the computer system 100 can be any workstation, desktop computer, laptop or notebook computer, server, handheld computer, mobile telephone or other portable telecommunication device, media playing device, a gaming system, or any other type and/or form of computing, telecommunications or media device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein.
  • the computer system 100 may comprise a device of the IPOD family of devices manufactured by Apple Computer of Cupertino, Calif., a PLAYSTATION 2, PLAYSTATION 3, or PERSONAL PLAYSTATION PORTABLE (PSP) device manufactured by the Sony Corporation of Tokyo, Japan, a NINTENDO DS, NINTENDO GAMEBOY, NINTENDO GAMEBOY ADVANCED or NINTENDO REVOLUTION device manufactured by Nintendo Co., Ltd., of Kyoto, Japan, or an XBOX or XBOX 360™ device manufactured by the Microsoft Corporation of Redmond, Wash.
  • a client machine 10 is a mobile device.
  • the device may be a JAVA-enabled cellular telephone, such as those manufactured by Motorola Corp. of Schaumburg, Ill., those manufactured by Kyocera of Kyoto, Japan, or those manufactured by Samsung Electronics Co., Ltd., of Seoul, Korea.
  • the client machine 10 may be a personal digital assistant (PDA) operating under control of the PalmOS operating system, such as the devices manufactured by palmOne, Inc. of Milpitas, Calif.
  • the client machine 10 may be a personal digital assistant (PDA) operating under control of the PocketPC operating system, such as the iPAQ devices manufactured by Hewlett-Packard Corporation of Palo Alto, Calif., the devices manufactured by ViewSonic of Walnut, Calif., or the devices manufactured by Toshiba America, Inc. of New York, N.Y.
  • the client machine 10 is a combination PDA/telephone device such as the Treo devices manufactured by palmOne, Inc. of Milpitas, Calif.
  • the client machine 10 is a cellular telephone that operates under control of the PocketPC operating system, such as those manufactured by Motorola Corp.
  • a client machine 10 communicates with a remote machine 30 to determine an enumeration of resources available to the client machine 10 or to a user of the client machine 10 .
  • Resources may include, without limitation, computing environments, applications, documents, and hardware resources.
  • the remote machine 30 provides the client machine 10 with address information associated with a remote machine 30 ′ hosting a resource identified by the enumeration of resources.
  • the client machine 10 communicates with the remote machine 30 ′ to access the identified resource.
  • the client machine 10 executes a resource neighborhood application to communicate with the remote machines 30 and 30 ′.
  • each of the remote machines 30 provides the functionality required to identify and provide address information associated with a remote machine 30 ′ hosting a requested resource.
  • a block diagram depicts one embodiment of a system for providing access to a resource.
  • a request to enumerate computing resources is transmitted from a client machine 10 (step 202 ).
  • the request includes an identification of a user of the client machine 10 .
  • An enumeration of a plurality of resources available to the user of the requesting machine is provided by the remote machine (step 204 ).
  • the client machine 10 transmits a request for access to a particular resource included in the enumeration (step 206 ).
  • the transmitted request is a request for an enumeration of computing environments available to the client machine 10 .
  • the request is a request for an enumeration of computing environments supporting a particular application requested for execution by the client machine 10 .
  • the request is a request for access to a computing environment supported by a particular plurality of hardware resources.
  • information associated with the client machine 10 or with a user of the client machine 10 is received with the request.
  • credentials associated with the user, or with a user of the client machine 10 are received.
  • the remote machine 30 receives a request for an enumeration of available computing environments from the client machine 10 with the information associated with the client machine 10 , 10 ′ or the user of the client machine 10 .
  • the remote machine 30 receives a transmission from a policy engine including the information.
  • the remote machine 30 receives a transmission from a collection agent including the information.
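The request/enumeration/access exchange described above (steps 202, 204, and 206) can be pictured with the following hedged sketch; the catalog contents, user names, and message shapes are invented for illustration and are not part of the document.

```python
# Hypothetical resource catalog: resource name -> users allowed to see it.
CATALOG = {
    "Computing Environment A": {"alice", "bob"},
    "Computing Environment B": {"alice"},
}

def enumerate_resources(user):
    # Step 204: the remote machine returns the resources available
    # to the identified user.
    return [name for name, users in CATALOG.items() if user in users]

def request_access(user, resource):
    # Step 206: the client asks for one resource from the enumeration.
    if user not in CATALOG.get(resource, set()):
        raise PermissionError(f"{resource} is not available to {user}")
    return {"resource": resource, "status": "granted"}

available = enumerate_resources("bob")      # step 202 carries the user identification
print(available)
print(request_access("bob", available[0]))
```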
  • the remote machine 30 comprises a component receiving requests and associated information.
  • a remote machine 30 functioning as a web server receives communications from the client machine 10 , 10 ′. In one of these embodiments, the web server forwards the communications to a remote machine 30 ′. In one of these embodiments, the web server forwards the communications to a service on the remote machine 30 ′. In another of these embodiments where communications from the client machine 10 , 10 ′ are routed to a remote machine 30 ′ by the web server, the remote machine 30 may be selected responsive to an Internet Protocol (IP) address of the client machine 10 .
  • the user provides credentials to the remote machine 30 via a graphical user interface presented to the client machine 10 , 10 ′ by the remote machine 30 .
  • a remote machine 30 ′′′ having the functionality of a web server provides the graphical user interface to the client machine 10 .
  • a collection agent transmitted to the client machine 10 , 10 ′ by the remote machine 30 gathers the credentials from the client machine 10 .
  • collected data regarding available resources is accessed.
  • collected data regarding computing environments is accessed.
  • the accessed data includes an indication of a virtual machine providing access to one of the computing environments.
  • the accessed data includes an indication of a location of the virtual machine.
  • the accessed data concerning computing environments includes an indication of a plurality of hardware resources required to support the computing environments.
  • the accessed data concerning computing environments includes an indication of a user or type of user authorized to access the computing environments.
  • the accessed data is provided responsive to a request for identification of a computing environment providing access to an application program.
  • the collected data is stored on a server, such as a remote machine 30 .
  • the server is in communication with a database storing the collected data.
  • the server collects the data from a plurality of machines 30 in a machine farm 38 .
  • the data is received from at least one server responsive to a request for the information concerning the computing environments.
  • the server collects the data from a hypervisor executing on a machine 30 ′ in the machine farm 38 .
  • the server collects the data from a management component residing in a guest operating system provided by a virtual machine launched into a hypervisor executing on a machine 30 ′ in the machine farm 38 .
  • the data is collected by an intermediate, brokering machine.
  • the brokering machine maintains a database of the status of at least one computing environment and collects information from at least one machine providing access to at least one computing environment.
  • the brokering machine collects information from a virtual machine service component residing in a virtual machine providing the computing environments.
  • the brokering machine collects information from a virtual machine providing management functionality for a virtual machine providing a computing environment.
  • the brokering machine collects information from a hypervisor on which an executing virtual machine provides a computing environment.
  • the brokering machine comprises a machine 30 including a brokering module.
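As a sketch with invented interfaces, a brokering machine that polls its sources (for example a hypervisor or a virtual machine service component) and maintains a status database for the computing environments it knows about might look like this:

```python
class BrokeringMachine:
    def __init__(self, sources):
        # Each source is any object with a report_status() method, e.g. a
        # hypervisor wrapper or a virtual machine service component.
        self.sources = sources
        self.status_db = {}        # environment id -> latest known status

    def collect(self):
        # Poll every source and record the status of each computing environment.
        for source in self.sources:
            for env_id, status in source.report_status().items():
                self.status_db[env_id] = status
        return self.status_db

class FakeHypervisor:
    """Stand-in source used only to exercise the sketch."""
    def report_status(self):
        return {"env-1": "running", "env-2": "suspended"}

broker = BrokeringMachine([FakeHypervisor()])
print(broker.collect())
```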
  • data is gathered about the client system and a data set is generated from the gathered information.
  • the accessed data is transmitted to the client system with an indication to the client system, made responsive to the generated data set, of each computing environment available to the client system.
  • the accessed data is transmitted to the client system indicating to the client system, responsive to the application of a policy to the generated data set, each computing environment available to the client system.
  • the indication includes at least one method of access available to the user seeking access to the computing environment.
  • the indication includes at least one type of action associated with the computing environment which may be taken by, or on behalf of, the user of the client system.
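For illustration, applying a policy to a data set gathered about the client system, and attaching the permitted actions to each resulting computing environment, might be sketched as follows; the policy rules, environment names, and action lists are hypothetical.

```python
def gather_client_data(client):
    # Hypothetical data set generated from information gathered about the client system.
    return {"os": client.get("os"),
            "trusted_network": client.get("trusted_network", False)}

POLICY = {
    # environment name -> predicate over the generated data set
    "Secure Desktop": lambda d: d["trusted_network"],
    "Kiosk Environment": lambda d: True,
}

ACTIONS = {
    "Secure Desktop": ["execute", "snapshot"],
    "Kiosk Environment": ["execute"],
}

def environments_for(client):
    # Apply the policy to the data set and attach the actions permitted
    # for each computing environment available to the client system.
    data_set = gather_client_data(client)
    return {env: ACTIONS[env] for env, rule in POLICY.items() if rule(data_set)}

print(environments_for({"os": "WINDOWS XP", "trusted_network": True}))
```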
  • An enumeration of a plurality of resources available to the client machine 10 is provided (step 204 ).
  • the enumeration is provided responsive to an application of a policy to received information associated with the user of the client machine 10 or the remote machine 30 .
  • the enumeration is provided responsive to a request from the user for a particular type of computing environment.
  • the enumeration is provided responsive to a request from the user for computing environments providing access to a type of application program.
  • the enumeration is provided responsive to a request from the user for computing environments supported by a specified plurality of hardware resources.
  • an indication is transmitted to the client machine 10 of a plurality of computing environments available to a user of the client machine 10 .
  • the indication is generated responsive to accessing collected data associated with the plurality of computing environments.
  • the accessed data is transmitted to the client machine 10 with an enumeration of computing environments available to the client machine 10 .
  • a determination is made, for each stored computing environment, as to whether that computing environment is available to the client machine 10 .
  • the collected information is transmitted to the client machine 10 , the transmitted information displayable at the client machine 10 as icons in a graphical user interface window representing computing environments available to the client system.
  • the collected information is transmitted to the client machine 10 , the transmitted information displayable at the client machine 10 as icons in a graphical user interface window representing computing environments unavailable to the client machine 10 .
  • an enumeration of available computing environments is presented to a user of the client machine 10 .
  • an enumeration of applications is presented to a user of the client machine 10 .
  • a physical machine provides access to an enumerated application.
  • a virtual machine provides access to an enumerated application.
  • a virtual machine provides access to a computing environment from which a user of the client machine 10 may access the application.
  • an enumeration of standard operating environments (such as a guest operating system pre-configured with a plurality of application programs) is provided to the user of the client machine 10 .
  • the enumeration of available resources includes an enumeration of a plurality of actions associated with a requested resource.
  • the enumeration of the plurality of actions enables the user to request execution of a computing environment.
  • the enumeration of the plurality of actions enables the user to request cloning of a computing environment.
  • the enumeration of the plurality of actions enables the user to request shutdown of a computing environment.
  • the enumeration of the plurality of actions enables the user to request that a computing environment be rebooted.
  • the enumeration of the plurality of actions enables the user to request that a snapshot be taken of an existing state of a computing environment. In other embodiments, the enumeration of the plurality of actions enables the user to request that a previous snapshot of a computing environment be provided.
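The set of actions enumerated above lends itself to a simple dispatcher sketch; the enum values and the handler body are illustrative stubs only.

```python
from enum import Enum

class EnvironmentAction(Enum):
    EXECUTE = "execute"
    CLONE = "clone"
    SHUTDOWN = "shutdown"
    REBOOT = "reboot"
    SNAPSHOT = "snapshot"
    RESTORE_SNAPSHOT = "restore_snapshot"

def handle_action(environment_id, action):
    # Stub dispatcher: in a real system each action would be forwarded to
    # the machine hosting the computing environment.
    return f"{action.value} requested for {environment_id}"

print(handle_action("env-42", EnvironmentAction.SNAPSHOT))
```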
  • a request is transmitted for access to a particular resource (step 206 ).
  • a user of the client machine 10 requests a resource responsive to a received enumeration of available resources.
  • the user requests a resource independent of a received enumeration.
  • the user requests a resource by selecting a graphical representation of the resource presented on the client machine 10 by a client agent.
  • the user requests a resource by selecting a graphical or textual representation of the resource presented to the user on a web server or other remote machine 30 ′′′.
  • the user requests an action associated with a resource. In one of these embodiments, the user requests execution of the resource. In another of these embodiments, the user requests termination of the resource. In still another of these embodiments, the user requests transmission of the resource, including transmission across an application streaming session. In yet another of these embodiments, the user requests that a resource be shutdown. In other embodiments, a request to execute an application is received from the client machine 10 , the requested application requiring one of the computing environments. In still other embodiments, a request to access a file is received from the client machine 10 , the requested file requiring execution within one of the computing environments.
  • a remote machine 30 launches the Resource Neighborhood (RN) application and presents results of the RN application to the client machine 10 .
  • the remote machine 30 can launch the RN application 241 in response to a request 202 by the client machine 10 for an enumeration of available resources.
  • the remote machine 30 provides an enumeration of available resources to the client machine 10 (step 204 ).
  • the client machine 10 and remote machine 30 ′ establish a connection (arrows 245 and 246 ). By this connection, the remote machine 30 ′ can transfer the executable code of the particular application to the client machine 10 , when the client machine 10 and remote machine 30 ′ are operating according to the client-based computing model.
  • the remote machine 30 ′ can execute the particular application and transfer the graphical user interface to the client machine 10 , when the client machine 10 and remote machine 30 ′ are operating according to the server-based computing model.
  • the remote machine 30 ′ can execute the Resource Neighborhood application 241 and push the results back to the client machine 10 so that when the client machine 10 requests the Resource Neighborhood application, the Resource Neighborhood results are already available at the client machine 10 .
  • FIG. 2B shows another embodiment of a system in which the client machine 10 initiates execution of the Resource Neighborhood application 241 and a remote machine 30 presents the results of the RN application 241 to the client machine 10 .
  • the client machine 10 launches the Resource Neighborhood application (e.g., by clicking on a Resource Neighborhood icon representing the application 241 ).
  • the client machine 10 directs a request 202 for the Resource Neighborhood application to the remote machine 30 .
  • the remote machine 30 can execute the Resource Neighborhood application 241 , if the application is on the remote machine 30 , and return the results to the client machine 10 .
  • the remote machine 30 can indicate (arrow 204 ) to the client machine 10 that the Resource Neighborhood application 241 is available on another remote machine, in this example remote machine 30 ′.
  • the client machine 10 and remote machine 30 ′ establish a connection (arrows 206 and 210 ) by which the client machine 10 requests execution of the Resource Neighborhood application 241 .
  • the remote machine 30′ can execute the application 241 and transfer the results (i.e., the graphical user interface, any audio output, etc.) to the client machine 10.
  • FIG. 2C shows another embodiment of a system in which a client machine 10 initiates execution of the Resource Neighborhood application 241 , in this example via the World Wide Web.
  • a client machine 10 executes a web browser application 280 , such as NETSCAPE NAVIGATOR, manufactured by Netscape Communications, Inc. of Mountain View, Calif., INTERNET EXPLORER, manufactured by Microsoft Corporation of Redmond, Wash., or SAFARI, manufactured by Apple Computer of Cupertino, Calif.
  • the client machine 10, via the web browser 280, transmits a request 282 to access a Uniform Resource Locator (URL) address corresponding to an HTML page residing on the remote machine 30.
  • the first HTML page returned 284 to the client machine 10 by the remote machine 30 is an authentication page that seeks to identify the client machine 10 or the user of the client machine 10 .
  • the authentication page allows the client machine 10 to transmit user credentials, via the web browser 280 , to the remote machine 30 for authentication. Transmitted user credentials are verified either by the remote machine 30 or by another remote machine 30 in the farm 38 .
  • This allows a security domain to be projected onto the remote machine 30 .
  • the remote machine 30 runs the WINDOWS NT operating system, manufactured by Microsoft Corporation of Redmond, Wash.
  • the authenticating machine runs the UNIX operating system
  • the UNIX security domain may be said to have been projected onto the remote machine 30 .
  • User credentials may be transmitted “in the clear,” or they may be encrypted.
  • user credentials may be transmitted via a Secure Socket Layer (SSL) connection, which encrypts data using algorithms such as the RC4 algorithm, manufactured by RSA Security Inc. of Bedford, Mass.
  • an access control decision is made based on received information about the user, and resources available to the user of the client system are identified responsive to that access control decision.
  • a policy is applied to the received information about the user.
  • the remote machine 30 may verify the user credentials received from the client machine 10 .
  • the remote machine 30 may pass the user credentials to another remote machine for authentication.
  • the authenticating server may be in a different domain from the remote machine 30 .
  • Authenticated user credentials of the client machine 10 may be stored at the client machine 10 in a per-session cookie, in fields that are not displayed by the web browser 280 , or in any other manner common in maintenance of web pages.
  • a machine farm 38 with which the remote machine 30 is associated may allow guest users, i.e., users that do not have assigned user credentials, to access resources hosted by the farm 38 .
  • the authentication page may provide a mechanism for allowing a client machine 10 to identify that it is a guest user, such as a button or menu selection.
  • the remote machine 30 may omit the authentication page entirely.
  • the remote machine 30 prepares and transmits to the client machine 10 an HTML page 288 that includes a Resource Neighborhood window 258 in which appear graphical icons 257, 257′ representing resources to which the client machine 10 has access.
  • a user of client machine 10 requests access to a resource represented by icon 257 by clicking that icon 257 .
  • FIG. 3A shows one embodiment of a process of communication among the client machine 10 and multiple remote machines 30 , 30 ′.
  • the client machine 10 has an active connection 372 with the remote machine 30 ′.
  • the client machine 10 and remote machine 30 ′ can use the active connection 372 to exchange information regarding the status or execution of a first resource.
  • User credentials may be stored at the client machine 10 . Such storage of the user credentials can be in cache memory or persistent storage.
  • the Resource Neighborhood application (not shown on FIG. 3A ) runs on the client machine 10 .
  • the client machine display has a Resource Neighborhood window 258 in which appears a graphical icon 257 representing a second resource.
  • a user of the client machine 10 can access the second resource by double-clicking the icon 257 with the mouse.
  • the request passes to the remote machine 30 via connection 359 .
  • the remote machine 30 indicates to the client machine 10 via connection 359 that the sought-after resource is available on remote machine 30 ′.
  • the client machine 10 signals the remote machine 30 ′ to establish a second connection 370 .
  • the remote machine 30 ′ requests the user credentials from the client machine 10 to authenticate access to the second resource.
  • Upon a successful authentication, the client machine 10 and remote machine 30′ establish the second connection 370 and exchange information regarding status of or execution of the second resource.
  • the remote machine does not request user credentials to establish the second connection 370 .
  • the remote machine 30 ′ may use the credentials supplied by the user of client machine 10 to establish the connection 372 to also establish the second connection 370 . Accordingly, the client machine 10 and the remote machine 30 ′ communicate with each other over multiple connections.
  • FIG. 3B shows one embodiment of a system of communication among the client machine 10 , master remote machine 30 , and servers 32 , 34 , and 36 .
  • the client machine 10 has an active connection 373 with the remote machine 32 .
  • the client machine 10 and remote machine 32 can use the active connection 373 to exchange information regarding the status of or execution of a first resource.
  • User credentials may be stored at the remote machine 32 in cache memory or in persistent storage.
  • the Resource Neighborhood application runs on the remote machine 32 .
  • the remote machine 32 includes software providing a server-based client engine 62 , enabling the remote machine 32 to operate in the capacity of the client machine 10 .
  • the client machine 10 display has a Resource Neighborhood window 258 in which appear graphical icons 357 , 357 ′ representing a second resource and a third resource, respectively. A user of the client machine 10 can access the second resource by double-clicking the icon 357 .
  • the request to launch the second resource passes to the remote machine 32 via active connection 373 , and the remote machine 32 forwards the request to the master remote machine 30 (arrow 365 ).
  • the master remote machine 30 indicates (arrow 365 ) to the remote machine 32 that the sought-after resource is available on server 34 .
  • the remote machine 32 contacts the server 34 to establish a connection 366 .
  • the server 34 obtains the user credentials of the client machine 10 from the remote machine 32 .
  • the remote machine 32 and server 34 establish the connection (arrow 366 ) by which the remote machine 32 requests access to the second resource and the server 34 returns the results to the remote machine 32 .
  • the remote machine 32 forwards the results to the client machine 10 , where the results are displayed. Accordingly, the information exchanged between the client machine 10 and the server 34 “passes through” the remote machine 32 .
  • the client machine 10 can launch the third resource by double-clicking the icon 357 ′.
  • the request to launch the third resource passes to the remote machine 32 .
  • the remote machine 32 forwards the request to the master remote machine 30 .
  • the master remote machine 30 indicates that the server 36 can be used to access the third resource.
  • the remote machine 32 and the server 36 establish a connection (arrow 374 ) by which the remote machine 32 requests access to the third resource, and the server 36 returns the results to the remote machine 32 .
  • the server 36 can authenticate the user credentials of the user of the client machine 10 , which are obtained from the remote machine 32 .
  • the remote machine 32 forwards the results to the client machine 10 where the results are displayed. Accordingly, the results of accessing the third resource pass between the client machine 10 and the server 36 through the remote machine 32 .
  • FIG. 3C shows another embodiment of a system of communication among the client machine 10 , a master remote machine 30 , and servers 32 and 34 .
  • the client machine 10 has an active connection 376 with server 32 .
  • the client machine 10 and server 32 can use the active connection 376 to exchange information regarding the access to a first resource.
  • the client machine 10 can store user credentials in cache memory or in persistent storage.
  • the Resource Neighborhood application runs on the server 32 .
  • the client machine 10 display has a Resource Neighborhood window 258 in which appears a graphical icon 257 representing a second resource.
  • a user of the client machine 10 can access the second resource by double-clicking the icon 257 .
  • the request to access the second resource passes to the server 32 .
  • the server 32 responds (i.e., “calls back”) to the client machine 10 by returning resource-related information such as the name of the resource and capabilities needed by the client machine 10 to access the second application.
  • With the information provided by the server 32, the client machine 10 then communicates with the master remote machine 30 via connection 377 to determine which server to use for accessing the second resource. In this example, that server is server 34.
  • the client machine 10 then establishes a connection 378 to the server 34 .
  • Server 34 requests the user credentials from the client machine 10 to authenticate the user of the client machine 10 .
  • the client machine 10 accesses the second resource on the server 34 , and the server 34 returns the results to the client machine 10 via the established connection 378 . Accordingly, the client machine 10 can have multiple active connections between the multiple servers.
  • FIG. 3D shows one embodiment of a system of communication between the client machine 10 , a remote machine 30 that in this example acts as a web server, and a second remote machine 30 ′.
  • the client machine 10 authenticates itself to the remote machine 30 as described above in connection with FIG. 2C .
  • the remote machine 30 accesses an output display template 390 , such as an SGML, HTML or XML file, to use as a base for constructing the Resource Neighborhood window to transmit to the client machine 10 .
  • the Resource Neighborhood window may display an enumeration of resources available to the client.
  • the enumeration of resources may include an enumeration of available application programs or computing environments.
  • the template may be stored in volatile or persistent memory associated with the server 30 or it may be stored in mass memory 392 , such as a disk drive or optical device, as shown in FIG. 3D .
  • the template 390 is a standard SGML, HTML, or XML document containing Resource Neighborhood-specific tags that are replaced with dynamic information.
  • the tags indicate to the server 30 where in the output display to insert information corresponding to available resources, such as icon images.
  • the Resource Neighborhood-specific tags are embedded within comments inside a file, allowing the file to remain compatible with standard interpreters.
  • the Resource Neighborhood-specific tags are extensions of the markup language used as the base for the template.
  • Examples of HTML tags that may be used in a template are set forth below in Table 1:
  • ControlField field value: This tag is used to set the value of data that either persists between Resource Neighborhood web pages, is set by the user, or is used to help in cross-page navigation, such as user name, domain, password, template, and resource.
  • DrawResourceNeighborhood: This tag is used to draw a Resource Neighborhood display at this location in an output display.
  • ResourceName: This tag is replaced by the name of the published resource in the current context.
  • WindowType: This tag is replaced by the window type of the published resource in the current context.
  • WindowHeight: This tag is replaced by the window height of the published resource in the current context.
  • WindowWidth: This tag is replaced by the window width of the published resource in the current context.
  • WindowScale: This tag is replaced by the window scale of the published resource in the current context.
  • WindowColors: This tag is replaced by the color depth of the published resource in the current context.
  • SoundType: This tag is replaced by the sound setting of the published resource in the current context.
  • VideoType: This tag is replaced by the video setting of the published resource in the current context.
  • EncryptionLevel: This tag is replaced by the encryption level of the published resource in the current context.
  • Icon: This tag is replaced by the icon of the published resource in the current context.
  • the template is constructed dynamically using, for example, COLD FUSION, manufactured by Allaire Corp. of Cambridge, Mass. or ACTIVE SERVER PAGES manufactured by Microsoft Corporation of Redmond, Wash.
  • the template may be static.
  • the Resource Neighborhood application parses the template, replacing Resource Neighborhood-specific tags as noted above. Tags that are not Resource Neighborhood-specific are left in the file to be parsed by the browser program 80 executing on the client 10 .
  • a template parser object accepts an HTML template as input, interprets Resource Neighborhood-specific tags present in the template, and outputs the original template with all Resource Neighborhood tags replaced with appropriate text.
  • the template parser object can be passed a cookie, a URL query string, or a control field from a web server interface to provide the information with which Resource Neighborhood-specific tags should be replaced.
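The tag-substitution step described above might be sketched as follows; the tag syntax (Resource Neighborhood tags embedded in HTML comments) and the context values are assumptions made for the example, not the actual format used by the Resource Neighborhood application.

```python
import re

# Hypothetical context supplying values for Resource Neighborhood-specific tags.
CONTEXT = {
    "ResourceName": "Order Entry",
    "WindowHeight": "600",
    "WindowWidth": "800",
    "Icon": "/icons/order-entry.png",
}

TAG_PATTERN = re.compile(r"<!--\s*RN:(\w+)\s*-->")

def parse_template(template, context=CONTEXT):
    # Replace Resource Neighborhood-specific tags with values from the context;
    # anything else is left in place for the client's browser to interpret.
    return TAG_PATTERN.sub(lambda m: context.get(m.group(1), m.group(0)), template)

template = '<td><img src="<!-- RN:Icon -->"/><!-- RN:ResourceName --></td>'
print(parse_template(template))
```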
  • a web server receives a request from the client machine 10 for an enumeration of available computing environments.
  • the web server executes an application to access data regarding the computing environments.
  • a page template is retrieved from a database.
  • a page is created, at the web server, describing a display of stored computing environment images available to the client machine 10 responsive to the collected information and the retrieved page template, and the created page is transmitted to the client machine 10 , indicating to the client machine 10 each computing environment available to the client machine 10 .
  • computing environment images may comprise virtual machine images, resource images, screenshots of suspended virtual machines, and other images selected by a user or administrator for presentation to the user.
  • an output display is created indicating each computing environment available to the client machine 10 and transmitting the created output display to the client machine 10 .
  • an output display is created comprising a page constructed in a markup language, the output display indicating each computing environment available to the client system, and the output display is transmitted to the client system.
  • the Resource Neighborhood application allows scripts to access information via an application programming interface. Scripts may be written in, for example, VBScript or Jscript.
  • the scripting language is used to dynamically generate an output display using information returned by the application in response to queries posed by the script. Once the output display is generated, it is transmitted to client machine 10 for display by the browser program 80 .
  • a user of the client machine 10 can access a resource by clicking an icon 257 , 257 ′ displayed in the Resource Neighborhood web page.
  • each icon 257 , 257 ′ is associated with an encoded URL that specifies: the location of the resource (i.e., on which remote machines it is hosted or, alternatively, the address of a master remote machine, a gateway, or other remote machine 30 ); a launch command associated with the resource; and a template identifying how the results of accessing the resource should be displayed (i.e., in a window “embedded” in the browser or in a separate window).
  • the URL includes a file, or a reference to a file, that contains the information necessary for the client to create a connection to the remote machine hosting the resource.
  • This file may be created by the Resource Neighborhood application dynamically.
  • the client machine 10 establishes a connection (arrow 394 ) with the remote machine 30 ′ identified as hosting the requested resource and exchanges information regarding access to the desired resource.
  • the connection 394 is made using the Independent Computing Architecture (ICA) protocol, manufactured by Citrix Systems, Inc. of Fort Lauderdale, Fla.
  • the connection is made using: the RDP protocol, manufactured by Microsoft Corp. of Redmond, Wash.; the X11 protocol; or the Virtual Network Computing (VNC) protocol, manufactured by AT&T Bell Labs.
  • the client machine 10 may display the results of accessing the resource in a window separate from the web browser 280 , or it may “embed” application output within the web browser.
  • FIG. 3E depicts an embodiment in which a remote machine 30 acts as an intermediary for a machine farm 38 and comprises a broker module 310 , a transmitter 312 , a receiver 314 , and a transceiver 316 .
  • the broker module 310 accesses collected data regarding resources, including application programs, computing environments, and hardware resources. In some embodiments, the broker module 310 accesses collected data regarding resources and determines for each resource whether that resource image is available to a client machine 10 . In some embodiments, the server further comprises a database storing the collected data. In one of these embodiments, the broker module 310 determines for each resource whether that resource image is available to a client machine 10 based on the collected data. In other embodiments, the broker module 310 receives user credentials and determines for each resource whether that resource image is available to a client machine 10 based on the user credentials and the collected data.
  • the server further comprises an output display creation engine creating output displays indicating each resource available to the client machine 10 .
  • the output display creation engine creates a page describing a display of the resources available to a client system, the page created responsive to the collected information and a page template.
  • the transmitter 312 transmits accessed data to the client machine 10 indicating to the client machine 10 each resource determined to be available to the client machine 10 .
  • the transmitted data is displayable at the client system as icons in a graphical user interface window representing resources available to the client system.
  • the transmitted data is displayable at the client system as icons in a graphical user interface window representing resources unavailable to the client system.
  • the receiver 314 receives a request to access one of the available resources.
  • the receiver receives user credentials from the client machine 10 .
  • the receiver receives a request to access an application program available through one of the available resources, such as an available computing environment.
  • the server further comprises a database storing the collected information, and the service module determines for each resource stored by the plurality of servers whether that resource image is available to a client machine 10 based on the user credentials and the collected information. In yet other embodiments, a determination is made as to an availability of resources, such as virtual machines or application servers, providing access to the available resources.
  • the transceiver 316 provides a connection between the client machine 10 and a virtual machine providing the requested resource. In some embodiments, the transceiver 316 provides a connection between the client machine 10 and a virtual machine providing the requested resource and the transceiver 316 establishes a presentation-layer protocol connection. In one of these embodiments, the transceiver 316 establishes an X11 or VNC connection. In another of these embodiments, the transceiver 316 establishes an ICA connection. In still another of these embodiments, the transceiver 316 establishes an RDP connection.
  • An intermediary machine of the sort just described may be used as any one of the remote machines 30 described above in FIGS. 1-1B, 2A-2B, and 3A-3D.
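As a rough structural sketch, the four components of the intermediary machine can be thought of as the following Java interfaces. The interface and method names are hypothetical; only the roles (brokering availability, transmitting the enumeration, receiving the access request, and establishing the presentation-layer connection) come from the description above.

    import java.util.List;

    // Structural sketch of the intermediary machine's components,
    // mirroring the broker module 310, transmitter 312, receiver 314,
    // and transceiver 316 described above. All names are illustrative.
    public class IntermediaryMachineSketch {

        interface BrokerModule {
            // Consults the collected data (and, optionally, user credentials)
            // to decide which resource images are available to a client.
            List<String> availableResources(String clientId, String credentials);
        }

        interface Transmitter {
            // Sends the enumeration of available resources to the client,
            // e.g. as data displayable as icons in a GUI window.
            void send(String clientId, List<String> resources);
        }

        interface Receiver {
            // Accepts a client's request to access one of the enumerated resources.
            String receiveRequest(String clientId);
        }

        interface Transceiver {
            // Establishes a presentation-layer connection (e.g. ICA, RDP, X11,
            // or VNC) between the client and the virtual machine providing the
            // requested resource.
            void connect(String clientId, String virtualMachineAddress, String protocol);
        }
    }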
  • FIG. 4 illustrates one embodiment of program components for a client-based implementation of the Resource Neighborhood application.
  • a client-based implementation of the Resource Neighborhood application 416 can be used in a network using either the server-based computing model in which the servers execute the Resource Neighborhood application or in a client-based computing model in which the client machine 10 executes the Resource Neighborhood application locally.
  • the Resource Neighborhood application includes a Resource Neighborhood Service (RNSVC) component 444 , a resource database component 448 , a Resource Neighborhood Application Program Interface (RNAPI) component 452 , a Resource Neighborhood User Interface component 456 , and a local cache 460 .
  • the remote machine 30 includes the service component (RNSVC) 444 and the resource authorization cache 448 .
  • the client machine 10 , which is a representative example of a client machine 10 that can support a client-based implementation of the Resource Neighborhood application, includes the application program interface RNAPI 452 , the user interface component 456 , and the local cache 460 components.
  • the RNAPI 452 communicates with the user interface component 456 and the local cache 460 .
  • the RNSVC 444 communicates with the resource authorization cache 448 and with the RNAPI 452 on the client machine 10 via communications link 462 .
  • the communications link 462 can be established by, for example, using the ICA protocol, the RDP protocol, the X11 protocol, the VNC protocol, or any other suitable presentation-level protocol designed to run over industry standard transport protocols, such as TCP/IP, IPX/SPX, NetBEUI, using industry-standard network protocols, such as ISDN, frame relay, and asynchronous transfer mode (ATM) and which provides for virtual channels, which are session-oriented transmission connections that can be used by application-layer code to issue commands for exchanging data.
  • the communications link 462 may also be established by protocols that support RPC or RPC-equivalents such as SOAP and HTTP.
  • the communications link 462 may also be a communications link 150 as described above.
  • the virtual channel commands are designed to be closely integrated with the functions of client machines.
  • the ICA protocol can support the Resource Neighborhood virtual channel.
  • the Resource Neighborhood virtual channel protocol can include four groups of commands:
  • the resource authorization cache 448 may be a cache of the authorized user and group information for all the public (i.e., published) resources in a machine farm 38 or in a group of trusted domains. Each remote machine in a machine farm 38 can maintain its own resource-related information in persistent storage and build up the resource authorization cache 448 in volatile storage. In another embodiment, all collected resource-related information in the resource authorization cache 448 can be stored in persistent storage and made accessible to each other server in the machine farm 38 .
  • the resource authorization cache 448 can be implemented in a proprietary format (e.g., as a linked list in memory) or using Novell's Directory Services (NDS) or any directory service adhering to the X.500 standard defined by the International Telecommunication Union (ITU) for distributed electronic directories.
  • the resource authorization cache 448 may be implemented as a standard relational database.
  • the resource authorization cache 448 includes a list of remote machines. Each remote machine in the list has an associated set of resources. Associated with each resource is resource-related information that can include the resource name, a list of remote machines, and client users that are authorized to use that resource.
  • resource-related information can include the resource name, a list of remote machines, and client users that are authorized to use that resource.
  • An overly-simplified example of the resource-related information maintained in the database is illustrated by the following Table 2.
  • Users A and B are users of the client machines 10 ; “n/a” indicates that a desired application program is hosted, but is not available to client machine users; and “-” indicates that the application program is not hosted.
  • Table 2 shows: a list of servers 30 , 32 , 34 ; applications hosted by the servers (Spreadsheet, Customer Database, Word Processor, and Calculator); and those users who are authorized to use the applications.
  • the server 30 hosts the Spreadsheet program, the Customer Database and the Word Processor.
  • User A is authorized to use the Spreadsheet
  • User B is authorized to use the Customer Database
  • no users are authorized to use the Word Processor. It is to be understood that other techniques can be used to indicate who is authorized to use a particular application.
  • the user information stored in the database can be used to indicate those users who are unauthorized to use a particular application rather than those who are authorized, or to indicate that multiple users may access a resource on a remote machine 30 , or to indicate that a predetermined group of users are authorized to access a particular resource.
  • although Table 2 depicts an embodiment in which the resources that are available are application programs, a similar technique may be used for computing environments and other resources.
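A minimal sketch of the resource authorization cache, assuming it is modeled as a nested map from server to resource to authorized users, is shown below in Java. The sample entries only echo the narrative around Table 2 (server 30 hosting the Spreadsheet, Customer Database, and Word Processor, with User A and User B authorized for the first two and no users authorized for the third); the class name and layout are illustrative, not the actual cache format.

    import java.util.List;
    import java.util.Map;

    // Sketch of the resource authorization cache as a nested map:
    // server -> resource -> authorized users.
    public class ResourceAuthorizationCacheSketch {

        public static void main(String[] args) {
            Map<String, Map<String, List<String>>> cache = Map.of(
                "server30", Map.of(
                    "Spreadsheet",       List.of("UserA"),
                    "Customer Database", List.of("UserB"),
                    "Word Processor",    List.of()));   // hosted, but no users authorized

            // Enumerate the resources a given user may access on a given server.
            String user = "UserA";
            cache.getOrDefault("server30", Map.of()).forEach((resource, users) -> {
                if (users.contains(user)) {
                    System.out.println(user + " may access " + resource);
                }
            });
        }
    }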
  • the remote machine 30 obtains the resource-related information from each other machine in the machine farm 38 regarding the resources on those remote machines, including control information that indicates which client users and remote machines are permitted to access each particular resource.
  • the resource-related information maintained in the database may or may not persist across re-boots of the remote machine 30 .
  • Each remote machine 30 having the Resource Neighborhood application installed thereon executes the RNSVC software 444 .
  • the RNSVC software 444 operating on each remote machine 30 establishes a communication link (e.g., a named pipe) with at least one other remote machine 30 and, in some embodiments, with each other remote machine 30 .
  • the remote machines 30 exchange resource-related information on the communications links.
  • the RNSVC software 444 collects the resource-related information from the other remote machines 30 in the machine farm 38 through remote registry calls (e.g., the service component 444 transmits a datagram to the other remote machines 30 in the farm 38 requesting the resource-related information corresponding to the resources hosted by those remote machines 30 ).
  • the resource authorization cache is populated by system administrators or by programs and scripts communicating with remote machines 30 .
  • the RNSVC 444 software also maintains the relationships of groups and users to published resources in the resource authorization cache 448 and accesses the information when authenticating a client user.
  • An administrator of the remote machine 30 can use a user interface to configure the RNSVC 444 .
  • the RNSVC software 444 also implements the services and functions requested by the RNAPI 452 and communicates with the RNAPI 452 on the client machine 10 using a Resource Neighborhood virtual channel driver (VCRN).
  • the VCRN operates according to the Resource Neighborhood virtual channel protocol described above.
  • the RNAPI 452 is a set of software functions or services that are used by the Resource Neighborhood application to perform various operations (e.g., open windows on a display screen, open files, and display message boxes).
  • the RNAPI 452 provides a generic mechanism for accessing user interface elements (e.g., icons) produced by running the Resource Neighborhood application and objects in a legacy (i.e., predecessor or existing for some time) client user interface.
  • the accessing mechanism can launch the resource on the remote machine 30 , if necessary (e.g., when the client machine 10 is unable to locally execute the application).
  • the RNAPI 452 provides all published resource information to the user interface component 456 for display on the screen 12 ( FIG. 1 ) of the client machine 10 .
  • the RNAPI 452 also manages machine farm 38 logons in a local database of logon credentials (e.g., passwords) for users of the client machine 10 to support the single authentication feature. Credentials may or may not be persistent across a reboot (power-off and on cycles) of the client machine 10 .
  • the RNAPI 452 provides automatic and manual management for Resource Neighborhood objects stored in the local cache 460 .
  • the local cache 460 can either be refreshed manually by the user of the client machine 10 , or at a user-definable refresh rate, or by the server at any time during a connection.
  • the RNAPI 452 can build remote application file resource associations and manage the “Start” menu and desktop icons for resource object shortcuts.
  • the user interface module 456 interfaces with the RNAPI 452 and can be a functional superset of an existing client user interface (e.g., Remote Resource Manager).
  • the user interface module 456 accesses the information stored in the local cache 460 through the RNAPI 452 and visually presents that information to the user on the display screen 12 ( FIG. 1 ) of the client machine 10 .
  • the displayed information is a mixture of information generated by a user of the client machine 10 and information obtained by the Resource Neighborhood application.
  • the user interface module 456 can also show the user all resources that the user is currently accessing and all active and disconnected sessions.
  • the user interface module 456 can present a variety of graphical components, such as windows and pull-down menus, to be displayed on the display screen 12 ( FIG. 1 ).
  • a display of a combination of such graphical user interface components is generally referred to as a “desktop.”
  • a desktop produced by the user interface module 456 can include a Resource Neighborhood window displaying the neighborhood of resources available to the user of the client machine 10 . These resources may be a filtered combination of the published resources hosted by a machine farm 38 .
  • the user interface module 456 can generate a Resource Neighborhood window for each machine farm 38 or merge the resources from different machine farms 38 under a single Resource Neighborhood window.
  • the Resource Neighborhood window includes a folder for each machine farm 38 . Clicking on one of the folders produces a window containing a representation (e.g., an icon) of each hosted resource available to the user, e.g., see FIGS. 6A and 6B .
  • the Resource Neighborhood window becomes the focal point for accessing published resources, and the user interface module 456 can be used to access resources and launch applications through the RNAPI 452 .
  • the user of the client machine 10 can use the mouse 18 ( FIG. 1 ) to select one of the displayed icons and launch the associated resource.
  • a feature of a client-based implementation is that the user can browse the objects displayed in the Resource Neighborhood window even while the client machine is offline, that is, while the connection 462 is inactive. Also, a user of the client machine 10 can drag application objects and folders out of the Resource Neighborhood window and into other graphical components (e.g., other windows, folders, etc.) of the desktop.
  • FIG. 5 shows one embodiment of the program components for a server-based implementation of the Resource Neighborhood application.
  • the components include a Service (RNSVC) component 544 ′, a Resource Database component 548 ′, an Application Program Interface (RNAPI) component 552 ′, a User Interface component 556 ′ and a local cache 560 ′.
  • Each software component 544 ′, 548 ′, 552 ′, 556 ′, and 560 ′ is installed on the application server 30 ′.
  • the software components for the server-based implementation correspond to the software components for the client-based implementation of FIG. 4 .
  • the functionality of each server-based software component is similar to the client-based counterpart, with differences or added capabilities described below.
  • the RNSVC 544 ′ communicates with the resource database 548 ′ and with the RNAPI 552 ′ using local procedure calls.
  • the RNAPI 552 ′ also communicates with the user interface module 556 ′ and the local cache 560 ′.
  • when the client machine 10 logs on to the network 40 ( FIG. 1 ), the server 30 ′ develops and maintains a database containing the resource-related information collected from the other machines in the machine farm 38 , and a communication link is established between the server 30 ′ and the client machine 20 .
  • the application server 30 ′ may be in communication with the client machine 10 via an ICA connection 562 ′.
  • the user of the client machine 10 connects to an initial desktop (at the server 30 ′) and launches the Resource Neighborhood application from within that desktop environment.
  • the connection to the initial desktop can occur automatically, e.g., via a logon script of the client machine 20 , via an entry in a Startup group, or by another centrally managed server specific mechanism. All remote application management and launching is accomplished through this initial desktop.
  • the server 30 ′ uses the user credentials to determine those resources that the user of the client machine 10 is authorized to use.
  • a Resource Neighborhood graphical window is returned to the client machine 10 and displayed on the client screen 22 ( FIG. 1 ). This window can contain icons representing the available and, possibly, the unavailable resources that are in the Resource Neighborhood of the client machine 20 .
  • the web-based Resource Neighborhood application includes a group of objects that manage various aspects of a resource.
  • the Resource Neighborhood application includes three primary object classes that “plug in” to a web server: a gateway object class; a credentials object class; and a resources object class.
  • the object classes are provided as JavaBeans. The three primary object classes facilitate: validation of user credentials into a server farm; generation of lists of published resources that a specified user may access; provisioning of detailed information about a specific published resource; and conversion of resource application information into a format compatible with the protocol over which the connection will be made.
  • the objects can be accessed in a number of different ways. For example, they may be compiled as COM objects and made available to the web server as ActiveX components.
  • the JavaBeans can be used in their native form, such as when the server uses Java Server Pages technology.
  • the JavaBeans can be instantiated and used directly in a Java Servlet.
  • the remote machine 30 can instantiate the JavaBeans as COM objects directly.
  • a credentials object class manages information necessary to authenticate a user into a target machine farm 38 .
  • a credentials object passes stored user credentials to other Resource Neighborhood objects.
  • the credentials object is an abstract class that cannot be instantiated and represents a user's credentials.
  • class extensions may be provided to allow different authentication mechanisms to be used, including biometrics, smart cards, token-based authentication mechanisms such as challenge-response and time-based password generation, or others.
  • a “clear text credentials” extension may be provided that stores a user's name, domain, and password in plain text.
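The credentials object class and its clear-text extension might be sketched in Java roughly as follows. The class and method names are illustrative; the description above only specifies that the base class is abstract, that extensions supply different authentication mechanisms, and that the clear-text extension stores a user's name, domain, and password in plain text.

    // Sketch of the credentials object class: an abstract base that other
    // Resource Neighborhood objects consume, plus a clear-text extension.
    abstract class Credentials {
        // Supplies the stored credentials in a form the target machine
        // farm can authenticate against.
        abstract String toAuthenticationToken();
    }

    class ClearTextCredentials extends Credentials {
        private final String userName;
        private final String domain;
        private final String password;

        ClearTextCredentials(String userName, String domain, String password) {
            this.userName = userName;
            this.domain = domain;
            this.password = password;
        }

        @Override
        String toAuthenticationToken() {
            // Plain-text form; other extensions could return smart card,
            // biometric, or one-time-passcode material instead.
            return domain + "\\" + userName + ":" + password;
        }
    }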
  • a gateway object class handles communications with a target machine farm 38 .
  • the gateway object class is provided as an abstract Java class that cannot be instantiated.
  • a particular gateway object may retrieve resource information by communicating with a machine farm 38 using a particular protocol, reading cached resource information, a combination of these two methods, or other various methods.
  • the gateway object class may cache information to minimize communication with a target machine farm 38 .
  • Extensions to the gateway object may be provided to communicate with the machine farm 38 over specific protocols, such as HTTP.
  • an extension class is provided that allows the gateway object to communicate with the machine farm 38 via WINDOWS NT named pipes.
  • the gateway object may provide an application programming interface hook that allows other Resource Neighborhood objects to query the object for application information.
  • a resources object class contains information about published resources and returns information about resources hosted by the machine farm 38 in order to create the Resource Neighborhood web page.
  • the resources object class creates objects representing resources by retrieving information relating to the resources, either from an object created by the gateway object or directly from the machines in the machine farm 38 .
  • a resources object acts as a container for certain properties of the resource, some settable and some not settable, such as: the name of the resource (not settable); the width of the client window, in pixels, for this resource (settable); the height of the client window, in pixels, for this resource (settable); the number of colors to use when connecting to the resource (settable); the severity of audio bandwidth restriction (settable); the level of encryption to use when connecting to the resource (settable); the level of video to use when connecting to this resource (settable); whether the resource should be placed on a client's start menu (settable); whether the resource should be placed on the client's desktop (settable); the identity of the Resource Neighborhood folder to which the resource belongs (settable); the description of the resource (settable); the source of the graphics icon file for the resource (settable); the type of window that should be used when connecting to the resource (not settable); and whether to override default parameters for the object.
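A property-container sketch of a resources object, in the JavaBean style the description suggests, could look like the following. Only a subset of the listed properties is shown; properties the text marks as not settable are exposed read-only, and all field and accessor names are illustrative.

    // Sketch of a resources object as a property container. The name and
    // window type are read-only ("not settable"); the remaining sample
    // properties expose setters.
    public class PublishedResource {
        private final String name;        // not settable
        private final String windowType;  // not settable
        private int windowWidth;          // settable, in pixels
        private int windowHeight;         // settable, in pixels
        private String encryptionLevel;   // settable
        private boolean onStartMenu;      // settable

        public PublishedResource(String name, String windowType) {
            this.name = name;
            this.windowType = windowType;
        }

        public String getName()            { return name; }
        public String getWindowType()      { return windowType; }
        public int getWindowWidth()        { return windowWidth; }
        public void setWindowWidth(int w)  { this.windowWidth = w; }
        public int getWindowHeight()       { return windowHeight; }
        public void setWindowHeight(int h) { this.windowHeight = h; }
        public String getEncryptionLevel() { return encryptionLevel; }
        public void setEncryptionLevel(String level) { this.encryptionLevel = level; }
        public boolean isOnStartMenu()     { return onStartMenu; }
        public void setOnStartMenu(boolean b) { this.onStartMenu = b; }
    }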
  • FIG. 6A is a screenshot of one embodiment of Resource Neighborhood window 620 that can be displayed on the screen 12 , 22 ( FIG. 1 ) of a client machine 10 , 10 ′ after the Resource Neighborhood application has executed.
  • the window 620 includes graphical icons 622 .
  • Each icon 622 represents a resource that is hosted by one of the machines in a machine farm 38 .
  • Each represented resource is available to the user of the client machine 10 .
  • the user can select one of the resources using the mouse 18 , 28 or keyboard 14 , 24 .
  • FIG. 6B is a screenshot of another embodiment of a Resource Neighborhood window 624 that can be displayed on the screen 12 , 22 ( FIG. 1 ) of a client machine 10 , 10 ′ after the Resource Neighborhood application has executed.
  • the window 624 includes graphical icons 626 , 628 .
  • Each icon 626 , 628 represents a resource that is hosted by one of the machines in a machine farm 38 .
  • Each resource represented by one of the icons 626 is available to the user of the client machine 10 .
  • the user can select one of the resources using the mouse 18 , 28 or keyboard 14 , 24 .
  • the screenshots of FIGS. 6A and 6B are similar, except that icons 622 , 626 , 628 are displayed within a browser window.
  • Each resource represented by one of the icons 628 is unavailable to the user of the client machine 10 , although such resources are present in the server farm.
  • the unavailability of these resources can be noted on the display screen (e.g., “X”s can be drawn through the icons 628 ).
  • An attempt to access such a resource can trigger a message indicating that the user is not authorized to access the resource.
  • the attempt may invoke a method allowing the user of the client machine 10 to request access to the resource.
  • the resource comprises a computing environment.
  • a connection is established between the client machine 10 and a virtual machine hosting the requested computing environment.
  • a presentation layer protocol is used in establishing the connection between the client system and the virtual machine.
  • the X11 protocol is used in establishing the connection.
  • the Remote Desktop Protocol (RDP) is used in establishing the connection.
  • the Independent Computing Architecture (ICA) protocol is used in establishing the connection.
  • a connection is established between the client machine 10 and a physical machine, such as a traditional workstation or server, hosting the requested computing environment. In other embodiments, a connection is established between the client machine 10 and a hardware partition hosting the requested computing environment.
  • an enumeration of a plurality of resources available to the client machine 10 is provided (step 204 ) responsive to a determination by a policy engine regarding whether and how a client machine may access a resource.
  • the policy engine may collect information about the client machine prior to making the determination.
  • referring to FIG. 7A , one embodiment of a computer network is depicted which includes a client machine 10 , a machine farm 38 , a collection agent 704 , a policy engine 706 , a policy database 708 , and a resource server 30 ′.
  • the policy engine 706 is a remote machine 30 .
  • although only one client machine 10 , collection agent 704 , policy engine 706 , machine farm 38 , and resource server 30 ′ are depicted in the embodiment shown in FIG. 7A , it should be understood that the system may provide multiple ones of any or each of those components.
  • the collection agent 704 communicates with the client machine 10 , retrieving information about the client machine 10 , and transmits the client machine information 712 to the policy engine 706 .
  • the policy engine 706 makes an access control decision by applying a policy from the policy database 708 to the received information 712 .
  • the client machine 10 transmits to the policy engine 706 a request 206 for resource enumeration.
  • the policy engine 706 resides on a resource server 30 ′.
  • the policy engine 706 resides on a remote machine 30 .
  • a resource server 30 ′ receives the request 206 from the client machine 10 and transmits the request 206 to the policy engine 706 .
  • the client machine 10 transmits a request 206 for resource enumeration to an intermediate remote machine 30 ′′′ (not shown), which transmits the request 206 to the policy engine 706 .
  • the client machine 10 transmits the request 206 over a network connection such as those described above.
  • the policy engine 706 initiates information gathering by the collection agent 704 .
  • the collection agent 704 gathers information regarding the client machine 10 and transmits the information 712 to the policy engine 706 .
  • the collection agent 704 gathers and transmits the information 712 over a network connection.
  • the collection agent 704 comprises bytecode, such as an application written in the bytecode programming language JAVA.
  • the collection agent 704 comprises at least one script.
  • the collection agent 704 gathers information by running at least one script on the client machine 10 .
  • the collection agent comprises an Active X control on the client machine 10 .
  • An Active X control is a specialized Component Object Model (COM) object that implements a set of interfaces that enable it to look and act like a control.
  • the policy engine 706 transmits the collection agent 704 to the client machine 10 .
  • the policy engine 706 requires another execution of the collection agent 704 after the collection agent 704 has transmitted information 712 to the policy engine 706 .
  • the policy engine 706 requires another execution of the collection agent 704 because the policy engine 706 may have insufficient information 712 to determine whether the client machine 10 satisfies a particular condition.
  • the policy engine 706 requires a plurality of executions of the collection agent 704 in response to received information 712 .
  • the policy engine 706 transmits instructions to the collection agent 704 determining the type of information the collection agent 704 gathers from the client machine 10 .
  • a system administrator may configure the instructions transmitted to the collection agent 704 from the policy engine 706 . This provides greater control over the type of information collected and, in turn, expands the types of access control decisions that the policy engine 706 can make.
  • the collection agent 704 gathers information 712 including, without limitation, machine ID of the client machine 10 , operating system type, existence of a patch to an operating system, MAC addresses of installed network cards, a digital watermark on the client device, membership in an Active Directory, existence of a virus scanner, existence of a personal firewall, an HTTP header, browser type, device type, network connection information such as internet protocol address or range of addresses, machine ID of the remote machine 30 , date or time of access request including adjustments for varying time zones, and authorization credentials.
  • the device type is a personal digital assistant. In other embodiments, the device type is a cellular telephone. In other embodiments, the device type is a laptop computer. In other embodiments, the device type is a desktop computer. In other embodiments, the device type is an Internet kiosk. In still other embodiments, the device type is a game console.
  • the digital watermark includes data embedding.
  • the watermark comprises a pattern of data inserted into a file to provide source information about the file.
  • the watermark comprises hashed data files to provide tamper detection.
  • the watermark provides copyright information about the file.
  • the network connection information pertains to bandwidth capabilities. In other embodiments, the network connection information pertains to the Internet Protocol address of the client machine 10 . In still other embodiments, the network connection information consists of the Internet Protocol address of the client machine 10 . In one embodiment, the network connection information comprises a network zone identifying the logon agent to which the client machine 10 provided authentication credentials.
  • the authorization credentials include a number of types of authentication information, including without limitation, user names, client names, client addresses, passwords, Personal Identification Numbers (PINs), voice samples, one-time passcodes, biometric data, digital certificates, tickets, etc. and combinations thereof.
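The information 712 gathered by the collection agent might be modeled as a simple record such as the sketch below. The record and field names are hypothetical; the fields merely mirror a subset of the items listed above, and the sample values are placeholders.

    import java.util.List;

    // Sketch of the information 712 a collection agent might gather and
    // transmit to the policy engine.
    public record ClientInformation(
            String machineId,
            String operatingSystemType,
            List<String> installedPatches,
            List<String> macAddresses,
            boolean virusScannerPresent,
            boolean personalFirewallPresent,
            String deviceType,
            String ipAddress,
            String networkZone) {

        public static void main(String[] args) {
            ClientInformation info = new ClientInformation(
                    "client-10", "example-os", List.of("patch-1"),
                    List.of("00:11:22:33:44:55"), true, true,
                    "laptop computer", "192.0.2.10", "internal");
            System.out.println(info);
        }
    }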
  • after receiving the gathered information 712 , the policy engine 706 makes an access control decision based on the received information 712 .
  • a block diagram depicts one embodiment of a policy engine 706 , including a first component 720 , including a condition database 722 and a logon agent 724 , and a second component 730 , including a policy database 732 .
  • the first component 720 applies a condition from the condition database 722 to information 712 received about client machine 10 and determines whether the received information 712 satisfies the condition.
  • a condition may require that the client machine 10 execute a particular operating system to satisfy the condition. In other embodiments, a condition may require that the client machine 10 execute a particular operating system patch to satisfy the condition. In still other embodiments, a condition may require that the client machine 10 provide a MAC address for each installed network card to satisfy the condition. In some embodiments, a condition may require that the client machine 10 indicate membership in a particular Active Directory to satisfy the condition. In another embodiment, a condition may require that the client machine 10 execute a virus scanner to satisfy the condition. In other embodiments, a condition may require that the client machine 10 execute a personal firewall to satisfy the condition. In some embodiments, a condition may require that the client machine 10 comprise a particular device type to satisfy the condition. In other embodiments, a condition may require that the client machine 10 establish a particular type of network connection to satisfy the condition.
  • the first component 720 stores an identifier for that condition in a data set 726 .
  • the received information satisfies a condition if the information makes the condition true.
  • a condition may require that a particular operating system be installed. If the client machine 10 has that operating system, the condition is true and satisfied.
  • the received information satisfies a condition if the information makes the condition false.
  • a condition may address whether spyware exists on the client machine 10 . If the client machine 10 does not contain spyware, the condition is false and satisfied.
  • the logon agent 724 resides outside of the policy engine 706 . In other embodiments, the logon agent 724 resides on the policy engine 706 . In one embodiment, the first component 720 includes a logon agent 724 , which initiates the information gathering about client machine 10 . In some embodiments, the logon agent 724 further comprises a data store. In these embodiments, the data store includes the conditions for which the collection agent may gather information. This data store is distinct from the condition database 722 .
  • the logon agent 724 initiates information gathering by executing the collection agent 704 . In other embodiments, the logon agent 724 initiates information gathering by transmitting the collection agent 704 to the client machine 10 for execution on the client machine 10 . In still other embodiments, the logon agent 724 initiates additional information gathering after receiving information 712 . In one embodiment, the logon agent 724 also receives the information 712 . In this embodiment, the logon agent 724 generates the data set 726 based upon the received information 712 . In some embodiments, the logon agent 724 generates the data set 726 by applying a condition from the database 722 to the information received from the collection agent 704 .
  • the first component 720 includes a plurality of logon agents 724 .
  • at least one of the plurality of logon agents 724 resides on each network domain from which a client machine 10 may transmit a resource request 710 .
  • the client machine 10 transmits the resource request 710 to a particular logon agent 724 .
  • the logon agent 724 transmits to the policy engine 706 the network domain from which the client machine 10 accessed the logon agent 724 .
  • the network domain from which the client machine 10 accesses a logon agent 724 is referred to as the network zone of the client machine 10 .
  • the condition database 722 stores the conditions that the first component 720 applies to received information.
  • the policy database 732 stores the policies that the second component 730 applies to the received data set 726 .
  • the condition database 722 and the policy database 732 store data in an ODBC-compliant database.
  • the condition database 722 and the policy database 732 may be provided as an ORACLE database, manufactured by Oracle Corporation of Redwood Shores, Calif.
  • the condition database 722 and the policy database 732 can be a Microsoft ACCESS database or a Microsoft SQL Server database, manufactured by Microsoft Corporation of Redmond, Wash.
  • after the first component 720 applies the received information to each condition in the condition database 722 , the first component transmits the data set 726 to the second component 730 . In one embodiment, the first component 720 transmits only the data set 726 to the second component 730 . Therefore, in this embodiment, the second component 730 does not receive information 712 , only identifiers for satisfied conditions. The second component 730 receives the data set 726 and makes an access control decision by applying a policy from the policy database 732 based upon the conditions identified within the data set 726 .
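The two-component flow just described can be sketched as follows: the first component evaluates conditions against the received information 712 and emits only the identifiers of satisfied conditions (the data set 726), and the second component applies policies to that data set to produce the enumeration of accessible resources. All names, conditions, and policies in this Java sketch are illustrative.

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.function.Predicate;

    // Sketch of the policy engine's two components.
    public class PolicyEngineSketch {

        // First component: apply each condition to the received information
        // and record identifiers of satisfied conditions (the data set).
        static Set<String> buildDataSet(Map<String, Predicate<Map<String, String>>> conditions,
                                        Map<String, String> info) {
            Set<String> satisfied = new LinkedHashSet<>();
            conditions.forEach((id, condition) -> {
                if (condition.test(info)) satisfied.add(id);
            });
            return satisfied;
        }

        // Second component: policies map satisfied conditions to resources;
        // only the data set is consulted, never the raw information.
        static Set<String> applyPolicies(Map<String, List<String>> policies, Set<String> dataSet) {
            Set<String> resources = new LinkedHashSet<>();
            dataSet.forEach(id -> resources.addAll(policies.getOrDefault(id, List.of())));
            return resources;
        }

        public static void main(String[] args) {
            Map<String, Predicate<Map<String, String>>> conditions = Map.of(
                "HAS_VIRUS_SCANNER", info -> "true".equals(info.get("virusScanner")),
                "IS_LAPTOP",         info -> "laptop computer".equals(info.get("deviceType")));
            Map<String, List<String>> policies = Map.of(
                "HAS_VIRUS_SCANNER", List.of("Customer Database"),
                "IS_LAPTOP",         List.of("Spreadsheet"));

            Map<String, String> info = Map.of("virusScanner", "true", "deviceType", "laptop computer");
            Set<String> dataSet = buildDataSet(conditions, info);
            System.out.println("Accessible resources: " + applyPolicies(policies, dataSet));
        }
    }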
  • policy database 732 stores the policies applied to the received information 712 .
  • the policies stored in the policy database 732 are specified at least in part by the system administrator.
  • a user specifies at least some of the policies stored in the policy database 732 .
  • the user-specified policy or policies are stored as preferences.
  • the policy database 732 can be stored in volatile or non-volatile memory or, for example, distributed through multiple servers.
  • an access control decision based upon information received about a client machine 10 is made.
  • upon receiving gathered information about the client machine 10 , the policy engine 706 generates a data set based upon the information.
  • the data set contains identifiers for each condition satisfied by the received information 712 .
  • the policy engine 706 applies a policy to each identified condition within the data set 726 . That application yields an enumeration of resources which the client machine 10 may access.
  • the enumeration of resources includes an enumeration of levels of access to the resource.
  • a plurality of allowable actions associated with the resource is enumerated.
  • a plurality of methods of execution of the resource is enumerated.
  • the policy engine 706 then presents that enumeration to the client machine 10 .
  • the policy engine 706 creates a Hypertext Markup Language (HTML) document used to present the enumeration to the client machine.
  • the policy engine 706 transmits the enumeration to a different remote machine 30 .
  • the remote machine 30 transmits the enumeration to the client machine 10 .
  • the remote machine 30 applies additional policies to the enumeration.
  • the remote machine is an appliance such as an application gateway or a firewall.
  • the policy engine 706 transmits an assigned level of access applicable to a requested resource to a remote machine 30 functioning as a broker server. The broker server establishes, responsive to the assigned level of access, a connection between the client machine 10 and a computing environment providing the requested resource.
  • a flow diagram depicts one embodiment of the steps taken to provide access to a resource.
  • a request for access to a resource is received (step 802 ).
  • a method for providing access to the resource is identified (step 804 ).
  • An application execution server may be selected to provide access to the resource (step 806 ).
  • a virtualized environment may be selected to provide access to a resource (step 808 ).
  • An application streaming service may be selected to provide access to the resource (step 816 ). If the virtualized environment is selected to provide access to the resource, an execution machine is identified (step 810 ).
  • a virtual machine is selected (step 812 ). The virtual machine is configured (step 814 ). Access to the resource is provided (step 818 ).
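The selection among access methods in steps 804 through 818 can be summarized with the following sketch. The enum values and strings are illustrative stand-ins for the application execution server, application streaming, and virtualized-environment paths; they are not the patent's terminology.

    // Sketch of dispatching a resource request to one of the access
    // methods named in steps 804-818.
    public class AccessMethodSketch {

        enum Method { EXECUTION_SERVER, STREAMING, VIRTUALIZED_ENVIRONMENT }

        static String provideAccess(String resource, Method method) {
            switch (method) {
                case EXECUTION_SERVER:
                    // Step 806: execute remotely, send output over a presentation-layer protocol.
                    return "execute " + resource + " remotely and transmit application output data";
                case STREAMING:
                    // Step 816: stream the files comprising the resource to the client.
                    return "stream the files comprising " + resource + " to the client for execution";
                case VIRTUALIZED_ENVIRONMENT:
                    String executionMachine = "remote machine 30'";  // step 810: identify execution machine
                    String virtualMachine = "vm-1";                   // steps 812-814: select and configure VM
                    return "connect the client to " + virtualMachine + " on " + executionMachine
                            + " hosting " + resource;                 // step 818: provide access
                default:
                    throw new IllegalStateException();
            }
        }

        public static void main(String[] args) {
            System.out.println(provideAccess("Spreadsheet", Method.VIRTUALIZED_ENVIRONMENT));
        }
    }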
  • a request for access to a resource is received (step 802 ).
  • a remote machine 30 receives the request.
  • the remote machine 30 is an intermediate broker server.
  • the remote machine 30 is a gateway.
  • the remote machine 30 is a policy engine.
  • the remote machine 30 is an appliance.
  • the remote machine 30 verifies that the user is authorized to access the resource. In still another embodiment, the remote machine 30 receives with the request information verifying authorization for access by the user.
  • the remote machine 30 receives a request for an application program. In another embodiment, the remote machine 30 receives a request for access to a file. In yet other embodiments, the remote machine 30 receives a request for access to a computing environment. In one of these embodiments, the computing environment is a desktop environment from which the client machine 10 may execute application programs. In another of these embodiments, the computing environment provides access to one or more application programs. In some embodiments, the remote machine 30 receives a request for access to a computing environment supported by a plurality of hardware requirements. In some embodiments, a remote machine 30 functioning as deployment system receives a request for access to a resource, such as execution of an application program, from a client machine 10 .
  • a method for providing access to the resource is identified (step 804 ).
  • a remote machine 30 consults a database to identify the method for providing access.
  • a remote machine 30 consults a policy or rules database to identify the method for providing access.
  • a remote machine 30 receives from a policy engine an identification of a method to select.
  • a policy may allow execution of the application program on the client machine 10 .
  • a policy may enable the client machine 10 to receive a stream of files comprising the application program.
  • the stream of files may be stored and executed in an isolation environment on the client.
  • a policy may allow execution of the application program only on a remote machine, such as an application server, and require the remote machine to transmit application-output data to the client machine 10 .
  • a policy may allow execution of the application program only in a computing environment hosted on a virtual machine. In either of these cases, a stream of files comprising the application programs may be sent to the remote machine.
  • a policy may allow installation of the computing environment on the client machine 10 .
  • a policy may enable the client machine 10 to access a copy of the computing environment executing in a virtual machine on a remote machine 30 .
  • a policy may forbid the user of the client machine 10 to access the requested computing environment and offer an alternative computing environment.
  • a policy may enable the client machine 10 to access a copy of the computing environment executing in a virtual machine, which in turn executes on a hypervisor providing access to the requested plurality of hardware resources.
  • a policy may forbid the user of the client machine 10 to access the requested computing environment and offer a computing environment supported by an alternative plurality of hardware resources.
  • the remote machine 30 may choose to provide access to an application execution server which provides access to a requested application program (step 806 ).
  • the application execution server executes the application program and transmits application output data to the client machine 10 .
  • the application execution server may transmit the application output data over a presentation layer protocol, such as X11, VNC, ICA, or RDP.
  • the remote machine 30 may choose to provide access to an application streaming service capable of transmitting a requested application program to the client machine 10 (step 816 ) for execution.
  • Embodiments of application streaming services are described in greater detail below.
  • the remote machine 30 may choose to respond to the client's request by allowing access to a computing environment provided by a virtual machine, the computing environment providing access to the requested resource (step 808 ).
  • the computing environment may be provided by a virtual machine launched into a hypervisor executing on a remote machine 30 ′.
  • the remote machine 30 determines to provision on the client machine 10 a virtual machine providing access to the computing environment.
  • when a remote machine 30 determines to provide access to the requested resource via a virtualized environment, the remote machine 30 identifies an execution machine providing access to a computing environment requested by the client machine 10 (step 810 ).
  • the remote machine 30 identifies an execution machine capable of hosting the computing environment.
  • the remote machine 30 determines that the user requesting access to the computing environment lacks authorization to access the requested computing environment.
  • the remote machine 30 may identify an alternative computing environment which the user is authorized to access.
  • the remote machine 30 identifies an execution machine on which a hypervisor provides access to a requested plurality of hardware and in which the requested computing environment may execute.
  • the remote machine 30 is an execution machine capable of hosting the computing environment.
  • the computing environment is installed on the execution machine.
  • a hypervisor on the execution machine emulates a plurality of hardware resources required by the requested computing environment and the computing environment is launched in the hypervisor.
  • the remote machine 30 identifies a remote machine 30 ′ functioning as an execution machine capable of providing access to the computing environment supported by a requested plurality of hardware resources.
  • the remote machine 30 ′ functions as an execution machine on which a hypervisor emulating the requested plurality of hardware resources executes and on which a computing environment supported by the hypervisor executes.
  • an execution machine providing hardware resources, physical or virtual, capable of supporting a particular virtual machine is identified responsive to a load-balancing determination.
  • the execution machine is selected responsive to load-balancing information maintained by a management server 30 .
  • the management server 30 is a single machine.
  • several remote machines 30 may be capable of acting as a management server, but only one of such nodes is designated the management server.
  • a client request is directed to the management server 30 in the first instance.
  • a remote machine 30 queries the management server 30 to determine the identity of a suitable execution machine.
  • the master network information server node 30 maintains a table of addresses for the remote machines 30 ′, 30 ′′.
  • the master network information server node 30 receives messages from the remote machines 30 ′, 30 ′′ indicating their level of activity, which may comprise CPU load or may comprise an identification of the number of virtual machines currently hosted by a remote machine 30 ′, 30 ′′.
  • the level of activity of the remote machines 30 ′, 30 ′′ is maintained in a table along with the address of each of the remote machines 30 ′, 30 ′′.
  • in embodiments in which a single management server 30 is used, it is desirable to dynamically select a master network information server node 30 from the available remote machines 30 on the network. In this way, if the active management server 30 fails, a new management server 30 may be selected as soon as the failure of the previous management server 30 is detected. In one embodiment, a management server 30 is selected by an election process among the remote machines 30 .
  • any machine may force an election at any time by broadcasting a request election datagram to the machine farm 38 .
  • the election results are determined by a comparison of the set of election criteria which is transmitted within the request election datagram transmitted by the requesting node with the set of election criteria maintained on each receiving node. That is, the first election criterion from the datagram of the requesting node is compared by the receiving node to the first criterion of the receiving node. The highest ranking of the two criteria being compared wins the comparison and the node with that criterion wins the election. If the two criteria tie, then the next criteria are sequentially compared until the tie is broken.
  • if a remote machine 30 receiving the request election datagram has a higher election criterion than that received in the request election datagram, the remote machine 30 receiving the request election datagram issues its own request election datagram. If the receiving remote machine 30 has lower election criteria than the criteria received in the request election datagram, the receiving remote machine 30 determines it is not the master network information server node and attempts to determine which remote machine 30 in the machine farm 38 is the management server 30 .
  • the criteria which determine the outcome of the election include: whether or not the node is statically configured as a master network information server node; whether the remote machine 30 has the higher master network information server software version number; whether the remote machine 30 is an NT domain controller; whether the remote machine 30 is the longest running node; and whether the remote machine 30 has a lexically lower network name.
  • the datagram structure for the election request includes an unsigned shortword for the server version number; an unsigned shortword in which the bits are flags that designate whether the node is statically configured as a master network information server node or is executing on an NT domain controller; and an unsigned longword containing the amount of time the server has been running.
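The election-criteria comparison can be sketched as an ordered sequence of tie-breaking comparisons, as below. The fields follow the criteria and datagram fields listed above, but the record and method names are illustrative.

    // Sketch of the election-criteria comparison: criteria are compared in
    // priority order and the first difference decides the winner; ties fall
    // through to the next criterion.
    public class ElectionSketch {

        record ElectionCriteria(boolean staticallyConfiguredMaster,
                                int softwareVersion,
                                boolean domainController,
                                long uptimeSeconds,
                                String networkName) { }

        // Positive if 'a' wins the election, negative if 'b' wins,
        // zero only if every criterion ties.
        static int compare(ElectionCriteria a, ElectionCriteria b) {
            int c = Boolean.compare(a.staticallyConfiguredMaster(), b.staticallyConfiguredMaster());
            if (c != 0) return c;
            c = Integer.compare(a.softwareVersion(), b.softwareVersion());
            if (c != 0) return c;
            c = Boolean.compare(a.domainController(), b.domainController());
            if (c != 0) return c;
            c = Long.compare(a.uptimeSeconds(), b.uptimeSeconds());
            if (c != 0) return c;
            // Lexically lower network name wins the final tie-break.
            return -a.networkName().compareTo(b.networkName());
        }

        public static void main(String[] args) {
            ElectionCriteria requester = new ElectionCriteria(false, 5, true, 86400, "nodeB");
            ElectionCriteria receiver  = new ElectionCriteria(false, 5, false, 360000, "nodeA");
            System.out.println(compare(receiver, requester) > 0
                    ? "receiver issues its own request election datagram"
                    : "receiver drops out and awaits the election result");
        }
    }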
  • the management server 30 transmits a declare message to the other remote machines 30 declaring itself to be the management server 30 . If another remote machine 30 believes itself to be a management server 30 , the other remote machine 30 will request an election. In this way, erroneous master network information server nodes 30 of the same protocol are detected and removed. In addition, an election will also be requested: by any remote machine 30 when that remote machine 30 reboots; by any remote machine 30 to whom the master network information server node has failed to acknowledge an update message; or by any client machine 10 to whom the master network information server node 30 has failed to respond to a request for information.
  • any remote machine 30 (which may be referred to as a node) broadcasts a request election datagram requesting an election (Step 920 )
  • the remote machine 30 receiving the request election datagram (Step 924 ) first compares its election criteria to the criteria in the request election datagram (Step 930 ) to determine if the receiving remote machine 30 has higher criteria (Step 934 ). If the remote machine 30 receiving the datagram has lower election criteria (Step 938 ) than the criteria contained in the request election datagram, the remote machine 30 receiving the request election datagram drops out of the election process and awaits the results of the election (Step 938 ).
  • the remote machine 30 receiving the request election datagram broadcasts its own request election datagram containing the remote machine's own election criteria (Step 940 ). If, in response to the transmission of the request election datagram by the second remote machine 30 , another remote machine 30 ′ responds with a request election datagram with even higher election criteria, then the second remote machine 30 drops out of the election and the remote machine 30 ′ with higher criteria broadcasts its own request election datagram.
  • at Step 956 , the remote machine 30 that has sent the n election requests is the new management server 30 .
  • after the election has occurred and the new management server 30 has been determined, all the remote machines 30 send all of their configured gateway addresses to the new network information server node 30 . In this way the new management server 30 becomes a gateway node.
  • the remote machines 30 send update datagrams to the master network information server 30 providing information about each remote machine 30 transmitting the update datagram.
  • the update datagram sent to the master network information server node 30 from a remote machine 30 includes: the remote machine 30 name; the network address; the cluster name; the network transport protocol; the total number of remote machines 30 configured with this transport; the number of ports available for connection with a client using this transport protocol; the total number of users permitted to be active at one time; number of available user slots; and server load level.
  • upon receipt of the update datagram, the master network information server node 30 returns an acknowledgment to the remote machine 30 that transmitted the update datagram, indicating that the update datagram was received. If the remote machine 30 transmitting the update datagram does not receive an acknowledgment from the master network information server node 30 , the transmitting remote machine 30 assumes that the master network information server node 30 has failed and transmits an election request.
  • after the election of a management server 30 , a remote machine 30 waits a random period of time and then sends a datagram to the management server 30 with its latest load information (Step 1000 ). In one embodiment the delay is between four and six seconds. If the management server 30 receives (Step 1008 ) an update datagram from a remote machine 30 , then the master network information server node 30 replies to the transmitting remote machine 30 with an acknowledgment (Step 1010 ) and forwards the data to any remote machine 30 configured as a gateway node. If the master network information server 30 fails to receive data from a remote machine 30 (Step 1008 ), then the master network information server 30 discards the old data from the remote machine 30 after a predetermined amount of time (Step 1020 ).
  • at Step 1028 , the remote machine 30 retransmits the update datagram.
  • the remote machine 30 will attempt n retransmits (in one embodiment three) before it assumes that the master network information server 30 has failed and then transmits an election request (Step 1030 ). If the remote machine 30 receives an acknowledgment, then it periodically updates the master network information server node 30 , in one embodiment every 5 to 60 minutes (Step 1040 ).
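For illustration, a minimal Python sketch of the election comparison and the update/acknowledgment retry behavior described above might look as follows; the class, method, and field names (and the specific update fields) are illustrative assumptions rather than part of the specification.

```python
import random
import time

MAX_RETRIES = 3          # n retransmits before assuming the master has failed
UPDATE_INTERVAL = 300    # periodic update, e.g. every five minutes

class RemoteMachine:
    def __init__(self, election_criteria, transport):
        self.election_criteria = election_criteria  # e.g. a tuple of comparable values
        self.transport = transport

    def on_request_election(self, datagram_criteria):
        """Compare own criteria against a received request election datagram."""
        if self.election_criteria <= datagram_criteria:
            return "drop_out"                 # await the results of the election
        self.broadcast_request_election()     # respond with own, higher criteria
        return "contending"

    def send_update(self, master):
        """Send an update datagram; retransmit, then call an election on failure."""
        time.sleep(random.uniform(4, 6))      # random delay after an election
        for _ in range(MAX_RETRIES):
            if master.receive_update(self.describe()):
                return True                   # acknowledgment received
        self.broadcast_request_election()     # master assumed to have failed
        return False

    def describe(self):
        # illustrative subset of the fields carried by the update datagram
        return {"name": "server-1", "cluster": "farm-38",
                "transport": self.transport, "load": 0.0}

    def broadcast_request_election(self):
        pass  # broadcast own election criteria to the other remote machines
```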
  • FIG. 11 is a block diagram depicting one embodiment of a machine farm 38 including first and second network management processes.
  • the first network management process 1110 executes in a native operating system 1105 (such as WINDOWS NT) and accesses a native memory element storing (i) a data table and (ii) at least one election criteria for allowing the first network management process 1110 to be dynamically selected as a management process, the data table having an entry for each of said at least two network management processes.
  • the second network management process 1120 executes in a virtualized operating system 1115 and accesses a virtualized memory element storing (i) a data table and (ii) at least one election criteria for allowing the second network management process 1120 to be dynamically selected as the management process, the data table having an entry for each of said at least two network management processes.
  • the client machine 10 communicates with the one of the first network management process 1110 and the second network management process 1120 selected as the management process and receives from the management process an address of a remote machine 30 with which to communicate.
  • a plurality of client machines 10 is in communication with a master network information process.
  • the first network management process 1110 executes in a native operating system 1105 .
  • the second network management process 1120 executes in a virtualized operating system 1115 .
  • the at least two network management processes are grouped into clusters.
  • one of the at least two network processes is a gateway process.
  • the gateway process is a master network management process.
  • the master network management process is selected by a process comprising the steps of (a) broadcasting an election datagram to the at least two network management processes, the election datagram comprising election criteria; and (b) selecting a master network management process in response to the election criteria.
  • the master network management process broadcasts a declare datagram to detect multiple master network management processes using the same transport protocol.
  • the master network management process is selected by a process that occurs after an event selected from the group of events consisting of: a system reboot, a master network management process failing to respond to a datagram sent from a network management process, a master network management process failing to respond to a request from a client machine, detection of at least two master network management processes configured with the same transport, and a new network management process appearing on said network.
  • the management process is elected as described above in connection with FIGS. 9 and 10 .
  • the network includes a third network management process using a different network transport protocol from the first network management process.
  • the third network management process comprises a master network management process for the different network transport protocol.
  • each remote machine 30 may include a load management subsystem (LMS) providing a load management capability.
  • the LMS manages overall server and network load to minimize response time to client requests.
  • an apparatus for selecting a server from a plurality of servers on a network to service a client request comprises a plurality of network management processes.
  • each of said plurality of network management processes includes an event bus and a subsystem in communication with the event bus.
  • a first one of the plurality of network management processes receives from a client machine a request for access to a computing resource and sends the client request to a second one of the plurality of network management processes.
  • the second one of the plurality of network management processes executes in a virtualized operating system and comprises a dynamic store and a load management subsystem.
  • the dynamic store loads information associated with at least some of the plurality of network management processes in a virtualized memory element.
  • the dynamic store contains information relating to server processor load.
  • the dynamic store contains information relating to server input/output transaction load.
  • the load management subsystem (i) receives, via said event bus, a request to identify a server for servicing a client request, (ii) retrieves from said dynamic store the loading information, (iii) chooses, based on the retrieved loading information, one of the plurality of servers for servicing the client request, and (iv) transmits, via said event bus, a message including information identifying the chosen server.
  • the load management subsystem stores run-time information in the dynamic store at predetermined intervals.
  • the apparatus further includes a persistent store, the load management subsystem in communication with the persistent store via the event bus, the persistent store containing an identification of at least one rule to be used to manage server load.
  • the LMS is rule-based, and an administration tool can be used to modify or create rules for managing server load.
  • a rule is one or more criteria that influences how a LMS will direct requests.
  • Rules may be individualized to a specific remote machine 30 .
  • Rules can also be individualized to a specific application or computing environment on a per-server basis. That is, one or more rules may be associated with a copy of an application or a computing environment residing on a first remote machine 30 in the machine farm 38 and different rules may be associated with a copy of the same application or computing environment residing on a second remote machine 30 in a machine farm 38 .
  • the output of rules individualized to a specific application may be combined with the output of general server rules to direct a client request.
  • Operational meters may measure any aspect of server performance and the result is used by rules to help determine which remote machine 30 is most appropriate to service a client request. For example, operational meters may measure: processor load; context switches; memory usage; page faults; page swaps; transmission rate of input/output reads or writes; number of input/output operations performed or number of virtual machines hosted.
  • operational meters are used by a LMS to measure server performance during the occurrence of certain events such as a request for a client connection.
  • operational meters are used by a LMS to measure server performance at predetermined intervals, which may be configured by an administrator.
  • a LMS on each remote machine 30 in the machine farm 38 evaluates various performance metrics for the remote machine 30 for each predetermined period of time and stores that information in the dynamic store. For example, every thirty seconds, an evaluation of server load may include a query to operational meters for the server's CPU utilization and memory utilization. The results from the query are used, in conjunction with other applicable load factors, to calculate a load number for the server. The new load number is then sent to the dynamic store.
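A sketch of this periodic evaluation, in which operational meters are queried and the resulting load number is written to the dynamic store, might look like the following; the meter stubs, the combining weights, and the dynamic-store interface are illustrative assumptions.

```python
import time

def cpu_utilization():       # operational meter (illustrative stub)
    return 0.42

def memory_utilization():    # operational meter (illustrative stub)
    return 0.61

WEIGHTS = {"cpu": 0.5, "memory": 0.5}   # other applicable load factors could be added here

def evaluate_load():
    """Combine operational meter readings into a single load number."""
    return WEIGHTS["cpu"] * cpu_utilization() + WEIGHTS["memory"] * memory_utilization()

def report_loop(dynamic_store, server_name="server-1", interval=30):
    """Every `interval` seconds, compute the load number and send it to the dynamic store."""
    while True:
        dynamic_store[server_name] = evaluate_load()
        time.sleep(interval)
```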
  • Rules and operational meters are, in one embodiment, executable code modules that query specific system conditions, resources, and performance metrics for remote machines 30 in the machine farm 38 .
  • Some of the rules accept user-configurable parameters that are entered by the administrator via the administration tool.
  • Rules may be provided to the LMS using a dynamic link library (“DLL”), and the rules and rule parameters applicable to a specific server may be stored in the persistent store. That is, the administrator's selection of rules is stored, together with a weighting factor and applicable settings associated with those rules, in the persistent store.
  • some operational meters may measure load at a predetermined interval; the predetermined interval may be set by the administrator.
  • conditional rules that may be used by the LMS to determine to which remote machine 30 to direct a request include: whether the number of client machines 10 that may connect to a remote machine 30 is limited; whether the number of client sessions that may be serviced by a remote machine 30 is limited; whether the number of virtual machines that may be hosted by a remote machine 30 is limited; the number of application or connection licenses available to a remote machine 30 ; whether the application requested by the client machine 10 is currently executing on the remote machine 30 ; whether a client is physically proximate to, or is connected by a high bandwidth link to, a server; and whether a client request is being made during a time period for which the remote machine 30 is available to service client requests.
  • a set of rules may be grouped together by the group subsystem 300 to form a load evaluator associated with a particular server or a particular application.
  • a server load evaluator is a load evaluator that applies to all applications published on the server.
  • An application load evaluator is a load evaluator that encapsulates rules specific to certain applications.
  • loads for published application programs are the sum of a server load evaluator and an application load evaluator.
  • the load evaluator associated with a particular server may be stored in the persistent store 230 . When a LMS initializes, it queries persistent store 230 to determine whether a load evaluator is associated with the remote machine 30 on which the LMS resides.
  • each rule encapsulated in a load evaluator may have a configurable weighting factor.
  • Many rules have user-configurable parameters that control the way LMS loads are calculated. For example, in one embodiment, a CPU Utilization rule has two parameters: report full load when processor utilization is greater than X percent; report no load when processor utilization is less than X percent.
  • the load reported by a load evaluator equals the sum of each rule's load times each rule's weight.
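Under this description, the load reported by a load evaluator could be computed as in the following sketch; the CPU Utilization thresholds, the second rule, and the weights are illustrative assumptions.

```python
def cpu_utilization_rule(utilization, full_at=0.90, none_below=0.10):
    """Report full load above one threshold and no load below another."""
    if utilization >= full_at:
        return 1.0
    if utilization <= none_below:
        return 0.0
    return (utilization - none_below) / (full_at - none_below)

def evaluator_load(weighted_rule_loads):
    """Load reported by a load evaluator: sum of each rule's load times that rule's weight."""
    return sum(weight * rule_load for rule_load, weight in weighted_rule_loads)

# a server load evaluator combining a CPU rule (weight 0.6) with another rule's load (weight 0.4)
load = evaluator_load([(cpu_utilization_rule(0.75), 0.6), (0.4, 0.4)])
```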
  • a remote machine 30 that hosts four applications may have three load evaluators with which it is associated.
  • the server itself and a first application may be associated with a first load evaluator
  • the second and third applications may be associated with a second load evaluator
  • the fourth application may be associated with a third load evaluator.
  • when the remote machine 30 boots, it reads the first, second, and third load evaluators from the persistent store 230 . Periodically (or perhaps after certain events) the remote machine 30 calculates the output for each of the load evaluators and sends those values to the dynamic store. When a connection request is received, those values are used to determine if the remote machine 30 should service a client request.
  • the LMS can obtain information about the processor load on a particular remote machine 30 , the memory load on that remote machine 30 , and the network load of that remote machine 30 .
  • the LMS combines these results to obtain an overall load number that indicates the total aggregate load on that remote machine 30 .
  • the load evaluator may weight each piece of information differently.
  • the rule may disqualify a remote machine 30 from servicing a client request.
  • a rule may limit the number of client sessions a remote machine 30 may initiate.
  • if a remote machine 30 is currently servicing the maximum number of client sessions allowed by the rule, it will not be chosen by the LMS to service a new client request, even if the outputs of its operational meters indicate that it is the most favorable remote machine 30 to which to route the client request.
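A disqualifying rule of this kind can be sketched as a filter applied before the load comparison; the session limit and the server record layout below are illustrative assumptions.

```python
def eligible_servers(servers, max_sessions):
    """Drop servers already at the session limit, then choose the least loaded remaining server."""
    candidates = [s for s in servers if s["sessions"] < max_sessions]
    return min(candidates, key=lambda s: s["load"]) if candidates else None

# the second server is skipped even though it reports the lowest load
chosen = eligible_servers(
    [{"name": "a", "sessions": 3, "load": 0.7},
     {"name": "b", "sessions": 10, "load": 0.2}],
    max_sessions=10)
```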
  • a virtual machine providing a requested computing environment is identified (step 812 ).
  • declarative policies such as rules databases, policy databases or scripts are consulted to direct requests to a virtual machine.
  • a remote machine 30 functioning as an application server hosting a plurality of virtual machines is identified.
  • one of the plurality of virtual machines hosted by the application server may be selected and associated with the client machine 10 .
  • an identifier for the selected virtual machine may be transmitted to the client machine 10 .
  • a session management component identifies the virtual machine.
  • an intermediate machine 30 receiving the request invokes a session management component.
  • the intermediate machine launches the session management component in a terminal services session executing on the intermediate machine.
  • the intermediate machine launches the session management component in a terminal services session executing on the identified execution machine.
  • the session management component provides functionality for identifying a location of a virtual machine providing access to a computing environment.
  • the session management component is provided as a program module published on a server, such as an application server.
  • the session management component identifies, launches, and monitors virtual machines.
  • the session management component communicates with a virtual machine management component to identify a virtual machine.
  • the virtual machine management component provides functionality for locating virtual machines.
  • the virtual machine management component provides functionality for allocating an available virtual machine to a user from a plurality of available virtual machines.
  • the virtual machine management component provides functionality for reallocating shared virtual machines to the plurality of available virtual machines.
  • the virtual machine management component provides functionality for tracking a state associated with a virtual machine for each virtual machine in a plurality of virtual machines.
  • a block diagram depicts one embodiment of a virtual machine management component 1200 .
  • the virtual machine management component 1200 provides functionality for accessing and updating a database including a virtual machine catalog.
  • the virtual machine management component 1200 provides functionality for allowing an administrator or virtual machine provisioning system to add, remove, or modify entries in the database including a virtual machine catalog.
  • the virtual machine management component 1200 includes a virtual machine providing administrative functionality.
  • the virtual machine management component 1200 includes a virtual machine providing management functionality.
  • the virtual machine management component 1200 may receive a request from a provisioning system or from a session management component.
  • a provisioning system contacts the virtual machine management component 1200 when a virtual machine is created or destroyed.
  • the session management component contacts the virtual machine management component 1200 when the session management component is invoked to request a virtual machine to launch.
  • the session management component contacts the virtual machine management component 1200 when the session management component identifies a change in a state of a launched virtual machine.
  • the session management component may send messages, such as heartbeat messages, to the virtual machine management component 1200 while a virtual machine is active. If the virtual machine may be accessed by more than one user, the virtual machine management component 1200 may reassign the virtual machine to the plurality of available virtual machines after a user has terminated a session with the virtual machine.
  • virtual machines of the same machine type may be categorized into a plurality of standard operating environments (SOE).
  • an SOE may be a group of virtual machine images of a particular configuration that implement the function of a particular machine type, e.g., a machine type “C++ Developer Workstation” may have one SOE containing images with WinXP Pro SP2 with Visual Studio 2003 installed and another SOE containing images with Win Vista with Visual Studio 2005 installed.
  • the virtual machine management component 1200 may provide functionality for one or more of the following actions related to a standard operating environment (an SOE): creating an SOE, updating an SOE, deleting an SOE, finding an SOE, and retrieving an SOE.
  • the virtual machine management component 1200 may provide functionality for one or more of the following actions related to virtual machines: create a virtual machine, update a virtual machine, delete a virtual machine, find a virtual machine, and assignment to or removal from a standard operating environment.
  • a machine type may refer to a non-technical description of a computing environment provided by a virtual machine. Some examples of machine types are “C++ Developer Workstation” or “Secretarial Workstation.” Many virtual machines may be grouped in a single machine type.
  • the virtual machine management component 1200 may provide functionality for one or more of the following actions related to machine types: creating machine types, updating a machine type, deleting a machine type, finding a machine type, and retrieving a machine type.
  • the virtual machine management component 1200 may provide functionality for creating virtual machines.
  • an administrator or provisioning service creates a new machine type in a database of virtual machines.
  • the machine type is given a meaningful name such as “HR Manager Workstation.”
  • the machine type name is the name for a class of standard operating environment (SOE) rather than a specific SOE, and multiple SOEs may be assigned to the machine type name.
  • the machine type may be used to publish the class of virtual machines.
  • a standard operating environment is created for the machine type and assigned to the machine type in the database of virtual machines.
  • the SOE is a virtual machine with a specific hardware and software configuration.
  • a snapshot of the SOE virtual machine may be taken and used as a template for virtual machine clones.
  • clones of the SOE virtual machine are assigned to users.
  • an administrator clones an SOE for use by users by creating linked clones of the snapshot of the SOE virtual machine.
  • the linked clone virtual machines may be created in consecutively numbered subfolders in the SOE folder.
  • the linked clones of the SOE may be assigned to the SOE in the database of virtual machines.
  • an administrator updates a machine type by creating a new SOE, and new linked clones of the SOE.
  • the administrator updates an SOE pointer within a machine type record in the database of virtual machines to point to the new SOE, and marks the old SOE as being superseded.
  • the administrator may create the new SOE by creating a new virtual machine and installing the software, or by creating a full clone of an existing SOE and updating it.
  • the administrator could create a new virtual machine and install Microsoft Windows XP Professional, followed by Windows XP SP1, followed by Microsoft Office 2003, or the administrator could take a full clone of an existing SOE with Windows XP and Microsoft Office 2003 already installed, and install Windows XP SP1 to achieve the same SOE.
  • the new SOE may be created in a new SOE folder and a new SOE record is created in the database of virtual machines.
  • Linked clones of the superseded SOE can be deleted when users have finished with them and the superseded SOE can be deleted when all linked clones have been deleted.
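The machine type, SOE, and linked clone relationships described above can be sketched as a small catalog data model; the class and field names, and the update rule marking the old SOE as superseded, are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LinkedClone:
    name: str
    assigned_user: Optional[str] = None     # a private clone is tied to one user

@dataclass
class SOE:                                   # standard operating environment
    name: str                                # e.g. "WinXP Pro SP2 + Visual Studio 2003"
    superseded: bool = False
    clones: List[LinkedClone] = field(default_factory=list)

@dataclass
class MachineType:
    name: str                                # e.g. "C++ Developer Workstation"
    current_soe: Optional[SOE] = None
    soes: List[SOE] = field(default_factory=list)

    def update_soe(self, new_soe: SOE) -> None:
        """Point the machine type at a new SOE and mark the old one as superseded."""
        if self.current_soe is not None:
            self.current_soe.superseded = True
        self.soes.append(new_soe)
        self.current_soe = new_soe
```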
  • a virtual machine may be designated as a shared virtual machine.
  • a shared virtual machine is an instance of a virtual machine image that is designated for use by multiple users.
  • the shared virtual machine is used by one user at a time and returned to a pool of available virtual machines when not in use.
  • users may change the image but may not persist any changes to the image once it is shut down. In this embodiment, all changes are discarded when the image is shut down or a user terminates a session.
  • a virtual machine may be designated as a private virtual machine.
  • a private virtual machine is an instance of a virtual machine image that is designated for use by a specific user. Only that user may be allocated to the image, launch the image, or execute the image.
  • private images will be configured to permit changes to be persisted when the image is shut down.
  • changes may be configured to be discarded upon image shutdown as per shared images, depending on the requirements of the user.
  • a session management component is launched and identifies a virtual machine.
  • the session management component transmits an identification of a user and a virtual machine type identified responsive to a request for access to a resource to the virtual machine management component 1200 .
  • the session management component requests an identification of a specific virtual machine to launch.
  • the session management component requests an identification of a location of the configuration and virtual disk files of the identified virtual machine.
  • a virtual machine is identified responsive to the received identification of the user of the requesting machine. In other embodiments, a virtual machine is identified responsive to a request by the user for a type of virtual machine. In still other embodiments, a virtual machine is identified responsive to a request by the user for a type of computing environment.
  • the virtual machine management component 1200 transmits to the session management component an identification of a specific virtual machine to launch. In one of these embodiments, the session management component then proceeds to launch the virtual machine. In another of these embodiments, the virtual machine management component launches the virtual machine.
  • the virtual machine management component transmits to the session management component an identification of a plurality of virtual machines to launch.
  • the session management component may present an enumeration of available virtual machines to a user.
  • the session management component receives a selection of a virtual machine from the enumeration of available virtual machines and the session management component launches the selected virtual machine.
  • the virtual machine management component transmits to the session management component an indication that no virtual machines are available for the user requesting the access.
  • the virtual machine management component 1200 transmits to the session management component an indication that an existing, executing virtual machine has now been allocated to the user.
  • the virtual machine management component transmits to the session management component an identification of an available virtual machine responsive to accessing a database storing information associated with a plurality of virtual machines, the information including, but not limited to, an identification of the plurality of virtual machines, an identification of a location of files associated with the plurality of virtual machines, an identification of an access control list associated with the plurality of virtual machines, and an indication of availability of the plurality of virtual machines.
  • the virtual machine management component 1200 modifies an access control list associated with the virtual machine responsive to the identification of the user received from the session management component in the initial request.
  • the virtual machine management component 1200 modifies the access control list to allow the virtual machine to be launched for the user.
  • the virtual machine management component 1200 transmits additional information associated with the virtual machine to the session management component.
  • the additional information may include network share details relating to a folder storing files associated with the virtual machine.
  • the session management component uses the additional information to map the folder to a mount point, such as a drive letter, in the virtual machine.
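The exchange between the session management component and the virtual machine management component described in the preceding paragraphs might be sketched as follows; the message shapes, the allocation and access-control calls, and the share details are illustrative assumptions, not an interface defined by the specification.

```python
def request_virtual_machine(vm_mgmt, user, machine_type):
    """Ask the virtual machine management component for a virtual machine to launch."""
    reply = vm_mgmt.allocate(user=user, machine_type=machine_type)
    if reply is None:
        return None                               # no virtual machines available for this user
    vm_mgmt.grant_access(reply["vm_id"], user)    # ACL modified so the VM can be launched for the user
    return {
        "vm_id": reply["vm_id"],
        "image_path": reply["image_path"],        # location of configuration and virtual disk files
        "share": reply.get("share"),              # network share to map to a mount point / drive letter
    }
```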
  • virtual machine images, that is, the configuration and data files comprising the virtual machine, are stored on a storage area network.
  • virtual machine images are stored in network attached storage.
  • a file server in communication with the storage area network makes the virtual machine images accessible as if they were located on network attached storage.
  • an identified virtual machine is configured (step 814 ).
  • an execution machine identified by the intermediate machine executes a hypervisor emulating hardware resources required by the requested computing environment.
  • a session management component launches a configured virtual machine in the hypervisor. The virtual machine is configured for a particular client machine 10 .
  • a connection is established between the client machine and the virtual machine.
  • FIG. 13 is a block diagram depicting one embodiment of a session management component 1300 in a system providing access to a computing environment by an intermediate machine to a requesting machine.
  • the session management component 1300 includes an identification component 1302 , an execution component 1304 , and a management component 1306 .
  • the identification component 1302 is in communication with a virtual machine management component and receives an identification of a virtual machine providing a requested computing environment. In some embodiments, the identification component 1302 is in communication with the virtual machine management component 1200 . In one embodiment, the identification component 1302 receives an identification of an execution machine 30 ′ into which to launch the virtual machine. In some embodiments, the identification component 1302 identifies an execution machine on which a required hypervisor executes and into which to launch the virtual machine. In other embodiments, the identification component 1302 receives an identification of the execution machine. In one of these embodiments, the identification component 1302 receives the identification from the intermediate machine 30 .
  • the identification component 1302 further comprises a transceiver.
  • the transceiver in the identification component 1302 receives an identification of a user of the requesting machine and transmits the identification of the user to the virtual machine management component.
  • the transceiver receives an identification by a user of a type of computing environment requested and transmits the identification to the virtual machine management component 1200 .
  • the transceiver receives an identification by a user of a type of virtual machine requested and transmits the identification of the type of virtual machine requested to the virtual machine management component 1200 .
  • the identification component 1302 receives an identification of a virtual machine providing a requested computing environment, the virtual machine selected responsive to a received identification of a user of the requesting machine. In other embodiments, the identification component 1302 receives an identification of a virtual machine providing a requested computing environment, the virtual machine selected responsive to a received identification of a type of computing environment requested. In other embodiments, the identification component 1302 receives an identification of a virtual machine providing a requested computing environment, the virtual machine selected responsive to a received identification of a type of virtual machine requested.
  • the execution component 1304 launches the virtual machine into a hypervisor.
  • the hypervisor executes on an execution machine 30 ′.
  • the execution component 1304 is in communication with the identification component.
  • the execution component 1304 receives from the identification component 1302 an identification of an execution machine 30 ′ executing a hypervisor into which to launch the virtual machine.
  • the execution component 1304 launches the virtual machine into a hypervisor emulating hardware resources required to support the computing environment.
  • a virtual machine service component executes in the hypervisor.
  • a virtual machine service component executes in a guest operating system provided by a virtual machine executing in the hypervisor.
  • the virtual machine service component is in communication with the session management component 1300 and receives configuration information associated with the client machine 10 .
  • the management component 1306 establishes a connection between the requesting machine and the virtual machine and manages the connection.
  • the management component 1306 provides an internet protocol address associated with the virtual machine to the user of the requesting machine.
  • the management component 1306 provides an internet protocol address associated with an execution machine to the user of the requesting machine.
  • the management component 1306 provides a proxy for communication between the requesting machine and the virtual machine.
  • the management component 1306 establishes a connection between the requesting machine and the virtual machine using a presentation layer protocol.
  • the identification component 1302 , the execution component 1304 , and the management component 1306 may be provided as a single functional unit, or the functions provided by those components may be grouped into two or more components.
  • the session management component 1300 establishes and manages a user's virtual machine session.
  • the session management component 1300 provides functionality for, without limitation, locating a virtual machine, launching a hypervisor, launching a virtual machine in the hypervisor, connecting a user to the virtual machine, and managing the established connection.
  • the session management component 1300 publishes a plurality of available virtual machines.
  • the session management component 1300 provides, without limitation, enumeration of client drives, mapping of client drives to shared folders on the virtual machine, monitoring of the hypervisor, monitoring of an operating system provided by the virtual machine, and a virtual machine control panel to the user.
  • the session management component 1300 provides a virtual machine control panel to the user.
  • the virtual machine control panel may enable a user to switch to the virtual machine, power off the virtual machine, reset the virtual machine, or suspend the virtual machine.
  • the session management component 1300 provides the virtual machine control panel only to users authorized to access the functionality of the virtual machine control panel.
  • a virtual machine service component executes in the hypervisor.
  • the virtual machine service component is in communication with the session management component 1300 and receives configuration information associated with the client machine 10 .
  • the session management component 1300 creates a connection to the virtual machine service component, such as a TCP/IP connection, and communicates with the virtual machine service component over the created connection.
  • the session management component 1300 transmits information associated with the client machine 10 , such as initialization parameters or client monitor geometry, to the virtual machine service component.
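A sketch of the connection from the session management component to the virtual machine service component, and of the configuration information it carries (such as client monitor geometry), might look as follows; the port number and message format are illustrative assumptions.

```python
import json
import socket

def send_client_configuration(vm_address, client_info, port=5001):
    """Open a TCP/IP connection to the virtual machine service component and transmit
    configuration information associated with the client machine 10."""
    payload = json.dumps({
        "monitor_geometry": client_info.get("monitor_geometry"),   # e.g. "1280x1024"
        "client_drives": client_info.get("drives", []),
        "initialization": client_info.get("init_params", {}),
    }).encode()
    with socket.create_connection((vm_address, port)) as conn:
        conn.sendall(payload)
```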
  • the session management component 1300 identifies a folder containing an image of the identified virtual machine.
  • the folder contains configuration and data files comprising the virtual machine.
  • the session management component 1300 mounts the folder in the execution machine prior to launching the virtual machine.
  • the session management component 1300 copies definition data files associated with the virtual machine onto the execution machine.
  • the session management component 1300 may copy the definition data files back into the identified folder when a session is completed.
  • the configuration and data files are streamed to the execution machine, as described below.
  • the session management component 1300 enumerates in the virtual machine a plurality of drives associated with the client machine 10 .
  • the session management component 1300 creates a folder associated with each drive in the plurality of drives.
  • the session management component 1300 stores a folder associated with a drive in the plurality of drives in the mounted folder containing the identified virtual machine.
  • an enumeration of the stored folder associated with the drive is provided to a user of the client machine 10 .
  • a protocol stack located in the hypervisor or in the guest operating system enables drive mapping through other techniques, including techniques enabled by presentation layer protocols.
  • FIG. 14 is a block diagram depicting one embodiment of a system in which a drive associated with the client machine 10 is made available to a computing environment.
  • the client machine 10 has a connection ( 1 ) to an execution machine and a connection ( 2 ) to a plurality of drives available to a user of the client machine 10 .
  • the session management component 1300 creates a folder associated with each drive in the plurality of drives ( 3 ). In one embodiment, the session management component 1300 stores the created folder associated with a drive in the plurality of drives in a virtual machine folder 1002 , the mounted folder containing configuration and data files associated with the identified virtual machine. In another embodiment, the session management component 1300 generates a list of shared folders stored in the virtual machine folder 1002 .
  • the session management component 1300 notifies the virtual machine service component of the change to the virtual machine folder 1002 ( 4 ). In some embodiments, the session management component 1300 responds to changes in the client device by rebuilding a shared folder list in the virtual machine folder 1002 . In one of these embodiments, the session management component 1300 receives an identification of a modification to the drive associated with the client machine 10 . In another of these embodiments, the session management component 1300 transmits a notification to the virtual machine service component identifying the change to the virtual machine folder 1002 .
  • for each folder associated with a drive in the virtual machine folder 1002 , the virtual machine service component provides an indication of a mapped client drive to the virtual machine ( 5 ). In one embodiment, the virtual machine service component associates the mapped client drive with a drive letter on the virtual machine. In another embodiment, the virtual machine service component monitors for changes to the shared folder list in the virtual machine folder 1002 . In some embodiments, an enumeration of the stored folder associated with the drive is provided to a user of the client machine 10 .
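On the session management side, the drive mapping flow of FIG. 14 might be sketched as follows; the folder layout, the shared-folder listing file, and the notification callback are illustrative assumptions.

```python
import os

def share_client_drives(vm_folder, client_drives, notify_service):
    """Create one folder per client drive inside the virtual machine folder,
    rebuild the shared folder list, and notify the virtual machine service component."""
    shared = []
    for drive in client_drives:                    # e.g. ["C", "D"]
        path = os.path.join(vm_folder, f"client_drive_{drive}")
        os.makedirs(path, exist_ok=True)
        shared.append(path)
    with open(os.path.join(vm_folder, "shared_folders.txt"), "w") as listing:
        listing.write("\n".join(shared))
    notify_service("shared_folders_changed")       # service maps each folder to a drive letter
```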
  • the session management component 1300 enumerates in the virtual machine a plurality of printers associated with the client machine 10 . In one of these embodiments, the session management component 1300 accesses a printer service to acquire an authorization level required to enumerate a printer in the plurality of printers.
  • a printer associated with the client machine 10 is shared as a network printer and made accessible to the virtual machine as a network resource.
  • the virtual machine generates printer output using the TCP/IP and LPR protocols, and this output is intercepted and transmitted to the printer associated with the client machine 10 .
  • the virtual machine transmits printer output to a virtualized hardware resource provided by the hypervisor, such as a COM port on the virtual machine. The output is captured and transmitted to the printer associated with the client machine 10 .
  • a hypervisor may provide access to a virtual printer or printer port.
  • an execution machine identified by the intermediate machine executes a hypervisor emulating hardware resources required by the requested computing environment.
  • the hypervisor executes on the intermediate machine.
  • the hypervisor executes in a terminal services session executing on the intermediate machine.
  • the hypervisor executes on the execution machine.
  • the hypervisor executes in a terminal services session executing on the execution machine.
  • the hypervisor may be executed on the client machine 10 .
  • the hypervisor provisions a plurality of hardware resources on the execution machine for use by the requested computing environment.
  • the hypervisor partitions a plurality of hardware resources on the execution machine and makes the partition available for use by the requested computing environment.
  • the hypervisor emulates a plurality of hardware resources on the execution machine for use by the requested computing environment.
  • the hypervisor may partition hardware resources, emulate hardware resources, or provision hardware resources, or all three.
  • a hypervisor may emulate a device (such as a graphics card, network card, or disk), partition the execution time of the CPU, and virtualize registers, storage, and the underlying devices it uses to fulfill operations on its emulated hardware (such as RAM and network interface cards).
  • the session management component 1300 executes the hypervisor. In one of these embodiments, the session management component 1300 executes the hypervisor in full-screen mode. In other embodiments, the session management component 1300 monitors execution of the hypervisor. In one of these embodiments, the session management component 1300 transmits a notification to the virtual machine management component 1200 that the virtual machine has terminated when the session management component 1300 receives an indication that a virtual machine executing in the hypervisor has terminated. In another of these embodiments, the session management component 1300 receives a notification when the user logs out of a session.
  • the hypervisor provides a hardware abstraction layer between hardware on the execution machine and a computing environment provided by a virtual machine.
  • the hypervisor may be said to be executing “on bare metal.”
  • there is an operating system executing on the execution machine referred to as a host operating system, and the hypervisor executes from within the operating system.
  • Computing environments provided by a virtual machine may be referred to as guest operating systems.
  • the hypervisor executes in a terminal server session on a host operating system on the execution machine.
  • the hypervisor may emulate hardware resources required by a computing environment provided by a virtual machine.
  • the hypervisor may partition hardware and provide access to the partition.
  • the hypervisor may also virtualize existing hardware, making it appear to at least one domain on the hardware as if that domain were the only domain accessing the hardware.
  • output from the computing environment, or an application or resource executing within the computing environment is passed from the computing environment to a virtualized hardware resource provided by the hypervisor.
  • the hypervisor transmits the output to a component such as the session management component 1300 .
  • the session management component 1300 may transmit the received output to a client machine 10 from which a user accesses the computing environment.
  • the hypervisor redirects the output from the virtualized hardware resource to an actual hardware resource, such as a network interface card.
  • the hypervisor provides a hardware abstraction layer and creates an environment into which a virtual machine may be launched, the virtual machine comprised of configuration and data files creating a computing environment, which may comprise a guest operating system and application programs or other resource.
  • the hypervisor provides functionality for transmitting data directed to a virtualized hardware resource and redirecting the data to a requesting machine via the session management component 1300 .
  • the communication between the session management component 1300 and the hypervisor enables transmission of updates, such as audio updates, updates associated with a graphical user interface, or updates associated with serial COM port input/output, from the virtual machine to the requesting machine.
  • the communication enables transmission of keyboard or mouse or audio updates from the requesting machine to the virtual machine.
  • the hypervisor may map terminal server drives to the computing environment.
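The redirection path described above, in which output written to a virtualized hardware resource is forwarded to the requesting machine through the session management component, might be sketched like this; the queue-based interface between hypervisor and session management component is an illustrative assumption.

```python
import queue

def forward_updates(virtual_device, send_to_client, stop):
    """Drain output captured from a virtualized hardware resource (display, audio, or COM port)
    and forward each update to the requesting machine."""
    while not stop.is_set():
        try:
            update = virtual_device.get(timeout=1)   # output written by the computing environment
        except queue.Empty:
            continue
        send_to_client(update)                       # e.g. over a presentation layer protocol
```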
  • a virtual machine is configured for access by a particular client machine 10 .
  • the session management component 1300 receives an identification of a virtual machine already executing in the hypervisor.
  • the session management component 1300 launches the virtual machine in the hypervisor.
  • the session management component 1300 receives an identification of a folder containing configuration and data files comprising the virtual machine.
  • the session management component 1300 mounts the identified folder in the execution machine.
  • a virtual machine service component executes in a guest operating system executing within the virtual machine.
  • the virtual machine service component is a system service running in a network service account.
  • the virtual machine service component is configured to initiate execution automatically upon the execution of the computing environment.
  • the virtual machine service component communicates with the session management component 1300 .
  • the virtual machine service component executes in the hypervisor.
  • a virtual machine service component executes within the virtual machine.
  • after launching the virtual machine in the hypervisor, the session management component 1300 establishes a connection, such as a TCP/IP connection, with the virtual machine service component.
  • the virtual machine service component establishes the connection.
  • the connection may be a single multiplexed connection between the components or multiple independent connections.
  • the session management component 1300 uses the connection to transmit configuration information to the virtual machine service component.
  • the configuration information may be associated with a presentation layer protocol session executing on the client machine 10 in which output from the virtual machine is presented.
  • the configuration information may also include information associated with display settings and changes, client drive information and authentication data.
  • the virtual machine service component receives information associated with a printer to which the requesting machine has access. In one of these embodiments, the virtual machine service component accesses a network printer service to create in the virtual machine a printer connected to the printer to which the requesting machine has access.
  • the virtual machine service component transmits session status messages to the session management component 1300 .
  • the virtual machine service component transmits heartbeat messages to the session management component 1300 .
  • the virtual machine service component transmits keep-alive messages to the session management component 1300 , to prevent the session management component 1300 from shutting down the virtual machine.
  • the virtual machine service component transmits a message to the session management component 1300 providing an indication that the user of the client machine 10 has logged off, shut down, or suspended a session with the computing environment.
  • the virtual machine service component may receive the indication of the user's activity from an authentication module.
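The status and keep-alive traffic from the virtual machine service component could be sketched as a simple loop; the message names and the reporting interval are illustrative assumptions.

```python
import time

def status_loop(send, session_active, interval=10):
    """Periodically send keep-alive messages so the session management component does not
    shut down the virtual machine while a user session is active."""
    while session_active():
        send({"type": "heartbeat"})                  # keep-alive while the virtual machine is active
        time.sleep(interval)
    send({"type": "session_ended"})                  # user logged off, shut down, or suspended
```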
  • a request for access to a resource is received (step 802 ), a method for providing access to the resource is identified (step 804 ), and a virtualized environment may be selected to provide access to a resource (step 808 ).
  • a client machine 10 receives the request, identifies a method for providing access, and selects a virtualized environment to provide access to a resource.
  • a mobile computing device connects to a client machine 10 referred to as a computing device, which identifies a method for providing access to a computing environment, selects a portable computing environment residing in storage on the mobile computing device and provides access to the portable computing environment.
  • the storage device stores data associated with a computing environment, such as a portable computing environment, which in some embodiments includes virtualization software, a virtual machine image, and user data.
  • a computing device connects to the storage device, executes a virtual machine, and provides access to the computing environment responsive to data stored in the storage device.
  • the storage device 8905 stores the portable computing environment 8920 of one or more users.
  • the storage device 8905 may be any type and form of hard drive, including a micro hard drive.
  • the storage device 8905 may be any type and form of portable storage device, such as a flash drive or USB drive, or any type and form of portable storage medium, such as a CD or DVD.
  • the storage device 8905 comprises a flash card, a memory stick, multi-media card or a secure digital card.
  • the storage device 8905 may store applications including word processing or office applications, ICA clients, RDP clients, software to establish any type and form of virtual private network (VPN) or SSL VPN connection, software to accelerate network communications or application delivery or any other type and form of application.
  • the storage device 8905 may store a virtual machine image.
  • the storage device 8905 may comprise a transmitter for transmitting stored data to a computing device 8910 .
  • the storage device 8905 may comprise a transceiver for accessing stored data, transmitting stored data and receiving data for storage.
  • the storage device 8905 may comprise stored data comprising an application program for executing a virtual machine on a computing device.
  • the storage device 8905 is embedded in a mobile computing device. In other embodiments, the storage device 8905 is connected to a mobile computing device. In still other embodiments, the storage device 8905 comprises a portable storage device removable from a computing device.
  • the storage device 8905 stores data associated with a computing environment.
  • the data may comprise a portable computing environment 8920 .
  • the portable computing environment 8920 is considered portable in that the portable computing environment 8920 may be easily or conveniently carried and transported from one computing device 8910 to another computing device 8910 ′.
  • the portable computing environment 8920 is considered portable in that the computing environment may be established or executed on any suitable computing device 8910 with little or no changes to the computing device 8910 , or in a further embodiment, with little or no maintenance or administration.
  • the portable computing environment 8920 includes a plurality of files representing a desktop environment, or a portion thereof, of a computer system 100 , which a user desires to execute on the computing device 8910 .
  • the portable computing environment 8920 may represent an environment under which a user operates a home or office desktop computer.
  • the portable computing environment 8920 represents one or more applications to which a user has access.
  • the portable computing environment 8920 may include a virtual machine image 8925 .
  • the virtual machine image 8925 comprises a computing environment image, including any of the information, data, files, software, applications and/or operating system needed to execute a computing environment 8920 , including files needed to execute the computing environment 8920 via the virtualization software 8921 .
  • the virtual machine image 8925 comprises configuration and data files required to execute a virtual machine providing access to a computing environment requested by a user.
  • the virtual machine image 8925 comprises a virtual machine image as described above.
  • the portable computing environment 8920 may also include user data 8930 , including, without limitation, any data, information, files, software or applications of a user.
  • the user data 8930 is stored in, or as a part of, the virtual machine image 8925 .
  • the user data 8930 may be created, edited or provided by any software, program, or application of the storage device 8905 or of the computing device 8910 .
  • the portable computing environment 8920 may include virtualization software 8921 .
  • the virtualization software 8921 may comprise any suitable means or mechanisms for a user to access, read and/or write any user data 8930 included in or provided by the virtualization software 8921 and/or virtual machine image 8925 .
  • the virtualization software 8921 may track, manage and synchronize the access, reading and/or writing of user data 8930 during an established computing environment 8920 ′ with the user data 8930 provided on the storage device 8905 .
  • the user data 8930 may only be accessed via the virtualization software 8921 or the established computing environment 8920 ′.
  • any software, programs or applications of the storage device 8905 may access the user data 8930 when the storage device 8905 is not connected to the computing device 120 or when a computing environment 8920 ′ is not executing.
  • the user data 8930 may comprise data and files created during a session of an established computing environment 8920 ′.
  • the computing device 8910 may be any type and form of computer system as described in connection with FIG. 1A and FIG. 1B above.
  • the computing device 8910 is a client machine 10 as described above.
  • a connection between a computing device 8910 and a storage device 8905 provides a user of a client machine 10 with access to a requested resource.
  • the computing device 8910 receives a request for access to a resource when a connection is made between the computing device 8910 and the storage device 8905 .
  • a method for providing access to the resource is identified responsive to information received from the storage device 8905 .
  • the computing device 8910 has a storage element 128 . In another embodiment, the computing device 8910 has a network interface 118 ′ connected to network 150 . In still another embodiment, the computing device 8910 has a transceiver for accessing data stored in a storage device 8905 or in a computing device 8910 ′.
  • the computing device 8910 comprises an operational or performance characteristic not provided by the storage device 8905 .
  • the computing device 8910 comprises elements, such as a processor or a memory, which the storage device 8905 does not include.
  • the computing device 8910 provides an I/O device, display device, installation medium, or other peripherals, such as a keyboard or printer not available to the storage device 8905 .
  • the computing device 8910 may provide a feature, a resource, or peripheral desired to be used by the user of the storage device 8905 .
  • the user may want to access a file or an application provided on a remote machine 30 ′ available via a connection across the network 150 .
  • the computing device 8910 provides access to a network, such as machine farm 38 , not available to the storage device 8905 , or to a user of the storage device 8905 .
  • the computing device 8910 establishes a computing environment 8920 ′ based on the portable computing environment 8920 provided by the storage device 8905 .
  • the computing device 8910 establishes a virtual machine 8925 ′ and a virtualization layer 8922 to execute the computing environment 8920 ′ based on the virtualization software 8921 or 8921 ′, virtual machine image 8925 and/or user data 8930 .
  • virtualization allows multiple virtual machines 8925 ′ with heterogeneous operating systems to run in isolation, side-by-side, on the same physical machine 8910 .
  • the virtualization software 8921 may include a virtual machine image.
  • Virtual machines may include cross-platform X86 PC emulators, such as the products distributed by The Bochs Project at bochs.sourceforge.net, or VMware products manufactured and distributed by VMware, Inc. of Palo Alto, Calif., or products manufactured and distributed by Softricity, Inc., or the Virtuozzo products manufactured and distributed by SWSoft, Inc. of Herndon, Va., or the Microsoft® Virtual PC products manufactured and distributed by Microsoft Corporation of Redmond, Wash.
  • the virtualization software 8921 includes any of the AppStream products manufactured and distributed by AppStream Inc. of Palo Alto, Calif., or the AppExpress products manufactured and distributed by Stream Theory, Inc. of Irvine, Calif.
  • the computing device 8910 may use any other computing resources of computer system 100 b required by the computing environment 8920 ′.
  • the hypervisor 8923 provides a virtualized hardware resource required by the computing environment 8920 ′.
  • a hypervisor 8923 provides, via a virtualization layer 8922 , access to a hardware resource required for execution of a computing environment.
  • the hypervisor 8923 provisions the hardware resource.
  • the hypervisor 8923 virtualizes the hardware resource.
  • the hypervisor 8923 partitions existing hardware resources and provides access to a partitioned hardware resource.
  • a virtual machine 8925 ′ executing on a virtualization layer provides access to a computing environment 8920 ′.
  • a session management component 1300 executes the virtual machine 8925 .
  • the virtualization software 8921 or 8921 ′ executes the virtual machine 8925 .
  • the portable computing environment 8920 includes any type and form of software for virtualizing on a computing device a user-accessible resource, such as an operating system, desktop, application, and any hardware computing resources.
  • virtual machine image 8925 is accessed to execute a virtual machine 8925 ′.
  • the virtualization software 8921 or 8921 ′ accesses the virtual machine image.
  • the virtualization software 8921 may include software for virtualizing a server, such as the Microsoft Virtual Server products manufactured and distributed by Microsoft Corporation of Redmond, Wash., or the Linux Vserver products distributed by the Linux Vserver Project located at linux-vserver.org.
  • the virtualization software 8921 may also include an interpreter or just-in-time compiler, such as the JAVA Virtual Machine (JVM) originally manufactured by Sun Microsystems of Santa Clara, Calif., or the Common Language Runtime (CLR) interpreter manufactured by the Microsoft Corporation.
  • the computing device 8910 has the virtualization software 8921 ′ stored or installed in storage element 128 prior to a connection with the storage device 8905 .
  • the virtualization software 8921 ′ does not need to be installed on the computing device 8910 , and can, instead, be executed from the storage device 8905 .
  • the computing device 8910 installs and executes the virtualization software 8921 on a per connection basis.
  • the computing device 8910 may remove the virtualization software 8921 from storage element 128 upon termination of the established computing environment 8920 ′.
  • the computing device 8910 installs and executes the virtualization software 8921 on a first connection.
  • upon subsequent connections, if the computing device 8910 detects changes to the virtualization software 8921 , such as a newer version, the computing device 8910 updates the virtualization software 8921 or installs the newer version of the virtualization software 8921 . In other embodiments, the computing device 8910 obtains the virtualization software 8921 from a storage element 128 ′′ or a remote machine 30 accessible via network 150 .
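The per-connection install/update flow described in the preceding bullets can be illustrated with a minimal sketch; the paths, the VERSION-file convention, and the function names below are assumptions for the example, not part of the disclosure.

```python
import shutil
from pathlib import Path

# Hypothetical locations; the real storage element 128 and device layout may differ.
DEVICE_COPY = Path("/media/portable_env/virtualization/8921")
HOST_COPY = Path("/opt/virtualization/8921_prime")

def read_version(root: Path) -> str:
    """Read a version string from a hypothetical VERSION file."""
    version_file = root / "VERSION"
    return version_file.read_text().strip() if version_file.exists() else ""

def ensure_virtualization_software() -> Path:
    """Install, update, or reuse the virtualization software on the host."""
    device_version = read_version(DEVICE_COPY)
    host_version = read_version(HOST_COPY)

    if not HOST_COPY.exists():
        # First connection: install the copy carried on the storage device.
        shutil.copytree(DEVICE_COPY, HOST_COPY)
    elif device_version and device_version != host_version:
        # Later connection with a changed (e.g. newer) version: update the host copy.
        shutil.rmtree(HOST_COPY)
        shutil.copytree(DEVICE_COPY, HOST_COPY)
    return HOST_COPY

def remove_on_termination() -> None:
    """Optionally remove the per-connection install when the environment ends."""
    shutil.rmtree(HOST_COPY, ignore_errors=True)
```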
  • the virtualization software 8921 is used to establish a virtualization layer 8922 on the computing device 8910 .
  • the virtualization layer 8922 provides an abstraction layer that decouples or isolates an application or a hardware resource from the operating system.
  • the virtualization layer 8922 comprises an application to host or run another operating system or application, such as virtual machine 8925 .
  • the hypervisor 8923 comprises the virtualization software 8921 .
  • the session management component 1300 comprises the virtualization software 8921 .
  • the host computing device 8910 stores virtualization software 8921 ′ in storage element 128 .
  • the computing device 8910 accesses a remotely located copy of virtualization software 8921 ′.
  • the virtualization layer 8922 and/or virtual machine 8925 provide an execution environment on the computing device 8910 .
  • in one of these embodiments, each execution environment is a unique instance of the same execution environment, while, in another of these embodiments, each execution environment may be an instance of a different execution environment. Each execution environment may be isolated from and/or not accessible by another execution environment.
  • the virtualization layer 8922 and/or virtual machine 8925 provides an execution context, space or “sandbox” to isolate processes and tasks running on the same operating system.
  • the virtualization layer 8922 communicates with a session management component 1300 .
  • the session management component 1300 is software executing in a layer between a hypervisor 8923 or operating system of the computing device 8910 and one or more virtual machines 8925 that provide a virtual machine abstraction to guest operating systems.
  • the session management component 1300 may reside outside of the computing device 8910 and be in communication with a hypervisor 8923 or operating system of the computing device 8910 .
  • the session management component 1300 can load, run or operate the virtual machine image 8925 from the storage device 8905 to execute a virtual machine 8925 ′.
  • the session management component 1300 and hypervisor 8923 are incorporated into the same application, software or other executable instructions to provide the virtualization layer 8922 .
  • the session management component 1300 is in communication with a virtual machine service component executing within the computing environment 8920 .
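As a hedged illustration of how a session management component 1300 might load a virtual machine image 8925 from the mounted storage device and run it as a virtual machine 8925 ′, the sketch below shells out to QEMU; the command line, memory figure, and paths are stand-ins for whatever virtualization software 8921 is actually used.

```python
import subprocess
from pathlib import Path

class SessionManagementComponent:
    """Toy stand-in for session management component 1300."""

    def __init__(self, hypervisor_cmd: str = "qemu-system-x86_64"):
        # The hypervisor command is an assumption; any virtualization
        # software 8921 capable of running the image could be substituted.
        self.hypervisor_cmd = hypervisor_cmd

    def run_virtual_machine(self, image_path: Path) -> subprocess.Popen:
        """Load the virtual machine image and execute it as a virtual machine."""
        if not image_path.exists():
            raise FileNotFoundError(image_path)
        return subprocess.Popen([
            self.hypervisor_cmd,
            "-m", "2048",              # guest memory, illustrative value
            "-drive", f"file={image_path},format=qcow2",
        ])

# Usage: run the image directly from the mounted storage device 8905.
smc = SessionManagementComponent()
# smc.run_virtual_machine(Path("/media/portable_env/vm_image_8925.qcow2"))
```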
  • the computing device 8910 includes a loading mechanism 8940 , which may comprise software, hardware, or any combination of software and hardware.
  • the loading mechanism 8940 comprises an autorun configuration file.
  • the storage device 8905 may include the loading mechanism 8940 .
  • the storage device 8905 includes the loading mechanism 8940 in an autorun file.
  • a loading mechanism 8940 on the storage device 8905 establishes the computing environment 8920 ′ on the computing device 8910 based on the portable computing environment 8920 stored in the storage device 8905 .
  • the loading mechanism 8940 ′ of the computing device 8910 establishes the computing environment 8920 ′.
  • the loading mechanism 8940 of the storage device 8905 works in conjunction with the loading mechanism 8940 ′ of the computing device 8910 to establish the computing environment 8920 ′.
  • the loading mechanism 8940 comprises a driver, such as a device driver or a kernel or user-mode driver for connecting to and/or accessing the storage device 8905 , or the storage element 128 thereof.
  • the loading mechanism 8940 comprises any type and form of executable instructions, such as a program, library, application, service, process, thread or task for accessing the storage element 128 or storage device 8905 .
  • the loading mechanism 8940 accesses any type and form of data and information on the storage 128 to establish the user environment 8920 ′ in accordance with the operations discussed herein. For example, in some embodiments, the loading mechanism 8940 reads an autorun configuration file in storage element 128 or on storage device 8905 .
  • the loading mechanism 8940 comprises a plug-n-play (PnP) mechanism by which the operating system of the host computing device 8910 recognizes the storage device 8905 upon connection, and loads the drivers to connect to the storage device 8905 .
  • the loading mechanism 8940 , upon detection of a connection between the storage device 8905 and the computing device 8910 , initiates the loading, establishing and/or executing of the virtualization software 8921 and/or the user environment 8920 ′ on the computing device 8910 .
  • the loading mechanism 8940 may comprise any rules, logic, operations and/or functions regarding the authentication and/or authorization of establishing a computing environment 8920 ′ on the computing device 8910 based on the portable computing environment 8920 .
  • the loading mechanism 8940 may determine the existence of the virtualization software 8921 ′ on the computing device 8910 and/or the difference in versions between the virtualization software 8921 and virtualization software 8921 ′.
  • the loading mechanism 8940 may store, load, and/or execute the virtualization software 8921 or 8921 ′ on the computing device 8910 .
  • the loading mechanism 8940 may store, load, and/or execute the virtual machine image 8925 on the computing device 8910 as a virtual machine 8925 ′ providing access to the computing environment 8920 ′.
  • the loading mechanism 8940 may comprise or provide any type and form of user interface, such as graphical user interface or command line interface.
  • the virtualization software 8921 , portable computing environment 8920 and/or loading mechanism 8940 are designed and constructed in accordance with the U3 application design specification, or USB smart drive, provided by U3 LLC of Redwood City, Calif.
  • the loading mechanism 8940 may comprise a U3 launchpad program.
  • the virtualization software 8921 and/or portable user environment 8920 may comprise a U3-based application.
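One way the loading mechanism 8940 could read an autorun-style configuration and locate the pieces of the portable computing environment is sketched below; the file name, section, and keys are invented for the example and are not the U3 or autorun.inf format itself.

```python
import configparser
from pathlib import Path

# Hypothetical autorun-style file at the root of the storage device 8905, e.g.:
#
#   [portable_environment]
#   virtualization_software = virtualization/8921
#   virtual_machine_image   = images/vm_image_8925.qcow2
#   user_data               = user_data_8930/

def load_portable_environment(device_root: Path) -> dict:
    """Read the loading-mechanism configuration and return resolved paths."""
    config = configparser.ConfigParser()
    config.read(device_root / "autorun_portable.ini")
    section = config["portable_environment"]
    return {
        "virtualization_software": device_root / section["virtualization_software"],
        "virtual_machine_image": device_root / section["virtual_machine_image"],
        "user_data": device_root / section["user_data"],
    }
```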
  • a flow diagram depicts one embodiment of the steps taken in a method for providing access to a computing environment on a computing device via a storage device.
  • a method for providing access to a computing environment includes the step of storing, in a storage device, data associated with a computing environment (step 8950 ).
  • a computing device connects to the storage device (step 8960 ).
  • a virtual machine executing on the computing device provides access to the computing environment, based on the data stored in the storage device (step 8970 ).
  • a storage device 8905 stores data associated with a portable computing environment 8920 (step 8950 ).
  • the storage device 8905 stores user data associated with the computing environment.
  • the storage device 8905 stores a virtual machine image 8925 .
  • the storage device 8905 stores data associated with a computing environment, the computing environment comprising at least one application program.
  • the storage device 8905 stores data associated with a computing environment, the computing environment comprising an operating system.
  • the storage device 8905 stores data comprising an operating system. In another embodiment, the storage device 8905 stores data comprising an application program. In still another embodiment, the storage device 8905 stores an application program for executing a virtual machine on a computing device. In yet another embodiment, the storage device 8905 stores virtualization software for executing a virtual machine on a computing device.
  • the storage device 8905 may include a connector for establishing a connection between the storage device 8905 and a computing device.
  • the storage device 8905 resides in a computing device, such as a mobile computing device.
  • the storage device 8905 is embedded in a mobile computing device.
  • the storage device 8905 comprises a portable storage device removable from a computing device.
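A small sketch of what step 8950 might leave on the storage device, assuming a hypothetical directory layout; the entry names are illustrative only.

```python
from pathlib import Path

# Hypothetical on-device layout for the portable computing environment 8920;
# the directory names are assumptions for the example only.
EXPECTED_ENTRIES = [
    "virtualization",        # virtualization software 8921
    "images",                # virtual machine image 8925
    "user_data_8930",        # user data 8930
    "autorun_portable.ini",  # loading-mechanism configuration
]

def storage_device_is_complete(device_root: Path) -> bool:
    """Check whether the storage device carries everything step 8950 stores."""
    return all((device_root / entry).exists() for entry in EXPECTED_ENTRIES)
```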
  • a computing device connects to the storage device (step 8960 ).
  • the storage device 8905 may connect to the computing device 8910 by any suitable means and/or mechanism.
  • the storage device 8905 connects to a computing device 8910 via a mobile computing device.
  • the storage device 8905 is embedded in a mobile computing device connectable to the computing device 8910 .
  • a request may be received by the computing device 8910 for access to a resource.
  • the request is for a desktop environment.
  • the request is for an application or for a plurality of applications.
  • the request is for a virtual machine.
  • a determination may be made to provide access to the requested resource via a virtualized environment. In one of these embodiments, the determination is made as described above in connection with FIG. 8 . In another of these embodiments, the determination is made responsive to information received from the storage device 8905 , such as a rule requiring the determination.
  • the computing device 8910 accesses the storage device 8905 to access the portable computing environment 8920 . In another embodiment, the computing device 8910 obtains the virtualization software 8921 from the storage device 8905 to establish a computing environment 8920 ′. In still another embodiment, the computing device 8910 does not obtain the virtualization software 8921 from the storage device 8905 as the computing device 8910 has access to the virtualization software 8921 in storage element 128 ′ or via network 150 . In yet another embodiment, the computing device 8910 obtains portions of the virtualization software 8921 from the storage device 8905 .
  • the virtualization software 8921 on the storage device 8905 may be an updated version or have updated files to the virtualization software 8921 ′ on the computing device 8910 .
  • the storage device 8905 transmits information to the computing device 8910 . In one of these embodiments, the storage device 8905 transmits the information with a request for access to a resource.
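The determination described above, i.e. whether to provide the requested resource via a virtualized environment, possibly forced by a rule transmitted from the storage device, could look like the following sketch; the dictionary keys and resource-type names are assumptions.

```python
from typing import Optional

def should_virtualize(request: dict, device_rule: Optional[dict] = None) -> bool:
    """Decide whether to satisfy the request through a virtualized environment.

    `request` and `device_rule` are hypothetical dictionaries; the real
    determination may instead follow the policy evaluation described in
    connection with FIG. 8.
    """
    if device_rule and device_rule.get("require_virtualized_environment"):
        # A rule transmitted by the storage device can force the determination.
        return True
    # Otherwise virtualize only for resource types that benefit from isolation.
    return request.get("resource_type") in {"desktop", "virtual_machine"}

# Usage
print(should_virtualize({"resource_type": "desktop"}, None))              # True
print(should_virtualize({"resource_type": "application"},
                        {"require_virtualized_environment": True}))       # True
```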
  • a virtual machine executing on the computing device provides access to the computing environment, based on the data stored in the storage device (step 8970 ).
  • the computing device 8910 retrieves data from the storage device 8905 .
  • the computing device 8910 accesses the storage device 8905 to obtain a virtual machine image 8925 used to execute the virtual machine.
  • the computing device 8910 accesses the storage device 8905 to obtain data or information identifying a location of the portable computing environment 8920 that may be accessible to the computing device 8910 .
  • the storage device 8905 may comprise user data 8930 identifying a Uniform Resource Locator (URL) associated with a location on which a virtual machine image 8925 is stored, the URL accessible by the computing device 8910 via network 150 .
  • the computing device 8910 accesses a storage element identified by the user data 8930 , for example, a storage element or remote machine 30 on the network 150 storing the virtual machine image 8925 .
  • the computing device 8910 mounts the storage device 8905 as storage, such as a disk, available to the computing device 8910 . In one of these embodiments, the computing device 8910 mounts the storage device 8905 as removable media. In other embodiments, the loading mechanism 8940 accesses the storage device 8905 .
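A minimal sketch of resolving where the virtual machine image 8925 lives: first on the mounted storage device 8905 , otherwise at a URL carried in user data 8930 and fetched over network 150 . The file names and keys are hypothetical.

```python
from pathlib import Path
from urllib.request import urlretrieve

def resolve_virtual_machine_image(device_root: Path, cache_dir: Path) -> Path:
    """Return a local path to the virtual machine image 8925.

    Looks on the mounted storage device first; if the device instead carries
    user data 8930 naming a URL, fetches the image over network 150.
    """
    local_image = device_root / "images" / "vm_image_8925.qcow2"
    if local_image.exists():
        return local_image

    url_file = device_root / "user_data_8930" / "image_location.url"
    if url_file.exists():
        url = url_file.read_text().strip()
        cache_dir.mkdir(parents=True, exist_ok=True)
        target = cache_dir / "vm_image_8925.qcow2"
        urlretrieve(url, target)   # download from the remote machine 30
        return target

    raise FileNotFoundError("no virtual machine image or image location found")
```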
  • the computing device 8910 establishes an environment for executing or providing access to the computing environment 8920 ′.
  • a virtual machine may be executed in the computing environment 8920 ′ to provide access to a requested resource.
  • a virtual machine is the requested resource.
  • a virtual machine 8925 ′ executes a virtual machine 8925 ′′.
  • the computing device 8910 executes a virtual machine responsive to a virtual machine image 8925 stored in the storage device 8905 . In another embodiment, the computing device 8910 executes a virtual machine 8925 ′ responsive to the data stored in the storage device 8905 . In still another embodiment, the computing device 8910 executes the virtual machine responsive to a policy stored in the storage device.
  • the computing device 8910 retrieves data stored in the storage device 8905 .
  • the computing device 8910 uses an application program stored in the storage device 8905 to access the data.
  • the computing device 8910 provides access to a computing environment by executing an operating system providing access to one or more applications identified by information stored in the storage device, the operating system and the one or more applications having access to user data stored in the storage device 8905 .
  • the computing device 8910 installs and/or loads the virtualization software 8921 to establish the virtualization layer 8922 .
  • the virtualization software 8921 is designed and constructed as a portable application that can execute, load or establish the virtualization layer 8922 on the computing device 8910 without requiring installation of the virtualization software 8921 .
  • the virtualization software 8921 is automatically installed on the computing device 8910 via an installation script.
  • the virtualization software 8921 is installed without requiring a reboot.
  • the virtualization software 8921 is installed and the virtualization layer 8922 established transparently to a user.
  • the virtualization layer 8922 is established using the virtualization software 8921 ′ stored on the computing device 8910 or accessed via network 150 .
  • the computing device 8910 executes a hypervisor 8923 to establish the virtualization layer 8922 .
  • a hypervisor 8923 on the computing device 8910 and in communication with a hypervisor 8923 ′ on a remote machine 30 ′ establishes the virtualization layer 8922 .
  • a hypervisor 8923 in communication with a session management component 1300 establishes the virtualization layer 8922 .
  • the session management component 1300 identifies, provisions, and/or executes a virtual machine in the virtualization layer 8922 as described above in connection with FIG. 8 .
  • the loading mechanism 8940 establishes the virtualization layer 8922 .
  • the computing device 8910 establishes a virtualization layer 8922 in which a virtual machine service component executes.
  • the virtualization layer 8922 has been established prior to the storage device 8905 connecting to the computing device 8910 .
  • the virtualization layer 8922 may have been established for another computing environment 8920 ′ or during a previous connection of the same or a different storage device 8905 .
  • the computing device 8910 and/or loading mechanism 8940 establishes the virtualization layer 8922 and actuates, starts, or executes a session management component 1300 and/or hypervisor 8923 .
  • the computing device 8910 and/or loading mechanism 8940 executes session management component 1300 and/or hypervisor 8923 upon loading or executing a virtual machine 8925 .
  • the computing device 8910 provides access to the computing environment 8920 ′ based on the portable computing environment 8920 (step 8970 ).
  • the computing device 8910 and/or loading mechanism 8940 accesses the virtual machine image 8925 from storage device 8905 and executes the virtual machine image 8925 as a virtual machine 8925 ′ in the established virtualized environment 8922 .
  • the computing device 8910 and/or loading mechanism 8940 automatically loads, executes or otherwise establishes the computing environment 8920 with the virtualization layer 8922 upon detection of a connection over network 150 .
  • the computing device 8910 and/or loading mechanism 8940 automatically loads, executes or otherwise establishes the computing environment 8920 and the virtualization layer 8922 upon detection of existence or identification of the portable computing environment 8920 in storage element 128 .
  • a user may select the virtual machine image 8925 from the storage device 8905 for execution as a virtual machine 8925 ′ via any type and form of user interface.
  • the virtualization software 8921 , virtualization layer 8922 , hypervisor 8923 , or loading mechanism 8940 may display a user interface for a user to identify a virtual machine image 8925 , and/or to execute a virtual machine 8925 ′ based on a virtual machine image 8925 .
  • a client such as an ICA client, an RDP client, or an X11 client, executes on the computing device 8910 and provides the user interface to the user.
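The selection interface mentioned above could be as simple as the command-line sketch below; the image directory and file extension are assumptions for the example.

```python
from pathlib import Path

def choose_virtual_machine_image(device_root: Path) -> Path:
    """Command-line stand-in for the selection interface described above."""
    images = sorted((device_root / "images").glob("*.qcow2"))
    if not images:
        raise FileNotFoundError("no virtual machine images on the storage device")
    for index, image in enumerate(images, start=1):
        print(f"{index}. {image.name}")
    choice = int(input("Select a virtual machine image to run: "))
    return images[choice - 1]
```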
  • a user may access, read, and/or write user data 8930 during the course of using the established computing environment 8920 ′.
  • a user of the computing device 8910 may access, read and/or write the user data 8930 to the storage device 8905 .
  • a user of the computing device 8910 may edit or modify user data 8930 or may create new data and information in user data 8930 .
  • a user of the computing device 8910 may access, read, and/or write user data to the storage 128 ′ of the computing device 8910 .
  • the computing device 8910 may synchronize user data 8930 on the computing device 8910 with user data 8930 on the storage device 8905 .
  • the computing device 8910 uses the virtualization layer 8922 or the loading mechanism 8940 to synchronize the user data 8930 .
  • the storage device 8905 may have a program or application for synchronizing data between the storage device 8905 and the computing device 8910 .
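A naive last-writer-wins sketch of synchronizing user data 8930 between the storage device 8905 and the computing device 8910 ; a real synchronization program on the storage device would likely also handle conflicts and deletions.

```python
import shutil
from pathlib import Path

def synchronize_user_data(device_dir: Path, host_dir: Path) -> None:
    """Copy whichever copy of each file is newer to the other side."""
    host_dir.mkdir(parents=True, exist_ok=True)
    for side_a, side_b in ((device_dir, host_dir), (host_dir, device_dir)):
        for source in side_a.rglob("*"):
            if source.is_dir():
                continue
            target = side_b / source.relative_to(side_a)
            if not target.exists() or source.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(source, target)  # copy2 preserves modification times
```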
  • the storage device 8905 may disconnect from the computing device 8910 .