US11182214B2 - Allocating computing resources based on properties associated with location - Google Patents
- Publication number
- US11182214B2 (application US16/451,632)
- Authority
- US
- United States
- Prior art keywords
- location
- client device
- computing
- virtual machine
- user account
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/067—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components
- G06K19/07—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips
- G06K19/0723—Record carriers with conductive marks, printed circuits or semiconductor circuit elements, e.g. credit or identity cards also with resonating or responding marks without active components with integrated circuit chips the record carrier comprising an arrangement for non-contact communication, e.g. wireless communication circuits on transponder cards, non-contact smart cards or RFIDs
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5016—Session
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/502—Proximity
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45541—Bare-metal, i.e. hypervisor runs directly on hardware
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- VDI virtual desktop infrastructure
- Providing a VDI environment requires creating a virtualized version of a physical device, such as a server, a storage device, a central processing unit (CPU), a graphics processing unit (GPU), or other computing resources that can be accessed through a VDI client by a remote user.
- Other remote or virtualized computing resources can also be provided to users.
- a virtual machine is an emulation of a computer system and can be customized to include, for example, a predefined amount of random access memory (RAM), hard drive storage space, as well as other computing resources that emulate a physical machine.
- virtual machines can provide the equivalent functionality of a physical computer.
- Virtual machines can be executed remotely, in a data center for example, to provide remote desktop computer sessions for employees of an enterprise.
- FIG. 1 is a drawing of an example of a virtual desktop infrastructure (VDI) environment for predictive allocation of computing resources for virtual machines.
- FIG. 2 is a drawing showing an example architecture for predictive allocation of computing resources in a computing environment.
- FIG. 3 is a flowchart illustrating functionality implemented by components of the environment of FIG. 1 .
- FIG. 4 is a flowchart illustrating functionality implemented by components of the environment of FIG. 1 .
- The present disclosure relates to the predictive allocation of computing resources in a computing environment.
- the computing environment can provide a virtual desktop infrastructure (VDI) environment, another type of virtualized environment, or other computing services to users of an enterprise.
- various organizations are moving away from providing and maintaining purely physical desktops for their user bases and, instead, are moving towards providing VDI environments.
- an organization might utilize one or more virtual machines (VMs) to provide other types of computing services to its users, such as email access, development environments, testing environments, or other services that are deployed in one or more virtual or physical data centers.
- DRS distributed resource scheduler
- DPM distributed power management
- every keystroke and user input can be sent to a remotely executed VDI server or machine that is running a VDI session for the user.
- a client device of the user running a VDI client interacts with the VDI server and reflects each keystroke and user input on the display of the client device.
- a high latency situation can seriously degrade the user experience.
- While DPM and DRS services can attempt to redistribute VMs to save computing resources or for other reasons, they again cannot make predictions about from where a user might access a VDI session. Their capabilities are limited, especially in the context of a virtual desktop infrastructure environment.
- an end user is not bound to a particular virtual machine or, in other words, an end user can log on to any virtual machine and all the application and user data will be retained and provided to the user, regardless of which virtual machine is accessed.
- a virtual machine is only needed when being actively used by an end user.
- a computing environment can identify usage patterns of virtual machines (VMs), as well as users of those virtual machines, in a virtual desktop infrastructure environment.
- the computing environment can generate a predictive usage model to forecast a predicted location of a user at a future time.
- the overall efficiency of the virtualization environment is improved as computing resources are more efficiently allocated while the operational cost for running a data center can be reduced due to the reduction in energy costs.
- virtual machines can be assigned to end users based on geographic location, thereby reducing network latency in virtual desktop sessions.
- the present application and the examples described herein are directed to improving the performance of a computer network, namely, by improving the efficiency and operation of hosts and related computing resources in a data center that provides, for instance, a virtualization environment offering virtual desktop sessions for end users.
- the networked environment 100 can include a computing environment 103 and various computing systems 106 a . . . 106 b in communication with one another over a network 109 .
- the network 109 can include, for example, the Internet, intranets, extranets, wide area networks (WANs), local area networks (LANs), wired networks, wireless networks, other suitable networks, or any combination of two or more such networks.
- the networks can include satellite networks, cable networks, Ethernet networks, telephony networks, and other types of networks.
- the networked environment 100 can also be described as a virtual desktop infrastructure (VDI) environment or computing environment.
- the computing systems 106 can include a plurality of devices installed in racks 112 which can make up a server bank, computing cluster, or a computer bank in a data center or other like facility.
- the devices in the computing systems 106 can include any number of physical machines, virtual machines, and software, such as operating systems, drivers, hypervisors, and computer applications.
- a computing environment 103 can include an enterprise computing environment that includes hundreds or even thousands of physical machines, virtual machines, and other software implemented in devices stored in racks 112 , distributed geographically and connected to one another through the network 109 . It is understood that any virtual machine is implemented using at least one physical device.
- the devices in the racks 112 can include, for example, memory and storage devices, servers 115 a . . . 115 m , switches 118 a . . . 118 d , graphics cards (having one or more GPUs 121 a . . . 121 e installed thereon), central processing units (CPUs), power supplies, and similar devices.
- the devices, such as servers 115 and switches 118 can have dimensions suitable for quick installation in slots 124 a . . . 124 d on the racks 112 .
- the servers 115 can include requisite physical hardware and software to create and manage a virtualization infrastructure.
- the physical hardware for a server 115 can include a CPU, graphics card (having one or more GPUs 121 ), data bus, memory, and other components.
- the servers 115 can include a pre-configured hyper-converged computing device where a hyper-converged computing device includes pre-tested, pre-configured, and pre-integrated storage, server and network components, including software, that are positioned in an enclosure installed in a slot 124 on a rack 112 .
- a server 115 executes a virtual machine
- the server 115 can be referred to as a “host,” while the virtual machine can be referred to as a “guest.”
- the hypervisor can be installed on a server 115 to support a virtual machine execution space within which one or more virtual machines can be concurrently instantiated and executed.
- the hypervisor can include a VMware ESX™ hypervisor or a VMware ESXi™ hypervisor.
- the computing systems 106 are scalable, meaning that the computing systems 106 in the networked environment 100 can be scaled dynamically to include additional servers 115 , switches 118 , GPUs 121 , power sources, and other components, without degrading performance of the virtualization environment.
- the computing environment 103 can include, for example, a server or any other system providing computing capability.
- the computing environment 103 can include a plurality of computing devices that are arranged, for example, in one or more server banks, computer banks, computing clusters, or other arrangements.
- the computing environments 103 can include a grid computing resource or any other distributed computing arrangement.
- the computing devices can be located in a single installation or can be distributed among many different geographical locations. Although shown separately from the computing systems 106 , it is understood that in some examples the computing systems 106 can be a portion of the computing environment 103 .
- the computing environment 103 can include or be operated as one or more virtualized computer instances.
- the computing environment 103 is referred to herein in the singular. Even though the computing environment 103 is referred to in the singular, it is understood that a plurality of computing environments 103 can be employed in the various arrangements as described above.
- because the computing environment 103 communicates with the computing systems 106 and client devices 108 for end users over the network 109 , sometimes remotely, the computing environment 103 can be described as a remote computing environment 103 in some examples.
- the computing environment 103 can be implemented in servers 115 of a rack 112 and can manage operations of a virtualized computing environment.
- the computing environment 103 can be referred to as a management cluster in the computing systems 106 .
- the computing environment 103 can include one or more top-of-rack (TOR) devices.
- the servers 115 of a rack 112 can also provide data storage services for the purpose of storing user data, data relating to workloads 145 , and other data.
- the data can be replicated across different data centers or computing systems 106 that are geographically dispersed.
- a storage area network (SAN) or virtual storage area network (vSAN) can be implemented across different computing systems 106 and/or racks 112 .
- the data stored across the computing systems 106 can include virtual machine images, data accessed by the VM images, VDI images, VDI environments, applications, services, containers, and other data, applications, and services that are utilized by users of the enterprise.
- the data can be hosted in certain servers 115 or hosts depending upon where users who access certain data or applications are generally located.
- a data center providing a VDI environment for users in a particular city can be located in or near that city rather than on a different continent. Geographic and network proximity to the end users can assist with reducing network latency and improving the user experience for users utilizing the data and applications that they generally need.
- An administrator, when planning a network architecture, can assess what applications and services are generally utilized by its users and locate the applications and data near the respective user populations' physical locations.
- the computing environment 103 can include a data store 130 .
- the data store 130 can include memory of the computing environment 103 , mass storage resources of the computing environment 103 , or any other storage resources on which data can be stored by the computing environment 103 .
- the data store 130 can include memory of the servers 115 in some examples.
- the data store 130 can include one or more relational databases, such as structured query language (SQL) databases, non-SQL databases, or other relational or non-relational databases.
- the data stored in the data store 130 for example, can be associated with the operation of the various services or functional entities described below.
- the data store 130 can include a database or other memory that includes, for example, user data 139 and workload data 141 .
- User data 139 can store or reference information about the users of an enterprise, such as a user's calendar, email, and other user data. Additionally, user data 139 can include usage logs having records of user interactions with a VDI session served up by the computing systems 106 , VMs provided by the computing systems 106 , or other types of workloads 145 . User interactions can include, for example, log-on requests, log-off requests, particular actions performed in a virtualized desktop session, periods of activity or inactivity, as well as other interactions. Each interaction can be stored in the data store 130 in association with a timestamp describing the time the user interaction was performed. One or more location signals associated with the user's location can also be stored in the data store 130 as user data 139 .
- Workload data 141 can include metadata about the workloads 145 that are deployed across the computing systems 106 in an enterprise deployment.
- Workloads 145 can include VMs, VDI sessions, containerization services, applications, and other computing resources that users may require.
- the workload data 141 can identify which server 115 , GPU 121 , or set of servers 115 and GPUs 121 on which a particular workload 145 is executed. Accordingly, if a user attempts to access a VDI session through the access gateway 140 , the access gateway 140 can consult the workload data 141 to determine which server 115 to direct the user's request and where the user's VDI session will be executed.
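As an illustrative sketch (names and structures invented here, not taken from the patent), the gateway's consultation of workload data can be modeled as a lookup from workload identifiers to their current host assignments:

```python
# Illustrative sketch of an access gateway consulting workload metadata
# to route a user's VDI request; all structures are hypothetical.

# Workload data: maps a workload id to the server currently executing it.
WORKLOAD_DATA = {
    "vdi-session-alice": {"server": "server-115a", "gpu": "gpu-121c"},
    "vdi-session-bob": {"server": "server-115b", "gpu": "gpu-121a"},
}

def route_request(workload_id):
    """Return the server that should receive the user's request,
    or None if the workload has not yet been placed on a host."""
    record = WORKLOAD_DATA.get(workload_id)
    return record["server"] if record else None

print(route_request("vdi-session-alice"))  # server-115a
```

A real access gateway would, of course, query a backing store such as the data store 130 rather than an in-memory dictionary.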
- the components executed on the computing environment 103 can include, for example, a management service 135 as well as other applications, services, processes, systems, engines, or functionality not discussed in detail herein.
- the management service 135 can be executed to oversee the operation of the networked environment 100 through management of the computing systems 106 as well as the devices and software that make up the computing systems 106 .
- an enterprise, organization, or other entity can operate the management service 135 to oversee or manage the operation of devices in the racks 112 , such as servers 115 , switches 118 , GPUs 121 , power supplies, cooling systems, or other components.
- end users can be provided with a familiar, personalized environment that they can access from any number of devices from any location while the administrators gain centralized control, efficiency, and security by having desktop data in the data center.
- locating the user's VDI session to a server 115 in relative proximity to the user's location can improve network latency between the user's client device 108 and the server 115 .
- the management service 135 , through the prediction engine 155 , can generate a predicted location for users so that workloads 145 such as a VDI session can be located in a computing system 106 that optimizes latency to the user's client device 108 .
- the management service 135 can include one or more access gateways 140 and a prediction engine 155 .
- the access gateway 140 can include an application or other service that acts as a broker for client connections by authenticating and directing incoming user requests to an appropriate virtual machine, virtual desktop, or server 115 .
- the access gateway 140 can include an application or other service that serves any request from a client device 108 for a virtual desktop.
- the access gateway 140 and/or the desktop manager 142 can be configured to store data pertaining to requests from the client devices 108 in the usage log 138 .
- the data stored in the usage log 138 can include, for example, a type of request performed, a timestamp at which the request was received, as well as other information.
- the management service 135 can also include a usage analysis engine 150 and a prediction engine 155 .
- the prediction engine 155 can process data from user data 139 , such as usage logs, a user's calendar, external data sources, or third party hosted services with which the user has an account to generate a predicted location of the user.
- the prediction engine 155 can utilize a predictive usage model that generates a predicted location of the user for a particular point in time in the future based on the user's past behavior and an analysis of the user data 139 corresponding to the user.
- a predictive usage model can comprise a machine learning model that receives the user data 139 and data from external sources as inputs.
- the model can generate a predicted location as its output and be trained by the actual location of other users in the enterprise user population and the respective user data 139 of the same enterprise user population.
- the prediction engine 155 can apply the predictive usage model to forecast where one or more workloads 145 that the user is likely to utilize should be located within a geographically dispersed set of computing systems 106 . Once a predicted location is generated, the prediction engine 155 can also assess the network conditions of the predicted location.
- the prediction engine 155 can also utilize the capabilities of the management service 135 to locate the workloads 145 that the user is expected to utilize on one or more servers 115 to optimize the user experience.
- optimizing the user experience involves locating the workloads 145 on a server such that predicted network latency is minimized. Therefore, in the case of a VDI session and the corresponding data needed by the VDI session of the user, both can be relocated to available servers 115 in a rack 112 or computing system 106 that is closest to the predicted location or that which has the lowest latency to a hypothetical client device 108 located in the predicted location.
- the lowest latency to the hypothetical client device 108 can be identified by assessing network latency from the computing system 106 to one or more network addresses in the geographical area.
- These network addresses can include VPN access points, other servers, test endpoints, or other client devices 108 of other users in the geographical area.
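A minimal sketch of this lowest-latency assessment, assuming canned round-trip-time samples in place of live probes to those endpoints (all names and figures are illustrative):

```python
from statistics import median

# Simulated round-trip-time samples (ms) to probe endpoints near each
# candidate computing system; real code would measure these on the wire.
RTT_SAMPLES = {
    "dc-frankfurt": [18.2, 19.1, 17.8, 22.5],
    "dc-london": [9.4, 10.1, 9.8, 11.0],
    "dc-virginia": [88.0, 91.2, 87.5, 90.3],
}

def lowest_latency_site(samples):
    """Pick the computing system whose median RTT to regional probe
    endpoints (VPN access points, test endpoints, etc.) is smallest."""
    return min(samples, key=lambda site: median(samples[site]))

print(lowest_latency_site(RTT_SAMPLES))  # dc-london
```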
- Workloads 145 can refer to an application or process executed by a server 115 , switch 118 , GPU 121 , or other physical or virtual component that has been deployed onto hardware within a computing system 106 or data center.
- the management service 135 can orchestrate deployment and management of workloads 145 onto servers 115 across a fleet of servers 115 in various geographic locations and data centers.
- the workloads 145 can be associated with virtual machines or other software executing on the servers 115 .
- the workloads 145 can include tasks to be processed to provide users of an enterprise with remote desktop sessions or other virtualized computing sessions.
- the workloads 145 can also represent containerized applications that are running to provide services to users of the enterprise.
- a workload 145 can require multiple servers 115 to execute. In other instances, a workload 145 can be executed on a single server 115 . In many cases, multiple workloads 145 , such as multiple VDI sessions, can be deployed on a single server 115 and on data storage resources within the same rack 112 as the server 115 .
- the management service 135 can maintain a listing of active or inactive workloads 145 as well as oversee the assignment of various workloads 145 to various devices in the computing systems 106 . For instance, the management service 135 can reassign a workload 145 from a host lacking available resources to a server 115 that has resources sufficient to handle the workload 145 . The workloads 145 can be routed to various servers 115 by the switches 118 as network traffic 148 a . . . 148 b.
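The resource-based assignment described above can be sketched as a simple capacity check; the server names, capacities, and demand figures below are hypothetical:

```python
# Hypothetical sketch of assigning a workload to a server with
# sufficient free resources; capacities and demands are illustrative.
SERVERS = {
    "server-115a": {"free_cpus": 2, "free_ram_gb": 8},
    "server-115b": {"free_cpus": 16, "free_ram_gb": 64},
}

def place_workload(demand, servers):
    """Return the first server whose free CPU and RAM cover the
    workload's demand, or None when no server has enough headroom."""
    for name, free in servers.items():
        if free["free_cpus"] >= demand["cpus"] and free["free_ram_gb"] >= demand["ram_gb"]:
            return name
    return None

print(place_workload({"cpus": 4, "ram_gb": 16}, SERVERS))  # server-115b
```

A production scheduler would weigh many more dimensions (GPU, storage, locality); this only illustrates the sufficiency test.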
- a diagram 200 is provided showing an example of computing architecture for predictive allocation of computing resources according to the disclosure.
- the computing resources can involve workloads 145 , such as VDI sessions, VMs, or other applications and services that are executed by a computing system 106 to provide data and services to a client device 108 of a user.
- the diagram 200 can include hosts 203 a . . . 203 n (collectively “hosts 203 ”), which can include servers 115 that can execute one or more virtual machines 206 a . . . 206 n .
- the virtual machines 206 can provide a VDI session or other computing resources when requested by a client device 108 .
- a client device 108 through a client application running on the client device 108 , can send a log-on request to the access gateway 140 , which can authenticate the user or client device 108 .
- the access gateway 140 can identify one of the virtual machines 206 to utilize in serving up a virtual desktop for the client device 108 .
- the virtual machines 206 selected by the access gateway 140 can be selected based upon the user account associated with the user and where the user account has been assigned a host 203 .
- the access gateway 140 can select the appropriate host 203 based upon the workload data 141 , which can store a host assignment that relates the requested workload and a host 203 .
- the host assignment can be based on properties of the user account associated with the logon request.
- the host assignment can be generated by the management service 135 based on an office location assigned to the user, which can be reflected in the user data 139 .
- the host assignment can have a default location based on the assigned office location of the user, which can be an office of the enterprise or a geographic area if the user is a remote user.
- the host assignment can assign the user to a particular data center in a particular geographic location that is nearest to the user's assigned location.
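Assuming each data center's coordinates are known, the nearest-data-center assignment can be sketched with a great-circle distance comparison (the data-center names and coordinates here are invented for illustration):

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical data centers keyed by name, with (latitude, longitude).
DATA_CENTERS = {"us-east": (38.9, -77.0), "eu-west": (53.3, -6.3), "ap-south": (19.1, 72.9)}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) pairs in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_data_center(user_location):
    """Pick the data center with the smallest great-circle distance
    to the user's assigned or predicted location."""
    return min(DATA_CENTERS, key=lambda dc: haversine_km(DATA_CENTERS[dc], user_location))

# A user located in Dublin maps to the eu-west data center.
print(nearest_data_center((53.35, -6.26)))  # eu-west
```

Geographic distance is only a proxy; the latency probing discussed elsewhere in the disclosure can refine the choice.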
- the prediction engine 155 can receive location signals that are associated with a user account.
- Location signals can include usage data associated with a client device 108 , such as the location of the client device, the data accessed by the client device, the workloads or VMs utilized by the device, and other signals.
- the location signals can allow the prediction engine 155 to generate a predicted location associated with the user account or the client device 108 that is associated with the user account.
- the location signals can take various forms.
- a location signal can be a location indication on a calendar event on the user's calendar.
- the user data 139 can store or reference the user's enterprise calendar, which can include meetings, events, and appointments. Accordingly, the prediction engine 155 can take as an input the location of a calendar event at a particular time or day on the user's calendar to generate a predicted location of the user.
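A minimal sketch of this calendar-based signal, assuming events carry a start time, end time, and location (all event data below is invented):

```python
from datetime import datetime

# Hypothetical calendar entries from a user's enterprise calendar.
EVENTS = [
    {"start": datetime(2021, 6, 1, 9), "end": datetime(2021, 6, 1, 17), "location": "Palo Alto office"},
    {"start": datetime(2021, 6, 2, 9), "end": datetime(2021, 6, 2, 12), "location": "Bangalore office"},
]

def calendar_location_signal(events, when):
    """Return the location of the event covering `when`, if any;
    this becomes one input signal to the prediction engine."""
    for event in events:
        if event["start"] <= when < event["end"]:
            return event["location"]
    return None

print(calendar_location_signal(EVENTS, datetime(2021, 6, 2, 10)))  # Bangalore office
```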
- another location signal can include a location indication received from a radio frequency identification (RFID) or near field communication (NFC) reader that can obtain a swipe or tap from an employee badge, ID card, or client device 108 .
- a location signal can be input into the prediction engine 155 as an indication that the user is located in or near the location of the reader.
- many buildings and facilities of an enterprise have physical access control systems that require the user to tap or swipe a badge to authenticate with a reader that unlocks a door or entrance gate. Accordingly, an indication from a reader or access control system that a user has authenticated with the reader can be input as a location signal to the prediction engine 155 .
- Another location signal can include a network address of a client device 108 that is used by the client device 108 when accessing a network accessible service of the enterprise.
- the network address of the client device 108 can be determined by identifying a network address through which the client device 108 communicates with the management service 135 .
- the network address can be determined from the IP address header or other connection properties associated with the client device. Accordingly, a physical or geographic location can be estimated or determined from the network address of the client device 108 . Therefore, the network address can be provided as a location signal to the prediction engine 155 .
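This estimation can be sketched with a hypothetical prefix-to-region table standing in for a real IP geolocation service:

```python
import ipaddress

# Hypothetical mapping from network prefixes to coarse regions; a real
# deployment would query an IP geolocation service instead.
PREFIX_REGIONS = {
    ipaddress.ip_network("203.0.113.0/24"): "ap-south",
    ipaddress.ip_network("198.51.100.0/24"): "eu-west",
}

def region_from_address(address):
    """Estimate a coarse region for a client's source address,
    or None when the address matches no known prefix."""
    ip = ipaddress.ip_address(address)
    for network, region in PREFIX_REGIONS.items():
        if ip in network:
            return region
    return None

print(region_from_address("198.51.100.42"))  # eu-west
```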
- Another location signal can include virtual private network (VPN) usage associated with the user or client device 108 of the user.
- the enterprise might require mobile users to utilize a VPN capability to access network resources or to secure network traffic coming from or to the client device 108 .
- a VPN gateway can record a network address or a location of a client device 108 that is accessing a VPN capability provided by the enterprise. Accordingly, a physical or geographic location can be estimated or determined from the properties of a VPN connection between the client device 108 and a VPN gateway. Therefore, the properties of a VPN connection can be provided as a location signal to the prediction engine 155 .
- a location signal can include data from third party data sources, such as weather data or data about network conditions between the client device 108 and the access gateway 140 . If inclement weather is imminent in a physical location associated with the client device 108 , this can be a signal that the workload 145 utilized by the user account associated with the client device 108 should be relocated or reconfigured. If network conditions associated with the client device 108 are degraded such that bandwidth or latency does not meet a bandwidth threshold or latency threshold, this can also be provided to the prediction engine 155 as a location signal.
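The threshold check described above can be sketched as follows; the threshold values are illustrative, not taken from the disclosure:

```python
# Sketch of turning degraded network conditions into a location signal;
# the thresholds below are hypothetical.
BANDWIDTH_THRESHOLD_MBPS = 10.0
LATENCY_THRESHOLD_MS = 150.0

def degraded_conditions_signal(bandwidth_mbps, latency_ms):
    """Return True when measured bandwidth falls below the bandwidth
    threshold or latency exceeds the latency threshold, suggesting the
    user's workload may need to be relocated or reconfigured."""
    return bandwidth_mbps < BANDWIDTH_THRESHOLD_MBPS or latency_ms > LATENCY_THRESHOLD_MS

print(degraded_conditions_signal(bandwidth_mbps=4.0, latency_ms=80.0))  # True
```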
- a location signal can include the current physical or geographic location of a client device 108 .
- the client device 108 , in some scenarios, can periodically report its location to the management service 135 or a service with which the client device 108 is enrolled as a managed device. Accordingly, the location history of the client device 108 can also be provided to the prediction engine 155 as a location signal.
- the prediction engine 155 can also receive data about a user account from third party hosted services that a user is associated with.
- third party hosted services such as a customer relationship management tool. Access to these accounts can be accomplished through a federated authentication scheme.
- a data engine can obtain data from various third party services utilizing an authentication token on behalf of a user account and feed data related to the user's location into the prediction engine 155 as location signals.
- the prediction engine 155 using the location signals that are provided as inputs, can utilize a machine learning model to generate a predicted location for users.
- the predicted location can be generated daily at the start of a user's typical workday.
- the machine learning model can be trained using historical location data of a population of users and/or client devices 108 within the enterprise and the location signals corresponding to the historical location data for those users.
- the machine learning model can also be retrained on an ongoing or periodic basis using the location history of users and/or client devices 108 .
- the machine learning model can comprise an online learning model that continues to train on new detected locations or incorrectly predicted VM transfers as that data is made available. Accordingly, the model can be dynamically updated and trained on newer data regarding physical locations, network locations, or resultant latency experienced by users.
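The online-learning idea above can be illustrated with a deliberately simple incremental model: it keeps per-user, per-weekday frequency counts of detected locations and predicts the most frequent one, updating as each new observation arrives. This toy frequency model is an assumption for illustration only, not the patent's actual machine learning model.

```python
from collections import defaultdict, Counter

class OnlineLocationModel:
    """Toy online model: predict the most frequently observed location
    for a (user, weekday) pair, updating incrementally."""

    def __init__(self):
        self._counts = defaultdict(Counter)  # (user, weekday) -> Counter

    def update(self, user, weekday, location):
        # One online training step: incorporate a newly detected location.
        self._counts[(user, weekday)][location] += 1

    def predict(self, user, weekday, default=None):
        counter = self._counts.get((user, weekday))
        if not counter:
            return default
        return counter.most_common(1)[0][0]
```

Because `update` is cheap, the model can be refreshed continuously as new detected locations or mispredicted transfers are reported, matching the dynamic retraining behavior described above.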
- a general model trained on aggregated data from all other employees can be used as a starting point for predictions until location history is accumulated for the user.
- user data such as job title, position, etc., could be used to find similar employees in the company and borrow their model as a starting point.
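The cold-start strategy above can be sketched as a similarity lookup: score existing employees by attribute overlap with the new user and borrow the best match's model as a starting point. The attribute names and the overlap scoring are assumptions for illustration.

```python
def similarity(a, b):
    """Count matching profile attributes between two user records."""
    keys = ("job_title", "department", "office")
    return sum(1 for k in keys if a.get(k) is not None and a.get(k) == b.get(k))

def borrow_model(new_user, existing_users):
    """Return the most similar existing user (whose model would be
    borrowed), or None when no attributes overlap at all."""
    best, best_score = None, 0
    for candidate in existing_users:
        score = similarity(new_user, candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

When `borrow_model` returns None, the general model trained on aggregated data from all employees would be the fallback starting point.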
- the prediction engine 155 can generate predicted location for a user and then take action with respect to VMs 206 or workloads 145 with which the user is associated. For example, the prediction engine 155 can determine that a predicted location has changed. The prediction engine 155 can also assess the predicted latency in the predicted location for a client device 108 associated with the user account to access VMs 206 associated with services that are reflected in a historical usage pattern of the user. The assessment can involve determining whether a host 203 that is closer to the predicted location is available to host the VMs 206 that are expected to be used by the user and migrating the VMs 206 or other workloads 145 to the closer host 203 .
- the migration can be performed by the prediction engine 155 prior to business hours so that the VMs 206 are migrated before the user needs to use them.
- the VMs 206 that are migrated can be associated with a VDI session of the user or other computing services or resources that are provided to the user.
- the assessment can also involve calculating a predicted latency to a hypothetical client device 108 in the predicted location.
- the lowest latency to the hypothetical client device 108 can be identified by assessing network latency to one or more network addresses in a geographical area to which the computing system 106 can connect. These network addresses can include VPN access points, other servers, test endpoints, or other client devices 108 of other users in the geographical area.
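The probing step above amounts to measuring latency to each candidate endpoint in the region and taking the minimum. In this sketch, `probe` is a stand-in with hard-coded values; a real implementation would time an ICMP or TCP round trip, and the endpoint names are illustrative assumptions.

```python
def probe(endpoint):
    # Stand-in measurement: a real implementation would time a round
    # trip to the endpoint. Values here are fabricated for illustration.
    simulated = {"vpn-gw.atl": 22.0, "test.atl": 35.5, "client-7.atl": 28.1}
    return simulated[endpoint]

def lowest_regional_latency(endpoints, measure=probe):
    """Return (best_endpoint, latency_ms) over candidate endpoints in a
    geographical area, used as a proxy for latency to a hypothetical
    client device there."""
    results = {e: measure(e) for e in endpoints}
    best = min(results, key=results.get)
    return best, results[best]
```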
- the prediction engine 155 can determine that latency to a client device 108 in the predicted location is simply too degraded to provide an acceptable user experience. The prediction engine 155 can make this determination by assessing whether network conditions associated with a hypothetical client device 108 in the predicted location are degraded such that bandwidth or latency do not meet a bandwidth threshold or latency threshold. In this scenario, rather than simply migrating the VMs 206 used by the user to another host 203 , the prediction engine 155 can obtain data needed for the VM 206 to be executed on the client device 108 in a VM client. The prediction engine 155 can then obtain a copy of a VM image and transmit the VM image to the client device 108 .
- the VM image can be pushed to the client device 108 or provided as a download option once the user requests access to a VDI session, VM, or other computing resource through the access gateway 140 .
- the client device 108 can then execute the VM image using a VM client on the client device 108 .
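The fallback decision described above can be reduced to a two-branch check: if even the best reachable host cannot satisfy the latency or bandwidth thresholds from the predicted location, ship a VM image for local execution instead of migrating. The threshold values and return labels are assumptions for this sketch.

```python
def placement_decision(best_host_latency_ms, client_bandwidth_mbps,
                       max_latency_ms=100.0, min_bandwidth_mbps=10.0):
    """Choose between migrating the VM to the best host and pushing a
    VM image to the client for local execution in a VM client."""
    if (best_host_latency_ms > max_latency_ms
            or client_bandwidth_mbps < min_bandwidth_mbps):
        return "push_vm_image_to_client"
    return "migrate_vm_to_best_host"
```

In practice the image push would happen ahead of time (for example, before business hours), so the degraded link is not in the critical path when the user starts working.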
- the prediction engine 155 can also migrate data accessed by a user using a client device 108 in addition to VMs 206 and workloads 145 .
- the prediction engine 155 can migrate the files and data that are accessed by the user to a host 203 that is closer to the predicted location generated by the prediction engine 155 .
- the prediction engine 155 can optimize the user experience for network-accessible applications and data.
- the prediction engine 155 can migrate recently accessed files or all files associated with a user account.
- the prediction engine 155 can replicate files to a host 203 that is geographically closer to the predicted location rather than migrating or moving them.
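The contrast between migrating and replicating can be shown with plain dicts standing in for per-host file stores; this is a hedged sketch, not the actual storage mechanism. Migration moves a file to the closer host, while replication copies it there and keeps the original.

```python
def migrate(stores, src, dst, name):
    """Move a file: it exists only on the destination afterwards."""
    stores[dst][name] = stores[src].pop(name)

def replicate(stores, src, dst, name):
    """Copy a file: it exists on both hosts afterwards."""
    stores[dst][name] = stores[src][name]
```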
- the prediction engine 155 can also migrate data and VMs 206 to hosts 203 that may not be physically closer to the predicted location but that have a lower expected latency to the predicted location.
- the prediction engine 155 can identify a host 203 or set of hosts 203 having the lowest expected latency by performing a network test to one or more network addresses in various geographical areas to which the computing system 106 can connect. These network addresses can again include VPN access points, other servers, test endpoints, or other client devices 108 of other users in the geographical area.
- the prediction engine 155 can also migrate data and VMs 206 to hosts 203 that may not be physically closer to the predicted location based on policy reasons. For example, privacy or data security policies might mandate that a user's VMs 206 or data be located in a particular jurisdiction because migrating data outside of a certain region could create privacy or regulatory issues for the enterprise. In this scenario, the VMs 206 and data should be migrated to the best hosts 203 from a latency perspective that are located within permitted jurisdictions or physical location. From a data security perspective, data security concerns of the enterprise might cause the enterprise to have a policy in place that prohibits the VMs 206 or data to be located on hosts 203 within a particular country or region.
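Policy-constrained host selection, as described above, can be sketched as a filter followed by a latency minimum: exclude hosts in forbidden jurisdictions, then pick the lowest expected latency among what remains. The host record fields are assumptions for illustration.

```python
def select_host(hosts, forbidden_jurisdictions):
    """hosts: list of dicts with 'id', 'jurisdiction', and
    'expected_latency_ms'. Returns the permitted host with the lowest
    expected latency, or None if policy leaves no legal placement."""
    permitted = [h for h in hosts
                 if h["jurisdiction"] not in forbidden_jurisdictions]
    if not permitted:
        return None  # caller must decide how to handle an empty set
    return min(permitted, key=lambda h: h["expected_latency_ms"])
```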
- Referring to FIG. 3 , shown is a flowchart that provides one example of the operation of a portion of the networked environment 100 .
- the flowchart of FIG. 3 can be viewed as depicting an example of elements of a method implemented by the management service 135 or the prediction engine 155 executing in the computing environment 103 according to one or more examples.
- FIG. 3 illustrates how the prediction engine 155 can train a machine learning model that can generate location predictions for client devices 108 that are associated with user accounts in an enterprise.
- the separation or segmentation of functionality as discussed herein is presented for illustrative purposes only.
- the prediction engine 155 can obtain a location history over a period of time for a population of client devices 108 associated with the enterprise.
- the location history can be collected from devices that are enrolled as managed devices with a device management service. Additionally, the location history can also include location predictions that were previously generated by the prediction engine 155 . In this way, a portion of the training data used to train the machine learning model can include previous location predictions generated by the model. Accordingly, this portion of the training set can also identify whether the previous location predictions were accurate.
- the prediction engine 155 can obtain location signals over the period of time for the population of client devices 108 associated with the enterprise.
- the location signals are data that the prediction engine 155 can use to generate predicted locations.
- the data provided to the machine learning model as training data can also include an indication of network latency experienced by the client device 108 in its interactions with a host 203 that was assigned to handle a user's request for a computing resource.
- a portion of the training data used to train the machine learning model can include latencies previously experienced by the client devices 108 so that the model can be trained to take steps to improve latency.
- the training data can further include the steps that were taken to migrate a VM or to generate a VM image for execution by the client device 108 .
- the location history and the location signals corresponding to the period of time can be input as training data into the machine learning model.
- the model can be a supervised, unsupervised, semi-supervised, or reinforcement learning model. Different model types can require different filters or sets of training data. Accordingly, the type of training data provided to the machine learning model can be varied according to how the model is configured to learn. In general, the model can be set up to generate location predictions that are accurate as well as to optimize for the lowest possible network latency. Therefore, the location predictions generated by the model may not be the most geographically accurate as long as the model is optimizing for network latency.
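One training record assembled from the inputs described above might pair the location signals for a period with the detected location as the label, plus the latency that resulted and, for prior predictions, whether they turned out accurate. The field names in this sketch are assumptions for illustration.

```python
def make_training_example(signals, detected_location,
                          observed_latency_ms=None,
                          prior_prediction=None):
    """Build one training record: location signals as features, the
    detected location as the label, with optional latency feedback and
    prior-prediction accuracy folded into the features."""
    example = {"features": dict(signals), "label": detected_location}
    if observed_latency_ms is not None:
        example["features"]["observed_latency_ms"] = observed_latency_ms
    if prior_prediction is not None:
        example["features"]["prior_prediction_correct"] = (
            prior_prediction == detected_location)
    return example
```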
- the prediction engine 155 can initiate training of the machine learning model.
- the prediction engine 155 can train the model on an ongoing basis as location predictions are being generated by the model and new training data is generated through usage of the computing systems 106 .
- the prediction engine 155 can periodically train the model to free computing resources for the model to generate location predictions. Thereafter, the process can proceed to completion.
- Referring to FIG. 4 , shown is a flowchart that provides one example of the operation of a portion of the networked environment 100 .
- the flowchart of FIG. 4 can be viewed as depicting an example of elements of a method implemented by the prediction engine 155 executing in the computing environment 103 according to one or more examples.
- FIG. 4 illustrates how the prediction engine 155 can generate location predictions and optimize the deployment of resources in a computing environment based on the changing location of users.
- the separation or segmentation of functionality as discussed herein is presented for illustrative purposes only.
- the prediction engine 155 can use the machine learning model to forecast the location of client devices 108 , and therefore users, to optimize the deployment of resources within an enterprise environment. These resources can include VDI sessions, VMs, data, applications, or other services deployed across data centers by an enterprise. The prediction engine 155 can be configured to generate a location prediction on behalf of a particular user account each day, week, or month at a particular time of day, and it can similarly be configured to migrate a workload or data periodically at a particular time of day. At step 403 , the prediction engine 155 can obtain usage data from a client device 108 for which it is generating a location prediction. The usage data can include information that can serve as location signals to feed into the machine learning model. For example, the current location of the client device 108 can serve as a location signal, in addition to information about what network the client device 108 is currently communicating with.
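The periodic trigger described above can be computed as "the next occurrence of the configured run time." The 05:00 daily run time in this sketch is an assumed example, not a value from the patent.

```python
from datetime import datetime, time, timedelta

def next_prediction_time(now, run_at=time(5, 0)):
    """Return the next moment at which a daily location prediction
    should be generated: today at run_at if still in the future,
    otherwise tomorrow at run_at."""
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

The same helper could schedule the pre-business-hours workload migration, using a run time early enough that VMs finish moving before the user needs them.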
- the prediction engine 155 can obtain other location signals associated with the user account.
- the other location signals can include data from user data 139 , such as calendar data and employee badge data, as well as information from third party sources, such as federated services associated with the user and external data sources.
- the prediction engine 155 can obtain one or more usage policies associated with the user account.
- the usage policies can specify whether VMs, data, or workloads 145 are not permitted to be located in certain geographical locations or regions for policy, legal, or regulatory reasons, such as data security policies or data privacy policies.
- the prediction engine 155 can execute the predictive usage model, which can be the machine learning model, to generate a location prediction associated with the user account.
- the machine learning model can generate a location prediction for a client device 108 associated with a user account based upon the location signals for the user account and the usage data associated with the client device 108 .
- the location prediction can be associated with a client device 108 that is primarily used by a user. To this end, the prediction engine 155 can identify the most often or most recently used client device 108 associated with a user account for which the location prediction should be generated.
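Picking the device for which to generate the prediction can be expressed as a simple ranking: prefer the most often used device, breaking ties by most recent use. The record fields in this sketch are illustrative assumptions.

```python
def primary_device(devices):
    """devices: list of dicts with 'id', 'use_count', and 'last_seen'
    (epoch seconds). Return the id of the device to predict for, or
    None when the user has no enrolled devices."""
    if not devices:
        return None
    return max(devices, key=lambda d: (d["use_count"], d["last_seen"]))["id"]
```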
- the prediction engine 155 can generate the predicted location associated with the client device 108 .
- the predicted location can be used by the prediction engine 155 to later optimize the placement of computing resources for latency.
- the prediction engine 155 can redistribute VMs or workloads 145 to different hosts 203 in response to the location prediction generated by the prediction engine 155 .
- the prediction engine 155 can migrate the VMs 206 or other workloads 145 used by the user to another host 203 that is either closest to the predicted location or that has the lowest expected latency based upon the predicted location.
- the prediction engine 155 can generate executable versions of the workloads 145 that can run on the client device 108 rather than on a host.
- the prediction engine 155 can also take into account the usage policies that might specify certain locations or regions where the workloads 145 are not permitted to be placed for regulatory or legal reasons.
- the prediction engine 155 can obtain data needed for the VM 206 to be executed on the client device 108 in a VM client.
- the prediction engine 155 can then obtain a copy of a VM image and transmit the VM image to the client device 108 .
- the VM image can be pushed to the client device 108 or provided as a download option once the user requests access to a VDI session, VM, or other computing resource through the access gateway 140 .
- the prediction engine 155 can also migrate data accessed by a user using a client device 108 in addition to VMs 206 and workloads 145 .
- the prediction engine 155 can migrate the files and data that are accessed by the user to a host 203 that is closer to the predicted location generated by the prediction engine 155 .
- the prediction engine 155 can optimize the user experience for network-accessible applications and data.
- the prediction engine 155 can migrate recently accessed files or all files associated with a user account.
- the prediction engine 155 can replicate files to a host 203 that is geographically closer to the predicted location rather than migrating or moving them. Thereafter, the process proceeds to completion.
- Stored in the memory device are both data and several components that are executable by the processor. Also stored in the memory can be a data store 130 and other data. A number of software components are stored in the memory and executable by a processor. In this respect, the term “executable” means a program file that is in a form that can ultimately be run by the processor.
- Examples of executable programs can be, for example, a compiled program that can be translated into machine code in a format that can be loaded into a random access portion of one or more of the memory devices and run by the processor, code that can be expressed in a format such as object code that is capable of being loaded into a random access portion of the one or more memory devices and executed by the processor, or code that can be interpreted by another executable program to generate instructions in a random access portion of the memory devices to be executed by the processor.
- An executable program can be stored in any portion or component of the memory devices including, for example, random access memory (RAM), read-only memory (ROM), hard drive, solid-state drive, USB flash drive, memory card, optical disc such as compact disc (CD) or digital versatile disc (DVD), floppy disk, magnetic tape, or other memory components.
- Memory can include both volatile and nonvolatile memory and data storage components.
- a processor can represent multiple processors and/or multiple processor cores, and the one or more memory devices can represent multiple memories that operate in parallel processing circuits, respectively.
- Memory devices can also represent a combination of various types of storage devices, such as RAM, mass storage devices, flash memory, or hard disk storage.
- a local interface can be an appropriate network that facilitates communication between any two of the multiple processors or between any processor and any of the memory devices.
- the local interface can include additional systems designed to coordinate this communication, including, for example, performing load balancing.
- the processor can be of electrical or of some other available construction.
- Client devices 108 can be used to access user interfaces generated to configure or otherwise interact with the management service 135 .
- These client devices 108 can include a display upon which a user interface generated by a client application for providing a virtual desktop session (or other session) can be rendered.
- the user interface can be generated using user interface data provided by the computing environment 103 .
- the client device 108 can also include one or more input/output devices that can include, for example, a capacitive touchscreen or other type of touch input device, fingerprint reader, or keyboard.
- the management service 135 and the other various systems described herein can be embodied in software or code executed by general-purpose hardware as discussed above. As an alternative, the same can also be embodied in dedicated hardware or a combination of software/general-purpose hardware and dedicated hardware. If embodied in dedicated hardware, each can be implemented as a circuit or state machine that employs any one of or a combination of a number of technologies. These technologies can include discrete logic circuits having logic gates for implementing various logic functions upon an application of one or more data signals, application specific integrated circuits (ASICs) having appropriate logic gates, field-programmable gate arrays (FPGAs), or other components.
- each block can represent a module, segment, or portion of code that can include program instructions to implement the specified logical function(s).
- the program instructions can be embodied in the form of source code that can include human-readable statements written in a programming language or machine code that can include numerical instructions recognizable by a suitable execution system such as a processor in a computer system or other system.
- the machine code can be converted from the source code.
- each block can represent a circuit or a number of interconnected circuits to implement the specified logical function(s).
- although the flowcharts and sequence diagrams show a specific order of execution, it is understood that the order of execution can differ from that which is depicted.
- the order of execution of two or more blocks can be scrambled relative to the order shown.
- two or more blocks shown in succession can be executed concurrently or with partial concurrence.
- one or more of the blocks shown in the drawings can be skipped or omitted.
- any logic or application described herein that includes software or code can be embodied in any non-transitory computer-readable medium for use by or in connection with an instruction execution system such as, for example, a processor in a computer system or other system.
- the logic can include, for example, statements including program code, instructions, and declarations that can be fetched from the computer-readable medium and executed by the instruction execution system.
- a “computer-readable medium” can be any medium that can contain, store, or maintain the logic or application described herein for use by or in connection with the instruction execution system.
- the computer-readable medium can include any one of many physical media, such as magnetic, optical, or semiconductor media. More specific examples of a suitable computer-readable medium include solid-state drives or flash memory. Further, any logic or application described herein can be implemented and structured in a variety of ways. For example, one or more applications can be implemented as modules or components of a single application. Further, one or more applications described herein can be executed in shared or separate computing devices or a combination thereof. For example, a plurality of the applications described herein can execute in the same computing device, or in multiple computing devices.
Claims (17)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/451,632 US11182214B2 (en) | 2019-06-25 | 2019-06-25 | Allocating computing resources based on properties associated with location |
| US17/452,153 US12175293B2 (en) | 2019-06-25 | 2021-10-25 | Allocating computing resources based on properties associated with location |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/451,632 US11182214B2 (en) | 2019-06-25 | 2019-06-25 | Allocating computing resources based on properties associated with location |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/452,153 Continuation US12175293B2 (en) | 2019-06-25 | 2021-10-25 | Allocating computing resources based on properties associated with location |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20200409761A1 US20200409761A1 (en) | 2020-12-31 |
| US11182214B2 true US11182214B2 (en) | 2021-11-23 |
Family
ID=74042741
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/451,632 Active 2039-12-09 US11182214B2 (en) | 2019-06-25 | 2019-06-25 | Allocating computing resources based on properties associated with location |
| US17/452,153 Active 2040-08-21 US12175293B2 (en) | 2019-06-25 | 2021-10-25 | Allocating computing resources based on properties associated with location |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/452,153 Active 2040-08-21 US12175293B2 (en) | 2019-06-25 | 2021-10-25 | Allocating computing resources based on properties associated with location |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US11182214B2 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US12106116B2 (en) * | 2019-12-11 | 2024-10-01 | Cohesity, Inc. | Virtual machine boot data prediction |
Families Citing this family (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11917724B2 (en) * | 2019-06-26 | 2024-02-27 | EMC IP Holding Company LLC | Location based application migration for enhancing lightweight IoT device applications |
| US11429422B2 (en) * | 2019-10-24 | 2022-08-30 | Dell Products L.P. | Software container replication using geographic location affinity in a distributed computing environment |
| US11778053B1 (en) * | 2020-06-11 | 2023-10-03 | Amazon Technologies, Inc. | Fault-tolerant function placement for edge computing |
| US11363113B1 (en) * | 2020-06-18 | 2022-06-14 | Amazon Technologies, Inc. | Dynamic micro-region formation for service provider network independent edge locations |
| US11709696B2 (en) | 2020-07-01 | 2023-07-25 | Hypori, LLC | Preloading of virtual devices in anticipation of a connection request from a physical device |
| US20220004415A1 (en) * | 2020-07-01 | 2022-01-06 | Intelligent Waves Llc | Latency-based selection of a virtual device platform on which to load a virtual device |
| US12050525B2 (en) * | 2020-08-28 | 2024-07-30 | Red Hat, Inc. | Simulating containerized clusters |
| US11689421B2 (en) * | 2021-04-19 | 2023-06-27 | Hewlett Packard Enterprise Development Lp | Selection of virtual private network profiles |
| US12309061B2 (en) | 2021-06-25 | 2025-05-20 | Oracle International Corporation | Routing policies for graphical processing units |
| US20230125491A1 (en) * | 2021-10-25 | 2023-04-27 | International Business Machines Corporation | Workload migration |
| US20230229468A1 (en) * | 2022-01-15 | 2023-07-20 | Vmware, Inc. | Pre-populated security policies for virtual desktop sessions |
| US12009974B1 (en) | 2023-05-05 | 2024-06-11 | Dish Wireless L.L.C. | Self-optimizing networks |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060093118A1 (en) * | 2004-11-04 | 2006-05-04 | International Business Machines Corporation | Rerouting ongoing telecommunications to a user |
| US20120036251A1 (en) * | 2010-08-09 | 2012-02-09 | International Business Machines Corporation | Method and system for end-to-end quality of service in virtualized desktop systems |
| US20160092785A1 (en) * | 2014-09-26 | 2016-03-31 | Sony Corporation | Mapping gathered location information to short form place names |
| US20160180222A1 (en) * | 2014-12-23 | 2016-06-23 | Ejenta, Inc. | Intelligent Personal Agent Platform and System and Methods for Using Same |
| US20170031913A1 (en) * | 2015-07-30 | 2017-02-02 | Foursquare Labs, Inc. | Creating segments for directed information using location information |
| US20180197099A1 (en) * | 2017-01-11 | 2018-07-12 | Google Inc. | User state predictions for presenting information |
| US20200169464A1 (en) * | 2018-11-27 | 2020-05-28 | Citrix Systems, Inc. | Activity-based resource allocation among virtual-computing sessions |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9299027B2 (en) * | 2012-05-07 | 2016-03-29 | Runaway 20, Inc. | System and method for providing intelligent location information |
| US9276827B2 (en) * | 2013-03-15 | 2016-03-01 | Cisco Technology, Inc. | Allocating computing resources based upon geographic movement |
| US9832256B1 (en) * | 2013-09-20 | 2017-11-28 | Ca, Inc. | Assigning client virtual machines based on location |
Also Published As
| Publication number | Publication date |
|---|---|
| US12175293B2 (en) | 2024-12-24 |
| US20200409761A1 (en) | 2020-12-31 |
| US20220043686A1 (en) | 2022-02-10 |
Legal Events
| Code | Title | Description |
|---|---|---|
| FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| AS | Assignment | Owner name: VMWARE, INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STUNTEBECK, ERICH PETER;CHAWLA, RAVISH;TSE, KAR FAI;SIGNING DATES FROM 20190611 TO 20190620;REEL/FRAME:049885/0424 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: VMWARE LLC, CALIFORNIA; Free format text: CHANGE OF NAME;ASSIGNOR:VMWARE, INC.;REEL/FRAME:067102/0314; Effective date: 20231121 |
| AS | Assignment | Owner name: UBS AG, STAMFORD BRANCH, CONNECTICUT; Free format text: SECURITY INTEREST;ASSIGNOR:OMNISSA, LLC;REEL/FRAME:068118/0004; Effective date: 20240701 |
| AS | Assignment | Owner name: OMNISSA, LLC, CALIFORNIA; Free format text: PATENT ASSIGNMENT;ASSIGNOR:VMWARE LLC;REEL/FRAME:068327/0365; Effective date: 20240630 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY; Year of fee payment: 4 |