US20140344461A1 - Techniques for intelligent service deployment - Google Patents

Techniques for intelligent service deployment

Info

Publication number
US20140344461A1
Authority
US
United States
Prior art keywords
service
cloud
target cloud
plan
deployment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/448,468
Inventor
Stephen R. Carter
Jason Allen Sabin
Michael John Jorgensen
Nathaniel Brent Kranendonk
Kal A. Larsen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Micro Focus Software Inc
Original Assignee
Novell Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Novell Inc filed Critical Novell Inc
Priority to US14/448,468
Publication of US20140344461A1
Assigned to MICRO FOCUS SOFTWARE INC. CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: NOVELL, INC.
Assigned to JPMORGAN CHASE BANK, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARCSIGHT, LLC, ATTACHMATE CORPORATION, BORLAND SOFTWARE CORPORATION, ENTIT SOFTWARE LLC, MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE, INC., NETIQ CORPORATION, SERENA SOFTWARE, INC.
Assigned to ATTACHMATE CORPORATION, SERENA SOFTWARE, INC, MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), MICRO FOCUS (US), INC., MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), NETIQ CORPORATION, BORLAND SOFTWARE CORPORATION. RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718. Assignors: JPMORGAN CHASE BANK, N.A.

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08Configuration management of networks or network elements
    • H04L41/0803Configuration setting
    • H04L41/0806Configuration setting for initial configuration or provisioning, e.g. plug-and-play
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/32
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources

Abstract

Techniques for intelligent service deployment are provided. Cloud and service data are evaluated to develop a service deployment plan for deploying a service to a target cloud processing environment. When dictated by the plan or by events that trigger deployment, the service is deployed to the target cloud processing environment in accordance with the service deployment plan.

Description

    RELATED APPLICATIONS
  • The present application is a continuation of U.S. patent application Ser. No. 12/790,335, filed May 28, 2010, which is a non-provisional application of and claims the benefit of priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No. 61/315,869, filed Mar. 19, 2010, and entitled “Techniques for Intelligent Service Deployment;” each disclosure of which is incorporated herein by reference in its entirety.
  • BACKGROUND
  • The future of cloud computing will be realized when the cloud is a natural extension of what is considered today to be the enterprise data center. The ability to consider multiple cloud providers as a single data center or collection of computing assets will revolutionize the way that modern enterprises run their business. Most important to being able to utilize the cloud in this way will be the ability to describe a deployment and service-level agreement for the deployment in reference to a specific business need, and to have that deployment analyzed and realized in the cloud in an optimal way. This has not been achieved in the art heretofore.
  • SUMMARY
  • Various embodiments of the invention provide techniques for intelligent service deployment. Specifically, a method for service deployment is presented. Cloud attribute data for a target cloud processing environment and service attribute data for a service are acquired. Next, a deployment specification is evaluated for deploying the service to the target cloud processing environment. Then, a service placement plan is developed for scheduling the deployment of the service to the target cloud processing environment based on the cloud attribute data, the service attribute data, and the deployment specification. Finally, the service is deployed to the target cloud processing environment in accordance with the service placement plan.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of a method for service deployment, according to an example embodiment.
  • FIG. 2 is a diagram of another method for service deployment, according to an example embodiment.
  • FIG. 3 is a diagram of a service deployment system, according to an example embodiment.
  • FIG. 4 is a diagram of an example architecture for intelligent service deployment, according to the techniques presented herein.
  • DETAILED DESCRIPTION
  • A “resource” includes a user, service, system, device, directory, data store, groups of users, combinations of these things, etc. A “principal” is a specific type of resource, such as an automated service or user that acquires an identity. A designation as to what is a resource and what is a principal can change depending upon the context of any given network transaction. Thus, if one resource attempts to access another resource, the actor of the transaction may be viewed as a principal.
  • An “identity” is something that is formulated from one or more identifiers and secrets that provide a statement of roles and/or permissions that the identity has in relation to resources. An “identifier” is information, which may be private and permits an identity to be formed, and some portions of an identifier may be public information, such as a user identifier, name, etc. Some examples of identifiers include social security number (SSN), user identifier and password pair, account number, retina scan, fingerprint, face scan, etc.
  • A “processing environment” defines a set of cooperating computing resources, such as machines (processor and memory-enabled devices), storage, software libraries, software systems, etc. that form a logical computing infrastructure. A “logical computing infrastructure” means that computing resources can be geographically distributed across a network, such as the Internet. So, one computing resource at network site X can be logically combined with another computing resource at network site Y to form a logical processing environment.
  • The phrases “processing environment,” “cloud processing environment,” and the term “cloud” may be used interchangeably and synonymously herein.
  • Moreover, it is noted that a "cloud" refers to a logical and/or physical processing environment as discussed above. The phrase "software product" refers to software products that are independent of the workloads and that provide features to the workloads, such as but not limited to directory services, network services, and the like.
  • A “workload” refers to a task, a function, and/or a distinct unit of work that is processed within a workflow management system.
  • A “workload service” refers to the logical association between multiple workloads and software products organized as one logical unit, referred to herein as a “service” or “workload service.”
  • The term "Netgraphy" is used herein to indicate the state of a cloud network, such that messages and packets traveling between processes, storage, and end users can be affected, monitored, and altered. The state or updated state is a relationship (linkage and association) between geographical data for the cloud network, the attribute data for the cloud network, and metric usage data for the cloud network.
  • Various embodiments of this invention can be implemented in existing network architectures. For example, in some embodiments, the techniques presented herein are implemented in whole or in part in the Novell® operating system products, directory-based products, cloud-computing-based products, and other products distributed by Novell®, Inc., of Waltham, Mass.
  • Also, the techniques presented herein are implemented in machines, such as processor or processor-enabled devices. These machines are configured to specifically perform the processing of the methods and systems presented herein. Moreover, the methods and systems are implemented and reside within a non-transitory computer-readable storage medium or machine-readable storage medium and are processed on the machines configured to perform the methods.
  • Of course, the embodiments of the invention can be implemented in a variety of architectural platforms, devices, operating and server systems, and/or applications. Any particular architectural layout or implementation presented herein is provided for purposes of illustration and comprehension only and is not intended to limit aspects of the invention.
  • It is within this context that embodiments of the invention are now discussed within the context of the FIGS. 1-4.
  • Embodiments and components of the invention are implemented and reside in a non-transitory computer-readable medium that executes on one or more processors that are specifically configured to process the embodiments and components described herein and below.
  • FIG. 1 is a diagram of a method 100 for service deployment, according to an example embodiment. The method 100 (hereinafter “service planner”) is implemented and resides within a non-transitory computer-readable or processor-readable medium that executes on one or more processors of a network. Moreover, the service planner is operational over a network and the network may be wired, wireless, or a combination of wired and wireless.
  • At 110, the service planner acquires cloud attribute data for a target cloud processing environment. At 110, the service planner also simultaneously acquires service attribute data for a service. The service comprises one or more workloads; each workload defining one or more functions for a workload management system. The service also includes one or more software products; each software product different from the workloads.
  • According to an embodiment, at 111, the service planner obtains the cloud attribute data as one or more of: cloud geographical data, cloud state data (cloud Netgraphy data), cloud reputation data, and/or cloud expense data. More detail of the types of cloud attribute data is provided below with the discussion of the FIG. 4.
  • In an embodiment, at 112, the service planner obtains the service attribute data as one or more of: service configuration data, service level agreement data, service expense data, and/or service reputation data. Again, more detail of the types of service data is also provided below with the discussion of the FIG. 4.
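  • For illustration only, the cloud attribute data and service attribute data just described might be represented as simple records such as the following Python sketch; every field name and default value here is an assumption added for the example and is not defined by the specification.

```python
# Illustrative sketch only: plain data classes approximating the cloud and
# service attribute data described at 111 and 112. All field names are
# assumptions made for the example, not terms defined by the patent.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CloudAttributes:
    cloud_id: str
    geography: str                                                   # cloud geographical data
    netgraphy_state: Dict[str, float] = field(default_factory=dict)  # cloud state (Netgraphy) data
    reputation: float = 0.0                                          # cloud reputation data
    expense_per_hour: float = 0.0                                    # cloud expense data

@dataclass
class ServiceAttributes:
    service_id: str
    configuration: Dict[str, str] = field(default_factory=dict)      # service configuration data
    sla: Dict[str, float] = field(default_factory=dict)              # service level agreement data
    expense_budget: float = 0.0                                      # service expense data
    reputation: float = 0.0                                          # service reputation data
```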
  • At 120, the service planner evaluates a deployment specification for deploying the service to the target cloud processing environment. Greater detail of this evaluation and some specific examples are provided below with the discussion of the FIG. 4.
  • In one scenario, at 121, the service planner acquires policies that control the deployment of the service to the target cloud processing environment from the deployment specification. That is, the deployment specification defines or identifies policies that are to be followed when evaluating the deployment specification.
  • In another case, at 122, the service planner identifies at least one policy that includes alternative actions to take based on particular values assigned to the cloud attribute data and/or the service attribute data. An example of this alternative action approach is provided below with reference to the FIG. 4.
  • At 130, the service planner develops a service placement plan for scheduling the deployment of the service to the target cloud processing environment. This is done based on the cloud attribute data, the service attribute data, and the deployment specification.
  • According to an embodiment, at 131, the service planner balances the service placement plan by dynamically weighing values defined in the cloud attribute data, the service attribute data, and the deployment specification.
  • Continuing with the embodiment of 131 and at 132, the service planner changes a selection that is associated with or that identifies the target cloud processing environment based on weighing the values. So, the plan can identify or change the identity of the target cloud processing environment.
  • Still continuing with the embodiment of 132 and at 133, the service planner alters a mix of workloads or software products that define the service based on weighing the values. Here, the assets or resources that comprise the service can be altered based on weighing the values.
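  • Building on the illustrative attribute records sketched above, the weighted balancing of 131-133 could be expressed as a scoring pass over candidate clouds, with the top-ranked candidate becoming (or replacing) the target cloud in the plan; the weights and the scoring formula are assumptions for the example, not the patent's algorithm.

```python
# Hypothetical weighted-balancing sketch for 131-133: score each candidate
# cloud from its attribute data and the deployment specification's weights,
# then (re)select the target cloud. The scoring formula is an assumption.
def balance_placement(candidates, service, weights):
    """Rank candidate clouds best-first by a weighted score of their attributes."""
    def score(cloud):
        over_budget = max(0.0, cloud.expense_per_hour - service.expense_budget)
        region_match = 1.0 if cloud.geography == service.configuration.get("region") else 0.0
        return (weights.get("reputation", 1.0) * cloud.reputation
                - weights.get("expense", 1.0) * over_budget
                + weights.get("locality", 1.0) * region_match)
    return sorted(candidates, key=score, reverse=True)

# Usage (assumed): the top-ranked cloud becomes, or replaces, the target cloud
# in the service placement plan.
# target_cloud = balance_placement(all_clouds, service, {"expense": 2.0})[0]
```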
  • Returning to the embodiment of 130 and at 134, the service planner defines a sequencing order for deploying the workloads and software products that comprise the service within the service placement plan. So, the service planner can define a specific sequencing order for initiating and starting the workloads and software products that comprise the service within the target cloud processing environment by defining the order within the service placement plan.
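  • One hedged way to express the sequencing order of 134 is as a dependency-ordered start list derived by a topological sort, as in the sketch below; the component names and their dependencies are hypothetical and stand in for whatever workloads and software products the plan actually covers.

```python
# Hypothetical sketch of the sequencing at 134: derive a start order for the
# workloads and software products in the plan from declared dependencies
# (a simple topological sort). Dependency names are illustrative only.
from graphlib import TopologicalSorter  # Python 3.9+

def deployment_order(dependencies):
    """dependencies maps each component to the components it must start after."""
    return list(TopologicalSorter(dependencies).static_order())

# Example: a directory-service software product starts before the workloads
# that rely on it.
plan_sequence = deployment_order({
    "directory-service": set(),
    "message-transfer-agent": {"directory-service"},
    "post-office-agent": {"directory-service", "message-transfer-agent"},
})
```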
  • In another case of 130 and at 135, the service planner receives dynamic alert notifications regarding events and/or usage metrics that cause the service planner to redevelop and alter the service placement plan in a dynamic and real time fashion. This accounts for the dynamic and chaotic condition of cloud assets and the network to ensure the service placement plan is optimized prior to actual service deployment.
  • At some subsequent time thereafter and at 140, the service planner deploys or causes to be deployed the service to the target cloud processing environment in accordance with the dictates and policies of the service placement plan.
  • The FIG. 2 now describes in greater detail the actual deployment of the service to the target cloud processing environment in accordance with the service placement plan (which can also be referred to as the "plan" or "service deployment plan" herein and below).
  • FIG. 2 is a diagram of another method 200 for service deployment, according to an example embodiment. The method 200 (hereinafter “service deployment manager”) is implemented and resides within a non-transitory computer-readable or processor-readable medium that executes on one or more processors of a network. Moreover, the service deployment manager is operational over a network and the network may be wired, wireless, or a combination of wired and wireless.
  • The service deployment manager presents another and in some cases enhanced perspective of the service planner represented by the method 100 of the FIG. 1 and discussed in detail above. That is, the service planner focuses primarily on the processing associated with developing a service deployment plan whereas the service deployment manager focuses on deploying the service in accordance with the plan.
  • At 210, the service deployment manager receives an instruction to deploy a service to a target cloud processing environment. This can be done based on a schedule, such as the schedule discussed above with reference to the method 100 of the FIG. 1. This can also be done based on an event raised that according to a policy indicates that the service is to be deployed to a target cloud processing environment.
  • At 220, the service deployment manager acquires a service deployment plan for the service, such as the service placement plan described above with reference to the method 100 of the FIG. 1.
  • At 230, the service deployment manager follows the directives of the service deployment plan to deploy the service to the target cloud processing environment.
  • According to an embodiment at 240, the service deployment manager subsequently receives usage metrics back from a deployed version of the service and other resources of the target cloud processing environment.
  • Continuing with the embodiment of 240 and at 241, the service deployment manager dynamically feeds the usage metrics back to a service planning service, such as the service planner described above with reference to the method 100 of the FIG. 1, for purposes of dynamically modifying the service deployment plan.
  • In another case of 240 and at 242, the service deployment manager logs the usage metrics for subsequent analysis and auditing of the service deployment plan.
  • So, at 243, the service deployment manager can audit the service deployment plan by comparing the usage metrics against a service level agreement for the service and/or the target cloud processing environment.
  • Continuing with the embodiment of 243 and at 244, the service deployment manager notifies a principal when the audit indicates a present violation of the service level agreement or a situation in which a potential for a violation of the service level agreement is deemed imminent based on policies or threshold value evaluations or comparisons.
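  • The audit and notification path of 242-244 might look like the following sketch, which compares logged usage metrics against SLA targets and notifies a principal on a violation or when a configurable margin suggests a violation is imminent; the metric names, limits, and warning margin are assumptions for the example.

```python
# Illustrative audit sketch for 242-244: compare logged usage metrics against
# SLA targets and notify a principal on a violation, or when a configurable
# margin suggests a violation is imminent. All names and values are assumed.
def audit_sla(metrics, sla, notify, warn_margin=0.9):
    for name, limit in sla.items():          # e.g., {"response_ms": 250.0}
        observed = metrics.get(name)
        if observed is None:
            continue
        if observed > limit:
            notify(f"SLA violation: {name}={observed} exceeds {limit}")
        elif observed > warn_margin * limit:
            notify(f"SLA at risk: {name}={observed} approaching {limit}")

# audit_sla({"response_ms": 240.0}, {"response_ms": 250.0}, print)
```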
  • FIG. 3 is a diagram of a service deployment system 300, according to an example embodiment. The components of the intelligent service deployment system 300 are implemented and reside within a non-transitory computer- or processor-readable storage medium for purposes of executing on one or more processors of a network. The network may be wired, wireless, or a combination of wired and wireless.
  • The service deployment system 300 implements, inter alia, the method 100 and the method 200 of the FIGS. 1 and 2, respectively.
  • The intelligent service deployment system 300 includes a service deployment planner 301 and a service deployment manager 302. Each of these components and their interactions with one another will now be discussed in detail.
  • The service deployment planner 301 is implemented in a non-transitory computer-readable storage medium and executes on one or more processors of the network. Example aspects of the service deployment planner 301 were provided in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2, respectively.
  • The service deployment planner 301 is configured to develop a plan for deploying a service to a target cloud processing environment. This is done in response to cloud attribute data and service attribute data (defined above with reference to the methods 100 and 200 of the FIGS. 1 and 2, respectively and defined in greater detail below with reference to the FIG. 4).
  • According to an embodiment, the service deployment planner 301 is further configured to receive dynamic feedback on usage metrics for the service and the target cloud processing environment for purposes of dynamically modifying and adjusting the plan.
  • In another case, the service deployment planner 301 is configured to select and initially identify the target cloud processing environment in response to or based on the cloud attribute data and the service attribute data. The cloud attribute data defines attribute data for multiple cloud processing environments including the selected and identified target cloud processing environment.
  • The service deployment manager 302 is implemented in a non-transitory computer-readable storage medium and executes on one or more processors of the network. Example aspects of the service deployment manager 302 were provided in detail above with reference to the methods 100 and 200 of the FIGS. 1 and 2, respectively.
  • The service deployment manager 302 is configured to interact with the service deployment planner 301 for purposes of acquiring the plan and deploying the service to the target cloud processing environment in accordance with the directives of the plan.
  • According to an embodiment, the service deployment manager 302 is further configured to sequence deployment of workloads and software products that comprise the service when the service is being deployed to the target cloud processing environment.
  • FIG. 4 is a diagram of an example architecture for intelligent service deployment, according to the techniques presented herein.
  • The FIG. 4 is presented for purposes of illustration and comprehension. It is to be understood that other architectural arrangements can be used to achieve the teachings presented herein and above.
  • The architecture of the FIG. 4 utilizes an Identity Service at 190. The identity service provides a variety of authentication and policy management services for the components described with reference to the FIG. 4.
  • Germane to the future of the Internet and cloud computing is the ability to have an indisputable identity. This type of identity relies upon an infrastructure of identity services, which have some type of trust relationship that can be evaluated by policy and enforced at each endpoint by that policy. Identity services in the FIG. 4 are depicted by 105 and 106 with the trust relationship depicted by 107. Of course, there can be a plurality of identity services and trust relationships of various descriptions, policy specifications, trust specifications, etc.
  • The embodiments of the FIG. 4 utilize the maintenance of a Service Repository, at 111, and a Service Configuration, at 112, maintained by Service Configure, at 110.
  • Elements 111 and 112 provide access to configuration and operational images to instantiate a service, which is a collection of multiple workloads. The relationship between each of the workloads and the functionality provided by those relationships is described in 112.
  • The geography/Netgraphy repository, at 121, is maintained by the process, at 120, which provides the information necessary to locate network resources in a geographic sense and to evaluate responsiveness and other Service Level Agreement (SLA) type metrics in light of a geographic location.
  • The repositories of Cloud Reputation, at 126, and Cloud Charges, at 127, are maintained by the process, at 125.
  • Other repositories for Deployment Plan/Policy, at 116, SLA Specification, at 117, and Endpoint Placement, at 118, are all shown being maintained by a process, at 115. The process, at 115, may be an automated process or, as shown in the diagram, a manual process administered by personnel.
  • The repository, at 116, describes the specification for a plan and the governing policies necessary to adequately describe the deployment. For example, if the deployment plan and policy were developed for a cloud deployment of Novell's GroupWise® product, then the deployment plan would need to take into account Post Office Agents (POAs) and Message Transfer Agents (MTAs) along with the other processes and storage that comprise the GroupWise® deployment. The plan describes the specifics of the deployment in light of the license that the end-user has obtained from the owner of the product (in this case Novell) and in light of factors governing the price point expected to be paid for cloud assets, along with other considerations. The policy describes what to do if the price point were to rise or fall, what type of load factoring and load-balance factoring should be taken into account, and how the geographic and Netgraphy situation should be taken into account for disaster recovery, etc.
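  • As a purely illustrative sketch, the Deployment Plan/Policy repository, at 116, might hold a record like the one below for the GroupWise® example; the component counts, price points, and policy actions shown are assumptions added for the example and are not taken from the specification.

```python
# Hypothetical content for the Deployment Plan/Policy repository at 116.
# Keys, values, and the GroupWise component counts are illustrative
# assumptions, not figures taken from the patent.
deployment_plan_policy = {
    "service": "groupwise",
    "components": {"post-office-agent": 2, "message-transfer-agent": 1},
    "license": {"licensed_users": 500},
    "expected_price_per_hour": 1.50,
    "policies": [
        # Alternative actions keyed to attribute values (see 122 above).
        {"if": "price_per_hour > 2.00", "then": "rebalance_to_cheaper_cloud"},
        {"if": "price_per_hour < 1.00", "then": "add_capacity_for_load_spikes"},
        {"if": "region_outage", "then": "fail_over_to_secondary_geography"},
    ],
}
```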
  • The repository, at 117, specifies the service-level agreement that the customer is paying for. This specifies response time; fail-over characteristics; disaster recovery characteristics; policies governing the changing of the SLA based upon extenuating circumstances; etc. The SLA, at 117, may be structured to specify SLA constraints that are specific to each endpoint and time of day (e.g., the SLA for Toronto would have different specifications for 8:00 to 17:00 than for 17:00 to 8:00; likewise, the specifications for Atlanta would differ from those for Toronto both by location and temporally).
  • The repository, at 118, specifies where each endpoint to be serviced is located geographically and how many clients are within that endpoint. For example, this repository may specify that a given office in Cleveland has 500 users whereas another office in Toronto may have only 10. The expected SLA for each of these offices is contained within 117, whereas 118 specifies where the endpoints are located.
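  • The Endpoint Placement repository, at 118, and the time-of-day-specific SLA Specification, at 117, might be sketched with the Cleveland/Toronto example as follows; the concrete SLA targets are assumed values chosen only to make the lookup concrete.

```python
# Illustrative sketch of Endpoint Placement (118) and a time-of-day-specific
# SLA Specification (117), using the Cleveland/Toronto example above.
# The concrete SLA numbers are assumptions.
endpoint_placement = {           # 118: where endpoints are and how many clients
    "cleveland": {"users": 500, "geo": "us-east"},
    "toronto": {"users": 10, "geo": "ca-central"},
}

sla_specification = {            # 117: per-endpoint, per-time-window targets
    "toronto": {(8, 17): {"response_ms": 200}, (17, 8): {"response_ms": 500}},
    "atlanta": {(8, 17): {"response_ms": 250}, (17, 8): {"response_ms": 600}},
}

def sla_for(endpoint, hour):
    """Return the SLA targets for the window covering the given hour."""
    for (start, end), targets in sla_specification[endpoint].items():
        in_window = (start <= hour < end) if start < end else (hour >= start or hour < end)
        if in_window:
            return targets
    return None

# sla_for("toronto", 9)  -> {"response_ms": 200}
```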
  • The Service Placement Plan, at 140, takes the information contained in 112, 116, 117, 118, 121, 126, and 127 to develop a balanced plan, at 141. The final balanced plan, at 141, needs to take into account the Netgraphy based upon the geography of the endpoints specified in 118, together with the SLA specification, at 117, along with cloud reputation, at 126, and cloud charges, at 127, to determine the best mixing of cloud assets and cloud providers to provide the final balanced plan that represents the deployment plan/policy, at 116. The processing, at 140, then takes into account the information in 112 to determine how many workloads are needed in each of the cloud locations identified in the balanced plan in order to realize the service as a whole. At this point, reevaluation takes place concerning the balanced plan to make sure that the SLA and charge expectations are still in line. This may require several iterations before a final balanced plan, at 141, can be achieved. As well, the processing, at 140, provides a summary of alert triggers, at 142, which specify the major relationships that the Deployment Monitor, at 160, should watch for because they would materially affect the balanced plan. Likewise, the processing, at 140, takes into account any current cloud metrics, at 131, while making the balanced plan, at 141.
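  • The iterative reevaluation performed at 140 can be sketched at a high level as the loop below, in which the plan is reprojected and rebalanced until the SLA and charge expectations are in line or an iteration budget is exhausted; the projection, budget, and rebalancing callables are placeholders standing in for the repositories and metrics named above, not an implementation defined by the patent.

```python
# Hedged sketch of the reevaluation loop at 140 that yields the final
# balanced plan (141). The callable arguments are placeholders supplied by
# the caller; their internals are assumptions, not specified by the patent.
def develop_balanced_plan(initial_plan, project, meets_sla, within_budget,
                          rebalance, max_iterations=5):
    """Iteratively reproject and rebalance a plan until SLA and charge
    expectations are in line, or until the iteration budget is spent."""
    plan = initial_plan
    for _ in range(max_iterations):
        projected = project(plan)            # e.g., response times and costs
        if meets_sla(projected) and within_budget(projected):
            break                            # expectations are in line
        plan = rebalance(plan, projected)    # shift workloads among clouds
    return plan
```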
  • During operation, the service placement plan, at 140, may receive alerts from the Deployment Monitor, at 160, which causes a reevaluation of the balanced plan and, therefore, action by 150 to realize the change in the plan.
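  • A minimal sketch of that alert path, assuming a simple callback wiring between the Deployment Monitor (160), the Service Placement Plan (140), and Service Deployment (150), is shown below; the callables are placeholders and the wiring is an assumption.

```python
# Minimal observer sketch for the alert path described above: the Deployment
# Monitor (160) raises an alert, the Service Placement Plan (140) reevaluates
# the balanced plan, and Service Deployment (150) realizes any change.
def on_alert(alert, current_plan, reevaluate, redeploy):
    revised_plan = reevaluate(current_plan, alert)
    if revised_plan != current_plan:
        redeploy(revised_plan)        # action by 150 to realize the change
    return revised_plan
```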
  • Once the balanced plan, at 141, is constructed, service deployment, at 150, uses the balanced plan along with service configuration, at 112, and service repository, at 111, to instruct cloud interfaces, at 155, 156, 157, etc. to deploy specific workloads along with the appropriate sequencing of the workloads and sharing of information such as Internet Protocol (IP) addresses so that the balanced plan is realized in each cloud, at 190, 191, and 192.
  • The cloud interfaces, at 155, 156, and 157, also monitor the workloads and services that have been deployed and report back responsiveness, resources utilized, and other cloud metrics to the deployment monitor, at 160. The deployment monitor, at 160, monitors the information and, if an alert trigger occurs, notifies the Service Placement Plan, at 140. As well, the processing, at 160, logs the current cloud metrics concerning responsiveness, time to start, costs accrued, etc. for Current Cloud Metrics, at 131, and Deployment Metrics, at 161. The processing, at 160, also has access to the SLA Specification, at 117 (not shown in the FIG. 4), and uses this information and the monitoring information to calculate the compliance of the plan with the SLA. This may cause other triggers to be emitted. It is noted that the SLA contains specific performance metrics that need to be achieved. These metrics are calculated to determine things like how many Identity Providers (IDPs) would be needed to achieve 100 logins per second with a max spike of 200 logins per second. The calculations can either be performed dynamically via a testing process, which would actually determine the numbers (i.e., test to see whether 2 IDPs can meet the SLA performance metric or whether 3 are needed to achieve the max spike of 200), or be based on previously recorded metrics from other tests.
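  • The IDP sizing example above reduces to a ceiling computation over a measured (or previously recorded) per-IDP login rate, as in the sketch below; the per-IDP throughput of 75 logins per second is an assumed figure standing in for test results or recorded metrics.

```python
# Sketch of the IDP sizing calculation described above: how many Identity
# Providers are needed for a sustained rate of 100 logins/second with a
# maximum spike of 200/second, given a measured per-IDP rate (assumed 75/s).
import math

def idps_needed(sustained_rate, spike_rate, measured_per_idp_rate):
    return max(math.ceil(sustained_rate / measured_per_idp_rate),
               math.ceil(spike_rate / measured_per_idp_rate))

count = idps_needed(sustained_rate=100, spike_rate=200, measured_per_idp_rate=75)
# count == 3: two IDPs cover the sustained load, but the 200/second spike
# bumps the requirement up to three.
```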
  • The Cloud Monitor, at 130, keeps current the Current Cloud Metrics, at 131, for consumption by 140.
  • The Plan Monitor, at 165, provides a graphical user interface to show the instantiation of the balanced plan, at 141, to a viewer, at 166. As the balanced plan changes, the monitor shows this along with any historical information showing the morphing of the plan as operational characteristics affect it. Likewise, the Plan Monitor, at 165, maintains a Plan Log, at 167, for further analysis concerning the balanced plan.
  • The above description is illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (21)

1. (canceled)
2. A method, comprising:
obtaining attribute data for a cloud and a service;
developing a service placement plan for deploying the service based on a weighted evaluation of the attribute data; and
deploying the service to the cloud in accordance with the developed service placement plan.
3. The method of claim 2, wherein obtaining further includes identifying a collection of workloads within the service.
4. The method of claim 2, wherein obtaining further includes identifying at least a portion of the attribute data as a cloud reputation for the cloud.
5. The method of claim 4, wherein identifying further includes identifying another portion of the attribute data as a cloud geography for the cloud.
6. The method of claim 5, wherein identifying further includes identifying still another portion of the attribute data as one or more of: a cloud service-level agreement, cloud expense data, and a cloud state.
7. The method of claim 2, wherein developing further includes defining a sequence for deployment of workloads and software products that comprise the service within the service placement plan.
8. The method of claim 2, wherein developing further includes providing data relevant to the service for configuration of the service, a service-level agreement for the service, a service reputation for the service, and service expense data within the service placement plan.
9. The method of claim 2, wherein deploying further includes processing directives of the service placement plan to deploy the service to a target cloud.
10. The method of claim 2, wherein deploying further includes receiving metrics back from the service when the service is operational within the target cloud.
11. A method, comprising:
weighting attribute data associated with a service placement plan for a service that is to be deployed to a target cloud; and
processing directives of the service placement plan based on the weighted attribute data to deploy the service to the target cloud.
12. The method of claim 11, wherein weighting further includes weighting component attributes of the attribute data, wherein each component attribute is one of: a service-level agreement, a target cloud reputation, a target cloud geography, a target cloud state, and a target cloud expense data.
13. The method of claim 11, wherein weighting further includes altering a mix of workloads and software products that comprise the service based on the weighted attribute data.
14. The method of claim 11, wherein weighting further includes identifying the target cloud based on the weighted attribute data.
15. The method of claim 11, wherein processing further includes receiving dynamic alerts relevant to events and usage metrics and, in response thereto, dynamically re-developing the service placement plan.
16. The method of claim 11 further comprising, receiving metrics for the service and the target cloud once the service is deployed and operational in the target cloud.
17. The method of claim 16 further comprising, auditing the service placement plan in response to the received metrics.
18. A system, comprising:
a processor; and
a service deployment planner configured and adapted to: i) execute on the processor, ii) develop a plan to deploy a service to a target cloud based on an evaluation of weighted attribute data associated with the service and a cloud network having the target cloud, and iii) dynamically revise the plan in response to real-time alerts.
19. The system of claim 18, wherein the system further includes a deployment manager configured and adapted to: i) execute on the processor, ii) follow directives of the plan to configure the service for deployment to the target cloud, and iii) deploy the service to the target cloud.
20. The system of claim 18, wherein the service deployment planner is further configured and adapted to audit the plan in response to metrics returned from the target cloud with the service deployed therein.
21. The system of claim 18, wherein the weighted attribute data includes one or more of: target cloud expense data, a target cloud reputation rating, a service-level agreement for the service with the target cloud, and a geography for a location of the target cloud.
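
The following is a minimal, illustrative sketch (in Python) of how a weighted evaluation of cloud attribute data, of the kind recited in claims 2-6 and 11-14 above, might be realized. All class names, attribute fields, weights, and scoring rules below are hypothetical and are offered only as a non-limiting example; they are not drawn from the specification.

# Illustrative sketch (hypothetical): weighted evaluation of cloud attribute data.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CloudAttributes:
    """Attribute data gathered for one candidate cloud (invented fields)."""
    name: str
    reputation: float       # 0.0 (poor) .. 1.0 (excellent)
    sla_score: float        # how well the cloud's SLA matches the service SLA
    geography_score: float  # suitability of the cloud's location for the service
    state_score: float      # current operational state / available capacity
    expense_score: float    # 1.0 = cheapest candidate, 0.0 = most expensive


# Hypothetical weights; claim 12 only requires that component attributes such as
# SLA, reputation, geography, state, and expense data can each be weighted.
WEIGHTS: Dict[str, float] = {
    "reputation": 0.30,
    "sla_score": 0.25,
    "geography_score": 0.15,
    "state_score": 0.15,
    "expense_score": 0.15,
}


def weighted_score(cloud: CloudAttributes) -> float:
    """Combine the component attributes into a single weighted score."""
    return sum(getattr(cloud, attr) * weight for attr, weight in WEIGHTS.items())


def select_target_cloud(candidates: List[CloudAttributes]) -> CloudAttributes:
    """Identify the target cloud as the highest-scoring candidate (cf. claim 14)."""
    return max(candidates, key=weighted_score)


def develop_placement_plan(workloads: List[str], target: CloudAttributes) -> Dict:
    """Build a simple placement plan: the chosen cloud plus an ordered
    deployment sequence for the workloads (cf. claims 7 and 8)."""
    return {
        "target_cloud": target.name,
        "sequence": list(workloads),           # workloads deploy in this order
        "score": round(weighted_score(target), 3),
    }


if __name__ == "__main__":
    candidates = [
        CloudAttributes("cloud-a", 0.9, 0.8, 0.7, 0.9, 0.4),
        CloudAttributes("cloud-b", 0.6, 0.9, 0.9, 0.7, 0.9),
    ]
    plan = develop_placement_plan(["database", "app-server", "web-frontend"],
                                  select_target_cloud(candidates))
    print(plan)

A flat linear weighting is used here purely for transparency; the claims do not prescribe any particular scoring function, and a deployment could equally well alter the mix of workloads and software products (claim 13) instead of, or in addition to, the choice of cloud.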
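
Similarly, the sketch below illustrates, again in hypothetical Python, how the planner/manager split of claims 18-19, the alert-driven re-planning of claim 15, and the metric-based auditing of claims 16-17 and 20 could fit together. The class and method names are invented for illustration and do not describe the claimed system.

# Illustrative sketch (hypothetical): planner/manager cooperation with
# alert-driven re-planning and metric-based auditing.
from typing import Callable, Dict, Optional


class ServiceDeploymentPlanner:
    """Develops a plan and revises or audits it as alerts and metrics arrive."""

    def __init__(self, develop: Callable[[Dict], Dict]):
        self._develop = develop          # strategy turning attribute data into a plan
        self.plan: Optional[Dict] = None

    def develop_plan(self, attribute_data: Dict) -> Dict:
        self.plan = self._develop(attribute_data)
        return self.plan

    def handle_alert(self, attribute_data: Dict) -> Dict:
        # Dynamically re-develop the plan when an alert reports changed conditions
        # (cf. claim 15 and claim 18, part iii).
        return self.develop_plan(attribute_data)

    def audit(self, metrics: Dict) -> bool:
        # Audit the plan against metrics returned from the target cloud
        # (cf. claims 17 and 20); "healthy" here is simply a low error rate.
        return metrics.get("error_rate", 0.0) < 0.05


class DeploymentManager:
    """Follows the directives of a plan to deploy the service (cf. claim 19)."""

    def deploy(self, plan: Dict) -> Dict:
        for step in plan["sequence"]:
            print(f"deploying {step} to {plan['target_cloud']}")
        # Metrics come back once the service is operational (cf. claims 10 and 16).
        return {"error_rate": 0.01, "requests_per_second": 120.0}


if __name__ == "__main__":
    planner = ServiceDeploymentPlanner(
        develop=lambda attrs: {"target_cloud": attrs["cloud"],
                               "sequence": attrs["workloads"]})
    plan = planner.develop_plan({"cloud": "cloud-a",
                                 "workloads": ["database", "app-server"]})
    metrics = DeploymentManager().deploy(plan)
    print("plan healthy:", planner.audit(metrics))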
US14/448,468 2010-03-19 2014-07-31 Techniques for intelligent service deployment Abandoned US20140344461A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/448,468 US20140344461A1 (en) 2010-03-19 2014-07-31 Techniques for intelligent service deployment

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31586910P 2010-03-19 2010-03-19
US12/790,335 US8806014B2 (en) 2010-03-19 2010-05-28 Techniques for intelligent service deployment
US14/448,468 US20140344461A1 (en) 2010-03-19 2014-07-31 Techniques for intelligent service deployment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/790,335 Continuation US8806014B2 (en) 2010-03-19 2010-05-28 Techniques for intelligent service deployment

Publications (1)

Publication Number Publication Date
US20140344461A1 true US20140344461A1 (en) 2014-11-20

Family

ID=44648114

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/790,335 Active 2031-06-16 US8806014B2 (en) 2010-03-19 2010-05-28 Techniques for intelligent service deployment
US14/448,468 Abandoned US20140344461A1 (en) 2010-03-19 2014-07-31 Techniques for intelligent service deployment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/790,335 Active 2031-06-16 US8806014B2 (en) 2010-03-19 2010-05-28 Techniques for intelligent service deployment

Country Status (1)

Country Link
US (2) US8806014B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140244799A1 (en) * 2013-02-28 2014-08-28 International Business Machines Corporation Installation of an Asset from a Cloud Marketplace to a Cloud Server in a Private Network
CN109783110A (en) * 2019-02-19 2019-05-21 安徽智融景和科技有限公司 Melt media system server disposition software systems
WO2020002030A1 (en) 2018-06-26 2020-01-02 Siemens Aktiengesellschaft Method and system for determining an appropriate installation location for an application to be installed in a distributed network environment
US10846070B2 (en) 2018-07-05 2020-11-24 At&T Intellectual Property I, L.P. Facilitating cloud native edge computing via behavioral intelligence

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2316071A4 (en) 2008-06-19 2011-08-17 Servicemesh Inc Cloud computing gateway, cloud computing hypervisor, and methods for implementing same
US10411975B2 (en) 2013-03-15 2019-09-10 Csc Agility Platform, Inc. System and method for a cloud computing abstraction with multi-tier deployment policy
US9489647B2 (en) 2008-06-19 2016-11-08 Csc Agility Platform, Inc. System and method for a cloud computing abstraction with self-service portal for publishing resources
US9069599B2 (en) 2008-06-19 2015-06-30 Servicemesh, Inc. System and method for a cloud computing abstraction layer with security zone facilities
US8839254B2 (en) * 2009-06-26 2014-09-16 Microsoft Corporation Precomputation for data center load balancing
WO2011091056A1 (en) 2010-01-19 2011-07-28 Servicemesh, Inc. System and method for a cloud computing abstraction layer
US8806014B2 (en) * 2010-03-19 2014-08-12 Novell, Inc. Techniques for intelligent service deployment
US9448790B2 (en) 2010-04-26 2016-09-20 Pivotal Software, Inc. Rapid updating of cloud applications
US9772831B2 (en) 2010-04-26 2017-09-26 Pivotal Software, Inc. Droplet execution engine for dynamic server application deployment
US9207993B2 (en) 2010-05-13 2015-12-08 Microsoft Technology Licensing, Llc Dynamic application placement based on cost and availability of energy in datacenters
US8849469B2 (en) 2010-10-28 2014-09-30 Microsoft Corporation Data center system that accommodates episodic computation
US9063738B2 (en) 2010-11-22 2015-06-23 Microsoft Technology Licensing, Llc Dynamically placing computing jobs
US8612615B2 (en) * 2010-11-23 2013-12-17 Red Hat, Inc. Systems and methods for identifying usage histories for producing optimized cloud utilization
US8904005B2 (en) * 2010-11-23 2014-12-02 Red Hat, Inc. Indentifying service dependencies in a cloud deployment
US9043767B2 (en) 2011-04-12 2015-05-26 Pivotal Software, Inc. Release management system for a multi-node application
US9595054B2 (en) * 2011-06-27 2017-03-14 Microsoft Technology Licensing, Llc Resource management for cloud computing platforms
US9450838B2 (en) 2011-06-27 2016-09-20 Microsoft Technology Licensing, Llc Resource management for cloud computing platforms
US9170798B2 (en) 2012-03-02 2015-10-27 Vmware, Inc. System and method for customizing a deployment plan for a multi-tier application in a cloud infrastructure
US9098344B2 (en) * 2011-12-27 2015-08-04 Microsoft Technology Licensing, Llc Cloud-edge topologies
WO2013110965A1 (en) * 2012-01-27 2013-08-01 Empire Technology Development Llc Spiral protocol for iterative service level agreement (sla) execution in cloud migration
US9052961B2 (en) 2012-03-02 2015-06-09 Vmware, Inc. System to generate a deployment plan for a cloud infrastructure according to logical, multi-tier application blueprint
US9047133B2 (en) 2012-03-02 2015-06-02 Vmware, Inc. Single, logical, multi-tier application blueprint used for deployment and management of multiple physical applications in a cloud environment
US10031783B2 (en) 2012-03-02 2018-07-24 Vmware, Inc. Execution of a distributed deployment plan for a multi-tier application in a cloud infrastructure
US9071613B2 (en) * 2012-04-06 2015-06-30 International Business Machines Corporation Dynamic allocation of workload deployment units across a plurality of clouds
US9086929B2 (en) 2012-04-06 2015-07-21 International Business Machines Corporation Dynamic allocation of a workload across a plurality of clouds
CN104335170A (en) 2012-06-08 2015-02-04 惠普发展公司,有限责任合伙企业 Cloud application deployment
US9882824B2 (en) 2012-06-08 2018-01-30 Hewlett Packard Enterprise Development LP Cloud application deployment portability
US9311066B1 (en) * 2012-06-25 2016-04-12 Amazon Technologies, Inc. Managing update deployment
US9348652B2 (en) 2012-07-02 2016-05-24 Vmware, Inc. Multi-tenant-cloud-aggregation and application-support system
US9256412B2 (en) * 2012-07-04 2016-02-09 Sap Se Scheduled and quarantined software deployment based on dependency analysis
GB2504487A (en) 2012-07-30 2014-02-05 Ibm Automated network deployment of cloud services into a network by matching security requirements
US10291488B1 (en) * 2012-09-27 2019-05-14 EMC IP Holding Company LLC Workload management in multi cloud environment
GB2507338A (en) * 2012-10-26 2014-04-30 Ibm Determining system topology graph changes in a distributed computing system
AU2014232562B2 (en) * 2013-03-15 2019-11-21 Servicemesh, Inc. Systems and methods for providing ranked deployment options
US9329881B2 (en) 2013-04-23 2016-05-03 Sap Se Optimized deployment of data services on the cloud
CN103269282A (en) 2013-04-25 2013-08-28 杭州华三通信技术有限公司 Method and device for automatically deploying network configuration
US9430354B2 (en) 2013-08-30 2016-08-30 Citrix Systems, Inc. Aggregation of metrics for tracking electronic computing resources based on user class hierarchy
US9432794B2 (en) 2014-02-24 2016-08-30 International Business Machines Corporation Techniques for mobility-aware dynamic service placement in mobile clouds
US10234835B2 (en) 2014-07-11 2019-03-19 Microsoft Technology Licensing, Llc Management of computing devices using modulated electricity
US9933804B2 (en) 2014-07-11 2018-04-03 Microsoft Technology Licensing, Llc Server installation as a grid condition sensor
US9769206B2 (en) 2015-03-31 2017-09-19 At&T Intellectual Property I, L.P. Modes of policy participation for feedback instances
US10277666B2 (en) 2015-03-31 2019-04-30 At&T Intellectual Property I, L.P. Escalation of feedback instances
US9992277B2 (en) 2015-03-31 2018-06-05 At&T Intellectual Property I, L.P. Ephemeral feedback instances
US10129157B2 (en) 2015-03-31 2018-11-13 At&T Intellectual Property I, L.P. Multiple feedback instance inter-coordination to determine optimal actions
US9524200B2 (en) 2015-03-31 2016-12-20 At&T Intellectual Property I, L.P. Consultation among feedback instances
US10129156B2 (en) 2015-03-31 2018-11-13 At&T Intellectual Property I, L.P. Dynamic creation and management of ephemeral coordinated feedback instances
US10009234B2 (en) * 2015-11-19 2018-06-26 International Business Machines Corporation Predictive modeling of risk for services in a computing environment
US9891982B2 (en) 2015-12-04 2018-02-13 Microsoft Technology Licensing, Llc Error handling during onboarding of a service
US9798583B2 (en) 2015-12-04 2017-10-24 Microsoft Technology Licensing, Llc Onboarding of a service based on automated supervision of task completion
US10374930B2 (en) 2016-01-28 2019-08-06 Microsoft Technology Licensing, Llc Off-peak patching for enterprise stability
US10432707B2 (en) 2016-03-02 2019-10-01 International Business Machines Corporation Optimization of integration flows in cloud environments
US10013550B1 (en) 2016-12-30 2018-07-03 ShieldX Networks, Inc. Systems and methods for adding microservices into existing system environments
US10541901B2 (en) 2017-09-19 2020-01-21 Keysight Technologies Singapore (Sales) Pte. Ltd. Methods, systems and computer readable media for optimizing placement of virtual network visibility components
US10764169B2 (en) 2017-10-09 2020-09-01 Keysight Technologies, Inc. Methods, systems, and computer readable media for testing virtual network components deployed in virtual private clouds (VPCs)
US11038770B2 (en) 2018-02-01 2021-06-15 Keysight Technologies, Inc. Methods, systems, and computer readable media for managing deployment and maintenance of network tools
US10812349B2 (en) 2018-02-17 2020-10-20 Keysight Technologies, Inc. Methods, systems and computer readable media for triggering on-demand dynamic activation of cloud-based network visibility tools
US10884815B2 (en) 2018-10-29 2021-01-05 Pivotal Software, Inc. Independent services platform
US10951509B1 (en) 2019-06-07 2021-03-16 Keysight Technologies, Inc. Methods, systems, and computer readable media for providing intent-driven microapps for execution on communications network testing devices
US11489745B2 (en) 2019-10-15 2022-11-01 Keysight Technologies, Inc. Methods, systems and computer readable media for providing a declarative network monitoring environment
CN110825391B (en) * 2019-10-31 2023-10-13 北京金山云网络技术有限公司 Service management method, device, electronic equipment and storage medium
US11616882B2 (en) * 2020-05-22 2023-03-28 Microsoft Technology Licensing, Llc Accelerating pre-production feature usage

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100125664A1 (en) * 2008-11-14 2010-05-20 Computer Associates Think, Inc. System, Method, and Software for Integrating Cloud Computing Systems
US20100235355A1 (en) * 2009-03-13 2010-09-16 Novell, Inc. System and method for unified cloud management
US20100250744A1 (en) * 2009-03-24 2010-09-30 International Business Machines Corporation System and method for deploying virtual machines in a computing environment
US20100306772A1 (en) * 2009-06-01 2010-12-02 International Business Machines Corporation Virtual solution composition and deployment system and method
US20100332401A1 (en) * 2009-06-30 2010-12-30 Anand Prahlad Performing data storage operations with a cloud storage environment, including automatically selecting among multiple cloud storage sites
US20110041126A1 (en) * 2009-08-13 2011-02-17 Levy Roger P Managing workloads in a virtual computing environment
US20110145392A1 (en) * 2009-12-11 2011-06-16 International Business Machines Corporation Dynamic provisioning of resources within a cloud computing environment
US20110191759A1 (en) * 2010-02-01 2011-08-04 International Business Machines Corporation Interactive Capacity Planning
US20110213875A1 (en) * 2010-02-26 2011-09-01 James Michael Ferris Methods and Systems for Providing Deployment Architectures in Cloud Computing Environments
US20110289440A1 (en) * 2010-05-20 2011-11-24 Carter Stephen R Techniques for evaluating and managing cloud networks
US20120311012A1 (en) * 2008-10-08 2012-12-06 Jamal Mazhar Cloud Computing Lifecycle Management for N-Tier Applications
US8806014B2 (en) * 2010-03-19 2014-08-12 Novell, Inc. Techniques for intelligent service deployment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6760775B1 (en) * 1999-03-05 2004-07-06 At&T Corp. System, method and apparatus for network service load and reliability management
US6910024B2 (en) * 2000-02-04 2005-06-21 Hrl Laboratories, Llc System for pricing-based quality of service (PQoS) control in networks
US7526764B2 (en) * 2004-05-18 2009-04-28 Bea Systems, Inc. System and method for deployment plan
US20060080413A1 (en) * 2004-06-17 2006-04-13 International Business Machines Corporation Method and system for establishing a deployment plan for an application
US8656019B2 (en) * 2009-12-17 2014-02-18 International Business Machines Corporation Data processing workload administration in a cloud computing environment

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120311012A1 (en) * 2008-10-08 2012-12-06 Jamal Mazhar Cloud Computing Lifecycle Management for N-Tier Applications
US20100125664A1 (en) * 2008-11-14 2010-05-20 Computer Associates Think, Inc. System, Method, and Software for Integrating Cloud Computing Systems
US20100235355A1 (en) * 2009-03-13 2010-09-16 Novell, Inc. System and method for unified cloud management
US20100250744A1 (en) * 2009-03-24 2010-09-30 International Business Machines Corporation System and method for deploying virtual machines in a computing environment
US20100306772A1 (en) * 2009-06-01 2010-12-02 International Business Machines Corporation Virtual solution composition and deployment system and method
US20100332401A1 (en) * 2009-06-30 2010-12-30 Anand Prahlad Performing data storage operations with a cloud storage environment, including automatically selecting among multiple cloud storage sites
US20110041126A1 (en) * 2009-08-13 2011-02-17 Levy Roger P Managing workloads in a virtual computing environment
US20110145392A1 (en) * 2009-12-11 2011-06-16 International Business Machines Corporation Dynamic provisioning of resources within a cloud computing environment
US20110191759A1 (en) * 2010-02-01 2011-08-04 International Business Machines Corporation Interactive Capacity Planning
US20110213875A1 (en) * 2010-02-26 2011-09-01 James Michael Ferris Methods and Systems for Providing Deployment Architectures in Cloud Computing Environments
US8806014B2 (en) * 2010-03-19 2014-08-12 Novell, Inc. Techniques for intelligent service deployment
US20110289440A1 (en) * 2010-05-20 2011-11-24 Carter Stephen R Techniques for evaluating and managing cloud networks

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140244799A1 (en) * 2013-02-28 2014-08-28 International Business Machines Corporation Installation of an Asset from a Cloud Marketplace to a Cloud Server in a Private Network
US9094473B2 (en) * 2013-02-28 2015-07-28 International Business Machines Corporation Installation of an asset from a cloud marketplace to a cloud server in a private network
WO2020002030A1 (en) 2018-06-26 2020-01-02 Siemens Aktiengesellschaft Method and system for determining an appropriate installation location for an application to be installed in a distributed network environment
US11561781B2 (en) 2018-06-26 2023-01-24 Siemens Aktiengesellschaft Method and system for determining an appropriate installation location for an application to be installed in a distributed network environment
US10846070B2 (en) 2018-07-05 2020-11-24 At&T Intellectual Property I, L.P. Facilitating cloud native edge computing via behavioral intelligence
US11334332B2 (en) 2018-07-05 2022-05-17 At&T Intellectual Property I, L.P. Facilitating cloud native edge computing via behavioral intelligence
CN109783110A (en) * 2019-02-19 2019-05-21 安徽智融景和科技有限公司 Melt media system server disposition software systems

Also Published As

Publication number Publication date
US20110231552A1 (en) 2011-09-22
US8806014B2 (en) 2014-08-12

Similar Documents

Publication Publication Date Title
US8806014B2 (en) Techniques for intelligent service deployment
US11645396B2 (en) Cybersecurity vulnerability management based on application rank and network location
Liaqat et al. Federated cloud resource management: Review and discussion
Beserra et al. Cloudstep: A step-by-step decision process to support legacy application migration to the cloud
US11310335B2 (en) Function as a service gateway
US10541871B1 (en) Resource configuration testing service
US8639791B2 (en) Techniques for evaluating and managing cloud networks
US6857020B1 (en) Apparatus, system, and method for managing quality-of-service-assured e-business service systems
US9912666B2 (en) Access management for controlling access to computer resources
US8447848B2 (en) Preparing execution of systems management tasks of endpoints
US20100268568A1 (en) Workflow model for coordinating the recovery of it outages based on integrated recovery plans
US20080235761A1 (en) Automated dissemination of enterprise policy for runtime customization of resource arbitration
US10044630B2 (en) Systems and/or methods for remote application introspection in cloud-based integration scenarios
US8554885B2 (en) Techniques for evaluating and managing cloud networks via political and natural events
Hedhli et al. A survey of service placement in cloud environments
Lawrence et al. Using service level agreements for optimising cloud infrastructure services
Scheid et al. Automatic sla compensation based on smart contracts
US9823999B2 (en) Program lifecycle testing
Mills et al. Can economics-based resource allocation prove effective in a computation marketplace?
US11593463B2 (en) Execution type software license management
Coppolino et al. Effective QoS monitoring in large scale social networks
Macías et al. Enforcing service level agreements using an economically enhanced resource manager
Bernardo Utilizing security risk approach in managing cloud computing services
Stallings Overview of Cloud Computing
US20240129340A1 (en) Methods and systems for cloud security operations

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICRO FOCUS SOFTWARE INC., DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:NOVELL, INC.;REEL/FRAME:040020/0703

Effective date: 20160718

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., DELAWARE

Free format text: SECURITY INTEREST;ASSIGNORS:ATTACHMATE CORPORATION;BORLAND SOFTWARE CORPORATION;NETIQ CORPORATION;AND OTHERS;REEL/FRAME:044183/0718

Effective date: 20170901

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: NETIQ CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS SOFTWARE INC. (F/K/A NOVELL, INC.), WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: ATTACHMATE CORPORATION, WASHINGTON

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: SERENA SOFTWARE, INC, CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS (US), INC., MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: BORLAND SOFTWARE CORPORATION, MARYLAND

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131

Owner name: MICRO FOCUS LLC (F/K/A ENTIT SOFTWARE LLC), CALIFORNIA

Free format text: RELEASE OF SECURITY INTEREST REEL/FRAME 044183/0718;ASSIGNOR:JPMORGAN CHASE BANK, N.A.;REEL/FRAME:062746/0399

Effective date: 20230131