KR20150086516A - Network resource management - Google Patents

Network resource management

Info

Publication number
KR20150086516A
Authority
KR
South Korea
Prior art keywords
policy
model
processor
network
cloud
Prior art date
Application number
KR1020157016151A
Other languages
Korean (ko)
Inventor
Mathias Salle
Weiwei Zhou
Xi Xin
Original Assignee
Hewlett-Packard Development Company, L.P.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett-Packard Development Company, L.P.
Priority to KR1020157016151A priority Critical patent/KR20150086516A/en
Publication of KR20150086516A publication Critical patent/KR20150086516A/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled

Abstract

A network resource management method includes: generating, by a processor, a model of an application; defining a plurality of substitution points in the model; representing each substitution point as an abstract model having a set of sub-types; and codifying a number of policies indicating which sourcing option to use for each substitution point.

Description

NETWORK RESOURCE MANAGEMENT

The present invention relates to network resource management.

Cloud computing services are becoming increasingly available to individuals and businesses seeking to dynamically extend their information technology (IT) infrastructure and resources. These individuals and businesses often contract with cloud service providers when their internal IT infrastructure or resources are inadequate to accommodate an increase in network activity. This increase in network activity may be due, for example, to an increase in the sales of their respective goods or services. In this way, individuals and businesses can take advantage of the economies of scale associated with public and other forms of cloud computing services.

The system and method of the present invention provide network resource management. The method includes, in a processor, creating a model of an application, defining a plurality of substitution points in the model, representing each substitution point as an abstract model with a set of sub-types, and codifying a number of policies indicating which sourcing option to use for each substitution point. The method may also include receiving a request to instantiate the model, applying a plurality of policies to select a resource candidate, and acquiring the resources of the selected candidate.
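The patent describes the method only at this abstract level; the following is a minimal illustrative sketch of the described steps, not the patented implementation. All class, function, and option names (`Model`, `SubstitutionPoint`, `instantiate`, the `"ubuntu"`/`"private_cloud"` options) are hypothetical.

```python
# Hypothetical sketch: a model with substitution points, each offering a set
# of sub-types (sourcing options), plus codified policies that select which
# sub-type to use when the model is instantiated.

class SubstitutionPoint:
    def __init__(self, name, sub_types):
        self.name = name            # e.g. "operating_system"
        self.sub_types = sub_types  # candidate sourcing options

class Model:
    def __init__(self, application):
        self.application = application
        self.substitution_points = []

    def define_substitution_point(self, name, sub_types):
        self.substitution_points.append(SubstitutionPoint(name, sub_types))

def instantiate(model, policies):
    """Apply each codified policy to its substitution point to pick a resource."""
    selection = {}
    for point in model.substitution_points:
        choose = policies[point.name]          # policy for this point
        selection[point.name] = choose(point.sub_types)
    return selection

model = Model("web-shop")
model.define_substitution_point("operating_system", ["ubuntu", "windows"])
model.define_substitution_point("hosting", ["private_cloud", "public_cloud"])

policies = {
    "operating_system": lambda opts: "ubuntu" if "ubuntu" in opts else opts[0],
    "hosting": lambda opts: opts[0],   # static binding: always the first option
}

print(instantiate(model, policies))
```

Applying the two policies above would select the Ubuntu sub-type and the private-cloud sourcing option for the respective substitution points.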

The systems and methods of the present invention provide a cloud management device that may include a processor and a memory communicatively coupled to the processor. The cloud management device may also include a replacement module stored in the memory that, when executed by the processor, creates a model of the application to be scaled and defines a number of substitution points in the model. The cloud management device may further include a static binding policy creation module stored in the memory that, when executed by the processor, creates a plurality of explicit statements defining sub-types within the substitution points, and a dynamic binding policy creation module stored in the memory that, when executed by the processor, creates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.
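A dynamic binding policy of the kind described, with a scoring function evaluated at runtime, might look roughly as follows. This is a sketch under invented assumptions; the weights, metric names, and the specific scoring criteria are illustrative only and are not specified by the patent.

```python
# Hypothetical dynamic binding policy: score each sub-type against runtime
# metrics and select the best one. Cost and current load are invented
# criteria for illustration.

def score(sub_type, metrics):
    # Lower cost and lower current load both yield a higher (less negative) score.
    return -(metrics[sub_type]["cost"] + 2.0 * metrics[sub_type]["load"])

def dynamic_binding_policy(sub_types, metrics):
    """Return the best sub-type according to the scoring function."""
    return max(sub_types, key=lambda s: score(s, metrics))

runtime_metrics = {
    "private_cloud": {"cost": 1.0, "load": 0.9},  # cheap but nearly full
    "public_cloud":  {"cost": 3.0, "load": 0.1},  # costlier but mostly idle
}

best = dynamic_binding_policy(["private_cloud", "public_cloud"], runtime_metrics)
print(best)
```

Because the scoring function is evaluated at runtime, the same policy can select a different sub-type as metrics change, which is the distinction drawn here between dynamic and static binding.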

The systems and methods of the present invention may be embodied as a computer program product for managing network resources, which may include a computer readable storage medium having computer usable program code embodied therewith. The computer usable program code includes computer usable program code that, when executed by a processor, creates a model defining an application to be scaled and defines a plurality of substitution points in the model. The computer usable program code may also include computer usable program code that, when executed by the processor, creates a plurality of explicit statements defining sub-types within the substitution points, and computer usable program code that, when executed by the processor, creates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.

BRIEF DESCRIPTION OF THE DRAWINGS The accompanying drawings illustrate various examples of the principles set forth herein and form a part of the specification. The illustrated examples are provided for illustrative purposes only and do not limit the scope of the claims.
1 is a block diagram of a system for cloud bursting based on alternate point and bursting policy, in accordance with an example of the principles described herein.
2 is a block diagram of the cloud management device of FIG. 1, in accordance with an example of the principles described herein.
Figure 3 is a block diagram of an application model or dependency graph in an infrastructure as a service (IaaS) scenario, according to an example of the principles described herein.
4 is a block diagram of an application model or dependency graph in a software as a service (SaaS) scenario, according to an example of the principles described herein.
5 is a block diagram of an application model or dependency graph representing scale points and alternative points, in accordance with an example of the principles described herein.
6 is a flow chart illustrating a method for managing network resources, in accordance with an example of the principles described herein.
7 is a flow chart illustrating a method for managing network resources, according to another example of the principles described herein.
Figures 8 and 9 are flow charts illustrating a method for managing network resources in accordance with another example of the principles described herein.
Figure 10 is a block diagram of a dependency graph depicting the dependencies of a number of rules in a policy, according to another example of the principles described herein.
Throughout the drawings, like reference numerals designate similar but not necessarily identical elements.

The system and method provide network resource management. The method includes creating a model of an application, defining a plurality of substitution points in the model, representing each substitution point as an abstract model with a set of sub-types, and codifying a number of policies indicating which sourcing option to use for each substitution point. The method may also include receiving a request to instantiate the model, applying a plurality of policies to select a resource candidate, and acquiring the resources of the selected candidate.

The present systems and methods provide a cloud management device that may include a processor and a memory communicatively coupled to the processor. The cloud management device may also include a replacement module stored in the memory that, when executed by the processor, creates a model of the application to be scaled and defines a number of substitution points in the model. The cloud management device may further include a static binding policy creation module stored in the memory that, when executed by the processor, creates a plurality of explicit statements defining sub-types within the substitution points, and a dynamic binding policy creation module stored in the memory that, when executed by the processor, creates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.

The system and method provide a computer program product for managing network resources that may include a computer readable storage medium having computer usable program code embodied therewith. The computer usable program code includes computer usable program code that, when executed by a processor, creates a model defining an application to be scaled and defines a plurality of substitution points in the model. The computer usable program code may also include computer usable program code that, when executed by the processor, creates a plurality of explicit statements defining sub-types within the substitution points, and computer usable program code that, when executed by the processor, creates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.

As illustrated above, it may be difficult for an individual or business to decide when to purchase external cloud services, or how much of these services to purchase. For example, an individual or business may not understand whether, at a given point, a purchase of external cloud services will be economically beneficial for its underlying business activities. Several environmental factors may be considered, such as the market, the internal or private network currently being used, which applications the individual or business will scale out to the external cloud service, what remains in the internal dedicated network, and the economic advantages of scaling out to an external cloud service.

Cloud bursting can be used when, at a given point in time, it may be more economical to trigger a shift from performing additional workloads on internal (on-premise) resources to performing them on one or more external clouds or networks. A spike in demand on an application within internal resources can be handled dynamically by adding capacity provided by a third-party provider in external resources. The degree of coupling with internal resources is an aspect of cloud bursting that one may want to control.
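A burst decision of the kind just described could be sketched as follows. The thresholds, function names, and hysteresis behavior are assumptions made for illustration; the patent leaves the trigger condition to policy.

```python
# Hypothetical burst decision: shift new workload to the external cloud once
# internal utilization passes a threshold; shift back only after load subsides
# well below it (simple hysteresis to avoid flapping).

BURST_THRESHOLD = 0.8    # scale out above 80% internal utilization
RELEASE_THRESHOLD = 0.5  # scale back in below 50%

def route_workload(internal_utilization, currently_bursting):
    """Return (target, bursting) for the next unit of workload."""
    if internal_utilization > BURST_THRESHOLD:
        return "external_cloud", True        # cloud burst / scale out
    if currently_bursting and internal_utilization >= RELEASE_THRESHOLD:
        return "external_cloud", True        # keep the burst active
    return "internal", False                 # scale in / stay internal

print(route_workload(0.95, False))
print(route_workload(0.4, True))
```

The hysteresis band between the two thresholds keeps a demand spike from toggling the burst on and off on every sample.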

For example, an information technology (IT) organization can use cloud bursting to deploy a new release of an application for testing and evaluation purposes. In this case, the tested application runs in the cloud and is not connected to internal resources at all. Similarly, a development project may use cloud bursting to provision a smoke test, triggered by its own continuous build environment, to determine whether there is an obstacle significant enough to reject a prospective software release. Thus, the use of cloud bursting shifts capital expenditure to operational expenditure for the test bed.

In the above two situations, cloud resources are loosely coupled with internal IT resources. Cloud bursts are tightly coupled with internal resources when the provisioned resources in the cloud require frequent data sharing and communication with internal resources. Not all applications or application components lend themselves to cloud bursting. Whether tightly or loosely coupled, and whether the requested service is at the infrastructure level or the service level, cloud bursting may not be an improvised operation performed when a spike occurs. Cloud bursting is an inherent part of a particular application design and of the user's deployment model. The present system and method help an administrator determine which applications used in an internal network can be deployed in an external network using cloud bursting technology, and how those particular applications can be deployed externally.

As used in this specification and in the claims that follow, the expression "cloud" broadly means any network that delivers requested virtual resources as a service. In one example, a cloud network may provide a computing environment in which a user may have access to applications or computing resources as a service from anywhere through a connected device. These services may be provided by an entity referred to as a cloud service provider. Examples of services that can be provided through a cloud network include infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), storage as a service (STaaS), a test environment as a service (TEaaS), and an application program interface (API) as a service (APIaaS). Throughout this specification and in the appended claims, the expression "network" may include a cloud network as defined above or any other form of network.

Also, as used in this specification and the appended claims, the expression "public cloud" broadly means any cloud computing environment in which a service provider makes applications, storage, and other resources available to the general public as a service. In one example, these services are provided by the service provider through a pay-per-use model. In this example, the public cloud service provider owns and operates the infrastructure. In another example, a public cloud service provider provides access through an open network, such as the Internet, and no direct connectivity is offered. Examples of cloud services provided within a public cloud include the AMAZON WEB SERVICES developed and sold as a service by Amazon.com, Inc., and the RACKSPACE CLOUD web application hosting services developed and provided by Rackspace US, Inc.

As used in this specification and the appended claims, the expression "private cloud " has a broader meaning as any cloud computing environment in which access is exclusively restricted to individuals or businesses. In one example, a private cloud may be any cloud infrastructure that is operated solely for an individual or enterprise. In one example, the private cloud is managed internally by the owner of the private cloud infrastructure. In another example, the private cloud is managed by a third party and is hosted internally or externally.

As used in this specification and the appended claims, the expression "hybrid cloud" has a broader meaning as any cloud computing environment including a plurality of public cloud resources and a plurality of private cloud resources . In one example, a hybrid cloud includes a number of cloud networks, such as private clouds and public clouds, which are maintained as separate networks but are associated to provide multiple services.

As used in this specification and the appended claims, the expression "scaling out" or similar expressions may be used to refer to a second or third cloud computing environment for a first or original cloud computing environment, Has a broadly understood meaning as any activity that initially allocates or consumes additional resources within the computing environment. Similarly, the term "scaling in" or similar language, as used in this specification and the appended claims, may be used to release, freely or in part, free up " or " discharge " Scaling out and scaling in may generally be referred to as "horizontal" scaling or "cloud bursting. &Quot;

As used in this specification and the appended claims, the expression "scaling up" or similar language is intended to encompass additional resources within the cloud computing environment to accommodate an increase in network activity in the cloud computing environment. It has a broadly understood meaning as any activity that is assigned or consumed. Similarly, as used herein and in the appended claims, a "scaling down" or similar language representation may be used to release, free, or free some or all of the resources in the cloud computing environment It has a broadly understood meaning as any activity. Scaling up and scaling down can be generally referred to as "vertical" scaling.

Also, as used in this specification and the appended claims, the expression "a plurality" or similar language has a broader meaning as any positive integer, including 1 to infinity, It means that there is no number and no number.

In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present systems and methods. However, it will be apparent to those skilled in the art that the present apparatus, systems and methods may be practiced without these specific details. Reference in the present specification to "example" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included as described, but may not be included in another example.

Referring to Figure 1, a block diagram of a system 100 for cloud bursting based on substitution points and bursting policies is shown, in accordance with an example of the principles described herein. Figure 1 illustrates a system 100 for scaling out or cloud bursting from a first network, such as an internal network or a private network, to a second network, e.g., an external cloud network. In one example, the external cloud network is a public cloud supported by a public cloud service provider. The system 100 may include a cloud management device 102 having access to multiple networks, i.e., network A 104 and network B 106. Although two networks 104 and 106 are shown in FIG. 1, any number of networks may be communicatively coupled to the cloud management device 102. The networks 104, 106 may also be communicatively coupled to one another via a direct communication line 180, such as, for example, a public or private Internet connection. For example, the networks may be connected via a virtual private network (VPN) available over an open network or a leased line.

In another example, the network is a cloud service network as defined above. Throughout this specification and the figures, network A 104 and network B 106 will be described as a cloud service network. However, any type of network may be employed in achieving the goals of the present systems and methods.

In one example, the cloud management device 102 may be separate from the networks 104, 106. In this example, the cloud management device may be controlled by a third party, and its functionality may be provided as a service, for example, to the owner of network A 104.

In another example, the cloud management device 102 may be integrated into one of the networks 104, 106. In this example, the administrator 160 of the network 104, 106 into which the cloud management device 102 is integrated operates the cloud management device 102 to achieve the functionality it provides. Further, in this example, the cloud management device 102 may be provided as a computer program product, as described in more detail below.

The network 104, 106 may include multiple servers 120, 140. In the example of FIG. 1, each network 104, 106 includes one server 120, 140. However, each network 104, 106 may include any number of servers 120, 140. As shown in FIG. 1, each server includes hardware layers 121 and 141, including, for example, a processor and memory, among other computer hardware devices for forming a computing device.

The hardware layers 121 and 141 support virtualization layers 122 and 142, respectively. The virtualization layers 122 and 142 in the servers 120 and 140 provide an abstraction layer from which virtual entities may be instantiated, for example, virtual servers, virtual storage, virtual networks including virtual private networks, virtual applications and operating systems, and virtual clients. In particular, a number of operating systems 123, 143 may be executed by the processors in the hardware layers 121, 141. Although one operating system 123, 143 is shown in each server 120, 140 of FIG. 1, any number of virtual machines, each including its own operating system and any number of applications, may be instantiated on the servers 120, 140 to provide one user, or a number of different users, access to these virtual resources.

In addition, a plurality of applications 124 can be executed by the processor in the hardware layer 121 of server A 120. Although only one application 124 is shown, any number of applications may be stored and executed on server A 120. In one example, and throughout this disclosure, server A 120 may experience overuse of its resources with respect to the application 124 due to an increase in the number of clients making requests to server A 120. As discussed above, the administrator 160 may wish to scale the resources vertically and/or horizontally. In this scenario, the resources provided by server B 140 on network B 106 provide the application 124 on server B 140, and all or part of the network transactions may be directed to that instance of the application.

In one example, applications 124 and 144 executed on servers 120 and 140 may be executed in their respective operating systems 123 and 143, which may be of the same or different types. Each of the applications 124 and 144 and their respective operating systems 123 and 143 may be supported by additional virtual resources backed by the hardware layers 121 and 141, such as processors, memory, and network adapters.

The cloud service management layers 125 and 145 in the networks 104 and 106 provide management of the cloud services residing on the servers 120 and 140 in the networks 104 and 106. In one example, the cloud service management layers 125 and 145 provision resources. Resource provisioning provides dynamic procurement of computing resources and other resources used to perform tasks within the cloud computing environment. In another example, the cloud service management layer 125, 145 provides service level management in which cloud computing resources are allocated such that the contracted service level is met. In another example, the cloud service management layer 125, 145 performs a combination of the above services.

A number of cloud services 126, 146 are supported by the servers 120, 140 in the networks 104, 106. As described above, examples of services that can be provided through the cloud network include IaaS, PaaS, and SaaS. The applications 124 and 144, the operating systems 123 and 143, the cloud service management layers 125 and 145 and the hardware layers 121 and 141 can thus be used to provide a number of these types of services to the user . In one example, cloud services 126 and 146 support an underlying service in which users or buyers of cloud services 126 and 146 participate. For example, the user or buyer of the cloud services 126, 146 may participate in selling the goods or service itself and may do so through the cloud services 126, 146 provided by the owner and operator of the network B 106.

In this example, for simplicity of illustration, the cloud management device 102, server A 120, and server B 140 are communicatively coupled via their respective networks 104, 106 and the cloud management device 102. However, the principles set forth herein extend equally to any alternative configuration. As such, alternative examples within the principles of this disclosure include, but are not limited to, examples in which the cloud management device 102, server A 120, and server B 140 are implemented by the same computing device; examples in which the functions of the cloud management device 102, server A 120, or server B 140 are implemented by multiple interconnected computers; and examples in which the cloud management device 102, server A 120, and server B 140 communicate directly over a bus without an intermediate network device.

In another example, the cloud management device 102 may be implemented on either server A 120 or server B 140 to manage scaling out, scaling up, or scaling down of the cloud service.

In another example, the cloud management device 102 may be implemented as a service by a third party. In this example, the third party may be an organization or business such as Hewlett-Packard Company, which develops the CLOUDSYSTEM cloud network infrastructure that helps build private, public, and hybrid cloud computing environments by, for example, combining storage, servers, networking, and software.

In one example, a global load balancer 170 may likewise be communicatively coupled to the cloud management device 102. The global load balancer 170 includes a number of policies that assign transaction requests to the networks 104 and 106 or to the servers 120 and 140 within the networks 104 and 106. The load balancers 127 and 147 of network A 104 and network B 106 likewise include a number of policies that assign transaction requests to the servers 120 and 140 in their respective networks.

The cloud management device 102 receives incoming transaction (HTTP) requests and directs each request, via the global load balancer 170 and the load balancers 127 and 147 of network A 104 and network B 106, to a server 120, 140 in the networks 104, 106. In one example, the processor (202 in FIG. 2) of the cloud management device 102 has access to, and can control, the global load balancer 170. The global load balancer 170 directs new transaction requests to network B 106 instead of network A 104, and vice versa, as described below with respect to cloud bursting and vertical scaling.
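The routing behavior described for the global load balancer 170 might be sketched as follows. The patent does not specify an algorithm; the class and network names here are assumptions used only to illustrate redirecting requests when a burst is active.

```python
# Hypothetical global load balancer: direct requests to network B while a
# burst policy is active, otherwise to network A.

class GlobalLoadBalancer:
    def __init__(self):
        self.burst_active = False

    def set_burst(self, active):
        # Toggled by the cloud management device when bursting starts/stops.
        self.burst_active = active

    def direct(self, request):
        """Return the network that should handle this transaction request."""
        return "network_B" if self.burst_active else "network_A"

glb = GlobalLoadBalancer()
print(glb.direct("GET /checkout"))   # before the burst
glb.set_burst(True)                  # cloud management device triggers a burst
print(glb.direct("GET /checkout"))   # during the burst
```

In practice such a component would sit in front of the per-network load balancers 127 and 147, which then pick a server within the chosen network.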

As discussed above, the system 100 may further include a cloud management device 102. In one example, the cloud management device 102 is a third-party device that runs separately from the networks 104 and 106. In another example, the cloud management device 102 is implemented in, or associated with, private network A 104. The cloud management device 102 provides the consumer and system administrator 160 access to the cloud computing environment created by the networks 104, 106, and assists in and prescribes cloud bursting and vertical scaling.

2 is a block diagram of the cloud management device 102 of FIG. 1, in accordance with an example of the principles described herein. The cloud management device 102 is a computing device that performs the methods described herein. In another example, the cloud management device 102 is a mobile computing device, such as a mobile phone, a smartphone, a personal digital assistant (PDA), or a laptop computer with the ability to perform the methods described herein. In another example, the cloud management device 102 is a desktop computing device.

In another example, the cloud management device 102 may be provided as a service by a cloud computing resource provider, a manager, or a third party. In this example, the cloud management device 102 may be executed on one computing device, or it may be distributed over a plurality of devices located at any number of points.

In order to achieve the required functionality of the cloud management device 102, the cloud management device 102 includes various hardware components. These hardware components may include a plurality of processors 202, a plurality of data storage devices 204, a plurality of peripheral device adapters 206, and a plurality of network adapters 208. These hardware components may be interconnected through the use of multiple buses and/or network connections. In one example, the processor 202, data storage device 204, peripheral device adapter 206, and network adapter 208 may be communicatively coupled via a bus 207.

The processor 202 may include a hardware architecture for retrieving executable code from the data storage device 204 and executing the executable code. The executable code may, when executed by the processor 202, cause the processor 202 to manage at least the scaling of cloud network services, including scaling out, scaling up, and scaling down, in accordance with the methods described herein. In the course of executing the code, the processor 202 may receive input from, and provide output to, a number of the remaining hardware units.

The data storage device 204 may store data such as executable program code that is executed by the processor 202 or another processing device. As described, the data storage device 204 may specifically store a number of application modules that the processor 202 executes to implement at least the functionality of managing the scaling of cloud network services, including scaling out, scaling up, and scaling down.

The data storage device 204 may include various types of memory modules, including volatile and non-volatile memory. For example, the data storage device 204 of the present example includes a random access memory (RAM) 231, a read only memory (ROM) 232, and a hard disk drive (HDD) memory 233. Many other types of memory may also be utilized, and the present systems and methods may employ many different types of memory in the data storage device 204 as may suit a particular application of the principles described herein. In some examples, different types of memory in the data storage device 204 may be used for different data storage needs. For example, in some examples the processor 202 may boot from the ROM 232, maintain non-volatile storage in the HDD memory 233, and execute program code stored in the RAM 231.

In general, the data storage device 204 may comprise a computer-readable storage medium. For example, the data storage device 204 may be, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples of computer readable storage media include, for example, an electrical connection with multiple wires, a portable computer diskette, a hard disk, RAM, ROM, an erasable programmable read only memory (EPROM or flash memory), a portable compact disc read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination thereof. In the context of this document, a computer-readable storage medium can be any tangible medium that contains or can store a program for use by or in connection with an instruction execution system, apparatus, or device. In yet another example, the computer-readable storage medium can be any non-transitory medium that contains or can store a program for use by or in connection with an instruction execution system, apparatus, or device.

The hardware adapters 206 and 208 in the cloud management device 102 allow the processor 202 to interface with various other hardware elements external and internal to the cloud management device 102. For example, the peripheral device adapter 206 may provide an interface to external devices 212, such as a display device 210, to create a user interface, and the network adapter 208 may provide access to network A (104 in FIG. 1) and network B (106 in FIG. 1). The display device 210 may be provided to allow the user (160 of FIG. 1) to interact with and implement the functionality of the cloud management device 102.

The peripheral device adapter 206 may also create an interface between the processor 202 and a printer, the display device 210, or another media output device. The network adapter 208 may provide an interface to the networks 104 and 106, enabling data transmission between the cloud management device 102, the global load balancer 170, and networks A and B 104, 106.

The cloud management device 102 may also include a number of modules used to determine when and how to scale computing resources vertically and horizontally within networks A and B (104, 106) and between networks A and B (104, 106). The various modules within the cloud management device 102 may be executed separately. In this example, the various modules may be stored as separate computer program products. In another example, the various modules within the cloud management device 102 may be combined within a number of computer program products, each computer program product comprising a number of modules.

The cloud management device 102 may include a replacement module 240 that, when executed by the processor 202, generates a model of the design conditions of the application (124, 144 in FIG. 1) and determines a number of substitution points in the model. A substitution point indicates to the user or administrator 160 that a given part of the model may be replaced with a particular sub-type of that part of the model. The replacement module 240 also binds a number of substitution points in the application definition or model, such as the dependency graphs shown in FIGS. 3-5. In one example, the replacement module 240 is stored in the data storage device 204 of the cloud management device 102, and is accessible and executable by the processor 202.

Referring back to FIG. 1, which illustrates the cloud management device 102 and the servers 120 and 140 of networks A and B (104, 106), the cloud management device 102 interfaces with both network A (104) and network B (106). In one example, this interfacing is accomplished through a number of application programming interfaces (APIs) 128, 148.

FIG. 3 is a block diagram of an application model or dependency graph 300 within an infrastructure-as-a-service (IaaS) scenario, in accordance with an example of the principles described herein. In one example, the replacement module 240 may generate the dependency graph 300 of FIG. 3. In the example of FIG. 3, block 302 of the dependency graph 300 identifies the IaaS model generated by the replacement module (240 of FIG. 2). In this example, the replacement module 240 determines, either autonomously or through data received from the administrator 160, that the application to be scaled requires a particular operating system. Examples of operating systems that the replacement module (240 of FIG. 2) may identify include the WINDOWS® operating system developed and distributed by Microsoft Corporation, the UBUNTU® operating system developed and distributed by Canonical Ltd., the UNIX® operating system developed by American Telephone and Telegraph Company, the LINUX® Unix-based operating system developed and distributed as an open-source software package, the ANDROID® Linux-based operating system developed and distributed by Google Inc., the BERKELEY SOFTWARE DISTRIBUTION (BSD) Unix-based operating system developed and distributed by the Computer Systems Research Group (CSRG) of the University of California at Berkeley, the iOS® and Mac OS X® operating systems developed and distributed by Apple Inc., and the Community Enterprise Operating System (CentOS) Unix-based operating system derived from sources distributed by Red Hat, Inc., among others.

In the example of FIG. 3, the replacement module (240 of FIG. 2) has determined that the application requires the CentOS Unix-based operating system, as indicated by block 302. The replacement module (240 of FIG. 2) identifies an IaaS substitution point for the server device at block 304. At this point, the replacement module (240 of FIG. 2) identifies a number of sub-types of the server at blocks 306, 308, and 310. The "<< Extends >>" notation in FIG. 3 represents a sub-type of the base device (e.g., a server) available to the user as an option for that device. The number and type of sub-types depend on which type of base device the application is constrained to work with.

The sub-types of servers that may be provided by a cloud network service provider may include an internal virtual machine (VM), such as a virtual machine residing on server A (120 in FIG. 1) on network A 104. Other examples of server sub-types include a number of external VMs, such as virtual machines residing on server B (140 in FIG. 1) on network B 106 and provided through the ELASTIC COMPUTE CLOUD (EC2) cloud computing platform developed and provided by Amazon.com, Inc. or the RACKSPACE CLOUD cloud computing platform developed and provided by Rackspace US, Inc. The server sub-types shown in FIG. 3 are examples only and are not limiting.

A dependency graph such as the dependency graph 300 of FIG. 3 may be used to determine any number of substitution points, and may be applied to any number of applications that can potentially scale out to another network in a cloud bursting operation. Other computing devices provided as infrastructure in an IaaS computing services environment, such as processors, load balancers, memory devices, peripheral adapters, network adapters, and display devices, among other hardware devices, may likewise be identified by the replacement module (240 of FIG. 2) as substitution points whose sub-types are provided as options by the various IaaS service providers.
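As a minimal sketch of the structure just described, a dependency-graph model with substitution points might be represented as follows. The class and node names here are assumptions for illustration, not from the patent; only the shape of the model (nodes, dependency edges, and the "<< Extends >>" sub-type relation) mirrors FIG. 3.

```python
# Illustrative sketch only: a dependency-graph model in which a node may be
# marked as a substitution point whose sub-types (the "<<Extends>>" relation)
# are the sourcing options available for that part of the model.

class ModelNode:
    def __init__(self, name, substitutable=False):
        self.name = name
        self.substitutable = substitutable  # True if this is a substitution point
        self.sub_types = []                 # options offered via <<Extends>>
        self.children = []                  # dependency edges

    def extends(self, sub_type_name):
        """Register a sub-type option for this substitution point."""
        self.sub_types.append(sub_type_name)
        return self

# The IaaS example of FIG. 3: a CentOS model depends on a server, and the
# server is a substitution point with three available sub-types.
model = ModelNode("CentOS IaaS model")
server = ModelNode("server", substitutable=True)
server.extends("internal VM").extends("EC2 VM").extends("RackSpace VM")
model.children.append(server)

substitution_points = [n for n in model.children if n.substitutable]
```

Walking the graph for nodes flagged as substitutable yields the substitution points whose sub-types the policies of later sections choose among.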

FIG. 4 is a block diagram of an application model or dependency graph 400 in a software-as-a-service (SaaS) scenario, in accordance with an example of the principles described herein. In one example, the replacement module 240 may generate the dependency graph 400 of FIG. 4. In the example of FIG. 4, block 402 of the dependency graph 400 identifies the SaaS model generated by the replacement module (240 of FIG. 2). In the example of FIG. 4, a software package is provided as a service that assists the administrator 160 in setting up a user with an email account. In this example, the replacement module 240 determines, either autonomously or through data received from the administrator 160, which email account the user will be associated with.

In the example of FIG. 4, the replacement module (240 of FIG. 2) identifies the SaaS substitution point at block 404. At this point, the replacement module (240 of FIG. 2) identifies a number of sub-types of email accounts available to the user at blocks 406 and 408. The "<< Extends >>" notation in FIG. 4 represents a sub-type of a base application (e.g., an email account service) available to the user as an option for the base application. The number and type of sub-types depend on what type of base application the system is constrained to work with.

Sub-types of email accounts that may be provided by a cloud network service provider may include, for example, an EXCHANGE SERVER email account service developed and distributed by Microsoft Corporation and a GMAIL email account service developed and distributed by Google Inc. The email account services illustrated in FIG. 4 are examples only and are not limiting.

As described above in connection with FIG. 3, a dependency graph such as the dependency graph 400 of FIG. 4 may be applied to any number of applications that can potentially be scaled out to another network in a cloud bursting operation. Other software packages provided in a SaaS computing service environment, such as an operating system, a calendaring software package, a word processing software package, an internet purchasing software package, and a security software package, may likewise be identified by the replacement module (240 of FIG. 2) as substitution points whose sub-types are provided as options by the various SaaS service providers.

FIG. 5 is a block diagram of an application model or dependency graph 500 representing a scale point 518 and a substitution point 510, in accordance with an example of the principles described herein. FIG. 5 illustrates a cloud burst scenario in which, for example, an increase in network activity in network A 104 causes horizontal scaling of resources into another network, e.g., network B 106. At block 504, a number of load balancers, such as the global load balancer 170 and the load balancers 127 and 147 of networks 104 and 106, are updated so that transaction requests are directed appropriately. The application to be horizontally scaled from network A 104 to network B 106 is indicated at block 506.

The scale point 518 is shown in the dependency graph as the point in network usage at which a cloud burst scenario occurs. In one example, range cardinality is expressed as a reference to a represented variable in the model 500, e.g., "variation initnb is [1..20]" and "variation maxnb is [1..20]". These range cardinality expressions are used to determine when the application performs horizontal scaling and how many instances of the application (e.g., between 1 and 20) can be scaled out to another network.

In the example above, at least one instance and up to 20 instances of the application can be scaled out to another network at the start of the horizontal scaling process. This range cardinality is just an example. In another example, the administrator 160 may wish to start horizontal scaling only when it is determined that at least two additional instances of the application are to be scaled horizontally to another network. In this manner, the administrator 160 can ensure that horizontal scaling will be effective in light of the cost of acquiring available resources on the external network from the cloud computing resource provider, relative to continuing to use resources within the internal network. When the parameters specified in the range cardinality are satisfied, horizontal scaling or cloud bursting is initiated, and a number of application instances, each having its own individual operating system, virtual machine, and other devices and environment, are obtained from the cloud computing resource provider.
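The range-cardinality check just described might be interpreted as in the following sketch. The function and parameter names are assumptions introduced for illustration; only the clamping behavior (no burst below the minimum, never exceeding the maximum) follows the description above.

```python
# Illustrative sketch: interpreting the range cardinalities
# "variation initnb is [1..20]" and "variation maxnb is [1..20]" as bounds
# on how many application instances may be scaled out in a cloud burst.

def instances_to_scale(requested, initnb=(1, 20), maxnb_upper=20):
    """Clamp a requested burst size to the declared range cardinality.

    Returns 0 when the request is below the initial minimum, meaning a
    cloud burst is not yet triggered. An administrator who bursts only
    when at least two instances are needed would pass initnb=(2, 20).
    """
    low, _high = initnb
    if requested < low:
        return 0                        # below threshold: do not burst
    return min(requested, maxnb_upper)  # never exceed the declared maximum
```

With the default ranges, a request for 50 instances would be capped at 20, while a request for a single instance under an `initnb=(2, 20)` policy would not trigger a burst at all.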

The dependency graph 500 also shows the dependencies of the operating system (block 508) and the application (block 506) on the instantiation of the virtual machine (block 510). As described above, a substitution point can be placed at any point in the dependency graph 500, as determined by the replacement module (240 of FIG. 2). In the example of FIG. 5, a substitution point exists at block 510 with respect to the virtual machine (VM) used to execute the application (block 506) and its associated operating system 508. The IaaS sub-types available as VMs in this example include an internal VM (block 512), an EC2 VM (block 514), and a RackSpace VM (block 516).

Referring to FIGS. 3-5, a dependency graph is generated for each instance of horizontal scaling of the application. In this manner, whenever an application is run on an internal network, such as network A (104 in FIG. 1), and an instance of the application is to be horizontally scaled out to an external network, such as network B (106 in FIG. 1), the replacement module (240 in FIG. 2) of the cloud management device 102 generates a model or dependency graph of the design conditions of the application (124 and 144 in FIG. 1) and determines a number of substitution points in the model or dependency graph. Then, to create and execute the policies that determine what types of services and devices are to be used at the substitution points, and to identify a number of cloud service providers matching the cloud environment conditions expressed by the policies, the processor 202 executes other modules 250, 260, and 270 stored in the cloud management device 102.

Referring again to FIG. 2, the cloud management device 102 may further include a static binding policy generation module 250 that, when executed by the processor 202, generates a number of binding statements of the form "abstract model == sub-type". The static binding policy generation module 250 associates such a binding statement at run time with a provisioning request made to a resource provider from which the administrator 160 may wish to receive cloud computing services. The provisioning request informs the runtime system to use the specified sub-type instead of the abstract model when scaling from a private network, such as network A (104 in FIG. 1), to an open network, such as network B (106 in FIG. 1).
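A static binding of the form "abstract model == sub-type" can be sketched as a simple lookup table consulted at provisioning time. The table entries and the function name below are hypothetical examples, not taken from the patent.

```python
# Hypothetical sketch of static binding: a table of binding statements of the
# form "abstract model == sub-type", consulted when a provisioning request is
# made, so the runtime substitutes the pinned sub-type for the abstract model.

static_bindings = {
    "server": "RackSpace VM",            # administrator pinned this choice
    "email account": "Exchange Server",
}

def resolve_static(abstract_model, default=None):
    """Return the statically bound sub-type, or a default when unbound."""
    return static_bindings.get(abstract_model, default)
```

Any abstract model without a static binding would fall through to dynamic binding, discussed next.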

In addition, the cloud management device 102 may include a dynamic binding policy generation module 260 that, when executed by the processor 202, generates a number of policies providing a scoring function that is evaluated at run time to inform the resource provider of the best sub-type to select when the request is processed. For example, if the system 100 is to use either the EC2 VM or the RACKSPACE VM, but determines that the aggregate internal hypervisor utilization in the system 100 is below a threshold, the system 100 may request that the internal VM be used instead. In another example, the administrator 160 may specify that an even distribution across the RACKSPACE servers and the EC2 servers be maintained, such that the scaling out of resources is evenly distributed across the cloud service providers.
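The two examples above can be combined into a single scoring sketch. The threshold value and the evenness heuristic are assumptions for illustration: prefer an internal VM while internal capacity remains, otherwise pick the external provider currently running the fewest instances so that scale-out stays evenly distributed.

```python
# Hypothetical scoring sketch for dynamic binding, evaluated at run time.

def choose_sub_type(internal_utilization, external_counts, threshold=0.8):
    """Pick a VM sub-type for the next instance.

    internal_utilization: fraction of aggregate internal hypervisor capacity
    in use; external_counts: instances already placed with each provider.
    """
    if internal_utilization < threshold:
        return "internal VM"
    # even distribution across external providers (e.g. EC2, RackSpace):
    # place the next instance with the least-loaded provider.
    return min(external_counts, key=external_counts.get)
```

A real scoring function would weigh many more factors (price, latency, contractual terms); this sketch only shows the shape of a run-time evaluated policy.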

In addition, the cloud management device 102 may include a policy enforcement module 270 that, when executed by the processor 202, compiles the policies and enforces them by filtering resource candidates against the constraints provided by the policies, thereby determining a number of resource candidates that comply with those constraints.

A method by which the processor 202 of the cloud management device 102 uses the various hardware and software elements of the cloud management device 102 to dynamically scale computing resources between two or more networks will now be described in more detail with reference to FIGS. 6 and 7. FIG. 6 is a flow chart illustrating a method for managing network resources, in accordance with an example of the principles described herein. The method illustrated in FIG. 6 is performed once, and its results can be reused for future instances of horizontal scaling. The method of FIG. 6 may thus be referred to as design time, performed before runtime. The method may be initiated by the processor (202 of FIG. 2) executing the replacement module 240 to create a model of the application (block 602). In one example, the model is captured as a dependency graph; FIGS. 3-5 are examples of such dependency graphs. The replacement module 240 identifies a number of substitution points in the model of the application (block 604).

The replacement module 240 represents each substitution point as an abstract model with a set of sub-types representing the available sourcing options (block 606). The system 100, using the static binding policy generation module 250 (FIG. 2) and the dynamic binding policy generation module 260 (FIG. 2), codifies a number of policies expressing which sourcing option to use for each substitution point (block 608). In this manner, block 608 serves to bind the substitution points in the abstract model at runtime.

In one example, the policies may be expressed based on information contained within network A 104, network B 106, the cloud management device 102, or a combination thereof. In this example, the overall system 100 may receive input as to how a policy should be expressed from the network from which scaling is to occur, the network to which scaling is to occur, another device, or a combination thereof.

There are two types of policy: static binding policies and dynamic binding policies. A static binding policy is a definitive selection of a number of specific devices or services, made either autonomously by the system or by the administrator (160 of FIG. 1). For example, the system 100 or the administrator 160 may specify that the server is to be a server supporting the RackSpace VM. In an example in which the administrator specifies the device or service, this information may be entered manually by the administrator 160 through a number of user interfaces displayed on the display device 210 (FIG. 2).

A dynamic binding policy is a codified rule that ranks various devices or services based on their fitness or best score with respect to a particular purpose or situation. For example, the system 100 may select an Exchange Server email account over a Gmail or Hotmail email account based on the user's needs. As another example, the system 100 may select a particular sub-type of VM from the most affordable cloud computing service provider, such as the AMAZON WEB SERVICES platform developed and provided by Amazon.com, Inc. or cloud computing services developed and provided by Microsoft Corporation, among a number of other cloud computing service providers.

FIG. 7 is a flow chart illustrating a method for managing network resources, according to another example of the principles described herein. The method shown in FIG. 7 is performed for each instance of horizontal scaling of an application and may be referred to as a runtime process, performed after the design time described with respect to FIG. 6.

The method may be initiated by the processor (202 in FIG. 2) receiving a request to instantiate the model (block 702). This request is a request to horizontally scale an application to an external network, such as network B 106. The policy enforcement module 270 then applies a number of policies for selecting resource candidates (block 704). In one example, the policies are those codified at block 608 of FIG. 6. Thus, the policies bind the substitution points in the abstract model. Block 704 includes informing a number of cloud computing provisioning systems that provide resource candidates that each substitution point of the abstract model is to be replaced by a sub-type of device or service at that substitution point.
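The candidate-selection step of block 704 can be sketched as a filter: each codified policy is a predicate over a candidate, and only candidates satisfying every policy survive. The candidate field names and policy values below are illustrative assumptions.

```python
# Illustrative sketch: at run time each codified policy acts as a filter over
# the resource candidates offered by the cloud provisioning systems.

candidates = [
    {"provider": "EC2", "memory_gb": 4, "os": "CentOS"},
    {"provider": "RackSpace", "memory_gb": 16, "os": "CentOS"},
    {"provider": "Internal", "memory_gb": 16, "os": "Windows"},
]

policies = [
    lambda c: c["os"] == "CentOS",   # model constraint, as in FIG. 3
    lambda c: c["memory_gb"] >= 8,   # assumed capacity requirement
]

# Keep only candidates that satisfy every policy.
selected = [c for c in candidates if all(p(c) for p in policies)]
```

When exactly one candidate survives, the system proceeds with it directly; when several survive, the administrator may choose among them, as described below.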

The method of FIG. 7 also includes obtaining the resources of the selected candidate (block 706). In one example, only one candidate is returned at block 704. In this example, the system 100 selects that one candidate.

In another example, more than one candidate may be returned. In this example, the candidates are filtered by the system to those that meet the criteria set by the rules and policies. The administrator can then select any one of the multiple candidates returned.

The result of the methods of FIGS. 6 and 7 is a list of cloud computing service providers, and of the resources of those providers, that meet the criteria set by the static binding policies and dynamic binding policies. For example, if the static binding policies and dynamic binding policies determine that the application 124 of network A 104 to be horizontally scaled to the external network requires a particular networking environment, the system 100 will filter the cloud computing service providers, and their respective services and infrastructures, down to those that meet the criteria set by the static binding policies and dynamic binding policies. The specified networking environment may include computing devices required by the application, such as an operating system, a virtual machine, or a server, among other computing devices. The specified networking environment may also include the specifications of these computing devices, including, for example, memory capacity, memory type, and central processing unit specifications, among other computing device specifications.

The specified networking environment may also include services provided by a cloud computing service provider, such as a payment service that users of the application, once it is implemented on the external network, may use to purchase, for example, goods or services that the application provides to those users. If network B 106 meets the criteria set by the static binding policies and dynamic binding policies, any number and any combination of static binding policies and dynamic binding policies may be used in scaling the application horizontally into the external network B 106.

FIGS. 8 and 9 are flow charts illustrating a method for managing network resources in accordance with another example of the principles described herein. The method of FIGS. 8 and 9 is a more detailed representation of the method of FIGS. 6 and 7. The method may be initiated by the processor 202 executing the replacement module (240 of FIG. 2) to generate a number of dependency graph models (block 802). The replacement module (240 in FIG. 2) then determines which parts of the dependency graph model are substitution points (block 804). Thus, at block 804, the replacement module (240 of FIG. 2) determines which parts of a dependency graph model as illustrated in FIGS. 3-5 are modifiable for substitution.

The system determines whether a substitution point is explicitly defined as a sub-type by the administrator (block 806). As described above, static binding policies and dynamic binding policies are used to filter the possible cloud computing service providers in order to find a number of cloud computing service providers that meet the criteria defined by those policies. A static binding policy is generated using the static binding policy generation module 250 executed by the processor 202. The static binding policy generation module 250, when executed by the processor 202, may cause a number of user interfaces to appear on the display device 210 to allow an administrator to interface with the module and create a static binding policy.

Similarly, a dynamic binding policy is generated using the dynamic binding policy generation module 260 executed by the processor 202. The dynamic binding policy generation module 260, when executed by the processor 202, may cause a number of user interfaces to appear on the display device 210 to allow an administrator to interface with the module and create a dynamic binding policy.

If the substitution point is explicitly defined as a sub-type by the administrator (YES determination at block 806), the policy is a static binding policy, and the processor 202 receives the static binding policy as an input (block 808). If the substitution point is not explicitly defined as a sub-type by the administrator (NO determination at block 806), the policy is a dynamic binding policy, and the system 100 may use a number of parameters of the abstract model as a policy for selecting the sub-type (block 810). At block 812, the processor (202 of FIG. 2) defines a number of instances of the deployment of the application 124 in the external network that may be reused. Thus, block 812 allows the system 100 to efficiently deploy additional instances of the same application without performing the analysis of blocks 802-806 for the dynamic binding policy. The processor 202 also defines a number of packages based on the same model as generated at block 802 (block 814). Thus, block 814 outlines for the system 100 a number of possible deployment packages of the application in the external network, and allows these packages to be reused whenever the same model is created at block 802.

At block 816, the rules used to create a number of dynamic binding policies are compiled. This leaves the rules in a state ready for execution by the processor 202 when filtering possible candidate cloud computing service providers. A dynamic binding policy is a policy based on a number of rules. A rule is part of a policy that aids the administrator in making decisions when multiple cloud computing resource providers offer multiple resources of the same kind. A policy can be divided into three sub-types: a model policy 1002, an instance policy 1004, and a package policy 1006. FIG. 10 is a block diagram of a dependency graph 1000 depicting the dependencies of a number of rules in a policy, according to another example of the principles described herein.

As shown in FIG. 10, the model policy 1002, the instance policy 1004, and the package policy 1006 together make up a rule policy 1008. The model policy 1002 is used to help select sub-types at the substitution points in the model described above. The instance policy 1004 is used to help select an existing instance of the deployment of an application on the external network that can be reused, as described above in connection with block 812 of FIGS. 8 and 9. The package policy 1006 is used to select the most suitable package from the number of packages based on the same model, as described above with respect to block 814 of FIGS. 8 and 9.

The rule policy 1008, from which the model policy 1002, the instance policy 1004, and the package policy 1006 inherit, is comprised of a set of rules 1010 and a set of evaluations 1012. The rules 1010 include a number of individual rules 1014, each rule 1014 consisting of a number of predicates or constraints 1016. An evaluation 1012 defines how to obtain a value used in a constraint 1016 of a rule 1014. The evaluations 1012 include two sub-types: a formula evaluation 1018 and a resource evaluation 1020. The formula evaluation 1018 defines a formula for calculating the value of a constraint 1016, and the resource evaluation 1020 defines the function used to obtain the value of a constraint 1016. The resource evaluation 1020 defines a number of arguments 1022 to be used within the methods of FIGS. 6-9 in determining the best resource of a candidate cloud computing service provider.
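The rule/evaluation structure of FIG. 10 can be sketched as two small classes. The class names and the sample rule are assumptions for illustration: a rule is a conjunction of predicates over named values, and evaluations supply the values those predicates constrain.

```python
# Illustrative data-structure sketch of the rule policy of FIG. 10.

class Evaluation:
    """Maps a constraint name to a function that obtains its value."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

class Rule:
    def __init__(self, name, predicates, priority=0):
        self.name, self.predicates, self.priority = name, predicates, priority

    def matches(self, candidate, evaluations):
        # Obtain each constrained value, then require every predicate to hold.
        values = {e.name: e.fn(candidate) for e in evaluations}
        return all(p(values) for p in self.predicates)

# Example: a resource evaluation obtaining "cpu" from a candidate, and a rule
# with the single constraint "cpu is greater than 5".
cpu_eval = Evaluation("cpu", lambda c: c["cpu"])
rule1 = Rule("rule1", [lambda v: v["cpu"] > 5])
```

A formula evaluation would be another `Evaluation` whose function computes its value from other bound values rather than reading it from a resource.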

Because the policies are used to select the best resource among the resources provided by the cloud computing resource providers, the system uses the static binding and dynamic binding policies to filter through the resources offered by the candidate cloud computing resource providers and obtain the best-matching resource provided by the best-matching cloud computing resource provider. Referring again to FIGS. 8 and 9, the processor 202 receives an input regarding a dynamic binding policy (block 818). The processor 202 stores the static binding policy 255 and the dynamic binding policy 265, for example, in the data storage device 204 of the cloud management device 102 (block 820).

The processor 202 uses the replacement module 240 to bind a number of substitution points by informing the provisioning system that portions of the model generated at block 802 are to be replaced with sub-types (block 822). The policy enforcement module 270 is executed by the processor 202 to determine a number of resource candidates for scaling (block 824). The processor 202 uses the policy enforcement module 270 to apply the policies and filter the candidates (block 826). The processor 202 returns a number of matching candidates able to provide resources for horizontally scaling the application 124 to the external network 106 (block 828). In one example, only one candidate is returned. In another example, more than one candidate is returned. In this example, the administrator can simply select any one of the multiple candidates returned.

The system 100 acquires and utilizes the resources of the matching candidate cloud computing service provider (block 830). Block 830 can include contracting with the candidate cloud computing service provider and horizontally scaling the application 124 into the network of that provider.

An example of an XML code resource and an example of a rule language are described below. As described above, candidates are the inputs and outputs of the policy enforcement module 270. In the examples below, the expression "candidates" represents all possible candidate inputs, such as the possible cloud computing service providers, and "candidate" represents each individual candidate. In addition to the policies, other information is known by the policy enforcement module 270, such as the current order key, the current model reference key, and the current root resource. This additional information is known by the system 100 and is not changed; it is global data.

In one example, rules may also have relationships among themselves. In this example, the action of one rule may lead to another rule. For example:

Rule rule1: when predicate1 then rule2

Rule rule2: when predicate2 then candidate

A rule can also have a priority from zero upward. The higher the number, the higher the priority of the rule. In one example, 0 is the default priority. For example:

Rule rule1 Priority 1: when predicate1 then candidate

Rule rule2 Priority 0: when predicate2 then candidate
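The chaining and priority mechanics shown in the rule-language fragments above can be sketched as follows. The dictionary encoding and the sample predicates are assumptions for illustration; a rule's action either names another rule or is the literal "candidate", meaning the candidate is accepted.

```python
# Illustrative sketch of rule chaining with priorities (higher number wins,
# 0 by default; priorities would order rules when several could fire).

rules = {
    # name: (priority, predicate, action), where action is either the name
    # of another rule or the literal "candidate" (accept the candidate).
    "rule1": (1, lambda c: c["cpu"] > 5, "rule2"),
    "rule2": (0, lambda c: c["memory"] > 2, "candidate"),
}

def accepts(candidate, start="rule1"):
    """Follow the rule chain from `start`; True if it ends in 'candidate'."""
    action = start
    while action in rules:
        _priority, predicate, next_action = rules[action]
        if not predicate(candidate):
            return False
        action = next_action
    return action == "candidate"
```

Here `rule1` corresponds to "when predicate1 then rule2", so a candidate must pass both predicates before being accepted.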

"Predicate logic" is a general term for symbolic formal systems such as first-order logic, second-order logic, many-sorted logic, and infinitary logic. The formal system of the present application is distinguished from other systems in that its formulas contain variables that can be quantified. A predicate calculus symbol can represent a variable, a constant, a function, or a predicate. Constants refer to specific objects or properties in the domain of discourse. Thus, George, tree, tall, and blue are examples of well-formed constant symbols. The constants (true) and (false) are sometimes also included.

Variable symbols are used to designate general classes of objects or properties in the domain of discourse. A function denotes a mapping of a number of elements in one set (called the domain of the function) onto a unique element of another set (the range of the function). Elements of the domain and range are objects in the world of discourse. Every function symbol has an associated arity, indicating the number of elements of the domain mapped onto each element of the range.

In a rule policy, a constant is typically a number, a type of resource, or a required value of a resource element. A variable is typically global data, or input data from the policy enforcement module 270. For convenience of explanation, variables such as "OrderKey" and "Target" may be defined.

A function is a function provided by an instance, a model, a static binding policy, or a package resource. These functions are used to help obtain the value of an element. A function may also be a mathematical calculation, or a composition of constants, variables, or functions. The keywords "Maximum" and "Minimum" can be defined as one type of function.

Constants and variables can be defined directly in the predicate logic. A function may be complex, and the evaluations 1018 and 1020 are provided for expressing such functions. In the predicate logic, variables can be used to represent these functions. For example:

Predicate: cpu is greater than 5; (cpu is a variable representing the function below)

Evaluation: cpu is getAttributeValue (instance, attributename, tag)

A function can also be defined in terms of another function. For example:

Predicate: value is Maximum;

Evaluation: value is formula (0.5 * cpu + 0.5 * memory);

cpu is getAttributeValue (instance, attributename, tag) ---> in the predicate calculus, attributename will bind to "cpu"

memory is getAttributeValue (instance, attributename, tag) ---> in the predicate calculus, attributename will bind to "memory"
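The binding shown above can be sketched concretely. Here `getAttributeValue` is modeled as a plain dictionary lookup (its real signature is only suggested by the example), and the formula evaluation composes the two bound values.

```python
# Illustrative sketch of evaluation binding: each predicate variable is bound
# to the named attribute of an instance, and the formula evaluation composes
# the bound values.

def getAttributeValue(instance, attributename, tag=None):
    # Modeled as a simple attribute lookup on the instance.
    return instance[attributename]

instance = {"cpu": 0.6, "memory": 0.8}

# "value is formula (0.5 * cpu + 0.5 * memory)", with cpu and memory each
# obtained through getAttributeValue:
cpu = getAttributeValue(instance, "cpu", "tag")
memory = getAttributeValue(instance, "memory", "tag")
value = 0.5 * cpu + 0.5 * memory
```

The weights 0.5/0.5 come from the formula in the example; a predicate such as "value is Maximum" would then compare this score across candidates.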

Thus, the policy enforcement module 270 has two parts: a compile part and a runtime part. Based on the relationships between policies, rules, and predicates, the associations between rules are built in the compile step. Predicate interpretation and rule matching are handled at runtime, after which rule priority is handled.

Here is an example XML resource:

[The example XML resource is reproduced as images pct00001 through pct00006 in the original publication.]

Example rule language:

[The example rule language is reproduced as image pct00007 in the original publication.]

Like the load balancers 127 and 147, the global load balancer 170 includes a set of policies that define where transaction requests are directed. In this manner, the global load balancer 170 can be used to provide load balancing capability across the system 100, achieving optimal resource utilization, maximizing throughput, and minimizing response time. The global load balancer 170 acts as a load balancer for network B 106 during execution of applications that were scaled horizontally into network B 106. In one example, the policies in the global load balancer 170 may be updated to redirect traffic from network A 104 to network B 106, and vice versa. In one example, the processor (202 in FIG. 2) of the cloud management device 102 has access to the global load balancer 170 and can control the global load balancer 170.
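The policy update described above can be sketched as follows. The class and method names are assumptions for illustration; the point is only that redirecting traffic during a cloud burst amounts to updating a routing policy held by the global load balancer.

```python
# Illustrative sketch: the global load balancer keeps a policy table defining
# where transaction requests are directed; updating the policy redirects
# traffic from network A to network B during a cloud burst, and back again.

class GlobalLoadBalancer:
    def __init__(self):
        self.routes = {"app": "network A"}   # default: serve from internal net

    def update_policy(self, app, target_network):
        self.routes[app] = target_network

    def direct(self, app):
        """Return the network to which requests for `app` are directed."""
        return self.routes[app]

glb = GlobalLoadBalancer()
glb.update_policy("app", "network B")        # after horizontal scale-out
```

Reverting the burst would be a second `update_policy("app", "network A")` call once internal capacity is again sufficient.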

The above-described method may be accomplished by a computer program product comprising a computer readable medium having computer usable program code embodied therein that, when executed, performs the method described above. In particular, the computer usable program code may create a model of the application, define a number of substitution points in the model, express each substitution point as an abstract model with a set of sub-types, and codify a number of policies expressing which sourcing option to use for each substitution point.

The computer usable program code may also receive a request to instantiate the model, apply a number of policies to select a resource candidate, and acquire the resources of the selected candidate.

In addition, the computer usable program code, when executed by the processor, may perform the processes described above with respect to Figures 3 to 10. In one example, the computer readable medium is a computer readable storage medium as described above. In this example, the computer-readable storage medium may be a tangible or non-transitory medium.

The specification and drawings describe a method and system for network resource management. The method comprises: generating, by a processor, a model of an application; defining a number of substitution points in the model; expressing each substitution point as an abstract model with a set of sub-types; and codifying a number of policies expressing which sourcing option to use for each substitution point. The method may also include receiving a request to instantiate the model, applying a number of policies to select a resource candidate, and acquiring the resources of the selected candidate. These systems and methods have a number of advantages, including: (1) helping an administrator determine whether an application used in an internal network can be deployed to an external network using cloud bursting techniques; and (2) helping the administrator determine how the application can be deployed within the enterprise.
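The end-to-end flow summarized above (model, substitution points, sub-types, policies, instantiation) can be sketched as follows. All class names, the scoring-based dynamic binding, and the example sub-types are hypothetical illustrations, not structures defined by the patent.

```python
# Hedged end-to-end sketch: a model with substitution points, each expressed
# as an abstract node with a set of concrete sub-types; static binding
# policies pin a sub-type explicitly, while dynamic binding policies score
# sub-types at instantiation time.

class SubstitutionPoint:
    def __init__(self, name, sub_types):
        self.name = name
        self.sub_types = sub_types   # the abstract model's set of sub-types
        self.explicit = None         # set by a static binding policy, if any

class Model:
    def __init__(self, application):
        self.application = application
        self.points = {}

    def define_point(self, name, sub_types):
        self.points[name] = SubstitutionPoint(name, sub_types)

def instantiate(model, dynamic_policies):
    """Apply policies to each substitution point and return the selected
    sub-type (the resource candidate) for each point."""
    chosen = {}
    for name, point in model.points.items():
        if point.explicit is not None:           # static binding policy
            chosen[name] = point.explicit
        else:                                     # dynamic binding: score sub-types
            score = dynamic_policies[name]
            chosen[name] = max(point.sub_types, key=score)
    return chosen

model = Model("web-app")
model.define_point("database", ["mysql-internal", "cloud-sql"])
model.define_point("web-server", ["apache-vm", "cloud-container"])
model.points["web-server"].explicit = "apache-vm"        # static binding

policies = {"database": lambda s: 1.0 if s.startswith("cloud") else 0.5}
print(instantiate(model, policies))
# {'database': 'cloud-sql', 'web-server': 'apache-vm'}
```

The sketch mirrors the two advantages noted above: a scoring policy can steer a substitution point toward an external (cloud) sub-type for bursting, while an explicit static binding keeps another point inside the enterprise.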

The foregoing description is provided to illustrate and describe examples of the principles disclosed herein. This description is not intended to be exhaustive or to limit the invention to the precise form disclosed. Numerous modifications and variations are possible in light of the above teachings.

Claims (15)

1. A network resource management method comprising:
with a processor,
creating a model of an application;
defining a plurality of substitution points in the model;
expressing each substitution point as an abstract model having a set of sub-types; and
codifying a plurality of policies expressing which sourcing option to use for each substitution point.
2. The method of claim 1, further comprising:
receiving a request to instantiate the model;
applying a plurality of policies to select a resource candidate; and
acquiring the resources of the selected candidate.
3. The method of claim 2, wherein the policies comprise a plurality of static binding policies and a plurality of dynamic binding policies.
4. The method of claim 1, further comprising:
determining whether a substitution point is explicitly defined as a sub-type;
if the substitution point is explicitly defined as a sub-type, storing the explicitly defined sub-type as a static binding policy; and
if the substitution point is not explicitly defined as a sub-type, creating a plurality of dynamic binding policies.
5. The method of claim 4, wherein creating the plurality of dynamic binding policies comprises:
defining a plurality of parameters of the abstract model as a policy for selecting a sub-type of the abstract model;
compiling a number of rules used to create the dynamic binding policies; and
storing the dynamic binding policies.
6. The method of claim 1, further comprising informing a provisioning system that a portion of the model will be replaced.
7. The method of claim 1, further comprising determining a number of resource candidates for scaling the application.
8. The method of claim 3, wherein applying a plurality of policies to select the resource candidate comprises:
applying the static binding policies and the dynamic binding policies to filter resource candidates; and
returning a plurality of resource candidates matching the conditions defined by the static binding policies and the dynamic binding policies.
9. The method of claim 3, wherein the static binding policies are defined by an administrator, and the dynamic binding policies are defined by a number of rules specified within the dynamic binding policies.
10. A cloud management device comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory comprises:
a substitution module stored in the memory that, when executed by the processor, creates a model of an application to be scaled and defines a plurality of substitution points in the model;
a static binding policy creation module stored in the memory that, when executed by the processor, generates a plurality of explicit statements defining sub-types within the substitution points; and
a dynamic binding policy creation module stored in the memory that, when executed by the processor, generates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.
11. The cloud management device of claim 10, further comprising a policy execution module that executes the static binding policies and the dynamic binding policies to filter a plurality of resource candidates matching the constraints provided by the static binding policies and the dynamic binding policies.
12. The cloud management device of claim 10, wherein the cloud management device is integrated into the network in which the application to be scaled is located.
13. A computer program product for managing network resources, comprising:
a computer readable storage medium comprising computer usable program code embodied therein, the computer usable program code comprising:
computer usable program code that, when executed by a processor, creates a model defining an application to be scaled and defines a plurality of substitution points in the model;
computer usable program code that, when executed by a processor, generates a plurality of explicit statements defining sub-types within the substitution points; and
computer usable program code that, when executed by a processor, generates a plurality of policies including a scoring function evaluated at runtime that informs the resource provider of the best sub-type to select.
14. The computer program product of claim 13, further comprising computer usable program code that, when executed by a processor, executes the static binding policies and the dynamic binding policies to filter a plurality of resource candidates conforming to the constraints provided by the static binding policies and the dynamic binding policies.
15. The computer program product of claim 14, wherein the policies are expressed based on information included by the system from which scaling is to occur, the system on which scaling occurs, or a combination thereof.
KR1020157016151A 2012-12-07 2012-12-07 Network resource management KR20150086516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020157016151A KR20150086516A (en) 2012-12-07 2012-12-07 Network resource management


Publications (1)

Publication Number Publication Date
KR20150086516A (en) 2015-07-28

Family

ID=53875661

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020157016151A KR20150086516A (en) 2012-12-07 2012-12-07 Network resource management

Country Status (1)

Country Link
KR (1) KR20150086516A (en)


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application