WO2012125143A1 - Systems and methods for transparently optimizing workloads - Google Patents

Systems and methods for transparently optimizing workloads

Info

Publication number
WO2012125143A1
WO2012125143A1 (PCT/US2011/028230)
Authority
WO
WIPO (PCT)
Prior art keywords
abstraction
containment
bound
workload
cpu
Prior art date
Application number
PCT/US2011/028230
Other languages
English (en)
Inventor
Jason A. Hoffman
James Duncan
Mark G. Mayo
David P. Young
Original Assignee
Joyent, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Joyent, Inc. filed Critical Joyent, Inc.
Priority to PCT/US2011/028230 priority Critical patent/WO2012125143A1/fr
Publication of WO2012125143A1 publication Critical patent/WO2012125143A1/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing

Definitions

  • the present technology relates generally to transparently optimizing workloads, and more specifically, but not by way of limitation, to systems and methods for transparently optimizing workloads of containment abstractions within cloud computing systems.
  • a cloud is a resource that typically combines the computational power of a large grouping of processors and/or that combines the storage capacity of a large grouping of computer memories or storage devices.
  • systems that provide a cloud resource may be utilized exclusively by their owners, such as Google or Yahoo!, or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
  • the cloud may be formed, for example, by a network of servers with each server providing processor and/or storage resources.
  • the present technology may be directed to methods for transparently optimizing a workload of a containment abstraction by: (a) determining if an at least partially hardware bound containment abstraction should be converted to an entirely central processing unit (CPU) bound containment abstraction based upon the workload of the at least partially hardware bound containment abstraction; (b) converting the at least partially hardware bound containment abstraction to being an entirely CPU bound containment abstraction by placing the containment abstraction in a memory store, based upon the workload; and (c) allocating the workload of the entirely CPU bound containment abstraction across at least a portion of a data center to optimize the workload of the entirely CPU bound containment abstraction.
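For illustration only, and not part of the original disclosure, the following minimal Python sketch shows one way the three claimed steps could be wired together as a control flow; the class names (ContainmentAbstraction, CloudControlSystem), the threshold test, and the allocation bookkeeping are all assumptions introduced for this example.

```python
from dataclasses import dataclass


@dataclass
class ContainmentAbstraction:
    """Abstraction of a computing environment (e.g., a VM plus its file system)."""
    name: str
    workload: float            # observed workload (arbitrary units)
    expected_workload: float   # benchmark established for this abstraction
    cpu_bound: bool = False    # True once placed entirely in a memory store


class CloudControlSystem:
    """Hypothetical controller mirroring steps (a)-(c) of the method."""

    def __init__(self):
        self.memory_store = []   # stands in for the virtualized in-memory cache
        self.allocations = {}    # abstraction name -> CPU share drawn from the cloud

    def should_convert(self, ca: ContainmentAbstraction) -> bool:
        # (a) decide based upon the workload of the hardware bound abstraction
        return ca.workload >= ca.expected_workload

    def convert_to_cpu_bound(self, ca: ContainmentAbstraction) -> None:
        # (b) place the abstraction in a memory store, making it entirely CPU bound
        self.memory_store.append(ca)
        ca.cpu_bound = True

    def allocate(self, ca: ContainmentAbstraction) -> None:
        # (c) allocate the workload across at least a portion of the data center
        self.allocations[ca.name] = ca.workload

    def optimize(self, ca: ContainmentAbstraction) -> None:
        if self.should_convert(ca):
            self.convert_to_cpu_bound(ca)
            self.allocate(ca)


db = ContainmentAbstraction("customer-db", workload=120.0, expected_workload=100.0)
CloudControlSystem().optimize(db)   # db is now CPU bound with an allocation recorded
```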
  • the present technology may be directed to systems for transparently optimizing a workload of a containment abstraction that include: (a) a memory for storing executable instructions for transparently optimizing the workload of the containment abstraction; and (b) a processor configured to execute the instructions stored in the memory to: (i) determine if an at least partially hardware bound containment abstraction should be converted to an entirely central processing unit (CPU) bound containment abstraction based upon the workload of the at least partially hardware bound containment abstraction; (ii) convert the at least partially hardware bound containment abstraction to being an entirely CPU bound containment abstraction by placing the containment abstraction in a memory store, based upon the workload; and (iii) allocate the workload of the entirely CPU bound containment abstraction across at least a portion of a data center to optimize the workload of the entirely CPU bound containment abstraction.
  • the present technology may be directed to methods for transparently converting asynchronous output of a containment abstraction to synchronous output by: (a) determining if the asynchronous output of the containment abstraction indicates that the containment abstraction is busy, the containment abstraction being at least partially hardware bound; (b) responsive to determining, converting the containment abstraction from being at least partially hardware bound to being entirely central processing unit (CPU) bound by placing the containment abstraction in a memory store; (c) aggregating the asynchronous output of the entirely CPU bound containment abstraction; and (d) synchronously providing the aggregated asynchronous output to a data store.
  • the present technology may be directed to methods for transparently optimizing a workload of a containment abstraction by: (a) determining if an at least partially hardware bound containment abstraction should be converted to an entirely central processing unit (CPU) bound containment abstraction based upon the workload of the at least partially hardware bound containment abstraction; (b) placing the at least partially hardware bound containment abstraction in a memory store to convert it to an entirely CPU bound containment abstraction, based upon the workload; and (c) allocating the workload of the entirely CPU bound containment abstraction across at least a portion of a data center.
  • FIG. 1 illustrates an exemplary cloud system for practicing aspects of the present technology.
  • FIG. 2 illustrates an exemplary flow diagram of a method for transparently optimizing workloads.
  • FIG. 3 is an exemplary flow diagram of a method for transparently converting asynchronous output of a containment abstraction to synchronous output.
  • FIG. 4 is a block diagram of an exemplary computing system that may be utilized to practice aspects of the present disclosure.
  • the systems and methods of the present invention may be directed to transparently optimizing workloads. More specifically, the systems and methods may be adapted to transparently optimize the workloads of a plurality of containment abstractions that operate within a cloud computing system.
  • a cloud is a resource that typically combines the computational power of a large grouping of processors and/or that combines the storage capacity of a large grouping of computer memories or storage devices.
  • systems that provide a cloud resource may be utilized exclusively by their owners, such as Google or Yahoo!, or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
  • the cloud may be formed, for example, by a network of servers with each server providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
  • the term "containment abstraction" may be understood to include an abstraction of a computing environment, such as an operating system.
  • Common containment abstractions include, but are not limited to, containers and associated file systems, virtual machines, applications, programs, and operating systems.
  • a containment abstraction may include a virtual machine and the file system utilized by the virtual machine.
  • containment abstractions may be implemented in the context of the cloud such that the containment abstractions utilize the shared compute resources of the cloud. That is, each of the plurality of servers dedicates their individual compute resources to the workloads of the individual containment abstractions. Stated otherwise, the compute power of the plurality exceeds the compute power of the individual servers alone. Moreover, workloads may be balanced across the plurality of servers based upon their respective workload. The systems and methods may select which of the plurality of servers are utilized based upon their respective workloads. For example, only servers that have a current minimal workload may be selected to share their compute resources.
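As a hedged illustration of the server-selection idea in the preceding paragraph (not text from the patent), the small Python function below picks only lightly loaded servers; the workload figures, the threshold, and the server names are invented for the example.

```python
def select_servers(server_workloads, threshold):
    """Return the servers whose current workload is at or below the threshold,
    lightest first, so that only minimally loaded servers share their compute
    resources with a containment abstraction."""
    eligible = [name for name, load in server_workloads.items() if load <= threshold]
    return sorted(eligible, key=lambda name: server_workloads[name])


# Example: only servers 110B and 110N are lightly loaded enough to be selected.
print(select_servers({"110A": 0.90, "110B": 0.20, "110N": 0.35}, threshold=0.50))
```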
  • an end user may utilize the containment abstraction in the same way that the end user would utilize an entirely physical computing structure (computing system, server, etc.) with an operating system and ancillary applications, with the added benefit of the shared compute resources of the cloud, rather than the limited hardware capabilities of a single physical computing structure.
  • a containment abstraction is allocated compute resources from the cloud based upon an expected workload of the containment abstraction.
  • Containment abstractions with higher expected workloads may be allocated more resources.
  • the systems and methods provided herein are adapted to facilitate multi-tenancy of containment abstractions for a plurality of end users. That is, a plurality of containment abstractions may be "virtualized" and reside within the cloud such that the plurality of containment abstractions utilize the compute resources of the cloud.
  • the term "workload” may be understood to include the amount of processing that a containment abstraction has been given to perform over a given period of time.
  • the workload of a containment abstraction may be understood to include certain measurements of latency and bandwidth.
  • Latency may include time-based metrics (e.g., time delay experienced by an end user of the containment abstraction) of the containment abstraction, while bandwidth may include measurable I/O metrics for a variety of data communications of a containment abstraction.
  • the workload may consist of one or more applications that are executed in the containment abstraction and a number of end users that are connected to and interacting with the one or more applications.
  • the expected workload of a containment abstraction may be utilized as a benchmark to evaluate the performance of the containment abstraction.
  • the performance of the containment abstraction may be understood to include the ability of the compute resources allocated to the containment abstraction to perform the workload relative to an acceptable response time or a desired throughput (e.g., the amount of data that the containment abstraction is expected to process) of the containment abstraction.
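The following sketch, offered purely as an illustration and not taken from the disclosure, shows how the latency and bandwidth measurements described above might be checked against an expected-workload benchmark; the metric names and the two limits are assumptions.

```python
from dataclasses import dataclass


@dataclass
class WorkloadSample:
    latency_ms: float        # time delay experienced by an end user
    throughput_mbps: float   # measurable I/O bandwidth of the abstraction


def meets_expectations(sample, max_latency_ms, min_throughput_mbps):
    """Compare one workload sample against the acceptable response time and
    desired throughput that make up the expected-workload benchmark."""
    return (sample.latency_ms <= max_latency_ms
            and sample.throughput_mbps >= min_throughput_mbps)


print(meets_expectations(WorkloadSample(latency_ms=40.0, throughput_mbps=200.0),
                         max_latency_ms=50.0, min_throughput_mbps=150.0))
```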
  • FIG. 1 illustrates an exemplary cloud system 100 for practicing aspects of the present technology.
  • the system 100 is shown as including a "data center" or cloud 105 including servers 110A, 110B, and 110N (cloud 105 may include any number of servers), and a cloud control system 120 according to one embodiment.
  • Cloud 105 manages the hardware resources (e.g., processor, memory, and/or storage space) of servers 110A-N coupled by network 125 (e.g., a local-area or other data network) or otherwise.
  • Customers or users of cloud 105 may access the services of the cloud 105 via a user system 130 (e.g., a website server) or user device 135 (e.g., a phone or PDA) running an application program interface (API).
  • User system 130 and user device 135 communicatively couple to the cloud 105 using an access network 140 (e.g., the Internet or other telecommunications network).
  • Access network 140 may communicate, for example, directly with server 110A or with another computing device in cloud control system 120. It will be understood that the user system 130 and user device 135 may be generally described with reference to computing system 400, as described in greater detail with reference to FIG. 4.
  • Each of many potential customers may configure one or more containment abstractions to run in cloud 105.
  • Each containment abstraction runs one or many processing workloads of a customer (e.g., serving of a website, etc.), which places processing and other demands on the compute resources of cloud 105.
  • server 110A handles processing for a workload 115A, as illustrated in FIG. 1.
  • a user may access cloud 105 by going to a website and ordering a containment abstraction, which is then provisioned by cloud control system 120. Then, the user has a private API that exists on all of their services. This will be made public to the customers of the user, and the user can use an API to interact with the infrastructure. Every system may have a "public" interface, a private interface, an administrative interface and a storage interface. This is reflected, for example, from switch to port to NIC to physical interface to logical interface to virtual interface.
  • the cloud control system 120 may be adapted to determine if an at least partially hardware bound containment abstraction should be converted to an entirely central processing unit (CPU) bound containment abstraction based upon the workload of the at least partially hardware bound containment abstraction. If the containment abstraction is to be converted, the cloud control system 120 may convert the at least partially hardware bound containment abstraction to being an entirely CPU bound containment abstraction by placing the containment abstraction in a memory store, based upon the workload. Next, the cloud control system 120 may allocate the workload of the entirely CPU bound containment abstraction across at least a portion of a data center to optimize the workload of the entirely CPU bound containment abstraction. Finally, the cloud control system 120 may revert the entirely CPU bound containment abstraction back to an at least partially hardware bound state.
  • the cloud control system 120 is adapted to transparently optimize the workloads of a plurality of containment abstractions contained within the cloud (multi-tenancy) within the context of entirely "virtualized" processes. That is, because the containment abstractions exist as "virtual" entities within the cloud, all operations performed on the containment abstractions within the cloud are virtualized.
  • the memory store into which the containment abstractions are placed is virtualized (e.g., a virtual memory store). That is, the memory store (once loaded with a containment abstraction) is virtualized, end user specific (dependent upon the needs of the end user), and sensitive to the dynamic workload of the containment abstraction (e.g., latency, performance, etc.), as will be discussed in greater detail below. Stated otherwise, the in-memory cache itself is virtualized, customer-aware, and latency-aware.
  • the system begins in multi-tenancy cloud computing and stays within multi-tenancy cloud computing after virtual caching.
  • "End user specific" considerations may include customer desires, quality of service requests, service latency requirements, and economic considerations of the customer as well as of the cloud administrators. Therefore, the virtualized memory store is also system-latency aware.
  • when the cloud control system 120 reverts the entirely CPU bound containment abstraction back to an at least partially hardware bound state, the reverted containment abstraction is also in a virtualized state.
  • the containment abstractions remain in a virtualized state during all processes performed on the containment abstractions within the cloud 105.
  • Each containment abstraction uses a portion of the hardware resources of cloud 105.
  • These hardware resources include storage and processing resources distributed onto each of the plurality of servers, and these resources are provisioned to handle the containment abstraction as minimally specified by a user.
  • Cloud control system 120 dynamically provisions the hardware resources of the servers in the cloud 105 during operation as the cloud 105 handles varying customer workload demands.
  • Cloud control system 120 may be implemented on many different types of computing devices, such as dedicated hardware platform(s) and/or distributed across many physical nodes. Some of these physical nodes may include, for example, one or more of the servers 110A-N or other servers in cloud 105.
  • a typical workload of a containment abstraction includes the production of asynchronous I/O.
  • For example, if the containment abstraction is a database program, the containment abstraction may routinely output data to a storage disk associated with one of the servers 110A-N of the cloud 105.
  • This type of randomized data output is commonly referred to as asynchronous output.
  • Containment abstractions that generate asynchronous output may be referred to as being at least partially hardware bound because they require their output data to be written to a physical disk. Therefore, the reliance of the containment abstraction on the performance of the physical disk creates a limiting condition on the performance of the containment abstraction.
  • containment abstractions that are at least partially hardware bound are less efficient than containment abstractions that are memory or central processing unit (CPU) bound. That is, the only performance limiting condition on an entirely CPU bound containment abstraction is CPU processing resources dedicated to the containment abstraction. Because the cloud 105 may allocate CPU resources as necessary, this limiting condition may be easily overcome relative to limiting conditions associated with physical hardware (e.g., storage devices). Physical limitations may be difficult to overcome due to certain physical constraints of physical systems. Stated otherwise, CPU resources are infinitely expandable whereas physical resources are constrained by the physical properties or behaviors of the physical resources (e.g., disk speed, etc.).
  • the cloud control system 120 may further be adapted to convert containment abstractions that are at least partially hardware bound to being entirely CPU bound.
  • the cloud control system 120 may convert the containment abstraction by placing the containment abstraction in a storage object that may be cached in memory of one of the servers 110A-N of the cloud.
  • a storage object may be most generally described as a virtual entity for grouping data together that has been determined by an end user to be logically related. Therefore, a storage object may include a containment abstraction or a plurality of logically related containment abstractions (e.g., related programs in a program suite or platform).
  • the cloud control system 120 may determine which of the containment abstractions should be converted by monitoring the workload of the containment abstraction to determine when the containment abstraction is "busy.” It will be understood that the cloud control system 120 may recognize the containment abstraction as "busy" when the workload of the containment abstraction exceeds an expected workload for the containment abstraction. In additional embodiments, the containment abstraction may be required to exceed the expected workload for a period of time. Also, a containment abstraction may be determined to be busy when the containment abstraction produces a predetermined amount of asynchronous output (e.g., random output to a physical disk).
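To make the two "busy" triggers above concrete, here is a small Python sketch (an assumption, not the patented implementation): it flags an abstraction as busy either when its workload has exceeded the expected workload for a sustained window of samples or when its asynchronous output passes a fixed limit; the window size and the limit are invented values.

```python
from collections import deque


class BusyDetector:
    """Illustrative 'busy' check based on sustained overload or heavy asynchronous output."""

    def __init__(self, expected_workload, window=5, async_output_limit=1000):
        self.expected_workload = expected_workload
        self.async_output_limit = async_output_limit
        self.samples = deque(maxlen=window)   # most recent workload samples

    def observe(self, workload, async_output_ops):
        """Record one sample and report whether the abstraction now counts as busy."""
        self.samples.append(workload)
        sustained_overload = (len(self.samples) == self.samples.maxlen and
                              all(w > self.expected_workload for w in self.samples))
        heavy_async_output = async_output_ops >= self.async_output_limit
        return sustained_overload or heavy_async_output
```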
  • the cloud control system 120 may place the containment abstraction in a storage object.
  • the cloud control system 120 may move the storage object to a memory store 155 that is associated with, for example, the server 110A, via cloud 105.
  • the storage object may be moved to the memory store (not shown) of any server 110A within cloud 105 or may be distributed across memory stores of a plurality of servers.
  • the action of moving the storage object that includes the containment abstraction into the memory store 155 converts the containment abstraction to being entirely CPU bound.
  • the cloud control system 120 may allocate CPU resources from the cloud 105 to process the workload of the containment abstraction.
  • the act of allocating CPU resources may also be referred to as arbitraging CPU resources. That is, the cloud control system 120 may leverage unused CPU resources of the cloud 105 to process the workload of the containment abstraction.
  • the containment abstraction continues to generate output just as it did when it was at least partially hardware bound. Because the containment abstraction is CPU bound in memory, rather than the generated output of the containment abstraction being communicated to a physical storage medium (e.g., a storage disk), the random or asynchronous output of the containment abstraction may be aggregated by an aggregation module 145. Aggregated asynchronous output may be provided to a physical storage medium in batches, rather than as singular transactions. For example, a containment abstraction that abstracts a database program generates output such as updates to the database each time an end user inputs data. Because end users may constantly input data during a containment abstraction session, the input data is asynchronously written to the database that exists on a physical disk. Rather than writing data to the disk for each transaction, the aggregation module 145 may aggregate the data output together for the containment abstraction session and push the aggregated data output to the physical disk at the end of the containment abstraction session.
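As an illustrative sketch of the aggregation behavior just described (the class name, file path, and record format are assumptions, not part of the disclosure), the snippet below buffers each transaction in memory and writes the whole session's output to disk in one synchronous batch.

```python
class AggregationModule:
    """Buffers asynchronous output in memory and flushes it to disk in one batch."""

    def __init__(self, disk_path="session-output.log"):
        self.disk_path = disk_path
        self.pending = []          # asynchronous output aggregated in memory

    def record(self, output):
        # Called for each transaction instead of writing straight to the disk.
        self.pending.append(output)

    def flush(self):
        # Synchronously push the aggregated output to the physical disk.
        with open(self.disk_path, "a", encoding="utf-8") as disk:
            disk.writelines(line + "\n" for line in self.pending)
        self.pending.clear()


agg = AggregationModule()
for update in ("INSERT row 1", "INSERT row 2", "UPDATE row 1"):
    agg.record(update)      # would otherwise be random writes to a storage disk
agg.flush()                 # end of the containment abstraction session
```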
  • the cloud 105 may utilize a prioritization module 150 that is adapted to determine the amount of CPU resources of cloud 105 that are allocated to a particular containment abstraction based upon one or more factors, such as an importance of the containment abstraction relative to other containment abstractions operating in the cloud 105 and a magnitude of the workload of a containment abstraction. It will be understood that the functionality of the prioritization module 150 may be implemented in addition to the ability of the cloud control system 120 to allocate cloud 105 resources. For example, servers already tasked with providing CPU resources to other containment abstractions of the same or greater importance may not be immediately selected by the cloud control system 120.
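A minimal sketch of one plausible prioritization policy follows; it is not the patented method, and weighting CPU shares by the product of importance and workload magnitude is simply one assumption chosen to illustrate the two factors named above.

```python
def allocate_cpu(requests, total_cpu):
    """Split the available CPU among containment abstractions.

    `requests` maps an abstraction name to (importance, workload magnitude);
    each share is proportional to the product of the two factors."""
    weights = {name: importance * workload
               for name, (importance, workload) in requests.items()}
    total_weight = sum(weights.values()) or 1.0
    return {name: total_cpu * weight / total_weight for name, weight in weights.items()}


shares = allocate_cpu({"db": (3.0, 120.0), "web": (1.0, 80.0)}, total_cpu=16.0)
print(shares)   # the more important, busier abstraction receives the larger share
```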
  • the cloud control system 120 may first determine the workload of a containment abstraction.
  • the cloud control system 120 may be adapted to determine if the workload of the containment abstraction meets or exceeds an expected workload for the containment abstraction. Details regarding the expected workload may be established by cloud administrators (individuals tasked with creating and implementing workload policies for the cloud 105).
  • the containment abstraction users, via their user system 130, may establish the expected workload for the containment abstraction.
  • the prioritization module 150 may be adapted to determine if the containment abstraction should be converted from being at least partially hardware bound to being entirely CPU bound. If the prioritization module 150 determines that the containment abstraction should be converted, the cloud control system 120 converts the containment abstraction from being at least partially hardware bound to being entirely CPU bound.
  • upon determining that the containment abstraction is no longer eligible to be entirely CPU bound, or upon determining that the workload of the entirely CPU bound containment abstraction has fallen below the expected workload for the containment abstraction (also known as a reversion event), the cloud control system 120 may revert the containment abstraction from being entirely CPU bound to being at least partially hardware bound again.
  • the prioritization module 150 may determine that the containment abstraction is no longer eligible to be entirely CPU bound by comparing the relative priority of the containment abstraction to the priority of other containment abstractions to determine if one or more of the other containment abstractions should be CPU bound instead of the instant containment abstraction.
  • the cloud control system 120 may prioritize the allocation of resources as described above based upon the relative priority of already CPU bound containment abstractions in the cloud 105.
  • the cloud control system 120 and prioritization module 150 may utilize statistical analyses of the workload of containment abstractions gathered by the cloud control system 120 over a given period of time. Moreover, prioritization may be predicated upon subjective data received from cloud administrators. For example, the cloud administrators may establish information for ranking one containment abstraction above another containment abstraction based upon the size of the containment abstraction user (e.g., how many containment abstractions they purchase, how often they utilize their containment abstractions, and so forth).
  • Referring now to FIG. 2, an exemplary flow chart of a method 200 for transparently optimizing workloads is provided.
  • the method 200 may include a step 205 of monitoring the workload of an at least partially hardware bound containment abstraction that exists on a server within a cloud computing system. If the workload of the containment abstraction meets or exceeds an expected workload amount established for the containment abstraction (e.g., a conversion event), the method 200 may include the step 210 of determining if the containment abstraction is eligible to be converted from being at least partially hardware bound to being entirely CPU bound.
  • the step 210 may include comparing the workloads of a plurality of containment abstractions to one another to determine the relative workload of the containment abstraction, or may include a statistical analysis or subjective analysis of the importance of containment abstractions relative to one another. If the containment abstraction is eligible to be converted, the method 200 may include the step 215 of placing the containment abstraction in a storage object. If the containment abstraction is not eligible to be converted, the containment abstraction remains at least partially hardware bound.
  • the method 200 may include the step 220 of placing (e.g., caching) the storage object that includes the containment abstraction in a memory store to convert the containment abstraction from being at least partially hardware bound to being entirely CPU bound.
  • the storage object may be distributed across memory stores of a plurality of servers within the cloud.
  • the method 200 may include the step 225 of aggregating asynchronous output generated by the entirely CPU bound containment abstraction.
  • the system continues to monitor the workload of the entirely CPU bound containment abstraction to determine if the workload of the entirely CPU bound containment abstraction falls below the expected workload amount (e.g., a reversion event). This may include the workload of the containment abstraction staying at or below the expected workload amount for a period of time. If the workload of the entirely CPU bound containment abstraction falls below the expected workload amount, the method may include the step of 230 reverting the containment abstraction (e.g., the containment abstraction in the storage object) from being entirely CPU bound to being at least partially hardware bound.
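For illustration (again an assumption rather than the disclosed implementation), a reversion event of the kind described above could be detected with a short helper that requires the workload to stay at or below the expected amount for several consecutive samples:

```python
def should_revert(recent_workloads, expected_workload, quiet_samples=5):
    """Return True once the entirely CPU bound abstraction has stayed at or
    below its expected workload for the last `quiet_samples` observations."""
    window = list(recent_workloads)[-quiet_samples:]
    return (len(window) == quiet_samples and
            all(w <= expected_workload for w in window))


# The last five samples are all at or below the expected workload of 100.
print(should_revert([120, 90, 80, 85, 70, 60], expected_workload=100))
```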
  • the aggregated asynchronous data gathered during the step 225 may be synchronously communicated to a storage device (e.g., disk storage) of a server in step 235, after the step 230 of reverting the containment abstraction. Also, aggregated asynchronous data may be periodically aggregated and synchronously communicated while the containment abstraction is entirely CPU bound, as indicated by line 240.
  • FIG. 3 illustrates an exemplary flow chart of a method 300 for transparently converting asynchronous output of a containment abstraction to synchronous output.
  • the method 300 may be initiated by a step 305 of monitoring the asynchronous output of an at least partially hardware bound containment abstraction. If the system determines that the asynchronous output of the containment abstraction indicates that the containment abstraction is not "busy,” then the method 300 ends.
  • the method may include the step 315 of converting the containment abstraction from being at least partially hardware bound to being entirely central processing unit (CPU) bound by placing the containment abstraction in a memory store of, for example, a server.
  • the method 300 may include the step of 320 aggregating the asynchronous output of the entirely CPU bound containment abstraction and the step 325 of synchronously providing the aggregated asynchronous output to a data store. It will be understood that the step 320 may occur when the containment abstraction is CPU bound, or may occur upon the occurrence of a reversion event when the containment abstraction is reverted back to being at least partially hardware bound, as shown by line 330.
  • FIG. 4 illustrates an exemplary computing system 400 that may be used to implement an embodiment of the present technology.
  • the computing system 400 of FIG. 4 includes one or more processors 410 and memory 420.
  • Main memory 420 stores, in part, instructions and data for execution by processor 410.
  • Main memory 420 can store the executable code when the system 400 is in operation.
  • the system 400 of FIG. 4 may further include a mass storage device 430, portable storage medium drive(s) 440, output devices 450, user input devices 460, a graphics display 470, and other peripheral devices 480.
  • FIG. 4 The components shown in FIG. 4 are depicted as being connected via a single bus 490.
  • the components may be connected through one or more data transport means.
  • Processor unit 410 and main memory 420 may be connected via a local microprocessor bus, and the mass storage device 430, peripheral device(s) 480, portable storage device 440, and display system 470 may be connected via one or more input/output (I/O) buses.
  • Mass storage device 430, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 410. Mass storage device 430 can store the system software for implementing embodiments of the present technology for purposes of loading that software into main memory 420.
  • Portable storage device 440 operates in conjunction with a portable nonvolatile storage medium, such as a floppy disk, compact disk or digital video disc, to input and output data and code to and from the computing system 400 of FIG. 4.
  • the system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computing system 400 via the portable storage device 440.
  • Input devices 460 provide a portion of a user interface.
  • Input devices 460 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, and a pointing device, such as a mouse, a trackball, a stylus, or cursor direction keys.
  • the system 400 as shown in FIG. 4 includes output devices 450. Suitable output devices include speakers, printers, network interfaces, and monitors.
  • Display system 470 may include a liquid crystal display (LCD) or other suitable display device.
  • Display system 470 receives textual and graphical information, and processes the information for output to the display device.
  • Peripherals 480 may include any type of computer support device to add additional functionality to the computing system.
  • Peripheral device(s) 480 may include a modem or a router.
  • the components contained in the computing system 400 of FIG. 4 are those typically found in computing systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art.
  • the computing system 400 of FIG. 4 can be a personal computer, hand held computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system.
  • the computer can also include different bus configurations, networked platforms, multi-processor platforms, etc.
  • Various operating systems can be used including UNIX, Linux, Windows, Macintosh OS, Palm OS, and other suitable operating systems.
  • Some of the above-described functions may be composed of instructions that are stored on storage media (e.g., computer-readable medium).
  • the instructions may be retrieved and executed by the processor.
  • Some examples of storage media are memory devices, tapes, disks, and the like.
  • the instructions are operational when executed by the processor to direct the processor to operate in accord with the technology. Those skilled in the art are familiar with instructions, processor(s), and storage media.
  • Non-volatile media include, for example, optical or magnetic disks, such as a fixed disk.
  • Volatile media include dynamic memory, such as system RAM.
  • Transmission media include coaxial cables, copper wire and fiber optics, among others, including the wires that comprise one embodiment of a bus.
  • Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, any other physical medium with patterns of marks or holes, a RAM, a PROM, an EPROM, an EEPROM, a FLASH EPROM, any other memory chip or data exchange adapter, a carrier wave, or any other medium from which a computer can read.
  • Various forms of computer-readable media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution.
  • a bus carries the data to system RAM, from which a CPU retrieves and executes the instructions.
  • the instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by the CPU.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides systems, methods, and media for transparently optimizing a workload of a containment abstraction. The methods may include monitoring a workload of the containment abstraction, the containment abstraction being at least partially hardware bound and the workload corresponding to a resource utilization of the containment abstraction; converting the containment abstraction from being at least partially hardware bound to being entirely central processing unit (CPU) bound by placing the containment abstraction in a memory store, based upon the workload; and allocating the workload of the containment abstraction across at least a portion of a data center to optimize the workload of the containment abstraction.
PCT/US2011/028230 2011-03-11 2011-03-11 Systems and methods for transparently optimizing workloads WO2012125143A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2011/028230 WO2012125143A1 (fr) 2011-03-11 2011-03-11 Systems and methods for transparently optimizing workloads

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2011/028230 WO2012125143A1 (fr) 2011-03-11 2011-03-11 Systems and methods for transparently optimizing workloads

Publications (1)

Publication Number Publication Date
WO2012125143A1 (fr) 2012-09-20

Family

ID=46831010

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/028230 WO2012125143A1 (fr) 2011-03-11 2011-03-11 Systems and methods for transparently optimizing workloads

Country Status (1)

Country Link
WO (1) WO2012125143A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8775485B1 (en) 2013-03-15 2014-07-08 Joyent, Inc. Object store management operations within compute-centric object stores
US8782224B2 (en) 2011-12-29 2014-07-15 Joyent, Inc. Systems and methods for time-based dynamic allocation of resource management
US8789050B2 (en) 2011-03-11 2014-07-22 Joyent, Inc. Systems and methods for transparently optimizing workloads
US8881279B2 (en) 2013-03-14 2014-11-04 Joyent, Inc. Systems and methods for zone-based intrusion detection
US8943284B2 (en) 2013-03-14 2015-01-27 Joyent, Inc. Systems and methods for integrating compute resources in a storage area network
US8959217B2 (en) 2010-01-15 2015-02-17 Joyent, Inc. Managing workloads and hardware resources in a cloud resource
US9092238B2 (en) 2013-03-15 2015-07-28 Joyent, Inc. Versioning schemes for compute-centric object stores
US9104456B2 (en) 2013-03-14 2015-08-11 Joyent, Inc. Zone management of compute-centric object stores
US9582327B2 (en) 2013-03-14 2017-02-28 Joyent, Inc. Compute-centric object stores and methods of use

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060107087A1 (en) * 2004-10-26 2006-05-18 Platespin Ltd System for optimizing server use in a data center
US20100050172A1 (en) * 2008-08-22 2010-02-25 James Michael Ferris Methods and systems for optimizing resource usage for cloud-based networks
US20110016214A1 (en) * 2009-07-15 2011-01-20 Cluster Resources, Inc. System and method of brokering cloud computing resources

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060107087A1 (en) * 2004-10-26 2006-05-18 Platespin Ltd System for optimizing server use in a data center
US20100050172A1 (en) * 2008-08-22 2010-02-25 James Michael Ferris Methods and systems for optimizing resource usage for cloud-based networks
US20110016214A1 (en) * 2009-07-15 2011-01-20 Cluster Resources, Inc. System and method of brokering cloud computing resources

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8959217B2 (en) 2010-01-15 2015-02-17 Joyent, Inc. Managing workloads and hardware resources in a cloud resource
US9021046B2 (en) 2010-01-15 2015-04-28 Joyent, Inc Provisioning server resources in a cloud resource
US8789050B2 (en) 2011-03-11 2014-07-22 Joyent, Inc. Systems and methods for transparently optimizing workloads
US8782224B2 (en) 2011-12-29 2014-07-15 Joyent, Inc. Systems and methods for time-based dynamic allocation of resource management
US8881279B2 (en) 2013-03-14 2014-11-04 Joyent, Inc. Systems and methods for zone-based intrusion detection
US8943284B2 (en) 2013-03-14 2015-01-27 Joyent, Inc. Systems and methods for integrating compute resources in a storage area network
US9104456B2 (en) 2013-03-14 2015-08-11 Joyent, Inc. Zone management of compute-centric object stores
US9582327B2 (en) 2013-03-14 2017-02-28 Joyent, Inc. Compute-centric object stores and methods of use
US8898205B2 (en) 2013-03-15 2014-11-25 Joyent, Inc. Object store management operations within compute-centric object stores
US8775485B1 (en) 2013-03-15 2014-07-08 Joyent, Inc. Object store management operations within compute-centric object stores
US9075818B2 (en) 2013-03-15 2015-07-07 Joyent, Inc. Object store management operations within compute-centric object stores
US9092238B2 (en) 2013-03-15 2015-07-28 Joyent, Inc. Versioning schemes for compute-centric object stores
US9792290B2 (en) 2013-03-15 2017-10-17 Joyent, Inc. Object store management operations within compute-centric object stores

Similar Documents

Publication Publication Date Title
US8789050B2 (en) Systems and methods for transparently optimizing workloads
US11237870B1 (en) Dynamically modifying program execution capacity
JP7189997B2 (ja) Rolling resource credits for scheduling of virtual computer resources
US8468251B1 (en) Dynamic throttling of access to computing resources in multi-tenant systems
WO2012125143A1 (fr) Systems and methods for transparently optimizing workloads
US8782224B2 (en) Systems and methods for time-based dynamic allocation of resource management
US10360083B2 (en) Attributing causality to program execution capacity modifications
US9547534B2 (en) Autoscaling applications in shared cloud resources
US10355934B2 (en) Vertical scaling of computing instances
JP6144346B2 (ja) Scaling of virtual machine instances
US7774457B1 (en) Resource evaluation for a batch job and an interactive session concurrently executed in a grid computing environment
US20140280970A1 (en) Systems and methods for time-based dynamic allocation of resource management
US8832218B2 (en) Determining priorities for cached objects to order the transfer of modifications of cached objects based on measured network bandwidth
US9483375B2 (en) Streaming operator with trigger
EP1769353A2 (fr) Method and apparatus for dynamic memory resource management
US10411977B2 (en) Visualization of workload distribution on server resources
US9652357B2 (en) Analyzing physical machine impact on business transaction performance
CN107251007B (zh) 集群计算服务确保装置和方法
US10021008B1 (en) Policy-based scaling of computing resource groups
US10148592B1 (en) Prioritization-based scaling of computing resources
US11621919B2 (en) Dynamic load balancing in reactive systems
Dargie et al. Dynamic Power Management in Data Centres

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11860975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11860975

Country of ref document: EP

Kind code of ref document: A1