US20140282513A1 - Instruction set architecture for compute-based object stores - Google Patents
- Publication number
- US20140282513A1 (application US13/831,349; US201313831349A)
- Authority
- US
- United States
- Prior art keywords
- operating system
- virtual operating
- objects
- compute
- request
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45504—Abstract machines for programme code execution, e.g. Java virtual machine [JVM], interpreters, emulators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/465—Distributed object oriented systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/547—Remote procedure calls [RPC]; Web services
- G06F9/548—Object oriented; Remote method invocation [RMI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5017—Task decomposition
Definitions
- the present technology relates generally to an instruction set architecture (ISA) for compute-centric object stores.
- ISAs of the present technology allow for efficient scheduling and management of compute operations across distributed object stores.
- the ISAs provide a means for expressing compute operations within the context of a distributed object store, as well as a mechanism for coordinating how data flows through a compute-centric object store system.
- a cloud-based computing environment is a resource that typically combines the computational power of a large number of processors and/or the storage capacity of a large number of computer memories or storage devices.
- systems that provide a cloud resource may be utilized exclusively by their owners; or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
- the cloud may be formed, for example, by a network of servers with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource consumers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depend on the type of business associated with the user.
- a virtual machine is an emulation of a real-world computing system.
- the virtual machine provides a user with one or more different operating systems than the operating system of the local machine (“host”) that is running the virtual machine.
- the VM provides a complete system platform that provides the one or more operating systems.
- the VM is typically managed by a hypervisor that mediates computing resources of the host machine for use by the VM via hardware emulation.
- the use of hardware emulation is often deleterious to VM performance and, in turn, reduces the number of VMs that may run on a given host machine.
- the hypervisor must coordinate the varying workloads of the VMs to prevent instability.
- systems that provide data-centered distributed applications, such as systems that run applications on large clusters of shared hardware, are often programming intensive for users. That is, these systems require users to create complex programs for executing compute operations against objects or data stores. This is often caused by the complexity of the hardware and/or software frameworks required to manage data flow through, and/or hardware resource virtualization within, these systems.
- the present technology may be directed to systems that comprise: (a) one or more processors; and (b) logic encoded in one or more tangible media for execution by the one or more processors and when executed operable to perform operations comprising: (i) receiving a request from a user, the request identifying a compute operation that is to be executed against an object in a distributed object store; (ii) locating the object within the distributed object store, the object being stored on a physical node; (iii) assigning a virtual operating system container to the object; (iv) providing an instruction set to a daemon associated with the object, the daemon controlling execution of the compute operation by the virtual operating system container according to the instruction set; and (v) storing output of the virtual operating system container in the distributed object store.
- the present technology may be directed to a multitenant object storage system that comprises: (a) receiving a request from a user, the request identifying parameters of a compute operation that is to be executed against objects in a distributed object store, the request also comprising an identifier; (b) generating a set of tasks from the request that comprise an instruction set for a daemon; (c) locating the objects within the distributed object store, the objects being stored on a physical node; (d) providing the set of tasks to a daemon of the physical node, the daemon controlling execution of the compute operation by a virtual operating system container based upon the set of tasks; and (e) storing an output of the virtual operating system container in the distributed object store.
- FIG. 1 is a block diagram of an exemplary architecture in which embodiments of the present technology may be practiced
- FIG. 2 is a schematic diagram of an exemplary guest virtual operating system container
- FIG. 3 is a schematic diagram illustrating the colocation of guest virtual operating system containers for multiple tenants on an object store
- FIG. 4 is a schematic diagram of a guest virtual operating system container applied onto an object store
- FIG. 5 is a flowchart of an exemplary method for executing a compute flow using a set of tasks, according to an instruction set architecture
- FIG. 6 illustrates an exemplary computing system that may be used to implement embodiments according to the present technology.
- FIG. 1 is a block diagram of an exemplary architecture 100 in which embodiments of the present technology may be practiced.
- the architecture 100 comprises a plurality of client devices 105 A-N that communicatively couple with a compute-centric object store system, hereinafter “system 110 .”
- the architecture 100 may include a plurality of systems, such as system 110 .
- the plurality of client devices 105 A-N may communicatively couple with the system 110 via any one or combination of a number of private and/or public networks, such as the Internet.
- the client devices 105 A-N may submit requests or jobs to a network service 110 B, which is a constituent part of the system 110 .
- the network service 110 B evaluates requests received from users to determine one or more physical nodes that comprise objects that correspond to the request.
- the system 110 comprises an object store 110 A that provides “compute” as a first-class citizen of the object store 110 A. More specifically, compute operations (e.g., instructing the system to compute on objects in the object store) of the present technology resemble a top-level API function, similar to processes like storing or fetching objects in the object store 110 A.
- the term “object store” refers to a network service for storing unstructured, arbitrary-sized chunks of data (objects). It will be further understood that the object store may not support modifications to existing objects, but supports full object replacement operations, although systems that support both object modification and full object replacement operations may also utilize the features of the present technology to perform compute operations directly on (e.g., in-situ) objects within the object store.
- the system 110 may be configured to receive a request to perform a compute operation on at least a portion of an object store, from a first user. Again, the user may be associated with one of the client devices. The request identifies parameters of the compute operation as well as objects against which the compute operation is executed.
- the system 110 may assign virtual operating system containers to a user, based upon a request.
- the system 110 may map objects to the containers that are associated with the user. Typically, these objects are identified by the user in the request.
- a virtual operating system container performs the compute operation on an object according to the identified parameters of the request.
- the system 110 may then clear the virtual operating system containers and return the virtual operating system containers to a pool of virtual operating system containers. Additional aspects of the system 110 will be described in greater detail below.
- a compute-centric object store may be created to operate without the use of a virtual operating system (global kernel) or virtual operating system containers. While such an object store would provide advantages such as in-situ computation of data (where objects are processed directly on the object store), the object store may not isolate tenants as well as systems that utilize a virtual operating system and/or virtual operating system containers.
- the compute-centric object store may be configured to receive a request to perform a compute operation on at least a portion of an object store from a first user via a network service, the request identifying parameters of the compute operation.
- the object store may also execute an operating system process for the objects identified in the request.
- the operating system process may perform the compute operation on the object according to the identified parameters of the request. Additionally, once the compute operation has been executed, the operating system process may be terminated by the virtual operating system.
- the term “in-situ computation” will be understood to include the execution of compute operations against objects in an object store, where the objects are not moved or copied from or within the object store.
- the system 110 is comprised of a hardware layer 115 that provides a logical interface with at least one or more processors and a memory which stores logic that is executed by the one or more processors.
- the hardware layer 115 controls one or more of the hardware components of a computing system, such as the computing system 600 of FIG. 6 , which will be described in greater detail below.
- the hardware layer 115 may manage the hardware components of a server blade or another similar device.
- the hardware layer 115 provides access to the physical hardware that services a global operating system kernel 120 that cooperates with the hardware layer 115 .
- the global operating system kernel 120 may also be referred to as a host operating system kernel.
- the global operating system kernel 120 is configured to administer and manage a pool of guest virtual operating system containers, such as containers 125 A-N.
- the containers 125 A-N may operate on a distributed object store in a multitenant manner, where multiple containers can operate on the same object store simultaneously. It will be understood that each user is assigned a container from the pool, on an as-needed basis. When a container is applied to an object store, the container is referred to as a tenant.
- the system kernel 120 may be utilized to setup the pool of guest virtual operating system containers.
- the system kernel 120 may also be configured to provide a command line interpreter interface that allows users to request jobs, execute other operating system implemented applications, and interact with a virtual operating system in a manner that is substantially indistinguishable from an operating system executing on a bare metal device.
- a job may be input by a user via a command line interpreter, such as a Unix shell terminal. More specifically, the user may express a computation using the same language as the language used by a Unix shell terminal.
- the actual request is submitted to the network service 110 B. Indeed, a request may be submitted as an HTTP request to the network service 110 B.
- the body of the request describes the computation to perform in terms of what commands are input into the command line interpreter, which is running within a container.
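- As a concrete illustration of this request path, the sketch below submits a job whose body lists the shell-style commands to run inside a container. The endpoint path, payload fields, and authorization header are assumptions for illustration, not the actual API of the network service 110 B.

```python
import json
import urllib.request

def submit_job(endpoint, phases, auth_token):
    """Submit a compute job as an HTTP request whose body lists the commands
    to run in containers (hypothetical endpoint and payload shape)."""
    body = json.dumps({
        # Each phase names a command exactly as it would be typed into the
        # command line interpreter running within a container.
        "phases": phases,  # e.g. [{"type": "map", "exec": "wc -w"}]
    }).encode("utf-8")
    req = urllib.request.Request(
        endpoint,  # e.g. "https://example.invalid/jobs" (placeholder)
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + auth_token},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```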
- the user may specify one or more desired compute operations that are to be executed against objects (such as object 130 ) within an object store 110 A (see FIG. 3 ).
- object store 110 A may include, for example, a local or distributed object store that maintains contiguous blobs, blocks, or chunks of data.
- objects stored in the object store 110 A are complete objects, such as files or other similar data structures.
- the compute operations executed against the object store 110 A may be performed in such a way that partial stores of data are avoided.
- the system kernel 120 may collocate containers 125 A-N onto the object store 110 A, and execute the containers 125 A-N simultaneously.
- a plurality of containers, such as container 125 A, have been placed onto each of a plurality of objects within the object store 110 A.
- a virtual operating system container is assigned to each of the plurality of objects specified in the user request.
- the assignment of a single container to a single object occurs when the system executes a “map” phase operation. The details of map and reduce phases provided by the system 110 will be described in greater detail below.
- a virtual operating system container may be a lightweight virtualization solution offering a complete and secure user environment that operates on a single global kernel (system kernel 120 ), providing performance characteristics that are similar to operating systems that operate on bare metal devices. In contrast, a virtual machine operates on emulated hardware and is subject to control by a hypervisor, which produces computing inefficiencies.
- a virtual operating system container may operate without the computing inefficiencies of a typical virtual machine.
- the system kernel 120 may utilize a KVM (Kernel Virtual Machine) that improves the efficiency of a virtual operating system, such as the global operating system kernel, by leveraging CPU virtualization extensions to eliminate a substantial majority of the binary translation (i.e., hardware emulation) that is frequently required by VMs.
- an exemplary virtual operating system container 125 A ( FIG. 1 ) is shown as comprising a quick emulation layer (QEMU) 135 , a virtual guest operating system 140 , and a compute application 145 that is managed by the virtual guest operating system 140 .
- the QEMU 135 provides hardware emulation and is also a VMM (virtual machine monitor). It is noteworthy that in some embodiments the QEMU 135 is not a strict hypervisor layer; rather, each QEMU 135 may be independent in some exemplary embodiments. That is, there may be one QEMU 135 per container instead of a single QEMU 135 supporting several VMs.
- the operations of both a VM and a VMM may be combined into the QEMU 135 .
- the compute application 145 that is executed may include a primitive O/S compute operation.
- Exemplary compute operations may include operating system primitive operations, such as query, word count, send, receive, and so forth. Additionally, the operations may comprise more sophisticated operations, such as operations that include audio or video transcoding. Additionally, in some instances, users may store programs or applications in the object store itself. Users may then execute the programs as a part of a compute operation.
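- A minimal sketch of running such an operating system primitive against an object, assuming the object is visible to the container as an ordinary file under a hypothetical /objects path:

```python
import subprocess

def run_primitive(command, object_path):
    """Run an operating-system primitive (e.g. 'wc -w') directly against an
    object that the container sees as a local file."""
    result = subprocess.run(
        command.split() + [object_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example: word count of one object mapped into the container (path is illustrative).
# print(run_primitive("wc -w", "/objects/report.txt"))
```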
- the compute operations may include one or more phases such as a map phase, followed by a reduce phase.
- a map phase may include an operation that is executed against each of a plurality of objects individually, by a plurality of containers.
- a unique container is assigned to each object that is to be processed.
- a reduce phase may be executed by a single container against a plurality of objects in a batch manner.
- the objects of the object store 110 A may comprise text files.
- the application 145 may execute a map phase to count the words in each of the text files.
- the output of the application 145 may be stored in a plurality of output objects that are stored in the object store 110 A.
- a compute application 145 of another container may execute a reduce phase that sums the output objects of the map phase and generates a word count for all objects within the object store 110 A.
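- The word count flow above can be sketched as a map function applied to each object by its own container and a reduce function applied once over the map outputs; the in-memory strings below merely stand in for objects in the object store.

```python
from collections import Counter

def map_phase(text_object: str) -> Counter:
    """Map: count the words in a single text object (one container per object)."""
    return Counter(text_object.split())

def reduce_phase(partial_counts) -> Counter:
    """Reduce: a single container sums the per-object map outputs."""
    total = Counter()
    for counts in partial_counts:
        total.update(counts)
    return total

objects = ["the quick brown fox", "the lazy dog", "quick quick"]
outputs = [map_phase(obj) for obj in objects]   # one output object per input object
print(sum(reduce_phase(outputs).values()))      # total word count across all objects: 9
```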
- system kernel 120 may schedule and coordinate various compute operations (and phases) performed by the compute applications 145 of all containers.
- system kernel 120 may act similarly to a hypervisor that manages the compute operations of the various active containers.
- the system kernel 120 may instruct the containers to perform a series of map functions, as well as reduce functions.
- the map and reduce functions may be coordinated to produce the desired output specified in the request.
- the system kernel 120 may select a first set of containers, which includes container 125 A from the pool of containers. This container 125 A is assigned to a user. In response to receiving a request from a second user, the system kernel 120 may also select a second set of containers from the pool of containers.
- the system kernel 120 may map the first set of containers to a plurality of objects, such as object 130 , stored in the object store 110 A. Likewise, the system kernel 120 may map a second set of containers to a plurality of different objects stored in the object store 110 A for the second user.
- the objects and containers for the first user may be referred to as a compute zone of the first user, while the objects mapped to the container 125 N may be referred to as a compute zone of the second user.
- the maintenance of compute zones allows the system kernel 120 to provide multitenant access to the object store 110 A, even when the first and second users are potentially adversarial. For example, the first and second users may be commercial competitors.
- the system kernel 120 maintains compute zones in order to balkanize object storage and prevent access to objects of other users. Additionally, the balkanization of object storage also ensures fair distribution of resources between users.
- system kernel 120 may maintain as many containers and compute zones as allowed by the processor(s) of the hardware layer 115 . Additionally, the system kernel 120 assigns a container to a user on an as-needed basis, meaning that containers may not be assigned permanently to a user, which would result in a monopolization of resources when the user is not performing compute operations.
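- A minimal sketch of the compute-zone bookkeeping described above, assuming hypothetical ComputeZone records that tie a user's containers to the objects those containers may touch:

```python
class ComputeZone:
    """Hypothetical per-tenant record: the containers assigned to a user and
    the objects those containers are allowed to operate on."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.containers = set()
        self.objects = set()

    def authorize(self, container_id, object_key):
        # A container may only operate on objects inside its own zone.
        return container_id in self.containers and object_key in self.objects

zones = {}

def assign(user_id, container_id, object_keys):
    """Map a container and its objects into the requesting user's zone."""
    zone = zones.setdefault(user_id, ComputeZone(user_id))
    zone.containers.add(container_id)
    zone.objects.update(object_keys)
    return zone
```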
- FIG. 4 illustrates the placement of the container 125 A onto the data store 110 A. It is understood that the container 125 A encircles a plurality of objects in the data store 110 A. This mapping of multiple objects to a single container would be commonly seen in a reduce phase, where the container performs a concatenating or summation process on the outputs of individual containers, such as the containers shown in FIG. 3 .
- the system kernel 120 need not transfer objects from the object store 110 A into the container for processing in some exemplary embodiments.
- the container operates directly on the objects of the object store 110 A.
- the containers 125 A-N managed by the system kernel 120 are empty when the containers 125 A-N are in the pool. After objects have been mapped to a container, compute operations have been executed by the container on the objects, and the desired output has been generated, the system kernel 120 may clear the container and return the container to the pool.
- the system kernel 120 may not generate containers until a request is received from a user. That is, the system kernel 120 may “spin up” or launch containers when a request is received from the user. This allows for minimum impact to the bare metal resources, such as the CPU, as the system kernel 120 need not even maintain a pool of virtual operating system containers, which are awaiting user requests. That is, maintaining a pool of containers requires CPU and memory resources.
- the system kernel 120 may terminate the containers, rather than clearing the containers and returning the containers to a pool.
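- The container lifecycle described above (hand one out from a pool, clear and return it, or spin one up on demand and terminate it afterwards) can be sketched as follows; the Container stub and its clear/terminate methods are placeholders for platform-specific behavior:

```python
from collections import deque

class Container:
    """Stand-in for a guest virtual operating system container."""
    def clear(self):
        pass        # wipe mapped objects and per-job state
    def terminate(self):
        pass        # tear the container down entirely

class ContainerPool:
    def __init__(self, prebuilt=()):
        self.idle = deque(prebuilt)

    def acquire(self):
        # Reuse an idle container if one exists, otherwise spin one up on demand.
        return self.idle.popleft() if self.idle else Container()

    def release(self, container, reuse=True):
        container.clear()
        if reuse:
            self.idle.append(container)   # back to the pool for the next request
        else:
            container.terminate()         # on-demand variant: no pool is maintained

pool = ContainerPool(prebuilt=[Container() for _ in range(4)])
c = pool.acquire()
# ... map objects into c and run the compute operation ...
pool.release(c)
```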
- an instruction set architecture may be implemented within the system 110 .
- the instruction set architecture may specify an application programming interface that allows the system 110 to interact with the distributed object store.
- the system 110 communicatively couples with the object store 110 A using a services related application programming interface (SAPI) 155 , which provides features such as automatic discovery of object stores, dynamic configuration of object stores, and an API for a user portal.
- the SAPI allows users to configure, deploy, and upgrade applications using a set of loosely-coupled, federated services.
- the SAPI may include an underlying API and an autoconfig agent, also referred to as a daemon 150 .
- a SAPI client may also be disseminated to clients. It will be understood that the daemon 150 may be associated with a physical node 160 of the object store 110 A.
- various object stores such as object store 110 A of FIGS. 3 and 4 , comprise a single SAPI zone.
- the SAPI zone may be stateless and the SAPI zone may be configured to write objects into the object store 110 A.
- the SAPI zone may also communicatively couple with a VM API to provision zones and a network API (NAPI) to reserve network interface controllers (NIC) and lookup network universal unique identifiers (UUID).
- SAPI 155 may comprise three main object types: applications, services, and instances. It is noteworthy that an application may comprise one or more services, and each service may comprise one or more instances. Moreover, instances may represent actual object store zones, and such zones inherit zone parameters and metadata from their associated applications and services.
- the application, service, and instance information may be used by the compute application of a virtual operating system container that is placed onto an object store.
- the daemon 150 may control the operation of the containers operating on the daemon's object store.
- Each application, service and instance may include three sets of properties.
- “params” may comprise zone parameters like a zone's RAM size, disk quota, image UUID, and so forth. These parameters are evaluated when a zone is provisioned.
- Another property comprises “metadata”, which defines metadata available to the daemon 150 . These metadata keys and values form the input of a script template in a configuration manifest (described below). As these values are updated, the daemon 150 may rewrite any configuration that makes reference to the changed metadata values.
- Yet another property comprises “manifests,” which defines a set of configuration manifests that are indexed by name to facilitate inheriting manifests from parent objects.
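- A hedged sketch of the application/service/instance hierarchy and its three property sets; the concrete keys and the inheritance helper below are illustrative assumptions rather than the actual SAPI schema.

```python
# Nested dictionaries standing in for SAPI's application -> service -> instance
# hierarchy; every key and value here is illustrative only.
application = {
    "name": "object-store",
    "params": {"ram_mb": 1024, "quota_gb": 20, "image_uuid": "IMAGE-UUID"},
    "metadata": {"LOG_LEVEL": "info"},
    "manifests": {},              # configuration manifests, indexed by name
}
service = {
    "name": "compute-daemon",
    "application": "object-store",
    "params": {"ram_mb": 2048},   # overrides the application-level value
    "metadata": {"WORKERS": 4},
    "manifests": {},
}
instance = {
    "service": "compute-daemon",
    "params": {},                 # inherits params from service and application
    "metadata": {"ZONE_ALIAS": "compute-daemon-0"},
    "manifests": {},
}

def effective(key, *levels):
    """Resolve a metadata value the way an instance inherits it:
    instance first, then service, then application."""
    for level in levels:
        if key in level["metadata"]:
            return level["metadata"][key]
    return None

print(effective("LOG_LEVEL", instance, service, application))  # "info"
```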
- Creating applications and services has no effect on running zones.
- a zone is provisioned using the above information from its associated application, service, and instance.
- applications and services (e.g., a job or request) may be thought of abstractly as a workflow template.
- objects need only be defined by the user. The workflow template is then applied against the objects.
- the daemon 150 of a zone may be tasked with maintaining configuration inside that zone.
- the daemon 150 queries the SAPI 155 directly to determine which files to write and where to write them within the object store 110 A.
- the daemon 150 uses objects called configuration manifests; those objects describe the contents, location, and semantics of configuration files for a zone.
- Those manifests contain a script template which is rendered using the metadata from the associated application, service, and instance.
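- The manifest-rendering step can be sketched with a simple string template; the manifest fields and metadata keys below are hypothetical, and a real daemon would rewrite the file at the manifest's path whenever a referenced metadata value changes.

```python
from string import Template

# Hypothetical configuration manifest: where the rendered file goes and a
# script template filled in from application/service/instance metadata.
manifest = {
    "name": "daemon-config",
    "path": "/opt/compute/etc/daemon.conf",
    "template": "workers=$WORKERS\nlog_level=$LOG_LEVEL\n",
}
metadata = {"WORKERS": "4", "LOG_LEVEL": "info"}   # merged zone metadata

def render(manifest, metadata):
    """Render the manifest's script template using the zone's metadata."""
    return manifest["path"], Template(manifest["template"]).substitute(metadata)

path, contents = render(manifest, metadata)
print(path)
print(contents)
```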
- the system kernel 120 may coordinate a compute flow of compute operations which are managed by the daemon 150 . That is, the system kernel 120 may receive a request or “job” from a user, via a command line interpreter. The request identifies parameters of a compute operation that is to be executed against objects in a distributed object store. For example, a request may include performing a word count operation on a file.
- the system kernel 120 may assign an identifier for the request.
- This identifier provides a unique identifier that allows objects and outputs of compute operations to be correlated to the user. Objects previously stored in the object store may be correlated to the user utilizing a unique identifier.
- the identifier comprises the name of an input object or job name. This name may be specified by an end user submitting a job/request to the system or may be generated by the system from the request.
- An exemplary find object command may include Find
- the system kernel 120 may query various daemons of object stores to locate the objects within the distributed object store. After the objects have been located, the system kernel 120 may generate a set of tasks (e.g., an instruction set) that defines the various compute operations that are to be performed by the daemon of the located object store.
- the set of tasks may include only one word count task that is provided to a single daemon of an object store (e.g., physical node). This relatively simple compute operation does not require coordination or scheduling of operations of multiple objects.
- the daemon 150 may provide instructions to one or more virtual operating system containers that are placed onto the object store by the system kernel 120 . That is, the instruction sets provided to the containers are based upon the tasks assigned to the daemon 150 by the system kernel 120 .
- the set of tasks may include a more complex arrangement of operations that are executed against a plurality of object stores.
- the system kernel 120 may interact with the daemon to coordinate processing of these objects in a specified order.
- the set of tasks may define various map phases that are to be executed on the objects of the object store, as well as various reduce phases that are executed on the outputs of the map phases.
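- A sketch of what such a set of tasks handed to a daemon might look like as a data structure; the field names, commands, and paths are illustrative assumptions, with the job identifier included so outputs can be correlated back to the request.

```python
# Hedged sketch of a "set of tasks" (instruction set) delivered to a daemon.
task_set = {
    "job_id": "job-0001",   # identifier used to correlate inputs and outputs
    "phases": [
        {"type": "map",
         "exec": "wc -w",                        # one container per input object
         "assign": "one container per object"},
        {"type": "reduce",
         "exec": "awk '{s+=$1} END {print s}'",  # single container over map outputs
         "assign": "single container over all map outputs"},
    ],
    "inputs": ["/store/user1/report-a.txt", "/store/user1/report-b.txt"],
}
```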
- objects within the workflow may be tracked and correlated together using the identifier. For example, if an instruction set passed to a daemon requires performing a word count compute operation on 100 text files, each of the objects of the compute operation would be correlated using the identifier. Thus, the output of the compute operation would comprise 100 objects that each include a word count value for their corresponding text file.
- the identifier may be appended to the object as metadata.
- a map phase may result in multiple outputs, which are generated from a single input object. For example, assume that usage logs for a computing device are stored for a 24 hour time period. To determine hourly usage rates, the 24 hour log object may be separated into 24 distinct objects. Thus, the map phase may receive the 24 hour log object and may split the same into constituent output objects to complete the map phase.
- a more complex request may require a more complicated set of tasks (e.g., phases). For example, if the user desires to look at all 11 p.m. to 12 p.m. user logs for a plurality of computing devices, the set of tasks may require not only the map task where a single input object is processed into multiple objects, but also a reduce phase that sums a plurality of 11 p.m. to 12 p.m. user logs for a plurality of devices.
- the system kernel 120 will provide a daemon with tasks that include a map phase for generating the hour increment logs from various input objects. Additionally, the tasks also inform the daemon to return the output objects, which may be stored as an aggregate 11 p.m. to 12 p.m. log object within the object store.
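- The log example above, sketched as a map phase that splits one 24-hour log object into hourly output objects and a reduce phase that aggregates the 11 p.m. hour across devices; the line format with a leading HH:MM:SS timestamp is an assumption for illustration.

```python
from collections import defaultdict

def split_by_hour(log_lines):
    """Map phase: split one 24-hour log object into (up to) 24 hourly objects."""
    hourly = defaultdict(list)
    for line in log_lines:
        hour = line[:2]                 # "23" for entries in the 11 p.m. hour
        hourly[hour].append(line)
    return hourly                        # one output object per hour

def aggregate_hour(per_device_hourly, hour="23"):
    """Reduce phase: gather one hour's slice from every device into a single
    aggregate log object."""
    merged = []
    for hourly in per_device_hourly:
        merged.extend(hourly.get(hour, []))
    return merged

device_a = ["22:59:10 login", "23:01:42 query", "23:59:59 logout"]
device_b = ["23:12:00 upload"]
outputs = [split_by_hour(device_a), split_by_hour(device_b)]
print(aggregate_hour(outputs))
# ['23:01:42 query', '23:59:59 logout', '23:12:00 upload']
```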
- daemon of a physical node may control execution of compute operations by the one or more virtual operating system containers that are placed onto the object store via the system kernel 120 .
- intermediate output objects may not be output to the user directly, but may be fed back into the system for additional processing, such as with the map and reduce phases described above.
- the set of tasks generated by the system kernel 120 may include any number of map phases and/or reduce phases, which vary according to the steps required to produce the desired output.
- FIG. 5 is a flowchart of an exemplary method 500 for executing a compute flow using a set of tasks, according to an instruction set architecture.
- the method 500 may include a step 505 of receiving a request from a user, the request identifying parameters of a compute operation that is to be executed against objects in a distributed object store.
- the method may include a step 510 of locating one or more objects within the distributed object store, the one or more objects being stored on one or more physical nodes. Once objects have been located, the method may include a step 515 of generating a set of tasks from the request that comprise instructions for a daemon. Again, the set of tasks may define applications, services and instances that can be utilized by virtual operating system containers. It will be understood that the set of tasks comprises an instruction set that is a translation of the user request into meaningful input that can be executed by one or more virtual operating system containers.
- Upon generating the set of tasks, the method includes a step 520 of providing the set of tasks to a daemon of the physical node.
- the daemon controls execution of the compute operation by one or more virtual operating system containers based upon the set of tasks.
- the daemon functions as a virtual operating system hypervisor that coordinates the operation and execution of the containers.
- the method includes a step 525 of storing an output of the virtual operating system container in the distributed object store.
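- Steps 505 through 525 can be sketched end to end as follows; the object_store and daemon interfaces (locate, put, run) are stand-ins for illustration, not the actual implementation.

```python
def execute_compute_flow(request, object_store, daemons):
    """Sketch of method 500; the store and daemon interfaces are hypothetical."""
    # Step 505: receive the request identifying the compute operation and objects.
    job_id, phases, object_keys = request["id"], request["phases"], request["inputs"]

    # Step 510: locate the objects and the physical nodes that hold them.
    placements = {key: object_store.locate(key) for key in object_keys}

    # Step 515: translate the request into a set of tasks for each daemon.
    tasks_by_node = {}
    for key, node in placements.items():
        tasks_by_node.setdefault(node, []).append(
            {"job_id": job_id, "object": key, "phases": phases})

    # Step 520: hand each daemon its tasks; the daemon drives the containers.
    outputs = []
    for node, tasks in tasks_by_node.items():
        outputs.extend(daemons[node].run(tasks))

    # Step 525: store container output back into the distributed object store.
    for out in outputs:
        object_store.put(job_id, out)
    return outputs
```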
- FIG. 6 illustrates an exemplary computing system 600 that may be used to implement an embodiment of the present systems and methods.
- the system 600 of FIG. 6 may be implemented in the contexts of the likes of computing systems, networks, servers, or combinations thereof.
- the computing system 600 of FIG. 6 includes one or more processors 610 and main memory 620 .
- Main memory 620 stores, in part, instructions and data for execution by processor 610 .
- Main memory 620 may store the executable code when in operation.
- the system 600 of FIG. 6 further includes a mass storage device 630 , portable storage device 640 , output devices 650 , user input devices 660 , a display system 670 , and peripheral devices 680 .
- The components shown in FIG. 6 are depicted as being connected via a single bus 690 .
- the components may be connected through one or more data transport means.
- Processor unit 610 and main memory 620 may be connected via a local microprocessor bus, and the mass storage device 630 , peripheral device(s) 680 , portable storage device 640 , and display system 670 may be connected via one or more input/output (I/O) buses.
- Mass storage device 630 which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 610 . Mass storage device 630 may store the system software for implementing embodiments of the present technology for purposes of loading that software into main memory 620 .
- Portable storage device 640 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, digital video disc, or USB storage device, to input and output data and code to and from the computer system 600 of FIG. 6 .
- the system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computer system 600 via the portable storage device 640 .
- User input devices 660 provide a portion of a user interface.
- User input devices 660 may include an alphanumeric keypad, such as a keyboard, for inputting alpha-numeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys.
- Additional user input devices 660 may comprise, but are not limited to, devices such as speech recognition systems, facial recognition systems, motion-based input systems, gesture-based systems, and so forth.
- user input devices 660 may include a touchscreen.
- the system 600 as shown in FIG. 6 includes output devices 650 . Suitable output devices include speakers, printers, network interfaces, and monitors.
- Display system 670 may include a liquid crystal display (LCD) or other suitable display device.
- Display system 670 receives textual and graphical information, and processes the information for output to the display device.
- Peripheral device(s) 680 may include any type of computer support device to add additional functionality to the computer system. Peripheral device(s) 680 may include a modem or a router.
- the components provided in the computer system 600 of FIG. 6 are those typically found in computer systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art.
- the computer system 600 of FIG. 6 may be a personal computer, hand held computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system.
- the computer may also include different bus configurations, networked platforms, multi-processor platforms, etc.
- Various operating systems may be used including Unix, Linux, Windows, Mac OS, Palm OS, Android, iOS (known as iPhone OS before June 2010), QNX, and other suitable operating systems.
- Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU), a processor, a microcontroller, or the like. Such media may take forms including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic storage medium, a CD-ROM disk, digital video disk (DVD), any other optical storage medium, RAM, PROM, EPROM, a FLASHEPROM, any other memory chip or cartridge.
- Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be coupled with the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Stored Programmes (AREA)
Abstract
Description
- The present technology relates generally to an instruction set architecture (ISA) for compute-centric object stores. ISAs of the present technology allow for efficient scheduling and management of compute operations across distributed object stores. The ISAs provide a means for expressing compute operations within the context of a distributed object store, as well as a mechanism for coordinating how data flows through a compute-centric object store system.
- Various methods and systems for providing multitenant computing systems, such as cloud computing, have been attempted. In general, a cloud-based computing environment is a resource that typically combines the computational power of a large model of processors and/or that combines the storage capacity of a large model of computer memories or storage devices. For example, systems that provide a cloud resource may be utilized exclusively by their owners; or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
- The cloud may be formed, for example, by a network of servers with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource consumers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depend on the type of business associated with the user.
- Oftentimes, these cloud computing systems leverage virtual machines for their users. A virtual machine (“VM”) is an emulation of a real-world computing system. Often, the virtual machine provides a user with one or more different operating systems than the operating system of the local machine (“host”) that is running the virtual machine. The VM provides a complete system platform that provides the one or more operating systems. The VM is typically managed by a hypervisor that mediates computing resources of the host machine for use by the VM via hardware emulation. The use of hardware emulation is often deleterious to VM performance and, in turn, reduces the number of VMs that may run on a given host machine. Additionally, as the number of VMs on a host machine increases and they begin to operate concurrently, the hypervisor must coordinate the varying workloads of the VMs to prevent instability.
- In general, systems that provide data centered distributed applications, such as systems that run applications on large clusters of shared hardware are often programming intensive for users. That is, these systems require users to create complex programs for executing compute operations against objects or data stores. This is often caused by the complexity of the hardware and/or software frameworks required to manage data flow through and/or hardware resource virtualization within these systems.
- According to some embodiments, the present technology may be directed to systems that comprise: (a) one or more processors; and (b) logic encoded in one or more tangible media for execution by the one or more processors and when executed operable to perform operations comprising: (i) receiving a request from a user, the request identifying a compute operation that is to be executed against an object in a distributed object store; (ii) locating the object within the distributed object store, the object being stored on a physical node; (iii) assigning a virtual operating system container to the object; (iv) providing an instruction set to a daemon associated with the object, the daemon controlling execution of the compute operation by the virtual operating system container according to the instruction sets; and (v) storing output of the virtual operating system container in the distributed object store.
- According to some embodiments, the present technology may be directed to a multitenant object storage system that comprises: (a) receiving a request from a user, the request identifying parameters of a compute operation that is to be executed against objects in a distributed object store, the request also comprising an identifier; (b) generating a set of tasks from the request that comprise an instruction set for a daemon; (c) locating the objects within the distributed object store, the objects being stored on a physical node; (d) providing the set of tasks to a daemon of the physical node, the daemon controlling execution of the compute operation by a virtual operating system container based upon the set of tasks; and (e) storing an output of the virtual operating system container in the distributed object store.
- Certain embodiments of the present technology are illustrated by the accompanying figures. It will be understood that the figures are not necessarily to scale and that details not necessary for an understanding of the technology or that render other details difficult to perceive may be omitted. It will be understood that the technology is not necessarily limited to the particular embodiments illustrated herein.
-
FIG. 1 is a block diagram of an exemplary architecture in which embodiments of the present technology may be practiced; -
FIG. 2 is a schematic diagram of an exemplary guest virtual operating system container; -
FIG. 3 is a schematic diagram illustrating the colocation of guest virtual operating system containers for multiple tenants on an object store; -
FIG. 4 is a schematic diagram of a guest virtual operating system container applied onto an object store; -
FIG. 5 is a flowchart of an exemplary method for executing a compute flow using a set of tasks, according to an instruction set architecture; and -
FIG. 6 illustrates an exemplary computing system that may be used to implement embodiments according to the present technology. - While this technology is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail several specific embodiments with the understanding that the present disclosure is to be considered as an exemplification of the principles of the technology and is not intended to limit the technology to the embodiments illustrated.
- The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present technology. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
- It will be understood that like or analogous elements and/or components, referred to herein, may be identified throughout the drawings with like reference characters. It will be further understood that several of the figures are merely schematic representations of the present technology. As such, some of the components may have been distorted from their actual scale for pictorial clarity.
-
FIG. 1 is a block diagram of anexemplary architecture 100 in which embodiments of the present technology may be practiced. Thearchitecture 100 comprises a plurality ofclient devices 105A-N that communicatively couple with a compute-centric object store system, hereinafter “system 110.” It will be understood that thearchitecture 100 may include a plurality of systems, such assystem 110. For the sake of brevity and clarity, a detailed description of anexemplary system 110 will be provided below, although the features of thesystem 110 apply equally to all of the plurality of systems. The plurality ofclient devices 105A-N may communicatively couple with thesystem 110 via any one or combination of a number of private and/or public networks, such as the Internet. According to some embodiments, theclient devices 105A-N may submit requests or jobs to anetwork service 110B, which is a constituent part of thesystem 110. In some instances, the network service 110E evaluates request received from users to determine one or more physical nodes that comprise objects that correspond to the request. - In general, the
system 110 comprises anobject store 110A that provides “compute” as a first class citizen of anobject store 110A. More specifically, compute operations (e.g., instructing the system to compute on objects in the object store) of the present technology resemble a top-level API function, similar to processes like storing or fetching objects in theobject store 110A. - It will be understood that the terms “object store” comprise a network service for storing unstructured, arbitrary-sized chunks of data (objects). It will be further understood that the object store may not support modifications to existing objects, but supports full object replacement operations, although systems that support both object modification and full object replacement operations may also utilize the features of the present technology to perform compute operations directly on (e.g., in-situ) objects within the object store.
- In some embodiments, the
system 110 may be configured to receive a request to perform a compute operation on at least a portion of an object store, from a first user. Again, the user may be associated with one of the client devices. The request identifies parameters of the compute operation as well as objects against which the compute operation is executed. - In some instances, the
system 110 may assign virtual operating system containers to a user, based upon a request. Thesystem 110 may map objects to the containers that are associated with the user. Typically, these objects are identified by the user in the request. A virtual operating system container performs the compute operation on an object according to the identified parameters of the request. Thesystem 110 may then clear the virtual operating system containers and return the virtual operating system containers to a pool of virtual operating system containers. Additional aspects of thesystem 110 will be described in greater detail below. - It will be understood that a compute-centric object store may be created to operate without the user of virtual operating system (global kernel) or virtual operating system containers. While such an object store would provide advantages such as in-situ computation of data (where objects are processed directly on the object store), the object store may not isolate tenants in the similarly to systems that utilize a virtual operating system and/or virtual operating system containers.
- In these instances, the compute-centric object store may be configured to receiving a request to perform a compute operation on at least a portion of an object store from a first user via a network service, the request identifying parameters of the compute operation. The object store may also execute an operating system process for the objects identified in the request. The operating system process may perform the compute operation on the object according to the identified parameters of the request. Additionally, once the compute operation has been executed, the operating system process may be terminated by the virtual operating system.
- The terms in-situ computation will be understood to include the execution of compute operations against objects in an object store, where the objects not moved or copied from or within the object store.
- In some embodiments, the
system 110 is comprised of ahardware layer 115 that provides a logical interface with at least one or more processors and a memory which stores logic that is executed by the one or more processors. Generally, thehardware layer 115 controls one or more of the hardware components of a computing system, such as thecomputing system 600 ofFIG. 6 , which will be described in greater detail below. By way of non-limiting example, thehardware layer 115 may manage the hardware components of a server blade or another similar device. Thehardware layer 115 provides access to the physical hardware that services a globaloperating system kernel 120 that cooperates with thehardware layer 115. The globaloperating system kernel 120 may also be referred to as a host operating system kernel. - Generally, the global
operating system kernel 120 is configured to administer and manage a pool of guest virtual operating system containers, such ascontainers 125A-N. The containers 125A-N may operate on a distributed object store in a multitenant manner, where multiple containers can operate on the same object store simultaneously. It will be understood that each user is assigned container from the pool, on an as-needed basis. When a container is applied to an object store the container is referred to as a tenant. - According to some embodiments, the
system kernel 120 may be utilized to setup the pool of guest virtual operating system containers. Thesystem kernel 120 may also be configured to provide a command line interpreter interface that allows users to request jobs, execute other operating system implemented applications, and interact with a virtual operating system in a manner that is substantially indistinguishable relative to an operating system executing on a bare metal device. - Generally, a job may be input by a user via a command line interpreter, such as a Unix shell terminal. More specifically, the user may express a computation using the same language as the language used by a Unix shell terminal. The actual request is submitted to the
network service 110B. Indeed, a request may be submitted as an HTTP request to thenetwork service 110B. The body of the request describes the computation to perform in terms of what commands are input into the command line interpreter, which is running within a container. Contrastingly systems that utilize multiple VMs that each comprises an operating system kernel, which are managed by a hypervisor, often require users to construct complex programs or scripts to perform compute operations. Compute operations for traditional VM systems require complex programming due to a complex framework that is used by the hypervisor to coordinate hardware emulation for each of the VMs. - Using the command line interpreter interface, the user may specify one or more desired compute operations that are to be executed against objects (such as object 130) within an
object store 110A (seeFIG. 3 ). It is noteworthy that theobject store 110A may include, for example, a local or distributed object store that maintains contiguous blobs, blocks, or chunks of data. It will be understood that the objects stored in theobject store 110A are complete objects, such as files or other similar data structures. Moreover, the compute operations executed against theobject store 110A may be performed in such a way that partial stores of data are avoided. - In order to perform compute operations on objects for multiple users, the
system kernel 120 may collocatecontainers 125A-N onto theobject store 110A, and execute thecontainers 125A-N simultaneously. InFIG. 3 , a plurality of containers, such ascontainer 125A has been placed onto each of a plurality of objects within theobject store 110A. Thus, a virtual operating system container is assigned to each of the plurality of objects specified in the user request. Most frequently, the assignment of a single container to a single object occurs when the system executes a “map” phase operation. The details of map and reduce phases provide by thesystem 110 will be described in greater detail below. - Broadly speaking, a virtual operating system container may be a lightweight virtualization solution offering a complete and secure user environment that operates on a single global kernel (system kernel 120), providing performance characteristics that are similar to operating systems that operate on bare metal devices. That is, a virtual machine operates on emulated hardware and is subject to control by a hypervisor, which produces computing inefficiencies. A virtual operating system container may operate without the computing inefficiencies of a typical virtual machine.
- In some instances, the
system kernel 120 may utilize a KVM (Kernel Virtual Machine) that improves the efficiency of a virtual operating system, such as the global operating system kernel, by leveraging CPU virtualization extensions to eliminate a substantial majority of the binary translation (i.e., hardware emulation) that is frequently required by VMs. - Turning to
FIG. 2, an exemplary virtual operating system container 125A (FIG. 1) is shown as comprising a quick emulation layer (QEMU) 135, a virtual guest operating system 140, and a compute application 145 that is managed by the virtual guest operating system 140. The QEMU 135 provides hardware emulation and is also a VMM (virtual machine monitor). It is noteworthy that, in some embodiments, the QEMU 135 is not a strict hypervisor layer; rather, each QEMU 135 may be independent. That is, there may be one QEMU 135 per container instead of a single QEMU 135 supporting several VMs. Advantageously, the operations of both a VM and a VMM may be combined into the QEMU 135. - According to some embodiments, the
compute application 145 that is executed may include a primitive O/S compute operation. Exemplary compute operations may include operating system primitive operations, such as query, word count, send, receive, and so forth. Additionally, the operations may comprise more sophisticated operations, such as operations that include audio or video transcoding. Further, in some instances, users may store programs or applications in the object store itself. Users may then execute those programs as part of a compute operation. - In some instances, the compute operations may include one or more phases, such as a map phase followed by a reduce phase. Generally, a map phase may include an operation that is executed against each of a plurality of objects individually, by a plurality of containers. In some instances, a unique container is assigned to each object that is to be processed.
- In contrast, a reduce phase may be executed by a single container against a plurality of objects in a batch manner. Using an example such as word count, it will be assumed that the objects of the
object store 110A may comprise text files. The application 145 may execute a map phase to count the words in each of the text files. The output of the application 145 may be stored in a plurality of output objects that are stored in the object store 110A. A compute application 145 of another container may execute a reduce phase that sums the output objects of the map phase and generates a word count for all objects within the object store 110A.
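- Solely to illustrate the flow described above, the following Python sketch mimics the word-count example in-process; it stands in for the per-container shell commands and is not the claimed implementation:

    # Toy object store: object name -> object contents (text files).
    object_store = {
        "reports/a.txt": "alpha beta gamma",
        "reports/b.txt": "delta epsilon",
        "reports/c.txt": "zeta",
    }

    # Map phase: one container per object counts the words of its object and
    # writes the count to a new output object.
    map_outputs = {}
    for name, text in object_store.items():
        map_outputs[name + ".count"] = len(text.split())
    object_store.update(map_outputs)

    # Reduce phase: a single container sums the map outputs in a batch manner.
    total = sum(map_outputs.values())
    object_store["reports/total.count"] = total
    print(total)   # 6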
- It will be understood that the system kernel 120 may schedule and coordinate the various compute operations (and phases) performed by the compute applications 145 of all containers. In sum, the system kernel 120 may act similarly to a hypervisor that manages the compute operations of the various active containers. Based upon the request input by the user, the system kernel 120 may instruct the containers to perform a series of map functions, as well as reduce functions. The map and reduce functions may be coordinated to produce the desired output specified in the request. - Turning to
FIG. 3, after receiving a request from a user, the system kernel 120 may select a first set of containers, which includes container 125A, from the pool of containers. This container 125A is assigned to the first user. In response to receiving a request from a second user, the system kernel 120 may also select a second set of containers from the pool of containers. - Based upon the request received from the first tenant, the
system kernel 120 may map the first set of containers to a plurality of objects, such as object 130, stored in the object store 110A. Likewise, the system kernel 120 may map a second set of containers to a plurality of different objects stored in the object store 110A for the second user. The objects and containers for the first user may be referred to as a compute zone of the first user, while the objects mapped to the container 125N may be referred to as a compute zone of the second user. The maintenance of compute zones allows the system kernel 120 to provide multitenant access to the object store 110A, even when the first and second users are potentially adversarial. For example, the first and second users may be commercial competitors. For security, the system kernel 120 maintains compute zones in order to balkanize object storage and prevent users from accessing the objects of other users. Additionally, the balkanization of object storage also ensures a fair distribution of resources between users.
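- Purely as an illustrative sketch (the zone bookkeeping shown here is an assumption made for this example, not the disclosed mechanism), the compute-zone isolation described above might be modeled as follows:

    class ComputeZone:
        """Toy per-user compute zone: containers plus the objects they may touch."""

        def __init__(self, user, containers, objects):
            self.user = user
            self.containers = set(containers)
            self.objects = set(objects)

        def check_access(self, obj):
            # A zone may only operate on objects mapped into it.
            if obj not in self.objects:
                raise PermissionError(f"{self.user} may not access {obj}")
            return True

    zone_a = ComputeZone("first-user", ["container-125A"], ["object-130"])
    zone_b = ComputeZone("second-user", ["container-125N"], ["object-131"])

    zone_a.check_access("object-130")       # permitted
    try:
        zone_a.check_access("object-131")   # belongs to the second user's zone
    except PermissionError as err:
        print(err)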
- It will be understood that the system kernel 120 may maintain as many containers and compute zones as are allowed by the processor(s) of the hardware layer 115. Additionally, the system kernel 120 assigns a container to a user on an as-needed basis, meaning that containers are not assigned permanently to a user; a permanent assignment would result in a monopolization of resources when the user is not performing compute operations. -
FIG. 4 illustrates the placement of the container 125A onto the data store 110A. It is understood that the container 125A encircles a plurality of objects in the data store 110A. This mapping of multiple objects to a single container would commonly be seen in a reduce phase, where the container performs a concatenating or summation process on the outputs of individual containers, such as the containers shown in FIG. 3. - Additionally, because the container is placed onto the object store, the
system kernel 120 need not, in some exemplary embodiments, transfer objects from the object store 110A into the container for processing. Advantageously, the container operates directly on the objects of the object store 110A. - According to some embodiments, the
containers 125A-N managed by the system kernel 120 are empty when the containers 125A-N are in the pool. After objects have been mapped to a container, compute operations have been executed by the container on those objects, and a desired output has been generated, the system kernel 120 may clear the container and return the container to the pool. - In some instances, the
system kernel 120 may not generate containers until a request is received from a user. That is, the system kernel 120 may “spin up” or launch containers when a request is received from the user. This minimizes the impact on bare metal resources, such as the CPU, because the system kernel 120 need not even maintain a pool of virtual operating system containers awaiting user requests; maintaining such a pool consumes CPU and memory resources. When the compute operations have been completed, the system kernel 120 may terminate the containers, rather than clearing the containers and returning them to a pool.
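- As an illustrative contrast with the pooled lifecycle described earlier (the function names here are hypothetical, and the "work" is a stand-in for real compute phases), an on-demand lifecycle might be sketched as follows:

    import itertools

    _ids = itertools.count()

    def launch_container():
        # "Spin up" a container on demand when a request arrives.
        return f"container-{next(_ids)}"

    def terminate_container(container):
        # On-demand containers are terminated once the compute operation ends,
        # rather than being cleared and returned to a pool.
        print(f"terminated {container}")

    def handle_request(objects):
        # One container per object (map-style), launched only when needed.
        containers = {obj: launch_container() for obj in objects}
        outputs = {obj: f"output-of-{obj}" for obj in containers}   # placeholder work
        for container in containers.values():
            terminate_container(container)
        return outputs

    print(handle_request(["object-130", "object-131"]))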
- In accordance with the present disclosure, an instruction set architecture may be implemented within the system 110. In some embodiments, the instruction set architecture may specify an application programming interface that allows the system 110 to interact with the distributed object store. - According to some embodiments, the
system 110 communicatively couples with the object store 110A using a services-related application programming interface (SAPI) 155, which provides features such as automatic discovery of object stores, dynamic configuration of object stores, and an API for a user portal. In sum, the SAPI allows users to configure, deploy, and upgrade applications using a set of loosely-coupled, federated services. In some embodiments, the SAPI may include an underlying API and an autoconfig agent, also referred to as a daemon 150. A SAPI client may also be disseminated to clients. It will be understood that the daemon 150 may be associated with a physical node 160 of the object store 110A. - In accordance with some embodiments of the present disclosure, various object stores, such as
object store 110A of FIGS. 3 and 4, comprise a single SAPI zone. It will be understood that the SAPI zone may be stateless and may be configured to write objects into the object store 110A. In addition to storing objects, the SAPI zone may also communicatively couple with a VM API to provision zones and with a network API (NAPI) to reserve network interface controllers (NICs) and look up network universally unique identifiers (UUIDs). - It will be understood that the
SAPI 155 may comprise three main object types: applications, services, and instances. It is noteworthy that an application may comprise one or more services, and each service may comprise one or more instances. Moreover, instances may represent actual object store zones, and such zones inherit zone parameters and metadata from their associated applications and services. - Also, the application, service, and instance information may be used by the compute application of a virtual operating system container that is placed onto an object store. The
daemon 150 may control the operation of the containers operating on the daemon's object store. - Each application, service, and instance may include three sets of properties. For example, “params” may comprise zone parameters such as a zone's RAM size, disk quota, image UUID, and so forth. These parameters are evaluated when a zone is provisioned. Another property comprises “metadata”, which defines metadata available to the
daemon 150. These metadata keys and values form the input of a script template in a configuration manifest (described below). As these values are updated, the daemon 150 may rewrite any configuration that references the changed metadata values. Yet another property comprises “manifests”, which defines a set of configuration manifests that are indexed by name to facilitate inheriting manifests from parent objects. - It is noteworthy that creating applications and services has no effect on running zones. When an instance is created, a zone is provisioned using the above information from its associated application, service, and instance. Stated otherwise, applications and services (e.g., a job or request) may be defined separately from the objects that the applications and services are to be executed against. Thus, a job may be thought of abstractly as a workflow template. Advantageously, when the user requests the execution of a job, only the objects need be defined by the user. The workflow template is then applied against the objects.
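- To make the application/service/instance hierarchy concrete, the following Python sketch builds an illustrative set of SAPI-style objects; the property groupings follow the “params”, “metadata”, and “manifests” sets described above, while the specific names and values are invented for this example:

    # Illustrative SAPI-style objects; all values are invented for this example.
    application = {
        "name": "log-analytics",
        "params": {"image_uuid": "00000000-0000-0000-0000-000000000000"},
        "metadata": {"LOG_LEVEL": "info"},
        "manifests": {},    # configuration manifests, indexed by name
    }

    service = {
        "name": "word-count",
        "application": application["name"],
        "params": {"ram": 256, "quota": 10},    # zone RAM (MB) and disk quota (GB)
        "metadata": {"PHASE": "map"},
        "manifests": {},
    }

    def provision_instance(application, service, overrides=None):
        """Creating an instance provisions a zone that inherits params and metadata."""
        instance = {"params": {}, "metadata": {}}
        for source in (application, service, overrides or {}):
            instance["params"].update(source.get("params", {}))
            instance["metadata"].update(source.get("metadata", {}))
        return instance

    print(provision_instance(application, service, {"metadata": {"PHASE": "reduce"}}))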
- In some embodiments, the
daemon 150 of a zone may be tasked with maintaining configuration inside that zone. The daemon 150 queries the SAPI 155 directly to determine which files to write and where to write them within the object store 110A. - The
daemon 150 uses objects called configuration manifests; those objects describe the contents, location, and semantics of configuration files for a zone. Those manifests contain a script template, which is rendered using the metadata from the associated application, service, and instance.
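- As an illustration only, the rendering of a manifest's script template from metadata might resemble the following Python sketch; the manifest fields, file path, and metadata keys are invented for the example:

    from string import Template

    # A configuration manifest describes the contents, location, and semantics
    # of a configuration file; its template is rendered from metadata.
    manifest = {
        "name": "app-config",
        "path": "/opt/app/etc/app.conf",            # where the rendered file goes
        "template": Template("log_level = $LOG_LEVEL\nphase = $PHASE\n"),
    }

    metadata = {"LOG_LEVEL": "info", "PHASE": "map"}    # from application/service/instance

    def render_manifest(manifest, metadata):
        """Render the script template; the daemon would rewrite the file when metadata changes."""
        return manifest["path"], manifest["template"].substitute(metadata)

    path, contents = render_manifest(manifest, metadata)
    print(path)
    print(contents)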
- When a user provides a request to the system 110, the system kernel 120 may coordinate a compute flow of compute operations that are managed by the daemon 150. That is, the system kernel 120 may receive a request or “job” from a user via a command line interpreter. The request identifies parameters of a compute operation that is to be executed against objects in a distributed object store. For example, a request may include performing a word count operation on a file. - To facilitate compute flow during the compute process, the
system kernel 120 may assign an identifier to the request. This unique identifier allows objects and outputs of compute operations to be correlated to the user. Objects previously stored in the object store may likewise be correlated to the user using the unique identifier. According to some embodiments, the identifier comprises the name of an input object or a job name. This name may be specified by an end user submitting a job/request to the system or may be generated by the system from the request. - The user may also identify objects for the compute operation using, for example, the command line interpreter. An exemplary find object command may include Find|User|Object Store Location, where the Object Store Location defines the object store that includes the object(s) that are necessary for execution of the compute operation.
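- The pipe-separated find command above is only an exemplar; as a rough illustration, it might be parsed and the located objects tagged with the job identifier as in the following sketch, where the parsing rules, location name, and metadata key are assumptions:

    def parse_find_command(command):
        """Split the exemplary 'Find|User|Object Store Location' form into its parts."""
        verb, user, location = command.split("|")
        return {"verb": verb, "user": user, "location": location}

    def tag_with_job_id(objects, job_id):
        # The identifier is appended to each located object as metadata so that
        # objects and outputs can be correlated to the user's request.
        return [{"name": name, "metadata": {"job_id": job_id}} for name in objects]

    request = parse_find_command("Find|alice|store-110A")
    located = ["logs/day1.txt", "logs/day2.txt"]          # objects found at that location
    print(request)
    print(tag_with_job_id(located, job_id="alice-wordcount-001"))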
- In some instances, the
system kernel 120 may query various daemons of object stores to locate the objects within the distributed object store. After the objects have been located, the system kernel 120 may generate a set of tasks (e.g., an instruction set) that defines the various compute operations that are to be performed by the daemon of the located object store. In the example provided above, the set of tasks may include only one word count task that is provided to a single daemon of an object store (e.g., a physical node). This relatively simple compute operation does not require coordination or scheduling of operations across multiple objects. - The
daemon 150 may provide instructions to one or more virtual operating system containers that are placed onto the object store by the system kernel 120. That is, the instruction sets provided to the containers are based upon the task assigned to the daemon 150 by the system kernel 120. - In some instances, the set of tasks may include a more complex arrangement of operations that are executed against a plurality of object stores. The
system kernel 120 may interact with the daemon to coordinate processing of these objects in a specified order. - Additionally, the set of tasks may define various map phases that are to be executed on the objects of the object store, as well as various reduce phases that are executed on the outputs of the map phases. It will be understood that objects within the workflow may be tracked and correlated together using the identifier. For example, if an instruction set passed to a daemon requires performing a word count compute operation on 100 text files, each of the objects of the compute operation would be correlated using the identifier. Thus, the output of the compute operation would comprise 100 objects, each of which includes a word count value for its corresponding text file. The identifier may be appended to each object as metadata.
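- As a purely illustrative sketch of what such a set of tasks might look like when handed to a daemon (the field names are assumptions for this example, not the claimed instruction set), consider:

    # A toy "set of tasks" (instruction set) for a daemon: each task names a
    # phase, the shell-level command to run, and the objects it applies to.
    job_id = "job-0001"

    task_set = [
        {
            "job_id": job_id,
            "phase": "map",
            "exec": "wc -w",                       # run once per input object
            "inputs": [f"texts/file-{i}.txt" for i in range(3)],
        },
        {
            "job_id": job_id,
            "phase": "reduce",
            "exec": "awk '{s+=$1} END {print s}'", # run once over all map outputs
            "inputs": "outputs-of-previous-phase",
        },
    ]

    # The daemon would walk the tasks in order, assigning a container per input
    # object for map phases and a single container for reduce phases.
    for task in task_set:
        print(task["phase"], "->", task["exec"])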
- It will also be understood that a map phase may result in multiple outputs generated from a single input object. For example, assume that usage logs for a computing device are stored for a 24-hour time period. To determine hourly usage rates, the 24-hour log object may be separated into 24 distinct objects. Thus, the map phase may receive the 24-hour log object and split it into the constituent output objects to complete the map phase.
- It will be understood that a more complex request may require a more complicated set of tasks (e.g., phases). For example, if the user desires to look at all 11 p.m. to 12 a.m. user logs for a plurality of computing devices, the set of tasks may require not only the map task in which a single input object is processed into multiple objects, but also a reduce phase that sums a plurality of 11 p.m. to 12 a.m. user logs from the plurality of devices.
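- To ground the hourly-log example, the following Python sketch (with an invented log format) splits a per-device 24-hour log in a map phase and then aggregates the 11 p.m. hour across devices in a reduce phase:

    # Invented log format: one "hour=N count=M" line per hour of the day.
    device_logs = {
        "device-1/day.log": "\n".join(f"hour={h} count={h * 2}" for h in range(24)),
        "device-2/day.log": "\n".join(f"hour={h} count={h * 3}" for h in range(24)),
    }

    def map_split_hours(name, log_text):
        """Map phase: split one 24-hour log object into 24 hourly output objects."""
        outputs = {}
        for line in log_text.splitlines():
            hour = int(line.split()[0].split("=")[1])
            outputs[f"{name}.hour-{hour:02d}"] = line
        return outputs

    hourly_objects = {}
    for name, text in device_logs.items():
        hourly_objects.update(map_split_hours(name, text))

    # Reduce phase: sum the 11 p.m. (hour 23) objects across all devices.
    total_2300 = sum(
        int(line.split()[1].split("=")[1])
        for obj, line in hourly_objects.items()
        if obj.endswith("hour-23")
    )
    print(total_2300)   # 46 + 69 = 115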
- In sum, the
system kernel 120 will provide a daemon with tasks that include a map phase for generating the hourly increment logs from various input objects. Additionally, the tasks inform the daemon to return the output objects, which may be stored as an aggregate 11 p.m. to 12 a.m. log object within the object store. - It will be understood that the daemon of a physical node (e.g., an object store) may control execution of compute operations by the one or more virtual operating system containers that are placed onto the object store via the
system kernel 120. - Thus, it will be appreciated that intermediate output objects may not be output to the user directly, but may be fed back into the system for additional processing, such as with the map and reduce phases described above. Moreover, the set of tasks generated by the
system kernel 120 may include any number of map phases and/or reduce phases, which vary according to the steps required to produce the desired output. -
FIG. 5 is a flowchart of an exemplary method 500 for executing a compute flow using a set of tasks, according to an instruction set architecture. The method 500 may include a step 505 of receiving a request from a user, the request identifying parameters of a compute operation that is to be executed against objects in a distributed object store. - According to some embodiments, the method may include a step 510 of locating one or more objects within the distributed object store, the one or more objects being stored on one or more physical nodes. Once objects have been located, the method may include a
step 515 of generating a set of tasks from the request that comprises instructions for a daemon. Again, the set of tasks may define applications, services, and instances that can be utilized by virtual operating system containers. It will be understood that the set of tasks comprises an instruction set that is a translation of the user request into meaningful input that can be executed by one or more virtual operating system containers. - Upon generating the set of tasks, the method includes a
step 520 of providing the set of tasks to a daemon of the physical node. Again, the daemon controls execution of the compute operation by one or more virtual operating system containers based upon the set of tasks. The daemon functions as a virtual operating system hypervisor that coordinates the operation and execution of the containers. - Finally, the method includes a
step 525 of storing an output of the virtual operating system container in the distributed object store.
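- Tying steps 505-525 together, an illustrative and intentionally simplified Python sketch of the overall compute flow might read as follows; every function here is a stand-in for the corresponding step rather than the disclosed implementation:

    def receive_request():
        # Step 505: the request names the compute operation and its target objects.
        return {"operation": "wc -w", "objects": ["texts/a.txt", "texts/b.txt"]}

    def locate_objects(names):
        # Step 510: map each object to the physical node (daemon) that stores it.
        return {name: "node-160" for name in names}

    def generate_tasks(request, locations):
        # Step 515: translate the request into an instruction set for the daemons.
        return [{"node": node, "exec": request["operation"], "object": obj}
                for obj, node in locations.items()]

    def run_on_daemon(task):
        # Step 520: the daemon has a container execute the task against its object.
        return f"output-of({task['exec']} {task['object']})"

    def store_outputs(outputs):
        # Step 525: outputs are written back into the distributed object store.
        return {f"results/{i}": out for i, out in enumerate(outputs)}

    request = receive_request()
    tasks = generate_tasks(request, locate_objects(request["objects"]))
    print(store_outputs([run_on_daemon(t) for t in tasks]))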
- FIG. 6 illustrates an exemplary computing system 600 that may be used to implement an embodiment of the present systems and methods. The system 600 of FIG. 6 may be implemented in the context of computing systems, networks, servers, or combinations thereof. The computing system 600 of FIG. 6 includes one or more processors 610 and main memory 620. Main memory 620 stores, in part, instructions and data for execution by processor 610. Main memory 620 may store the executable code when in operation. The system 600 of FIG. 6 further includes a mass storage device 630, a portable storage device 640, output devices 650, user input devices 660, a display system 670, and peripheral devices 680. - The components shown in
FIG. 6 are depicted as being connected via a single bus 690. The components may be connected through one or more data transport means. Processor unit 610 and main memory 620 may be connected via a local microprocessor bus, and the mass storage device 630, peripheral device(s) 680, portable storage device 640, and display system 670 may be connected via one or more input/output (I/O) buses. -
Mass storage device 630, which may be implemented with a magnetic disk drive or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 610. Mass storage device 630 may store the system software for implementing embodiments of the present technology for purposes of loading that software into main memory 620. -
Portable storage device 640 operates in conjunction with a portable non-volatile storage medium, such as a floppy disk, compact disk, digital video disc, or USB storage device, to input and output data and code to and from the computer system 600 of FIG. 6. The system software for implementing embodiments of the present technology may be stored on such a portable medium and input to the computer system 600 via the portable storage device 640. -
User input devices 660 provide a portion of a user interface. User input devices 660 may include an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, a stylus, or cursor direction keys. Additional user input devices 660 may comprise, but are not limited to, devices such as speech recognition systems, facial recognition systems, motion-based input systems, gesture-based systems, and so forth. For example, user input devices 660 may include a touchscreen. Additionally, the system 600 as shown in FIG. 6 includes output devices 650. Suitable output devices include speakers, printers, network interfaces, and monitors. -
Display system 670 may include a liquid crystal display (LCD) or other suitable display device. Display system 670 receives textual and graphical information, and processes the information for output to the display device. - Peripheral device(s) 680 may include any type of computer support device to add additional functionality to the computer system. Peripheral device(s) 680 may include a modem or a router.
- The components provided in the
computer system 600 of FIG. 6 are those typically found in computer systems that may be suitable for use with embodiments of the present technology and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computer system 600 of FIG. 6 may be a personal computer, handheld computing system, telephone, mobile computing system, workstation, server, minicomputer, mainframe computer, or any other computing system. The computer may also include different bus configurations, networked platforms, multi-processor platforms, etc. Various operating systems may be used, including Unix, Linux, Windows, Mac OS, Palm OS, Android, iOS (known as iPhone OS before June 2010), QNX, and other suitable operating systems. - It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the systems and methods provided herein. Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU), a processor, a microcontroller, or the like. Such media may take forms including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic storage medium, a CD-ROM disk, a digital video disk (DVD), any other optical storage medium, RAM, PROM, EPROM, a FLASH EPROM, and any other memory chip or cartridge. -
- Computer program code for carrying out operations for aspects of the present technology may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be coupled with the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present technology in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present technology. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the present technology for various embodiments with various modifications as are suited to the particular use contemplated.
- Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
- While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
Claims (19)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/831,349 US8826279B1 (en) | 2013-03-14 | 2013-03-14 | Instruction set architecture for compute-based object stores |
Publications (2)
Publication Number | Publication Date |
---|---|
US8826279B1 US8826279B1 (en) | 2014-09-02 |
US20140282513A1 true US20140282513A1 (en) | 2014-09-18 |
Family
ID=51400213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/831,349 Active US8826279B1 (en) | 2013-03-14 | 2013-03-14 | Instruction set architecture for compute-based object stores |
Country Status (1)
Country | Link |
---|---|
US (1) | US8826279B1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9424089B2 (en) * | 2012-01-24 | 2016-08-23 | Samsung Electronics Co., Ltd. | Hardware acceleration of web applications |
US10585712B2 (en) | 2017-05-31 | 2020-03-10 | International Business Machines Corporation | Optimizing a workflow of a storlet architecture |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8959217B2 (en) | 2010-01-15 | 2015-02-17 | Joyent, Inc. | Managing workloads and hardware resources in a cloud resource |
US9021046B2 (en) | 2010-01-15 | 2015-04-28 | Joyent, Inc | Provisioning server resources in a cloud resource |
US8881279B2 (en) | 2013-03-14 | 2014-11-04 | Joyent, Inc. | Systems and methods for zone-based intrusion detection |
US8943284B2 (en) | 2013-03-14 | 2015-01-27 | Joyent, Inc. | Systems and methods for integrating compute resources in a storage area network |
US9104456B2 (en) | 2013-03-14 | 2015-08-11 | Joyent, Inc. | Zone management of compute-centric object stores |
US9582327B2 (en) | 2013-03-14 | 2017-02-28 | Joyent, Inc. | Compute-centric object stores and methods of use |
US8898205B2 (en) | 2013-03-15 | 2014-11-25 | Joyent, Inc. | Object store management operations within compute-centric object stores |
US9075818B2 (en) | 2013-03-15 | 2015-07-07 | Joyent, Inc. | Object store management operations within compute-centric object stores |
US9092238B2 (en) | 2013-03-15 | 2015-07-28 | Joyent, Inc. | Versioning schemes for compute-centric object stores |
US9792290B2 (en) | 2013-03-15 | 2017-10-17 | Joyent, Inc. | Object store management operations within compute-centric object stores |
US9740705B2 (en) | 2015-12-04 | 2017-08-22 | International Business Machines Corporation | Storlet workflow optimization leveraging clustered file system roles |
Also Published As
Publication number | Publication date |
---|---|
US8826279B1 (en) | 2014-09-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8677359B1 (en) | Compute-centric object stores and methods of use | |
US8826279B1 (en) | Instruction set architecture for compute-based object stores | |
US9104456B2 (en) | Zone management of compute-centric object stores | |
US9891942B2 (en) | Maintaining virtual machines for cloud-based operators in a streaming application in a ready state | |
US9792290B2 (en) | Object store management operations within compute-centric object stores | |
US9983863B2 (en) | Method to optimize provisioning time with dynamically generated virtual disk contents | |
US9628353B2 (en) | Using cloud resources to improve performance of a streaming application | |
US9710292B2 (en) | Allowing management of a virtual machine by multiple cloud providers | |
US20180267824A1 (en) | Replicating a virtual machine implementing parallel operators in a streaming application based on performance | |
US20150334039A1 (en) | Bursting cloud resources to affect state change performance | |
US9407523B2 (en) | Increasing performance of a streaming application by running experimental permutations | |
US9338229B2 (en) | Relocating an application from a device to a server | |
US20150373078A1 (en) | On-demand helper operator for a streaming application | |
US20160191617A1 (en) | Relocating an embedded cloud for fast configuration of a cloud computing environment | |
US20150134774A1 (en) | Sharing of portable initialized objects between computing platforms | |
Chapke | Auto Provisioning Portal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: JOYENT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PACHECO, DAVID;CAVAGE, MARK;XIAO, YUNONG;AND OTHERS;SIGNING DATES FROM 20130313 TO 20130314;REEL/FRAME:030397/0070 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551) Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |