EP3042305A1 - Selective resource migration - Google Patents

Selective resource migration (Migration sélective de ressources)

Info

Publication number
EP3042305A1
Authority
EP
European Patent Office
Prior art keywords
node
resource
physical
virtual
core
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP13893153.0A
Other languages
German (de)
English (en)
Other versions
EP3042305A4 (fr)
Inventor
Isaac R. Nassi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TidalScale Inc
Original Assignee
TidalScale Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TidalScale Inc filed Critical TidalScale Inc
Publication of EP3042305A1 publication Critical patent/EP3042305A1/fr
Publication of EP3042305A4 publication Critical patent/EP3042305A4/fr

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 - Techniques for rebalancing the load in a distributed system involving task migration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5077 - Logical partitioning of resources; Management or configuration of virtualized resources

Definitions

  • Figure 1 illustrates an embodiment of a computer system.
  • Figure 2 illustrates the physical structure of the computer system as a hierarchy.
  • Figure 3A depicts a virtualized computing environment in which multiple virtual machines (with respective multiple guest operating systems) run on a single physical machine.
  • Figure 3B depicts a virtualized computing environment in which multiple physical machines collectively run a single virtual operating system.
  • Figure 4A depicts an example of a software stack.
  • Figure 4B depicts an example of a software stack.
  • Figure 5 depicts an example of an operating system's view of hardware on an example system.
  • Figure 6A depicts an example of a hyperthread's view of hardware on a single node.
  • Figure 6B depicts an example of a HyperKernel's view of hardware on an example system.
  • Figure 7 depicts an example of an operating system's view of hardware on an example of an enterprise supercomputer system.
  • Figure 8 illustrates an embodiment of a process for selectively migrating resources.
  • Figure 9 illustrates an embodiment of a process for performing hierarchical dynamic scheduling.
  • Figure 10 illustrates an example of an initial memory assignment and processor assignment.
  • Figure 11 illustrates an updated view of the memory assignment and an unchanged view of the processor assignment.
  • Figure 12 illustrates a memory assignment and an updated view of the processor assignment.
  • the invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor.
  • these implementations, or any other form that the invention may take, may be referred to as techniques.
  • the order of the steps of disclosed processes may be altered within the scope of the invention.
  • a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task.
  • the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
  • FIG. 1 illustrates an embodiment of a computer system.
  • System 100 is also referred to herein as an "enterprise supercomputer” and a “mainframe.”
  • system 100 includes a plurality of nodes (e.g., nodes 102-108) located in close proximity (e.g., located within the same rack).
  • multiple racks of nodes (e.g., located within the same facility) can also be used.
  • the techniques described herein can also be used in conjunction with distributed systems.
  • the nodes are interconnected with a high-speed interconnect (110) such as 10- gigabit Ethernet, direct PCI-to-PCI, and/or InfiniBand.
  • Each node comprises commodity server- class hardware components (e.g., a blade in a rack with its attached or contained peripherals).
  • each node includes multiple physical processor chips.
  • Each physical processor chip (also referred to as a "socket") includes multiple cores, and each core has multiple hyperthreads.
  • each enterprise supercomputer (e.g., system 100) runs a single instance of an operating system.
  • Both the operating system, and any applications, can be standard commercially available software and can run on system 100.
  • the operating system is Linux, however other operating systems can also be used, such as Microsoft Windows, Mac OS X, or FreeBSD.
  • multiple virtual machines may run on a single physical machine. This scenario is depicted in Figure 3A.
  • three virtual machines (302-306) are running three guest operating systems on a single physical machine (308), which has its own host operating system.
  • multiple physical machines (354-358) collectively run a single virtual operating system (352), as depicted in Figure 3B.
  • One example of a software stack is depicted in Figure 4A. Such a stack may typically be used in traditional computing environments.
  • an application (402) sits above a database engine (404), which in turn sits upon an operating system (406), underneath which lies hardware (408).
  • Figure 4B depicts a software stack used in some embodiments.
  • an application (452) sits above a database engine (454), which in turn sits upon an operating system (456).
  • also present in the stack is a layer of software, referred to herein as a HyperKernel, that observes the system running in real time and optimizes the system resources to match the needs of the system as it operates.
  • the HyperKernel conceptually unifies the RAM, processors, and I/O (input/output resources, for example storage and networking resources) of a set of commodity servers, and presents that unified set to the operating system. Because of this abstraction, the operating system will have the view of a single large computer, containing an aggregated set of processors, memory, and I/O.
  • the HyperKernel optimizes use of resources.
  • the HyperKernel can also help optimize other I/O system resources such as networks and storage.
  • the HyperKernel can also make use of performance indicators (hints) received from upper layers (e.g., database management systems).
  • the HyperKernel can be ported to all major microprocessors, memory, interconnect, persistent storage, and networking architectures. Further, as hardware technology evolves (e.g., with new processors, new memory technology, new interconnects, and so forth), the HyperKernel can be modified as needed to take advantage of industry evolution.
  • operating system 456 is running collectively across a series of nodes (458-462), each of which has a HyperKernel running on server hardware. Specifically, the operating system is running on a virtual environment that is defined by the collection of HyperKernels. As will be described in more detail below, the view for operating system 456 is that it is running on a single hardware platform that includes all of the hardware resources of the individual nodes 458-462. Thus, if each of the nodes includes 1 TB of RAM, the operating system will have as a view that it is running on a hardware platform that includes 3 TB of RAM. Other resources, such as processing power and I/O resources, can similarly be collectively made available to the operating system's view.
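
To make the aggregation concrete, the following is a minimal sketch (illustrative only; the Node class and field names are assumptions, not part of the patent) of how per-node resources could be summed into the single large machine presented to the guest operating system:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Physical resources contributed by one commodity server."""
    ram_bytes: int
    hyperthreads: int

def guest_view(nodes):
    """Aggregate per-node resources into the single 'large computer'
    view that the guest operating system is presented with."""
    return {
        "ram_bytes": sum(n.ram_bytes for n in nodes),
        "virtualized_processors": sum(n.hyperthreads for n in nodes),
    }

# Three nodes with 1 TB of RAM and four hyperthreads each: the guest
# sees a single platform with 3 TB of RAM and twelve virtualized processors.
nodes = [Node(ram_bytes=2**40, hyperthreads=4) for _ in range(3)]
print(guest_view(nodes))
```
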
  • Figure 5 depicts an example of an operating system's view of hardware on an example system.
  • operating system (502) runs on top of processors 504-508 and physical shared memory 510.
  • an operating system can run on either a traditional computing system or on an enterprise supercomputer such as is shown in Figure 1. In either case, the view of the operating system will be that it has access to processors 504-508 and physical shared memory 510.
  • Figure 6A depicts an example of a hyperthread's view of hardware on a single node.
  • a node has four hyperthreads denoted H1 (602) through H4 (608). Each hyperthread can access all portions of physical shared memory 612. Physical shared memory 612 is linear, labeled location 0 through a maximum amount, "max." The node also includes three levels of cache (610).
  • FIG. 6B depicts an example of a HyperKernel's view of hardware on an example system.
  • three nodes (652-656) are included in an enterprise supercomputer.
  • Each of the three nodes has four hyperthreads, a physical shared memory, and cache (i.e., each node is an embodiment of node 600 shown in Figure 6A).
  • a hyperthread on a given node (e.g., node 652) has a view that is the same as that shown in Figure 6A.
  • the HyperKernel is aware of all of the resources on all of the nodes (i.e., the HyperKernel sees twelve hyperthreads, and all of the physical shared memory).
  • a given hyperthread (e.g., hyperthread 658, "1-4") is labeled with its node number (e.g., "1") followed by a hyperthread number (e.g., "4").
  • Figure 7 depicts an example of an operating system's view of hardware on an example of an enterprise supercomputer system.
  • the operating system sees a plurality of "virtualized processors," denoted in Figure 7 as P1 through Pmax (702).
  • the virtualized processors correspond to the total number of hyperthreads across all nodes included in the enterprise supercomputer. Thus, using the example of Figure 6B, if a total of twelve hyperthreads are present across three nodes, a total of twelve virtualized processors would be visible to an operating system running on the enterprise supercomputer. The operating system also sees "virtualized physical memory" (704) that appears to be a large, physical, linear memory of a size equal to the total amount of physical memory across all nodes.
  • as will be described in more detail below, the HyperKernel dynamically optimizes the use of cache memory and virtual processor placement based on its observations of the system as it is running.
  • a “virtual processor” is a computing engine known to its guest operating system, i.e., one that has some operating system context or state.
  • a "shadow processor" is an anonymous virtual processor, i.e., one that had been a virtual processor but has now given up its operating system context and has context known only to the HyperKernel.
  • each node has an array of memory addresses representing locations in memory.
  • in a physical configuration with three nodes (e.g., as depicted in Figure 6B), there are three memory locations each of which has address 0x123456.
  • in the virtual configuration, by contrast, all memory addresses are unique and represent the sum total of all memory contained in those three nodes.
  • all memory is shared, and all memory caches are coherent.
  • memory is further subdivided into a series of contiguous blocks, with monotonically increasing memory addresses.
  • each page has 4K bytes of memory, however, other subdivisions can also be used, as applicable.
  • the term "blocks" is used herein to describe contiguous arrays of memory locations. In some embodiments, the "blocks" are "pages."
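
As a small worked example of the subdivision into 4K-byte blocks described above (the helper name below is an assumption for illustration), an address can be reduced to the number of the block (page) that contains it by dropping the low-order offset bits:

```python
PAGE_SIZE = 4096    # 4K-byte pages, as in the embodiment described above
PAGE_SHIFT = 12     # log2(4096)

def block_number(address: int) -> int:
    """Return the number of the contiguous 4 KB block (page) containing `address`."""
    return address >> PAGE_SHIFT

# Address 0x123456 falls in block 0x123, at offset 0x456 within that block.
assert block_number(0x123456) == 0x123
print(hex(block_number(0x123456)), hex(0x123456 & (PAGE_SIZE - 1)))
```
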
  • a virtual processor (e.g., virtual processor 706 of Figure 7), as seen by the operating system, is implemented on a hyperthread in the physical configuration, but can be location independent. Thus, while the operating system thinks it has 500 processors running on a single physical server, in actuality it might have 5 nodes of 100 processors each. (Or, as is shown in Figure 6B, the operating system will think it has twelve processors running on a single physical server.)
  • the computation running on a virtual processor is described either by the physical configuration on a hyperthread when the computation is running, or in a "continuation," when the virtual processor is not running (i.e., the state of an interrupted or stalled computation).
  • Has processor state (i.e., saved registers, etc.).
  • Has a set of performance indicators that guide a scheduler object with information about how to intelligently assign continuations to leaf nodes for execution.
  • Has a virtual-processor identifier that indicates the processor the operating system thinks is the physical processor to which this continuation is assigned.
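
Gathering the fields listed above into one place, a continuation could be represented roughly as follows. This is a hedged sketch with assumed field names, not the patent's actual layout; the status values anticipate the "waiting-for-event"/"ready" states discussed later.

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class Continuation:
    """State of an interrupted or stalled computation."""
    processor_state: Dict[str, Any]           # saved registers, etc.
    performance_indicators: Dict[str, float]  # hints for scheduler objects
    virtual_processor_id: int                 # processor the OS believes it is on
    status: str = "waiting-for-event"         # becomes "ready" when the event fires
    waiting_on: Any = None                    # the stalling event, if any

stalled = Continuation(
    processor_state={"rip": 0x7F00, "rsp": 0x9000},
    performance_indicators={"recent_cache_locality": 0.8},
    virtual_processor_id=17,
    waiting_on="block-0x123-arrived",
)
stalled.status = "ready"   # e.g., once the missing resource has arrived
```
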
  • I/O systems observe a similar paradigm to processors and memory.
  • Devices have a physical address in the physical configuration and virtual addresses in the virtual configuration.
  • the I/O devices used will likely perform better if they are co-located with the memory with which they are associated, and can be moved accordingly.
  • Resource maps are used to translate between virtual and physical configurations.
  • a "physical resource map” is a table that describes the physical resources that are available on each node. It contains, for example, the number and type of the processors on each node, the devices, the memory available and its range of physical addresses, etc. In some embodiments, this table is read-only and is fixed at boot time.
  • An "initial virtual resource map" is fixed prior to the booting of the operating system and describes the virtual resources that are available from the point of view of the operating system.
  • the configuration is readable by the operating system.
  • a "current resource map" is created and maintained by each HyperKernel instance.
  • This map describes the current mapping between the virtual resource map and the physical resource map from the point of view of each node. For each entry in the virtual resource map, a definition of the physical resources currently assigned to the virtual resources is maintained. Initially (e.g., at boot time), the current resource map is a copy of the initial virtual resource map.
  • the HyperKernel modifies the current resource map over time as it observes the characteristics of the resource load and dynamically changes the mapping of physical resources to virtual resources (and vice-versa). For example, the definition of the location of the Ethernet controller eth27 in the virtualized machine may at different times refer to different hardware controllers.
  • the current resource map is used by the HyperKernel to dynamically modify the virtual hardware resource mappings, such as the virtual memory subsystem, as required.
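
A minimal sketch of the three maps just described, with assumed keys and values: a read-only physical resource map fixed at boot, an initial virtual resource map visible to the guest operating system, and a per-node current resource map that starts as a copy of the initial map and is updated as the HyperKernel remaps virtual resources to different physical hardware.

```python
import copy

# Physical resource map: what each node actually has. Fixed at boot, read-only.
PHYSICAL_RESOURCE_MAP = {
    "node0": {"sockets": 2, "ram_bytes": 2**40, "devices": ["eth0"]},
    "node1": {"sockets": 2, "ram_bytes": 2**40, "devices": ["eth0"]},
}

# Initial virtual resource map: the guest's view, fixed before the guest boots.
INITIAL_VIRTUAL_RESOURCE_MAP = {
    "eth27": ("node1", "eth0"),   # virtual Ethernet controller and its backing hardware
}

# Each HyperKernel instance keeps its own current resource map, which
# starts out as a copy of the initial virtual resource map.
current_resource_map = copy.deepcopy(INITIAL_VIRTUAL_RESOURCE_MAP)

# Later, the HyperKernel may rebind a virtual resource to different physical
# hardware, e.g. the definition of eth27 now refers to node0's controller.
current_resource_map["eth27"] = ("node0", "eth0")
print(current_resource_map)
```
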
  • virtualized resources can be migrated between physical locations.
  • the operating system is provided with information about the virtualized system, but that information need not agree with the physical system.
  • there may be one or more other cores in other nodes (e.g., "node3") that are also trying to access the same block of memory as needed by node2 above.
  • Node3 might be attempting to access the same data, or it might be accessing different data contained in the memory that was moved (also referred to as "false sharing").
  • the data could be moved to node3, but if the core on node2 asks for the data a second time, the data would need to be moved back to node2 (i.e., potentially moving the data back and forth repeatedly), which can be slow and wasteful.
  • One way to avoid moving data back and forth between cores is to recognize that both cores and the associated block of data should be co-located. Using the techniques described herein, the memory and the computation can be migrated so that they reside on the same node. Doing so will result in a higher likelihood of faster access to data, and a higher probability of sharing data stored in local caches.
  • an event is triggered (in a system-dependent way) to which the HyperKernel responds.
  • One example of how such an event can be handled is by the invocation of a panic routine. Other approaches can also be used, as applicable.
  • the HyperKernel examines the cause of the event and determines an appropriate strategy (e.g., a low-level transaction) for handling the event.
  • one way to handle the event is for one or more blocks of HyperKernel virtualized memory to be transferred from one node's memory to another node's memory. The transfer would then be initiated and the corresponding resource maps would be updated.
  • a continuation would be built poised to be placed in a local table in shared memory called the event table (discussed below) so that the next thing the continuation does when it is resumed would be to return control to the operating system after the transfer is completed.
  • a decision could also be made to move the virtual processor to the node that contains the memory being requested or to move the virtualized memory (and its virtualized memory address) from one node to another.
  • the HyperKernel makes three decisions when handling an event: which (virtual) resources should move, when to move them, and to where (in terms of physical locations) they should move.
  • the physical hierarchical structure depicted in Figure 2 has an analogous software hierarchy comprising a set of "scheduler objects” (i.e., data structures), each of which has a set of characteristics described below.
  • the scheduler objects form a "TidalTree,” which is an in-memory tree data structure in which each node of the tree is a scheduler object.
  • Each scheduler object corresponds to an element of the physical structure of the supercomputer (but not necessarily vice versa), so there is one node for the entire machine (e.g., node 100 as shown in Figure 2), one node for each physical node of the system (e.g., node 102 as shown in Figure 2), one node for each multicore socket on the physical nodes that comprise the entire machine (e.g., node 202 as shown in Figure 2), one node for each core of each socket (e.g., node 210 as shown in Figure 2), and one node for each hyperthread on that core (e.g., node 232 as shown in Figure 2).
  • Each scheduler object s:
  • Is associated with a physical component (e.g., rack, blade, socket, core, or hyperthread).
  • Has a set of children, each of which is a scheduler object. This is the null set for a leaf (e.g., hyperthread) node. As explained in more detail below, it is the responsibility of a scheduler object s to manage and assign (or re-assign) work to its children, and indirectly to its grandchildren, etc. (i.e., s manages all nodes in the subtree rooted at s).
  • Has a (possibly empty) set of I/O devices for which it also has the responsibility to manage and assign (or re-assign) work.
  • Each node can potentially be associated with a layer of some form of cache memory.
  • Cache hierarchy follows the hierarchy of the tree in the sense that the higher the scheduler object is, the slower it will usually be for computations to efficiently utilize caches at the corresponding level of hierarchy.
  • the cache of a scheduler object corresponding to a physical node can be a cache of memory corresponding to that node.
  • the memory on the physical node can be thought of as a cache of the memory of the virtual machine.
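
The TidalTree structure described above can be pictured with a small tree-building sketch; the class name, per-level counts, and the wait_queue field (which anticipates the scheduling discussion below) are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SchedulerObject:
    """One node of the TidalTree; corresponds to a physical component."""
    level: str                                                 # machine, node, socket, core, hyperthread
    children: List["SchedulerObject"] = field(default_factory=list)
    wait_queue: List[object] = field(default_factory=list)     # "ready" continuations

def build_tidal_tree(nodes=3, sockets=2, cores=2, hyperthreads=2) -> SchedulerObject:
    root = SchedulerObject("machine")
    for _ in range(nodes):
        n = SchedulerObject("node")
        for _ in range(sockets):
            s = SchedulerObject("socket")
            for _ in range(cores):
                c = SchedulerObject("core")
                c.children = [SchedulerObject("hyperthread") for _ in range(hyperthreads)]
                s.children.append(c)
            n.children.append(s)
        root.children.append(n)
    return root

tree = build_tidal_tree()
leaves = [h for n in tree.children for s in n.children for c in s.children for h in c.children]
print(len(leaves))   # 24 leaf scheduler objects (hyperthreads) in this example
```
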
  • the HyperKernel simulates part of the virtual hardware on which the virtual configuration resides. It is an event-driven architecture, fielding not only translated physical hardware events, but soft events, such as receipt of inter-node HyperKernel messages generated by HyperKernel code running on other nodes.
  • the HyperKernel makes a decision of how to respond to the interrupt. Before control is returned to the operating system, any higher priority interrupts are recognized and appropriate actions are taken. Also as explained above, the HyperKernel can make three separate decisions: (1) which resources to migrate upon certain events, (2) when to migrate them, and (3) to where those resources should move.
  • in the following example, suppose a scheduler object "s" in a virtual machine is in steady state. Each scheduler object corresponding to a physical node has a set of physical processor sockets assigned to it. Hyperthreads in these sockets may or may not be busy.
  • the physical node also has some fixed amount of main memory and a set of I/O devices, including some network devices.
  • Scheduler object s, when corresponding to a node, is also responsible for managing the networks and other I/O devices assigned to nodes in the subtree rooted at s. The following is a description of how resources can migrate upon either synchronous or asynchronous events.
  • Leaf node scheduler object s, and its corresponding virtual processor p, are assumed to be executing an application or operating system code on behalf of an application. Assuming the leaf node is not in an infinite loop, p will eventually run out of work to do (i.e., stall) for some reason (e.g., waiting for completion of an I/O operation, page fault, etc.). Instead of allowing p to actually stall, the HyperKernel decides whether to move the information about the stalled computation to some other node, making one of that other node's processors "responsible" for the stalled continuation, or to keep the "responsibility" of the stalled computation on the node and instead move the relevant resources to the current node.
  • the stall is thus handled in either of two ways: either the computation is moved to the physical node that currently has the resource, or else the resource is moved to the physical node that has requested the resource.
  • Example pseudo code for the handling of a stall is provided below (as the "OnStall” routine) in the "EXAMPLE ROUTINES" section below.
  • Decisions such as how to handle a stall can be dependent on many things, such as the order of arrival of events, the state of the computation running on the virtual machine, the state of the caches, the load on the system or node, and many other things. Decisions are made dynamically, i.e., based on the best information available at any given point in time.
  • a continuation has a status that can be, for example, "waiting-for-event” or "ready.”
  • a stalled computation gets recorded as a newly created continuation with status "waiting-for-event.”
  • the status of the corresponding continuation is changed to "ready.”
  • Each continuation with status "ready” is stored in a "wait queue” of a scheduler object so that eventually it gets scheduled for execution.
  • any continuation with status "waiting-for-event” will not be stored in any scheduler object's wait queue. Instead, it is stored in the local shared memory of the physical node where the hardware event that stalled the corresponding computation is expected to occur, such as receipt of a missing resource.
  • the newly created continuation is associated with the stalling event that caused its creation.
  • This mapping between (stalling) events and continuations awaiting these events permits fast dispatch of asynchronous events (see the "handleEvent” described below).
  • the mapping between continuations and events is stored in a table called “event table” and is kept in the shared memory of the corresponding physical node.
  • Each physical node has its own event table, and an event table of a physical node is directly addressable by every core on that physical node. All anticipated events recorded in an event table of a physical node correspond to hardware events that can occur on that physical node.
  • the scheduler object s mapped to a physical node n represents n, and the event table of n is associated with s. In some cases, several continuations may be waiting on the same event, and so some disambiguation may be required when the event is triggered.
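
A per-node event table along the lines described above could look like this sketch (names assumed): a mapping from anticipated local hardware events to the continuations waiting on them, where firing an event marks every waiter ready so it can then be queued on a scheduler object.

```python
from collections import defaultdict

class EventTable:
    """Per-node table mapping anticipated events to waiting continuations.
    Several continuations may be waiting on the same event."""

    def __init__(self):
        self._waiters = defaultdict(list)

    def add_waiter(self, event_id, continuation):
        continuation["status"] = "waiting-for-event"
        self._waiters[event_id].append(continuation)

    def handle_event(self, event_id):
        """The anticipated event occurred: mark every waiting continuation
        ready; the caller can then insert them into wait queues."""
        ready = self._waiters.pop(event_id, [])
        for c in ready:
            c["status"] = "ready"
        return ready

table = EventTable()
table.add_waiter("block-0x123-arrived", {"virtual_processor_id": 7, "status": None})
print(table.handle_event("block-0x123-arrived"))
```
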
  • Continuations are built using the "InitContinuation" routine. If a decision is made to move the computation, the remote physical node holding the resource will build a continuation that corresponds to the stalled computation and will store it in the remote physical node's event table. When that continuation resumes, the resource will be available. In effect, the HyperKernel has transferred the virtual processor to a different node.
  • Let e be the event that stalled virtual processor p. Assume that e is triggered by local hardware of some physical node n. In particular, assume r is the resource which caused the stalling event to occur. Resource r could be a block of memory, or an I/O operation, or a network operation. Assume that p is assigned to scheduler object s, which belongs to the subtree rooted at the scheduler object that represents physical node n.
  • the migration-continuation function returns true if and only if processor p in node n decides that the resource should not move, i.e., the computation should move. This can be determined by a number of factors such as history and frequency of movement of r between nodes, the type of r, the cost of movement, the number of events in n's local event table waiting for r, system load, etc. For example, it may not be desirable to move a resource if there is a continuation stored in n's local event table that is waiting for it.
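
The migration-continuation decision can be pictured as a simple predicate over the factors just listed; the parameter names and thresholds below are assumptions for illustration, not values from the patent.

```python
def migration_continuation(waiters_on_resource: int,
                           recent_move_count: int,
                           resource_is_heavy: bool,
                           system_load: float,
                           move_count_threshold: int = 3,
                           load_threshold: float = 0.9) -> bool:
    """True iff the resource should not move, i.e., the computation should move."""
    if waiters_on_resource > 0:       # a continuation in n's local event table already waits for r
        return True
    if recent_move_count >= move_count_threshold:   # r has been bouncing between nodes
        return True
    if resource_is_heavy:             # e.g., r is backed by disk or persistent flash
        return True
    return system_load > load_threshold

# No local waiters, little movement history, light load: move the resource here.
print(migration_continuation(0, 1, False, 0.3))   # False
```
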
  • a cost function cost(s,c) can be used to guide the search up the tree. If multiple ancestors of p have non-empty queues, then p may not want to stop its search at the first ancestor found with a nonempty wait queue. Depending on the metrics used in the optimizing strategy, p's choice may not only depend on the distance between p and its chosen ancestor but on other parameters such as length of the wait queues.
  • find-best-within(s) can be used to return a "best-fit" continuation stored in the wait queue of scheduler object s.
  • the cost and find-best-within functions can be customized as applicable within a given system.
  • Examples of asynchronous events include: receipt of a packet, completion of an I/O transfer, receipt of a resource, receipt of a message requesting a resource, etc.
  • a HyperKernel that receives an event corresponding to a hardware device managed by the operating system needs to deliver a continuation associated with that event to a scheduler object s. By doing so, s will make this continuation available to an appropriate scheduler object and then ultimately to the computation managed by the operating system represented by that continuation. If, on the other hand, the event is the receipt of a message from a HyperKernel on another physical node, the HyperKernel can handle it directly.
  • to simplify explanation, in the examples described herein, an assumption is made that there is only one continuation associated with an event. The procedures described herein can be generalized for the case where multiple continuations are associated with the same event, as needed.
  • the search for a scheduler object on which to place the continuation starts at the leaf of the tree that built the continuation and then proceeds upward (if the computation previously executed on this node). By doing so, the likelihood of reusing cache entries is increased.
  • the cost function, cost(s,c) is a function that helps determine the suitability of assigning c to scheduling object s.
  • the cost function can depend on a variety of parameters such as the size of the wait queues, the node traversal distance between s and the original scheduling node for c (to increase the probability that cache entries will be reused), and the history of the virtual processor, the physical-processor, and the continuation. If the wait queues of the scheduler objects close to s already contain too many continuations, then it may take a relatively longer time until any newly added continuation is scheduled for execution.
  • Example conditions contributing to cost(s,c) are described below, and the conditions can be customized as applicable.
  • Cost functions are used to evaluate options when selecting continuations and scheduling objects. Cost functions can be expressed as a sum of weighted factors:
  • cost = w1*f1^x1 + w2*f2^x2 + ... + wn*fn^xn
  • Weights wi and exponents xi can be determined in a variety of ways, such as empirically and by simulation. Initial weights and exponents can be tuned to various application needs, and can be adjusted by an administrator to increase performance. The weights can be adjusted while the system is active, and changing weights does not change the semantics of the HyperKernel, only the operational performance characteristics. (A minimal sketch of such a weighted cost function follows the factor list below.)
  • Reservation status (i.e., it may be the case that some application has reserved this resource for a specific reason).
  • Node specification (i.e., the node itself might have been taken out of service, is problematic, or has in some way a specialized function, etc.).
  • Group membership of the continuation (i.e., the continuation may be part of a computation group, each element of which has some affinity for other members of the group).
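
As referenced above, here is a minimal sketch of such a weighted cost function, in which each factor is raised to its exponent and scaled by its weight; the example factor values below are illustrative assumptions.

```python
def cost(factors, weights, exponents):
    """cost = w1*f1^x1 + w2*f2^x2 + ... + wn*fn^xn"""
    return sum(w * (f ** x) for f, w, x in zip(factors, weights, exponents))

# Illustrative factors for placing continuation c on scheduler object s:
# wait-queue length at s, traversal distance from c's original node, and
# a 0/1 penalty when c's computation group has no affinity with s.
factors   = [5.0, 2.0, 1.0]
weights   = [0.5, 1.5, 4.0]
exponents = [1.0, 2.0, 1.0]
print(cost(factors, weights, exponents))   # 0.5*5 + 1.5*(2**2) + 4*1 = 12.5
```
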
  • Figure 8 illustrates an embodiment of a process for selectively migrating resources.
  • process 800 is performed by a HyperKernel, such as in conjunction with the OnStall routine.
  • the process begins at 802 when an indication is received that a core (or hyperthread included in a core, depending on whether the processor chip supports hyperthreads) is blocked.
  • a hyperthread receives a request, directly or indirectly, for a resource that the hyperthread is not able to access (e.g., RAM that is located on a different node than the node which holds the hyperthread).
  • when a hyperthread fails to access the resource (i.e., an access violation occurs), an interrupt occurs, which is intercepted, caught, or otherwise received by the HyperKernel at 802.
  • the HyperKernel receives an indication at 802 that the hyperthread is blocked (because it cannot access a resource that it has been instructed to provide).
  • the hyperthread provides information such as the memory address it was instructed to access and what type of access was attempted (e.g., read, write, or modify).
  • the HyperKernel determines whether the needed memory should be moved (e.g., to the node on which the blocked hyperthread is located), or whether the requesting process should be remapped (i.e., the virtual processor should be transferred to a different node).
  • the workload of a node is determined based at least in part on the average queue length in the TidalTree.
  • if the HyperKernel determines that the memory should be moved, the HyperKernel uses its current resource map to determine which node is likely to hold the needed memory and sends a message to that node, requesting the resource.
  • the HyperKernel also creates a continuation and places it in its event table. The hyperthread that was blocked at 802 is thus freed to take on other work, and can be assigned to another virtual processor using the assignProcessor routine.
  • the HyperKernel checks its message queue on a high-priority basis.
  • when the HyperKernel receives a message from the node it contacted (i.e., the "first contacted node"), in some embodiments, one of two responses will be received.
  • the response might indicate that the first contacted node has the needed resource (and provide the resource).
  • the message might indicate that the contacted node no longer has the resource (e.g., because the node provided the resource to a different node).
  • the first contacted node will provide the identity of the node to which it sent the resource (i.e., the "second node"), and the HyperKernel can send a second message requesting the resource - this time to the second node.
  • if the second node in turn reports that it has since sent the resource on to a third node, the HyperKernel may opt to send the continuation to the third node, rather than continuing to request the resource.
  • other thresholds can be used in determining whether to send the continuation or to continue requesting the resource (e.g., four attempts). Further, a variety of criteria can be used in determining whether to request the resource or send the continuation (e.g., in accordance with a cost function).
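
The requester-side retry logic of process 800 could be sketched as follows (a hedged illustration; the message shapes, helper names, and the three-attempt threshold are assumptions): chase the resource through at most a few forwarding responses, then fall back to sending the continuation.

```python
MAX_ATTEMPTS = 3   # beyond this, send the continuation instead of chasing further

def chase_resource(resource, first_node, send_request):
    """send_request(node, resource) returns either
    ("have-it", payload) or ("forwarded-to", next_node)."""
    node = first_node
    for _ in range(MAX_ATTEMPTS):
        kind, value = send_request(node, resource)
        if kind == "have-it":
            return ("resource-received", value)
        node = value                     # the resource moved on; try its new owner
    # Too many hops: ship the continuation to the resource's last known owner.
    return ("send-continuation-to", node)

# Toy transport: the first contacted node forwarded the block to node1, which has it.
replies = {"node2": ("forwarded-to", "node1"), "node1": ("have-it", b"\x00" * 4096)}
print(chase_resource("block-0x123", "node2", lambda n, r: replies[n])[0])
```
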
  • the HyperKernel provides the remote node (i.e., the one with the needed resource) with information that the remote node can use to build a continuation in its own physical address space. If the remote node (i.e., the one receiving the continuation) has all of the resources it needs (i.e., is in possession of the resource that caused the initial access violation), the continuation need not be placed into the remote node's event table, but can instead be placed in its TidalTree. If the remote node needs additional resources to handle the continuation, the received continuation is placed in the remote node's event table.
  • FIG. 9 illustrates an embodiment of a process for performing hierarchical dynamic scheduling.
  • process 900 is performed by a HyperKernel, such as in conjunction with the assignProcessor routine.
  • the process begins at 902 when an indication is received that a hyperthread should be assigned.
  • Process 900 can be invoked in multiple ways. As one example, process 900 can be invoked when a hyperthread is available (i.e., has no current work to do). This can occur, for example, when the HyperKernel determines (e.g., at 804) that a continuation should be made. The previously blocked hyperthread will become available because it is no longer responsible for handling the computation on which it blocked (i.e., the hyperthread becomes an "anonymous shadow processor").
  • process 900 can be invoked when a message is received (e.g., by the HyperKernel) that a previously unavailable resource is now available.
  • the HyperKernel will need to locate a hyperthread to resume the computation that needed the resource. Note that the hyperthread that was originally blocked by the lack of a resource need not be the one that resumes the computation once the resource is received.
  • the TidalTree is searched for continuations that are ready to run, and one is selected for the hyperthread to resume.
  • the TidalTree is searched from the leaf-level, upward, and a cost function is used to determine which continuation to assign to the hyperthread. As one example, when a hyperthread becomes available, the continuation that has been queued for the longest amount of time could be assigned. If no continuations are waiting at the leaf level, or are outside a threshold specified by a cost function, a search will be performed up the TidalTree (e.g., the core level, then the socket level, and then the node level) for an appropriate continuation to assign to the hyperthread.
  • if no appropriate continuations are found at the node level, the HyperKernel for that node contacts the root.
  • One typical reason for no continuations to be found at the node level is that there is not enough work for that node to be fully utilized.
  • the node or a subset of the node can enter an energy conserving state.
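
A minimal sketch of the leaf-upward search of process 900, assuming a parent-linked scheduler-object structure and using simple first-in-first-out age as a stand-in for the cost function; all names are illustrative.

```python
class SchedObj:
    def __init__(self, parent=None):
        self.parent = parent
        self.wait_queue = []   # "ready" continuations, oldest first

def find_continuation(leaf, contact_root):
    """Search from the leaf scheduler object upward (core, socket, node)
    for a ready continuation; if none is found, ask the root."""
    s = leaf
    while s is not None:
        if s.wait_queue:
            return s.wait_queue.pop(0)   # e.g., the longest-queued continuation
        s = s.parent
    return contact_root()                # message to the node holding the TidalTree root

node = SchedObj()
socket = SchedObj(parent=node)
core = SchedObj(parent=socket)
hyperthread = SchedObj(parent=core)
socket.wait_queue.append("continuation-42")
print(find_continuation(hyperthread, lambda: None))   # continuation-42
```
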
  • Figure 10 illustrates an example of an initial memory assignment and processor assignment.
  • region 1002 of Figure 10 depicts a HyperKernel's mapping between physical blocks of memory (on the left hand side) and the current owner of the memory (the center column). The right column shows the previous owner of the memory. As this is the initial memory assignment, the current and last owner columns hold the same values.
  • Region 1004 of Figure 10 depicts a HyperKernel's mapping between system virtual processors (on the left hand side) and the physical nodes (center column) / core numbers (right column).
  • suppose that virtual processor P00 makes a memory request to read location 8FFFF and that the HyperKernel decides to move one or more memory blocks containing 8FFFF to the same node as P00 (i.e., node 0).
  • Block 8FFFF is located on node 2. Accordingly, the blocks containing 8FFFF are transferred to node 0, and another block is swapped out (if evacuation is required and the block is valid), as shown in Figure 11.
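
The bookkeeping of Figures 10 and 11 can be mimicked with a small table keyed by memory block that records the current and previous owner; block numbers and node names follow the figures, the helper name is assumed, and the possible swap-out of another block is omitted for brevity.

```python
# Mirror of region 1002: block -> (current owner, previous owner).
memory_owners = {
    0x8FFFF: ("node2", "node2"),   # initial assignment: current == previous
    0x00123: ("node0", "node0"),
}

def migrate_block(block, new_owner):
    current, _previous = memory_owners[block]
    memory_owners[block] = (new_owner, current)

# P00 (on node 0) reads location 8FFFF, so the block is moved to node 0.
migrate_block(0x8FFFF, "node0")
print(memory_owners[0x8FFFF])   # ('node0', 'node2')
```
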
  • Locks are used, for example, to insert-queue and remove-queue continuations on scheduler objects and to maintain the event table.
  • the (maximum) length of all code paths is determined through a static code analysis, resulting in estimable and bounded amounts of time spent in the HyperKernel itself. All data structures can be pre-allocated, for example, as indexed arrays. The nodes of the TidalTree are determined at boot time and are invariant, as are the number of steps in their traversal. One variable length computation has to do with the length of the work queues, but even that can be bounded, and a worst-case estimate computed. In other embodiments, other variable length computations are used.
  • all data structures needed in the HyperKernel are static, and determined at boot time, so there is no need for dynamic memory allocation or garbage collection.
  • All memory used by the HyperKernel is physical memory, so no page tables or virtual memory is required for its internal operations (except, e.g., to manage the virtual resources it is managing), further helping the HyperKernel to co-exist with an operating system.
  • changes in one node's data structures are coordinated with corresponding ones in a different node.
  • Many of the data structures described herein are "node local,” and either will not need to move, or are constant and replicated.
  • the data structures that are node local are visible to and addressable by all hyperthreads on the node. Examples of data structures that are not node local (and thus require coordination) include the current resource map (or portions thereof), the root of the TidalTree, and migratory continuations (i.e., continuations that might have to logically move from one node to another).
  • Each physical node n starts off (e.g., at boot time) with the same copy of the physical resource map, the initial virtual resource map, and the current resource map. Each node maintains its own copy of the current resource map.
  • each entry for resource r in the current resource map has the following:
  • the count k is used to deal with unbounded chasing of resources. If k exceeds a threshold, a determination is made that it is better to move the newly built continuation rather than chasing the resource around the system.
  • Node n sends a request for resource r to n' .
  • Node n' receives a request for resource r from n.
  • Node n' may send a "deny” message to n under certain circumstances, otherwise it can "accept” and will send the resource r.
  • Node n will receive a "deny" message from n' if the resource r cannot be sent by n' at this point in time. It may be that r is needed by n', or it may be that r is being transferred somewhere else at the arrival of the request. If the request is denied, it can send a "forwarding" address of the node to which it's transferring the resource. It may be that the forwarding address is n' itself, which is the equivalent of "try again later.” When node n receives the deny message, it can resend the request to the node suggested by n', often the new owner of the resource. To avoid n chasing the resource around the system, it can keep track of the number of attempts to get the resource, and switches strategy if the number of attempts exceeds a threshold.
  • Node n will receive the resource r if n' can send the resource. In this case, n needs to schedule the continuation c that was awaiting r, so that c can be resumed.
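
On the owning node's side, the accept/deny behavior described above could be sketched as follows (a hedged illustration; the message shapes and reasons are assumptions). The owner either ships the resource or answers "deny" with a forwarding address, which may be the owner itself, the equivalent of "try again later."

```python
def on_request_transfer(resource, owned, needed_locally, forwarding, node_id):
    """Decide how node n' answers a request for `resource` from node n."""
    if owned and not needed_locally:
        return {"type": "accept", "resource": resource}          # send r to the requester
    if not owned:
        # r was already transferred elsewhere: tell n where it went.
        return {"type": "deny", "forward_to": forwarding[resource]}
    # r is needed by n' right now: a forwarding address of n' itself
    # is the equivalent of "try again later".
    return {"type": "deny", "forward_to": node_id}

forwarding = {"block-0x123": "node3"}
print(on_request_transfer("block-0x123", owned=False, needed_locally=False,
                          forwarding=forwarding, node_id="node1"))
```
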
  • one physical node of the set of nodes in the system is designated as a "master node.”
  • This node has the responsibility at boot time for building the initial virtual resource map and other data structures, replicating them to the other nodes, and booting the operating system (e.g., Linux).
  • the master node can be just like any other node after the system is booted up, with one exception.
  • At least one physical node needs to store the root of the TidalTree.
  • the master node is one example of a place where the root can be placed. Updates to the event queue of the TidalTree root scheduling object are handled in each node by sending a message to the master node to perform the update.
  • the HyperKernel will adapt, and locality will continually improve, if resource access patterns of the operating system and the application permit.
  • Timestamps: in some embodiments, access to a free-running counter is visible to all of the nodes.
  • when a needed resource is on disk (or persistent flash), such resources are treated as having a heavier gravitational field than a resource such as RAM.
  • disk/flash resources will tend to not migrate very often. Instead, continuations will more frequently migrate to the physical nodes containing the required persistent storage, or to buffers associated with persistent storage, on a demand basis.
  • init-continuation Initializes a continuation when a computation is stalled.
  • assignProcessor Routine that assigns a new continuation to a shadow processor (if possible).
  • migrate-computation(computational-state,r,n) Message to request migration of a computational state to another node n which you hope has resource r.
  • handle-event(e) Routine executed when the HyperKernel is called on to handle an asynchronous event.
  • request-resource(r,n) Request transfer of resource r from node n.
  • on-request-transfer-response(r,n,b) The requested transfer of r from n was accepted or rejected, b is true if rejected.
  • migration-continuation(r) True if and only if it is better to migrate a continuation than move a resource.
  • parent(s) Returns the parent scheduler-object of scheduler object s.
  • cost(s,c) Used to evaluate placement of continuation c in the wait-queue of scheduler-object s.
  • find-best-within(s) A cost function that returns a continuation stored in the wait-queue of scheduler-object s.
  • resume-continuation(c) Resume the computation represented by c in the processor executing this function, at the point at which it was interrupted.
  • valid(i) Boolean function that returns true if and only if interrupt i is still valid.
  • insert-queue(s,c) Insert continuation c into the wait-queue of scheduler-object s.
  • return-from-virtual-interrupt Resume execution that was temporarily paused due to the interrupt.
  • r.owner Returns the node where resource r is local.
  • get-state() Returns processor's state.
  • scheduler-object(p) Returns scheduler-object currently associated with processor p.
  • on-request-transfer-response(r,m,response) Response to a request to transfer resource r from node m. The response can be either true if "rejected" or false if "accepted."
  • /* When processor p in physical node n becomes a shadow processor, it gives up its O/S identity and starts looking for a continuation with which to resume execution. p will look for such a continuation in wait-queues as follows: */
  • /* OnStall is invoked when the hardware detects an inconsistency between the virtual and physical configurations. More specifically, node n requests resource r which the hardware cannot find on node n. */
  • nn = owner(r) /* nn is the node where resource r currently resides */
  • request-transfer(owner(r),r) /* send a request to the owner of r */
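
Pulling the glossary entries above together, the OnStall flow could be reconstructed roughly as the following Python sketch rather than the patent's own pseudocode. The helper callables stand in for the routines named above (owner, migration-continuation, migrate-computation, init-continuation, request-transfer), and their signatures are assumptions.

```python
def on_stall(r, processor_state, node, owner, migration_continuation,
             migrate_computation, init_continuation, request_transfer):
    """Handle a stall on resource r detected on `node`."""
    nn = owner(r)                         # node where resource r currently resides
    if migration_continuation(r):
        # The computation should move: ship the computational state to r's owner,
        # which will build the corresponding continuation in its own event table.
        migrate_computation(processor_state, r, nn)
        return "computation-migrated"
    # Otherwise keep responsibility here: record a continuation waiting for r in
    # the local event table and ask the owner to transfer r.
    c = init_continuation(processor_state)
    node.event_table.setdefault(r, []).append(c)
    request_transfer(nn, r)
    return "resource-requested"

class LocalNode:
    def __init__(self):
        self.event_table = {}

n = LocalNode()
print(on_stall("block-0x123", {"rip": 0x7F00}, n,
               owner=lambda r: "node2",
               migration_continuation=lambda r: False,
               migrate_computation=lambda state, r, nn: None,
               init_continuation=lambda state: {"state": state, "status": "waiting-for-event"},
               request_transfer=lambda nn, r: None))
# -> 'resource-requested'; the continuation now waits in n.event_table["block-0x123"]
```
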

Abstract

Selective resource migration is disclosed. A computer system includes physical memory and a plurality of physical processors. Each of the processors has one or more cores, and each core instantiates one or more virtual processors that execute program code. Each core is configured to invoke a HyperKernel on its hosting physical processor when it cannot access a portion of the physical memory that it needs. The HyperKernel selectively moves the needed memory closer to a location that the physical processor can access, or remaps the virtual processor to another core.
EP13893153.0A 2013-09-05 2013-09-05 Migration sélective de ressources Withdrawn EP3042305A4 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2013/058262 WO2015034506A1 (fr) 2013-09-05 2013-09-05 Migration sélective de ressources

Publications (2)

Publication Number Publication Date
EP3042305A1 true EP3042305A1 (fr) 2016-07-13
EP3042305A4 EP3042305A4 (fr) 2017-04-05

Family

ID=52628808

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13893153.0A Withdrawn EP3042305A4 (fr) 2013-09-05 2013-09-05 Migration sélective de ressources

Country Status (2)

Country Link
EP (1) EP3042305A4 (fr)
WO (1) WO2015034506A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11240334B2 (en) 2015-10-01 2022-02-01 TidalScale, Inc. Network attached memory using selective resource migration
US10579274B2 (en) 2017-06-27 2020-03-03 TidalScale, Inc. Hierarchical stalling strategies for handling stalling events in a virtualized environment
US10817347B2 (en) 2017-08-31 2020-10-27 TidalScale, Inc. Entanglement of pages and guest threads
US11175927B2 (en) 2017-11-14 2021-11-16 TidalScale, Inc. Fast boot
CN109992366B (zh) * 2017-12-29 2023-08-22 华为技术有限公司 任务调度方法及调度装置

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2419701A (en) * 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructure with dynamic control of mapping

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7222221B1 (en) * 2004-02-06 2007-05-22 Vmware, Inc. Maintaining coherency of derived data in a computer system
US8533716B2 (en) * 2004-03-31 2013-09-10 Synopsys, Inc. Resource management in a multicore architecture
US8621458B2 (en) * 2004-12-21 2013-12-31 Microsoft Corporation Systems and methods for exposing processor topology for virtual machines
US9753754B2 (en) * 2004-12-22 2017-09-05 Microsoft Technology Licensing, Llc Enforcing deterministic execution of threads of guest operating systems running in a virtual machine hosted on a multiprocessor machine
US20070226795A1 (en) * 2006-02-09 2007-09-27 Texas Instruments Incorporated Virtual cores and hardware-supported hypervisor integrated circuits, systems, methods and processes of manufacture
US7802073B1 (en) * 2006-03-29 2010-09-21 Oracle America, Inc. Virtual core management
US8838935B2 (en) * 2010-09-24 2014-09-16 Intel Corporation Apparatus, method, and system for implementing micro page tables
CN104011680B (zh) * 2011-12-26 2017-03-01 英特尔公司 在物理处理单元中调度虚拟机的虚拟中央处理单元

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2419701A (en) * 2004-10-29 2006-05-03 Hewlett Packard Development Co Virtual overlay infrastructure with dynamic control of mapping

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2015034506A1 *

Also Published As

Publication number Publication date
EP3042305A4 (fr) 2017-04-05
WO2015034506A1 (fr) 2015-03-12

Similar Documents

Publication Publication Date Title
US11159605B2 (en) Hierarchical dynamic scheduling
US11403135B2 (en) Resource migration negotiation
US11803306B2 (en) Handling frequently accessed pages
US20220174130A1 (en) Network attached memory using selective resource migration
US11656878B2 (en) Fast boot
US20220229688A1 (en) Virtualized i/o
EP3042305A1 (fr) Migration sélective de ressources
EP3042282A1 (fr) Planification dynamique hiérarchique

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160215

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RAP1 Party data changed (applicant data changed or rights of an application transferred)

Owner name: TIDALSCALE INC.

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20170307

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 15/78 20060101AFI20170301BHEP

Ipc: G06F 9/50 20060101ALI20170301BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190426

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20210618