WO2006028520A1 - Migration de taches dans un systeme informatique - Google Patents

Migration de taches dans un systeme informatique

Info

Publication number
WO2006028520A1
WO2006028520A1, PCT/US2005/013122, US2005013122W
Authority
WO
WIPO (PCT)
Prior art keywords
computing device
tasks
migration
computing
pac
Prior art date
Application number
PCT/US2005/013122
Other languages
English (en)
Inventor
Timothy G. Mortsolf
Original Assignee
Starent Networks, Corp.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Starent Networks, Corp. filed Critical Starent Networks, Corp.
Priority to EP05738099A priority Critical patent/EP1815333A4/fr
Publication of WO2006028520A1 publication Critical patent/WO2006028520A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806: Task transfer initiation or dispatching
    • G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/485: Task life-cycle, e.g. stopping, restarting, resuming execution
    • G06F 9/4856: Task life-cycle, e.g. stopping, restarting, resuming execution, resumption being on a different machine, e.g. task migration, virtual machine migration

Definitions

  • the present invention relates to computing systems. More particularly, this invention relates to the migration of tasks between computing devices in computing systems, such as communications systems.
  • In a variant of the multiple processor computing system described above, called symmetric multiprocessing (SMP), the tasks running on the computing system are distributed equally across all processors. In many cases, these processors also share memory.
  • In another variant, called asymmetric multiprocessing (AMP), a single processor acts as a “master” processor, while the remainder of the processors act as “slave” processors. In this configuration, all tasks, including those of the operating system, must pass through the master processor before being passed on to the slave processors.
  • Computing performance can also be increased by dedicating computing resources (e.g., machines, CPUs, cards, etc.) to a task and optimally tuning the computing resources to that particular task.
  • this approach has not been widely adopted because many (or most) situations involve uncoordinated application development, and because it is especially difficult (e.g., expensive) to dedicate resources among tasks in environments where the task mix is constantly changing.
  • with dedicated resources, it is essentially impossible (or at least very difficult) to quickly and easily migrate resources from one computing device to another, especially if different developers have been involved.
  • even when such a migration can be performed, it typically involves the intervention of a system administrator, and may require at least some of the computer systems to be powered down and rebooted.
  • a computing system can be partitioned with hardware to make a subset of the resources on the computing system available to one or more particular tasks only.
  • This approach avoids dedicating the resources permanently, given that the partitions can be changed, but potentially problematic issues still remain concerning, for example, performance improvements by means of load balancing of resources among partitions, and resource availability.
  • clusters of computing systems may be used in which each system or node has its own memory and is controlled by its own operating system.
  • the various computing systems interact by sharing data storage resources and passing messages among themselves using a communications network.
  • a cluster system has the advantage that additional systems can easily be added to the cluster as needed.
  • networks and clusters may suffer from a lack of shared memory, as well as from limited interconnect bandwidth which generally places limitations on performance.
  • Another known system is the virtual machine (VM) architecture developed and marketed by the International Business Machines Corporation of Armonk, NY, in which multiple virtual machines run on a single physical computing system.
  • Each of those virtual machines has access, at least in principle, to all the physical resources of the underlying real (physical) computing system.
  • the assignment of resources to each virtual machine is controlled by a program called a "hypervisor.”
  • There is only one hypervisor in the system, and it is responsible for all the physical resources. Consequently, the hypervisor, not the other operating systems, deals with the allocation of physical hardware.
  • the hypervisor intercepts requests for resources from the various operating systems that may be operating in the computing system, and deals with the requests in a globally-correct manner.
  • the VM architecture supports the concept of a logical partition (LPAR).
  • Each LPAR contains some of the available physical CPUs (and other resources) which are logically assigned to the partition. The same resources can be assigned to more than one partition. LPARs are generally set up statically by an administrator, but are able to respond to changes in load (e.g., resource demand) dynamically, and without rebooting, in several ways. For example, assume that two logical partitions, each containing ten logical CPUs, are shared on a single physical computing system containing ten physical CPUs. In this case, if the logical partitions have complementary peak loads, each partition can take over the entire ten physical CPUs as the workload shifts (generally without requiring a re-boot or operator intervention).
  • the logical CPUs assigned to each partition can be turned on and off dynamically using normal operating system operator commands (also generally without requiring a re-boot).
  • LPAR weights can be used to define the portion of the total CPU resources which is to be given to each partition. These weights can be changed by system administrators, on-the-fly, with no disruption. Nevertheless, the undesirable limitation still exists that the number of CPUs active at system initialization is the maximum number of CPUs that can be turned on in any partition.
  • Another known system is called a Parallel Sysplex, and is also marketed and developed by the International Business Machines Corporation.
  • This architecture consists of a set of computers that are clustered via a hardware entity (called a "coupling facility") that is attached to each CPU.
  • the coupling facilities on each node are connected (e.g., via a fiber-optic link), and each node operates as a traditional SMP machine, with a maximum of, e.g., 10 CPUs.
  • Certain CPU instructions directly invoke the coupling facility. For example, a node registers a data structure with the coupling facility, and then the coupling facility takes care of keeping the data structures coherent within the local memory of each node.
  • An Enterprise 10000 Unix server, developed and marketed by Sun Microsystems, Inc., uses a partitioning arrangement called Dynamic System Domains to logically divide the resources of a single physical server into multiple partitions, or domains, each of which operates as a stand-alone server.
  • Each of the partitions has CPUs, memory and input/output (I/O) hardware.
  • Dynamic reconfiguration allows a system administrator to create, resize, or delete domains on the fly and without rebooting. Every domain remains logically isolated from any other domain in the system, isolating it completely from any software error or CPU, memory, or I/O error generated by any other domain. There is no sharing of resources between any of the domains.
  • a Hive Project conducted at Stanford University concerns an architecture which is structured as a set of cells.
  • each cell is assigned a range of nodes, each having memory and I/O devices, that the cell owns throughout execution.
  • Each Hive cell manages the processors, memory and I/O devices on those nodes as if it were an independent operating system.
  • the cells cooperate to present the illusion of a single system to user-level processes.
  • Hive cells are not responsible for deciding how to divide their resources between local and remote requests. Each cell is responsible only for maintaining its internal resources and for optimizing performance within the resources it has been allocated. Global resource allocation is carried out by a user-level process called "wax.” The Hive system attempts to prevent data corruption by using certain fault containment boundaries between the cells. In order to implement the tight sharing expected from a multiprocessor system, despite the fault containment boundaries between cells, resource sharing is implemented through the cooperation of the various cell kernels. However, the policy is implemented outside the kernels, in the wax process. Both memory and processors can be shared.
  • the Cellular IRIX architecture distributes global kernel text and data into optimized SMP-sized chunks (cells), which represent a control domain consisting of one or more machine modules, where each module consists of processors, memory, and I/O devices. Tasks running on these cells rely extensively on a full set of local operating system services, including local copies of operating system text and kernel data structures, but only one instance of the operating system exists on the entire system. Inter-cell coordination allows tasks to directly and transparently utilize processing, memory and I/O resources from other cells without incurring the overhead of data copies or extra context switches.
  • Another existing architecture is NUMA-Q, developed and marketed by Sequent Computer Systems, Inc., of Beaverton, OR.
  • groups of four processors (quads) per portion of memory are used as the basic building block for SMP nodes. Adding I/O to each quad further improves performance.
  • the NUMA-Q architecture not only distributes physical memory, but also puts a predetermined number of processors and PCI slots next to each portion of memory.
  • the memory in each quad is not local memory in the traditional sense. Rather, it is a portion of the physical memory address space and has a specific address range.
  • the address map is divided evenly over memory, with each quad containing a contiguous portion of address space. Only one copy of the operating system is running and, as in any SMP system, it resides in memory and runs processes without distinction and simultaneously on one or more processors.
  • one computing device can take over providing services previously provided by another computing device without disrupting such services.
  • migration can be performed efficiently by avoiding treatment of all tasks alike.
  • the invention provides a method for migrating tasks between computing devices in a computing system, where the method includes at least one of the sequential, sequence independent and non-sequential steps of receiving an indication that migration of one or more restartable and/or migratable tasks from a first computing device is to be initiated, starting one or more restartable tasks on a second computing device corresponding to one or more restartable tasks running on the first computing device, transmitting state information for one or more migratable tasks running on the first computing device to the second computing device, and starting the one or more migratable tasks on the second computing device using the transmitted state information.
  • the invention provides a method for migrating tasks between computing devices in a computing system, where the method includes at least one of the sequential, sequence independent and non-sequential steps of initiating migration of a first task using a first technique for task migration between the first computing device and a second computing device, and initiating migration of a second task using a second technique for task migration between the first computing device and the second computing device, wherein the second computing device is not running in lockstep with the first computing device prior to the migration of the first and second tasks from the first computing device to the second computing device.
  • the invention provides a system for migrating tasks between computing devices in a computing system, where the system includes means for initiating migration of a first task using a first technique for task migration between the first computing device and a second computing device, and means for initiating migration of a second task using a second technique for task migration between the first computing device and the second computing device, wherein the second computing device is not running in lockstep with the first computing device prior to the migration of the first and second tasks from the first computing device to the second computing device.
  • FIG. 1 is a simplified illustration of a chassis in a computing system that includes multiple PACs among which migration of tasks according to the principles of the present invention may be accomplished;
  • FIG. 2 is a simplified illustration of an active packet accelerator card (PAC) that includes four CPUs from which migration of tasks according to the principles of the present invention may be accomplished;
  • FIG. 3 is a simplified illustration of a standby PAC that includes four CPUs to which migration of tasks according to the principles of the present invention may be accomplished;
  • FIG. 4 is a flow chart illustrating the steps performed according to one embodiment of the present invention in migrating tasks from a first computing device to a second computing device in a computing system;
  • FIG. 5 is a more detailed flow chart of one of the steps depicted in FIG. 4 according to one embodiment of the present invention;
  • FIG. 6 is a simplified illustration of an active PAC that hosts critical, restartable, and migratable tasks to be migrated to a standby PAC, and a special task sitCPU which monitors these critical, restartable, and migratable tasks according to the principles of the present invention;
  • FIG. 7 is a simplified illustration of a standby PAC that hosts a special task sitCPU for monitoring critical, restartable, and migratable tasks according to the principles of the present invention; and
  • FIG. 8 is a simplified illustration of a standby PAC that hosts critical, restartable, and migratable tasks which have been migrated from an active PAC, and a special task sitCPU which monitors these critical, restartable, and migratable tasks according to the principles of the present invention.
  • Tasks with heavy demand for CPU processing are common in many fields, including communications systems, and particularly wireless (mobile) communications systems.
  • Such systems have, for example, demanding, ongoing tasks that present significant challenges with respect to dynamic reconfiguration, e.g., reconfiguration under control of the operating system(s) running on the computing system and without system administrator intervention.
  • attempts have been made at providing a flexible computer system having mechanisms for reconfiguring the computing system on the fly.
  • FIG. 1 shows a chassis 100 in a computing system that includes multiple computing devices, such as electronic circuitry cards (for example, packet accelerator cards), for handling various tasks.
  • For example, in an ST-16 communications oriented computing platform, chassis 100 includes packet accelerator cards (PACs) 101-114, among which migration of tasks according to the invention may be accomplished.
  • For example, PAC 102 serves as a redundant or backup PAC for operational PAC 101, PAC 104 serves as a backup PAC for operational PAC 103, and so on, through PAC 114, which serves as a backup PAC for operational PAC 113.
  • Although chassis 100 shown in FIG. 1 has a 1:1 redundancy of backup PACs to operational PACs, the invention is not limited in this manner. Rather, it will be understood that a 1:N redundancy is used according to various embodiments of the invention, as explained below.
  • chassis 100 also includes a management card, such as a Switch Processor Card (SPC) 115 as developed by Starent Networks Corporation of Tewksbury, MA, for controlling some or all of the chassis operations (e.g., starting chassis 100, managing PACs 101-114, handling recovery tasks, etc.).
  • chassis 100 also includes a redundant SPC (or RSPC) 116.
  • PACs can be classified as either active PACs or standby PACs.
  • an active PAC 200 includes four CPUs 202, 204, 206, and 208.
  • a standby PAC 300 includes four CPUs 302, 304, 306, and 308. The number of CPUs included in each of PACs 200 and 300, however, is not limited in this manner.
  • each CPU 202, 204, 206 and 208 of PAC 200, and each CPU 302, 304, 306, and 308 of PAC 300 executes a set of tasks specific to the host PAC.
  • each of these CPUs executes a special task, referred to as sitCPU or a monitoring task, which keeps track of (e.g., monitors) all the other tasks running on the respective CPU.
  • the sitCPUs of CPUs 202, 204, 206 and 208 may maintain a current list of all the other tasks running on the respective CPUs, their task ID numbers and task types (e.g., critical, restartable, or migratable), etc. (a minimal sketch of such a task registry appears in the code following this list).
  • Two types of PAC migrations can occur: graceful and ungraceful.
  • In a graceful migration, tasks are transferred between a first PAC and a second PAC while the first PAC is still fully functional.
  • A graceful migration may take place, for example, for maintenance purposes.
  • Conventionally, graceful migrations have required that the second PAC (to which tasks are transferred) mirror the first PAC's state from initialization up to the point of the migration.
  • this requirement is often burdensome and subject to errors, due, e.g., to timing inaccuracies.
  • the first and second PACs are not said to be running in "active" and "standby" mode.
  • the present invention permits a 1:N redundancy. In other words, it is not required that a separate standby PAC be running in lockstep (as a mirror) for each active PAC for which redundancy is desired.
  • Because a standby PAC is able to resume some or all of the functions of an active PAC without having previously mirrored that active PAC, it is possible for a single standby PAC to provide redundancy (e.g., in the event that migration is required for failure reasons or simply desired for maintenance reasons) for more than one active PAC.
  • FIG. 4 is a flow diagram outlining the steps involved during a PAC migration according to one embodiment of the invention.
  • a migration is initiated by any one (or more than one) of several system inputs.
  • the administrator of the computing system in which PACs 200 and 300 operate may want to service a particular active PAC 200 while ensuring that any tasks currently running on the active PAC 200 are not lost.
  • Using a system Command Line Interface (CLI), the administrator can indicate the particular active PAC 200 from which tasks are to be migrated to a standby PAC 300.
  • the administrator can also initiate a migration, for example, by manually removing an active PAC 200 card.
  • a migration can also be initiated if the diagnostic system senses that a particular active PAC 200 card is experiencing one or more failures, for example, and needs to be shut down. It will be understood by persons versed in the art that a migration may be initiated at step 402 by other means as well, and that the invention is not limited to the particular examples provided above.
  • a dedicated process or task is used which monitors some or all of the chassis components (e.g., PACs 101-114 of chassis 100), and monitors system and manual inputs to determine whether a migration is necessary.
  • This task, which is also referred to herein as a Card/Slot/Port task (CSP), generally runs on either an SPC (e.g., SPC 115) or, when the SPC is not available or is not functioning properly, on an RSPC (e.g., RSPC 116) of a chassis (e.g., chassis 100). For example, if a diagnostic system determines that a specific active PAC 200 card is overheating, it sends a request for migration to the CSP.
  • the CSP determines whether the active PAC 200 card is in the correct state for task migration, or whether the active PAC should continue to run for a certain amount of time before migrating its tasks. In this manner, the CSP acts as an arbitrator for all the inputs and determines whether any particular PAC 200 should be allowed to migrate its tasks to a standby PAC 300.
  • step 406 If it is determined by the CSP at step 406 that a migration should not occur, then at step 408, the process illustrated by the flow chart of FIG. 4 ends (and tasks are not migrated from active PAC 200 to standby PAC 300). According to other embodiments of the present invention, a time delay, for example, may be introduced into the migration process, rather than the process coming to an end.
  • a Recovery Control Task (RCT) running on the SPC (or RSPC) begins to direct migration.
  • the RCT may be any suitable task (such as one developed by Starent Networks Corporation of Tewksbury, MA) that initiates some (or all) of the desired recovery actions when a task has failed and needs to be restarted.
  • the RCT may handle card level recoveries (where all the tasks on a card require recovery), CPU level recoveries (where all the tasks on a CPU require recovery), and/or single task failure recoveries.
  • the RCT asks each CPU of active PAC 200 to begin migrating its respective tasks (as identified by the respective sitCPU of each CPU) to one or more CPUs of standby PAC 300.
  • the RCT instructs, or orders, one or more of the CPUs of active PAC 200 to migrate their tasks (the overall control flow is sketched in code following this list).
  • each CPU of active PAC 200 migrates its respective tasks to a corresponding CPU of standby PAC 300.
  • tasks from CPU 202 migrate to CPU 302, from CPU 204 to CPU 304, and so on.
  • standby PAC 300 notifies the CSP that migration is complete. The CSP then optionally restarts active PAC 200 as a standby PAC, and standby PAC 300 serves as an active PAC for the tasks which have migrated to it.
  • FIG. 5 is a more detailed flow diagram outlining the migration process associated with step 412 of the flow chart shown in FIG. 4.
  • each active PAC 200 hosts several tasks which fall into at least one of the categories of critical tasks (CTs), restartable tasks (RTs), and migratable tasks (MTs).
  • CTs are those tasks that are necessary for the basic operation of all PAC cards, and therefore, generally run on both active and standby PACs. Examples of CTs include a resource manager task that provides the CPU load, and a task that monitors the PAC card's temperature.
  • CTs 240 running on CPU 202 are generally also running on CPU 302 of PAC 300 (because CTs, in general, are necessary for the basic operation of all PAC cards).
  • standby PAC 300 reports to the CSP that it is in a ready, or operational state. If CTs 240 are not already running on CPU 302 of PAC 300 (e.g., because of a failure, or because PAC 300 is not yet operational), then a report indicating this is sent to the CSP at step 502, and according to various embodiments of the invention, a different standby PAC 300 is used for the migration process.
  • a report will not be sent when CTs 240 are running on CPU 302 of PAC 300, but rather, will only be sent when they are not (e.g., to indicate that another PAC 300 must be used). According to yet other embodiments, a report will never be sent, regardless of whether CTs 240 are running on CPU 302 of PAC 300.
  • the RTs 250 running on CPU 202 of PAC 200 are migrated to CPU 302 of standby PAC 300 (as requested or ordered by the RCT).
  • Once sitCPU 270 of CPU 202 (shown in FIG. 6) determines which tasks are restartable, it terminates each RT 250 running on CPU 202 and sends a message to sitCPU 370 of CPU 302 (shown in FIG. 7) to restart each RT 250 (this exchange is sketched in code after this list).
  • the RTs 250 originally running on CPU 202 can be restarted on CPU 302, and can reacquire their state at PAC 300 by communicating with other components of the system.
  • the MTs 260 running on CPU 202 of PAC 200 are migrated to CPU 302 of standby PAC 300.
  • the process for migrating MTs 260 differs from the process used to transfer RTs 250, as is now explained in detail.
  • sitCPU 270 executes pre-migration for each MT 260 running on CPU 202 (a simplified sketch of the checkpoint/de-checkpoint sequence appears in the code following this list). This includes invoking one or more checkpointing operating system (OS) kernel calls (e.g., using a standard LINUX software tool). For example, during this checkpointing operation, the OS saves the full state of each MT 260, which includes most (or all) of the memory controlled by the MTs 260, and the internal CPU registers used by the MTs 260.
  • the checkpointing OS kernel call records state information associated with each MT 260, and sitCPU 270 of active PAC 200 transmits the state information to sitCPU 370 of standby PAC 300.
  • sitCPU 370 of standby PAC 300 invokes a de-checkpointing OS kernel call for each MT 260 using the received state information. For example, during this de-checkpointing operation, the OS restores the full state of each MT 260 (including the respective memory and the internal CPU registers), and resumes executing the MTs 260 from the state at which the checkpointing operation took place.
  • sitCPU 370 of standby PAC 300 executes post-migration, including, e.g., task-specific post-migration. For example, once the checkpointing and de-checkpointing operations performed by the OS are complete, it may be necessary to reestablish any resources required on the new PAC 300. This may include, among other things, reopening files that were previously being used (because the files are no longer "seen" once the MTs 260 have been moved), or reestablishing network connections with other internal or external components.
  • FIG. 8 is a simplified illustration showing the tasks running on CPU 302 after PAC migration has completed, where CTs 340 correspond to CTs 240 of PAC 200, and RTs 350 and MTs 360 respectively correspond to migrated versions of RTs 250 and MTs 260 of PAC 200.
  • the other CPUs of PAC 300 will also have similar tasks running thereon (corresponding to the tasks originally running on other CPUs of PAC 200) after migration from PAC 200 to PAC 300 is complete.
  • Although the migration of tasks from active PAC 200 to standby PAC 300 is described above in accordance with a particular sequence, other sequences are also contemplated according to the invention.
  • MTs may be migrated from active PAC 200 to standby PAC 300 before RTs are started on standby PAC 300.
  • the invention is not limited in this manner.
  • a wrapper process may be used so that each time an event occurs on certain active PACs, a message is transmitted to respective standby PACs so that the PACs execute in lockstep.
  • the events may include, for example, messages and data. Time, data, and messages are coordinated, and memory is managed to help avoid instances in which the PACs react differently to the same event.
  • each of the PACs may be directed to report its time so that synchronization can be analyzed.
  • the invention is not limited in this manner.
  • Tasks may be categorized ahead of time or on the fly.
  • different types of tasks may be migrated in different orders from that described above. Partial migration may be also performed.
  • migration may be performed between active PACs 200.
  • some or all of the critical tasks, restartable tasks, and migratable tasks being migrated from active PAC 200 to standby PAC 300 may remain on active PAC 200 even after migration is complete.
  • tasks from a single CPU of PAC 200 are migrated to multiple CPUs of PAC 300 (with, or without redundancy).
  • tasks from multiple CPUs of PAC 200 are migrated to a single CPU of PAC 300.
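
The bullets above describe a per-CPU monitoring task (sitCPU) that tracks every other task on its CPU together with its type (critical, restartable, or migratable). The patent does not define any data structures for this, so the following Python sketch is purely illustrative: the class and field names (TaskType, TaskRecord, SitCpuRegistry) are assumptions introduced here to make the idea concrete.

```python
from dataclasses import dataclass
from enum import Enum


class TaskType(Enum):
    CRITICAL = "critical"        # CTs: needed for basic operation of every PAC
    RESTARTABLE = "restartable"  # RTs: can be terminated and restarted elsewhere
    MIGRATABLE = "migratable"    # MTs: need checkpointed state to move


@dataclass
class TaskRecord:
    task_id: int
    name: str
    task_type: TaskType


class SitCpuRegistry:
    """Toy stand-in for a sitCPU: tracks the tasks running on one CPU."""

    def __init__(self, cpu_id: int):
        self.cpu_id = cpu_id
        self._tasks = {}  # task_id -> TaskRecord

    def register(self, record: TaskRecord) -> None:
        self._tasks[record.task_id] = record

    def unregister(self, task_id: int) -> None:
        self._tasks.pop(task_id, None)

    def tasks_of_type(self, task_type: TaskType) -> list:
        return [t for t in self._tasks.values() if t.task_type == task_type]


# Example: the sitCPU of CPU 202 answering "which tasks are migratable?"
registry = SitCpuRegistry(cpu_id=202)
registry.register(TaskRecord(1, "resource_manager", TaskType.CRITICAL))
registry.register(TaskRecord(2, "session_handler", TaskType.MIGRATABLE))
print([t.name for t in registry.tasks_of_type(TaskType.MIGRATABLE)])
```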
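FIG. 4 is described as: a migration is initiated by some system input, the CSP arbitrates whether the active PAC is in the correct state to migrate, and if so the RCT directs each CPU of the active PAC to migrate its tasks to a corresponding CPU of the standby PAC, which then reports completion to the CSP. The sketch below only mirrors that control flow; the function names and the MigrationRequest fields are invented here and do not come from the patent.

```python
from dataclasses import dataclass


@dataclass
class MigrationRequest:
    source_pac: str   # e.g., an active PAC such as "PAC 200"
    target_pac: str   # e.g., a standby PAC such as "PAC 300"
    reason: str       # "cli", "card_removed", "overheating", ...


def csp_should_migrate(request: MigrationRequest, pac_state: str) -> bool:
    """The CSP arbitrates all inputs and decides whether migration may proceed."""
    # In the patent, the CSP checks that the active PAC is in the correct state
    # for task migration; here that check is reduced to a single state string.
    return pac_state == "ready_for_migration"


def migrate_cpu(source_cpu: int, target_cpu: int) -> None:
    """Placeholder for per-CPU migration (RTs restarted, MTs checkpointed)."""
    print(f"migrating tasks from CPU {source_cpu} to CPU {target_cpu}")


def rct_direct_migration(request: MigrationRequest, cpu_pairs) -> None:
    """The RCT asks each CPU of the active PAC to migrate to its standby counterpart."""
    for source_cpu, target_cpu in cpu_pairs:
        migrate_cpu(source_cpu, target_cpu)
    # After migration completes, the standby PAC notifies the CSP, which may
    # restart the former active PAC as a new standby PAC.
    print(f"{request.target_pac} reports migration complete to CSP")


request = MigrationRequest("PAC 200", "PAC 300", reason="overheating")
if csp_should_migrate(request, pac_state="ready_for_migration"):
    rct_direct_migration(request, cpu_pairs=[(202, 302), (204, 304), (206, 306), (208, 308)])
```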
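For restartable tasks, the text above says the source sitCPU terminates each RT and messages the standby sitCPU, which starts a fresh instance that reacquires its state by communicating with other system components. A minimal sketch follows, assuming an in-memory queue stands in for the inter-PAC messaging (the real system presumably uses the chassis interconnect); the message fields are illustrative.

```python
from queue import Queue

messages_to_standby = Queue()


def source_sitcpu_migrate_rts(restartable_tasks) -> None:
    """Source-side sitCPU: terminate each RT and ask the standby sitCPU to restart it."""
    for task_name in restartable_tasks:
        # Termination of the local process is omitted here (e.g., a signal would be sent).
        messages_to_standby.put({"action": "restart", "task": task_name})


def standby_sitcpu_process_messages() -> list:
    """Standby-side sitCPU: start a fresh instance of each requested RT."""
    started = []
    while not messages_to_standby.empty():
        msg = messages_to_standby.get()
        if msg["action"] == "restart":
            started.append(msg["task"])  # the new instance reacquires state from peers
    return started


source_sitcpu_migrate_rts(["session_logger", "stats_collector"])
print(standby_sitcpu_process_messages())
```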
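For migratable tasks, the described sequence is: a pre-migration checkpoint via OS kernel calls that capture memory and CPU registers, transfer of the recorded state to the standby sitCPU, a de-checkpoint call that resumes the task from the saved state, and task-specific post-migration (reopening files, reestablishing connections). Real process checkpointing requires kernel support; the toy sketch below only serializes an application-level state object so the four phases are visible, and every name in it is an assumption rather than part of the patent.

```python
import pickle
from dataclasses import dataclass, field


@dataclass
class MigratableTaskState:
    counters: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)  # resources that must be reopened


def checkpoint(state: MigratableTaskState) -> bytes:
    """Pre-migration: capture the task's state (stand-in for the OS checkpoint call)."""
    return pickle.dumps(state)


def de_checkpoint(blob: bytes) -> MigratableTaskState:
    """On the standby PAC: restore the state (stand-in for the OS de-checkpoint call)."""
    return pickle.loads(blob)


def post_migration(state: MigratableTaskState) -> None:
    """Task-specific fix-up: reopen files and reestablish connections on the new PAC."""
    for path in state.open_files:
        print(f"reopening {path} on the standby PAC")


state = MigratableTaskState(counters={"packets": 42}, open_files=["/var/log/session.log"])
blob = checkpoint(state)          # on the active PAC
restored = de_checkpoint(blob)    # transmitted to and restored on the standby PAC
post_migration(restored)
```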

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Retry When Errors Occur (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention relates to methods and systems for migrating tasks (406, 410, 412, 414) between computing devices of a computing system, such as a communications computing system. For example, according to the invention, one computing device can take over services previously provided by another computing device without disrupting those services. In addition, according to various embodiments of the present invention, the migration can be performed efficiently (404, 406) by avoiding treating all tasks alike. More particularly, the tasks to be migrated may be separated into groups comprising critical tasks, restartable tasks, and migratable tasks, each of which is handled in a unique manner according to the invention.
PCT/US2005/013122 2004-09-07 2005-04-18 Migration de taches dans un systeme informatique WO2006028520A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05738099A EP1815333A4 (fr) 2004-09-07 2005-04-18 Migration de taches dans un systeme informatique

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US60817704P 2004-09-07 2004-09-07
US60817304P 2004-09-07 2004-09-07
US60/608,173 2004-09-07
US60/608,177 2004-09-07

Publications (1)

Publication Number Publication Date
WO2006028520A1 true WO2006028520A1 (fr) 2006-03-16

Family

ID=36036671

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2005/013126 WO2006028521A1 (fr) 2004-09-07 2005-04-18 Migration et verification de procede dans des systemes informatiques
PCT/US2005/013122 WO2006028520A1 (fr) 2004-09-07 2005-04-18 Migration de taches dans un systeme informatique

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2005/013126 WO2006028521A1 (fr) 2004-09-07 2005-04-18 Migration et verification de procede dans des systemes informatiques

Country Status (2)

Country Link
EP (2) EP1815333A4 (fr)
WO (2) WO2006028521A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2443277A (en) * 2006-10-24 2008-04-30 Advanced Risc Mach Ltd Performing diagnostic operations upon an asymmetric multiprocessor apparatus
WO2008081288A2 (fr) * 2006-12-29 2008-07-10 Nokia Corporation Transfert de l'accomplissement d'une tâche à un autre dispositif
GB2455915A (en) * 2007-12-27 2009-07-01 Intec Netcore Inc Providing service to an end user terminal
WO2011139963A3 (fr) * 2010-05-04 2012-04-05 Robert Bosch Gmbh Transfert d'état d'application et d'activité entre des dispositifs
CN116226894A (zh) * 2023-05-10 2023-06-06 杭州比智科技有限公司 一种基于元仓的数据安全治理系统及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6085086A (en) * 1995-08-22 2000-07-04 Lucent Technologies Inc. Network-based migrating user agent for personal communication services
US6415315B1 (en) * 1997-12-01 2002-07-02 Recursion Software, Inc. Method of moving objects in a computer network
US6442663B1 (en) * 1998-06-19 2002-08-27 Board Of Supervisors Of Louisiana University And Agricultural And Mechanical College Data collection and restoration for homogeneous or heterogeneous process migration
US6769121B1 (en) * 1999-01-22 2004-07-27 Nec Corporation Program execution device and process migrating method thereof and storage medium which stores process migration control program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6161219A (en) * 1997-07-03 2000-12-12 The University Of Iowa Research Foundation System and method for providing checkpointing with precompile directives and supporting software to produce checkpoints, independent of environment constraints
US5893912A (en) * 1997-08-13 1999-04-13 International Business Machines Corporation Thread context manager for relational databases, method and computer program product for implementing thread context management for relational databases
US6161193A (en) * 1998-03-18 2000-12-12 Lucent Technologies Inc. Methods and apparatus for process replication/recovery in a distributed system
US7080159B2 (en) 2000-12-15 2006-07-18 Ntt Docomo, Inc. Method and system for effecting migration of application among heterogeneous devices
US6912569B1 (en) * 2001-04-30 2005-06-28 Sun Microsystems, Inc. Method and apparatus for migration of managed application state for a Java based application

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6085086A (en) * 1995-08-22 2000-07-04 Lucent Technologies Inc. Network-based migrating user agent for personal communication services
US6415315B1 (en) * 1997-12-01 2002-07-02 Recursion Software, Inc. Method of moving objects in a computer network
US6442663B1 (en) * 1998-06-19 2002-08-27 Board Of Supervisors Of Louisiana University And Agricultural And Mechanical College Data collection and restoration for homogeneous or heterogeneous process migration
US6769121B1 (en) * 1999-01-22 2004-07-27 Nec Corporation Program execution device and process migrating method thereof and storage medium which stores process migration control program

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7809989B2 (en) 2006-10-24 2010-10-05 Arm Limited Performing diagnostic operations upon an asymmetric multiprocessor apparatus
GB2443277B (en) * 2006-10-24 2011-05-18 Advanced Risc Mach Ltd Performing diagnostics operations upon an asymmetric multiprocessor apparatus
GB2443277A (en) * 2006-10-24 2008-04-30 Advanced Risc Mach Ltd Performing diagnostic operations upon an asymmetric multiprocessor apparatus
WO2008081288A2 (fr) * 2006-12-29 2008-07-10 Nokia Corporation Transfert de l'accomplissement d'une tâche à un autre dispositif
WO2008081288A3 (fr) * 2006-12-29 2008-10-09 Nokia Corp Transfert de l'accomplissement d'une tâche à un autre dispositif
US8583090B2 (en) 2006-12-29 2013-11-12 Nokia Corporation Transferring task completion to another device
US8549063B2 (en) 2007-12-27 2013-10-01 Intec Inc. System and method for providing service
GB2455915A (en) * 2007-12-27 2009-07-01 Intec Netcore Inc Providing service to an end user terminal
US8239507B2 (en) 2007-12-27 2012-08-07 Intec Inc. System and method for providing service
GB2455915B (en) * 2007-12-27 2013-02-06 Intec Inc System and method for providing service
WO2011139963A3 (fr) * 2010-05-04 2012-04-05 Robert Bosch Gmbh Transfert d'état d'application et d'activité entre des dispositifs
US8494439B2 (en) 2010-05-04 2013-07-23 Robert Bosch Gmbh Application state and activity transfer between devices
CN116226894A (zh) * 2023-05-10 2023-06-06 杭州比智科技有限公司 一种基于元仓的数据安全治理系统及方法
CN116226894B (zh) * 2023-05-10 2023-08-04 杭州比智科技有限公司 一种基于元仓的数据安全治理系统及方法

Also Published As

Publication number Publication date
EP1815333A4 (fr) 2010-08-25
EP1815332A1 (fr) 2007-08-08
WO2006028521A1 (fr) 2006-03-16
EP1815332A4 (fr) 2009-07-15
EP1815333A1 (fr) 2007-08-08

Similar Documents

Publication Publication Date Title
US11627041B2 (en) Dynamic reconfiguration of resilient logical modules in a software defined server
US9904570B2 (en) Exchanging and adjusting memory pre-copy convergence times (MPCT) among a group of virtual in order to converge at a pre-copy convergence time window
US6199179B1 (en) Method and apparatus for failure recovery in a multi-processor computer system
US7774785B2 (en) Cluster code management
US7984108B2 (en) Computer system para-virtualization using a hypervisor that is implemented in a partition of the host system
US9519795B2 (en) Interconnect partition binding API, allocation and management of application-specific partitions
US9760408B2 (en) Distributed I/O operations performed in a continuous computing fabric environment
US6226734B1 (en) Method and apparatus for processor migration from different processor states in a multi-processor computer system
US6247109B1 (en) Dynamically assigning CPUs to different partitions each having an operation system instance in a shared memory space
US8473692B2 (en) Operating system image management
US7743372B2 (en) Dynamic cluster code updating in logical partitions
US20070061441A1 (en) Para-virtualized computer system with I/0 server partitions that map physical host hardware for access by guest partitions
US20070067366A1 (en) Scalable partition memory mapping system
US20050044301A1 (en) Method and apparatus for providing virtual computing services
US20050120160A1 (en) System and method for managing virtual servers
US20050251806A1 (en) Enhancement of real-time operating system functionality using a hypervisor
TW200817920A (en) Method, apparatus, and computer usable program code for migrating virtual adapters from source physical adapters to destination physical adapters
EP1815333A1 (fr) Migration de taches dans un systeme informatique
WO2023125482A1 (fr) Procédé et dispositif de gestion de grappes, et système informatique
Ouchi Technologies of ETERNUS VS900 storage virtualization switch
Awadallah et al. The vMatrix: Server Switching
Tripathy et al. On a Virtual Shared Memory Cluster System with VirtualMachines
Zarrabi Dynamic Transparent General Purpose Process Migration For Linux

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2005738099

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2005738099

Country of ref document: EP