WO2015132941A1 - Computer - Google Patents
Computer
- Publication number
- WO2015132941A1 (PCT/JP2014/055878)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- operating system
- cache memory
- virtual
- memory area
- data
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
- G06F11/1441—Saving, restoring, recovering or retrying at system level; Resetting or repowering
- G06F11/1451—Management of the data involved in backup or backup restore by selection of backup contents
- G06F11/1469—Backup restoration techniques
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/109—Address translation for multiple virtual address spaces, e.g. segmentation
- G06F12/16—Protection against loss of memory contents
- G06F9/46—Multiprogramming arrangements
- G06F2201/805—Real-time
- G06F2201/815—Virtual
- G06F2201/82—Solving problems relating to consistency
- G06F2212/1032—Reliability improvement, data loss prevention, degraded operation etc.
- G06F2212/283—Plural cache memories
- G06F2212/657—Virtual address space management
Definitions
- the present invention relates to a cache data backup technique and a restore technique in a computer to which a multi-OS technique and a virtual machine technique are applied.
- In a conventional computer system, a client device reads and writes data to and from a server device such as a file server or database server connected via a LAN built on Ethernet or the like, and the server device reads and writes data to and from a back-end storage apparatus connected via Fibre Channel or the like.
- a cache memory is generally mounted on the storage apparatus and each server apparatus from the viewpoint of improving system performance.
- the cache memory is typically arranged on a system memory composed of a volatile memory such as a DRAM.
- For example, a large-scale storage apparatus equipped with large-capacity HDDs carries a large-capacity cache memory on the order of several to several tens of gigabytes and uses it to respond to I/O access requests.
- In order to prevent the huge amount of data on the cache memory from being lost, such a device temporarily supplies power from a battery as a secondary power source when the power is unexpectedly shut down, and backs the data up to an NVRAM (nonvolatile memory) such as an SSD that can be accessed faster than the HDD. Further, in an apparatus composed of multiple processors, to suppress power consumption during battery operation, only one processor is kept powered and only that processor performs the cache memory backup operation.
- In recent years, in order to make effective use of the hardware resources of a physical computer, techniques for running two or more OSs (operating systems) on one physical computer have attracted attention (for example, Patent Document 1 and Patent Document 2).
- Patent Document 1 describes a multi-operating system computer that includes a first OS (operating system) that executes a plurality of tasks with priorities set in order of priority, and a second OS different from the first OS, the two OSs alternately operating as the operating OS. The computer includes an OS switching unit that switches the operating OS from the first OS to the second OS when, while the first OS is the operating OS, a predetermined task set among the plurality of tasks as a switching trigger task for identifying a switching trigger is executed.
- Patent Document 2 describes a multi-OS boot apparatus that boots at least two OSs, a first OS and a second OS, and that includes (1) a primary storage unit having a memory area, (2) a secondary storage unit that stores a second boot loader and the second OS, and (3) a loader execution unit. The loader execution unit runs a first boot loader under the first OS, which operates in a first context holding control information for the CPU; the first boot loader loads the second boot loader and the second OS from the secondary storage unit into a memory area of the primary storage unit outside the first memory space managed by the first OS, generates a context for the second boot loader, and switches from the first context to the generated context. The second boot loader is then executed, generates a context for the second OS, switches to that context, and starts the second OS under it.
- By installing a storage control OS as the first OS (first host OS), a virtual machine monitor (VMM) as the second OS (second host OS), and server control OSs as the OSs (guest OS1 to guest OSn) on a plurality of virtual machines (VM1 to VMn) running on the virtual machine monitor, space saving and cost reduction can be expected. In addition, I/O data reads and writes, which were conventionally performed via a SAN, are performed via shared memory between the first host OS and the guest OSs, so an improvement in I/O read/write performance can also be expected.
- Hereinafter, the storage control OS is referred to as the first host OS, the VMM as the second host OS, and the server control OSs as the guest OSs.
- However, the first host OS cannot grasp where the cache memory allocated to each guest OS is located in the physical address space. Therefore, in the prior art, the first host OS cannot back up the cache memory data of each guest OS to the NVRAM or the like.
- Moreover, after a restart, the first host OS needs to restore the data backed up in the NVRAM to the cache memory assigned to each guest OS.
- After the restart, however, the arrangement of the cache memory allocated to each guest OS in the physical address space may differ from the arrangement before the restart. For this reason, the first host OS cannot grasp the arrangement of the cache memory assigned to each guest OS and therefore cannot properly restore the backup data of each guest OS.
- It is an object of the present invention to provide a method by which one host OS backs up both the data stored in its own cache memory and the data stored in the cache memory assigned to each guest OS running on the same computer.
- It is a further object of the present invention to provide a method for properly restoring the backup data of each guest OS even when the arrangement of the cache memory allocated to each guest OS in the physical address space changes across a restart.
- A typical example of the invention disclosed in the present application is as follows: a computer that runs a plurality of operating systems, the computer including, as physical resources, a processor, a volatile memory connected to the processor, a non-volatile memory connected to the processor, and an I/O device connected to the processor.
- The plurality of operating systems includes a first operating system and a second operating system that generates a plurality of virtual machines. The first operating system runs on a first logical resource including a first logical processor obtained by logically dividing the processor, a first logical volatile memory obtained by logically dividing the volatile memory, and a first logical I/O device obtained by logically dividing the I/O device, and detects shut-off of the power to the computer. The second operating system runs on a second logical resource including a second logical processor, a second logical volatile memory, and a second logical I/O device obtained by logically dividing the processor, the volatile memory, and the I/O device, respectively. A third operating system runs on each of the plurality of virtual machines.
- When started, the first operating system secures a first cache memory area for temporarily storing data in the first logical volatile memory and generates first arrangement information indicating the position of the first cache memory area in the physical address space of the first logical volatile memory managed by the first operating system. The second operating system generates at least one virtual machine, starts the third operating system, and generates second arrangement information indicating the position of a second cache memory area in the physical address space. When power-off of the computer is detected, the first operating system reads the first data stored in the first cache memory area based on the first arrangement information and stores the first data in the non-volatile memory; it then obtains the second arrangement information, reads the second data stored in the second cache memory area from the second logical volatile memory based on the second arrangement information, and stores the second data in the non-volatile memory.
- According to the present invention, when power-off is detected, the first operating system can quickly back up its own cache data and the cache data of the third operating system without executing address translation processing or the like.
- FIG. 1 is a block diagram for explaining the outline of the present invention.
- the outline of the present invention will be described by taking a computer system including the physical computer 10 and the external storage apparatus 20 as an example.
- the physical computer 10 includes two host OSes, a first host OS 251 and a second host OS 252. As will be described later, each of the first host OS 251 and the second host OS 252 is assigned a resource (divided hardware resource) obtained by logically dividing hardware resources included in the physical computer 10. In FIG. 1, the first divided VRAM 221, the NVRAM 130, and the power supply unit 150 are allocated to the first host OS 251 as divided hardware resources.
- The first host OS 251 is a storage control OS that controls data read processing and data write processing for the external storage apparatus 20, and the second host OS 252 is a VMM (Virtual Machine Monitor) that controls a plurality of virtual machines.
- the VMM generates a plurality of VMs (Virtual Machines) and operates the guest OS 400 on the generated VMs.
- A part of the storage area of the first divided VRAM 221 allocated to the first host OS 251 is secured as the HOS_VCM (volatile cache memory for the host OS) 230.
- Each guest OS 400 is a server control OS such as a file server or a database server. A part of the storage area of the virtual VRAM 410 allocated to each guest OS 400 is secured as the GOS_VCM (volatile cache memory for the guest OS) 420.
- When the power supply unit 150 detects that the power supply is shut off, the first host OS 251 backs up the data stored in the HOS_VCM 230 to the HOS_NVCM (non-volatile cache memory for the host OS) 710 in the NVRAM 130. Further, the first host OS 251 identifies the addresses of the GOS_VCM 420 in the physical address space and backs up the data stored in the GOS_VCM 420 to the GOS_NVCM (non-volatile cache memory for the guest OS) 720.
- After a restart, the first host OS 251 restores the data stored in the HOS_NVCM 710 to the newly secured HOS_VCM 230.
- the first host OS 251 restores data stored in the GOS_NVCM 720 to the GOS_VCM 420 on the newly allocated virtual VRAM 410 in cooperation with the second host OS 252 and the guest OS 400.
- FIG. 2 is an explanatory diagram illustrating an example of a physical hardware configuration of the computer system according to the first embodiment of this invention.
- the computer system includes a physical computer 10 and an external storage device 20.
- the physical computer 10 is connected to the external storage apparatus 20 directly or via a network.
- As this network, a SAN built on FC (Fibre Channel) or the like is conceivable.
- the physical computer 10 may include a storage device inside the device.
- the physical computer 10 includes a processor 110, a VRAM (Volatile Random Access Memory) 120, an NVRAM (Non-Volatile Random Access Memory) 130, an I / O device 140, and a power supply unit 150.
- the processor 110 executes a program stored in the VRAM 120.
- the processor 110 has a plurality of CPU cores 111. Functions such as an OS are realized by the processor 110 executing the program.
- In the following description, when processing is described with a program as the subject, it means that the program is being executed by the processor 110.
- the VRAM 120 is a storage medium composed of volatile storage elements.
- the VRAM 120 stores a program executed by the processor 110 and information necessary for executing the program.
- the VRAM 120 includes a work area for each program.
- the NVRAM 130 is a storage medium composed of nonvolatile storage elements.
- the NVRAM 130 stores program codes and the like of various firmware that operates in the physical computer 10.
- the NVRAM 130 also includes a storage area for temporarily storing data when the power supply is cut off unexpectedly.
- the storage capacity of the NVRAM 130 is smaller than that of the external storage device, but can be accessed from the physical computer 10 at high speed.
- the NVRAM 130 may be a storage medium such as an SSD (Solid State Drive).
- the NVRAM 130 is used as a nonvolatile storage medium, but other nonvolatile storage media may be used.
- the I / O device 140 is a device for inputting information from outside and outputting information to the outside by connecting to an external device.
- the I / O device 140 may be, for example, NIC, FC HBA, or the like.
- FIG. 2 shows four NICs 141 and four FC HBAs 142 as the I / O devices 140.
- One NIC 141 or one FC HBA 142 corresponds to one I / O device 140.
- the power supply unit 150 controls the power supply of the physical computer 10.
- the power supply unit 150 includes a power cutoff detection unit 151 and a battery power source 152.
- the power cutoff detection unit 151 monitors unexpected power cutoff and controls to supply power from the battery power source 152 when the power cutoff is detected.
- the external storage device 20 stores an OS program, program codes of various applications operating on the OS, data handled by the application, and the like.
- the external storage apparatus 20 provides a storage area for temporarily saving data stored in the VRAM 120 when the OS supports page-in and page-out.
- the external storage device 20 includes a plurality of storage media.
- the storage medium may be a storage medium such as an HDD (Hard Disk Drive).
- the external storage device 20 includes four HDDs 190 as storage media.
- the physical computer 10 may be connected to a storage system including a plurality of storage devices and a controller.
- FIG. 3 is an explanatory diagram illustrating an example of a logical configuration of the computer system according to the first embodiment of this invention.
- Using the multi-OS technique, the hardware of the physical computer 10 is logically divided into three parts: a first divided H/W (hardware) 201, a second divided H/W (hardware) 202, and a shared H/W (hardware) 203.
- the first host OS 251 is activated on the first divided H / W 201 and the second host OS 252 is activated on the second divided H / W 202.
- the first divided H / W 201 is exclusively used by the first host OS 251
- the second divided H / W 202 is exclusively used by the second host OS 252.
- the first divided H / W 201 includes a first divided processor 211, a first divided VRAM 221, an NVRAM 130, a first divided I / O device 241, and a power supply unit 150.
- the first divided processor 211 is a logical processor to which two CPU cores 111 among the four CPU cores 111 included in the processor 110 are assigned.
- the first divided VRAM 221 is a logical VRAM to which a part of the storage area of the VRAM 120 is allocated. As shown in FIG. 2, a part of the storage area of the first divided VRAM 221 is secured as HOS_VCM 230.
- The CPU cores 111 assigned to the first divided processor 211 and the storage area assigned to the first divided VRAM 221 are controlled so that they are not used by the second host OS 252 running on the second divided H/W 202. For example, the storage area allocated to the first divided VRAM 221 is not mapped to the HVA (host virtual address) space of the second host OS 252.
- Some of the I/O devices 140 included in the physical computer 10 are assigned as the first divided I/O devices 241.
- The NVRAM 130 and the power supply unit 150 are allocated exclusively to the first divided H/W 201.
- the second divided H / W 202 includes a second divided processor 212, a second divided VRAM 222, and a second divided I / O device 242.
- the second divided processor 212 is a logical processor to which two CPU cores 111 among the four CPU cores 111 included in the processor 110 are assigned.
- the second divided VRAM 222 is a logical VRAM to which a part of the storage area of the VRAM 120 is allocated.
- some I / O devices 140 included in the physical computer 10 are allocated as the second divided I / O devices 242.
- the shared H / W 203 is hardware that can be used by both the first host OS 251 and the second host OS 252.
- the shared H / W 203 includes a shared VRAM 223 to which a part of the storage area of the VRAM 120 is allocated.
- the storage areas allocated to the shared VRAM 223 are mapped to the HVA space of the first host OS 251 and the HVA space of the second host OS 252, respectively. That is, each of the first host OS 251 and the second host OS 252 can access the shared VRAM 223.
- the first host OS 251 of this embodiment corresponds to the storage control OS.
- the storage control OS controls data read processing and data write processing for the external storage apparatus 20.
- the first host OS 251 has a function of managing the memory address space of a general OS, and manages the mapping between the HPA (host physical address) space of the first divided VRAM 221 and the HVA space.
- The HPA space of the first divided VRAM 221 is an address space indicating physical locations in the first divided VRAM 221 managed by the first host OS 251, and the HVA space is an address space indicating locations in the virtual memory that the first host OS 251 allocates to applications and the like.
- the first host OS 251 executes backup processing and restore processing between the VCM and the NVCM.
- backup processing and restore processing between VCM and NVCM are also simply referred to as backup processing and restore processing.
- the second host OS 252 of this embodiment corresponds to the VMM.
- the VMM generates a plurality of VMs 300 by allocating a part of the second divided H / W 202 using a virtual machine technology, and operates the guest OS 400 on each of the generated VMs 300.
- the VMM manages the mapping between the GPA (guest physical address) space of the virtual VRAM 410 allocated to the VM 300 and the HPA space of the second divided VRAM 222.
- The GPA space of the virtual VRAM 410 is an address space indicating physical locations in the virtual VRAM 410 as managed by the guest OS 400, and the HPA space of the second divided VRAM 222 is an address space indicating physical locations in the second divided VRAM 222 as managed by the second host OS 252.
- a virtual VRAM 410 is allocated to the VM 300 on which the guest OS 400 is running. Further, a virtual processor, a virtual I / O device, and the like (not shown) are allocated to the guest OS 400.
- the guest OS 400 recognizes the virtual VRAM 410 as a physical VRAM.
- the guest OS 400 has a function of managing a memory address space of a general OS, and manages mapping between the GPA space of the virtual VRAM 410 and the GVA (guest virtual address) space.
- the GVA space of the virtual VRAM 410 is an address space indicating the position of a virtual memory allocated to an application or the like by the guest OS 400.
- FIG. 4A is an explanatory diagram illustrating information stored in the VRAM 120 according to the first embodiment of this invention.
- FIG. 4B is an explanatory diagram illustrating information stored in the NVRAM 130 according to the first embodiment of this invention.
- the description will focus on information necessary for backup processing and restore processing.
- the physical address space of the VRAM 120 is logically divided into three storage areas: a storage area assigned to the first divided VRAM 221, a storage area assigned to the second divided VRAM 222, and a storage area assigned to the shared VRAM 223.
- the physical address space of the VRAM 120 is divided into three storage areas so that the addresses are continuous, but the division method is not limited to this.
- the first divided VRAM 221 is managed by the first host OS 251.
- the first host OS 251 reserves a partial storage area of the first divided VRAM 221 as the HOS_VCM 230.
- the first divided VRAM 221 stores HVA-HPA mapping information 240 and HOS_VCM HPA space arrangement information 250 as information necessary for backup processing and restoration processing.
- the HVA-HPA mapping information 240 is information for managing mapping between the HPA space managed by the first host OS 251 and the HVA space.
- the HVA-HPA mapping information 240 is generated by the first host OS 251.
- the HVA-HPA mapping information 240 generally corresponds to what is called a page table.
- HPA is a physical address of the first divided VRAM 221.
- the HVA is a virtual address used by the first host OS 251 and an application operating on the first host OS 251.
- the HOS_VCM HPA space arrangement information 250 is information indicating the arrangement of the HOS_VCM 230 in the HPA space managed by the first host OS 251.
- the HOS_VCM HPA space arrangement information 250 is generated by the first host OS 251.
- In general, a plurality of storage areas with discontinuous physical addresses are allocated to the HOS_VCM 230. Therefore, the HOS_VCM HPA space arrangement information 250 includes a plurality of entries, each associating the physical start address of one storage area allocated to the HOS_VCM 230 with its size.
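- As a rough illustration only (not the patent's actual data layout; all names here are invented), such arrangement information can be pictured as an ordered table of extents, each pairing a start address with a size:

```c
#include <stdint.h>

/* One entry of VCM arrangement information (hypothetical layout):
 * one physically contiguous block of the cache memory area. */
struct vcm_extent {
    uint64_t id;   /* entry ID, assigned in virtual-address order */
    uint64_t addr; /* start address of the block (an HPA or a GPA) */
    uint64_t size; /* size of the block in bytes */
};

/* Arrangement information: an ordered list of such extents. */
struct vcm_layout {
    uint32_t count;
    struct vcm_extent extent[128]; /* 128 is an arbitrary cap */
};
```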
- The second host OS 252 places itself in a partial storage area of the second divided VRAM 222. In addition, the second host OS 252 allocates a part of the second divided VRAM 222 to the VM 300 as the virtual VRAM 410.
- The second divided VRAM 222 also stores GPA-HPA mapping information 500 as information necessary for backup processing and restore processing.
- the GPA-HPA mapping information 500 is information for managing mapping between the GPA space managed by the guest OS 400 and the HPA space managed by the second host OS 252.
- the GPA-HPA mapping information 500 is generated by the second host OS 252.
- the GPA-HPA mapping information 500 corresponds to what is called an EPT (extended page table).
- GPA is a physical address of the virtual VRAM 410 that the VM 300 recognizes as a physical VRAM.
- the GPA-HPA mapping information 500 will be described later with reference to FIG.
- a part of the storage area of the virtual VRAM 410 is reserved as GOS_VCM 420. Also, the virtual VRAM 410 stores GVA-GPA mapping information 430 and GOS_VCM GPA space arrangement information 440 as information necessary for backup processing and restoration processing.
- the GVA-GPA mapping information 430 is information for managing mapping between the GVA space managed by the guest OS 400 and the GPA space.
- the GVA-GPA mapping information 430 is generated by the guest OS 400. Similar to the HVA-HPA mapping information 240, the GVA-GPA mapping information 430 is generally called a page table.
- GVA is a virtual address used by the guest OS 400 and applications operating on the guest OS 400.
- the GVA-GPA mapping information 430 will be described later with reference to FIG.
- GOS_VCM GPA space arrangement information 440 is information indicating the arrangement of GOS_VCM 420 in the GPA space managed by the guest OS 400.
- the GOS_VCM GPA space arrangement information 440 is generated by the guest OS 400.
- the GOS_VCM GPA space arrangement information 440 will be described later with reference to FIG.
- the shared VRAM 223 stores GOS_VCM HPA space arrangement information 600 as information necessary for backup processing and restoration processing.
- the GOS_VCM HPA space arrangement information 600 is information indicating the arrangement of the GOS_VCM 420 in the HPA space managed by the second host OS 252.
- the GOS_VCM HPA space layout information 600 is generated by the second host OS 252.
- the GOS_VCM HPA space arrangement information 600 will be described later with reference to FIG.
- a part of the storage area of the NVRAM 130 is secured as HOS_NVCM 710 and GOS_NVCM 720. Further, the NVRAM 130 stores the NVCM NVA management information 700 as information necessary for the backup process and the restore process.
- the NVCM NVA management information 700 is information for managing the address (NVA) of the NVRAM 130 in the storage area secured as the HOS_NVCM 710 and the GOS_NVCM 720.
- the NVCM NVA management information 700 will be described later with reference to FIG.
- FIG. 5 is an explanatory diagram illustrating an example of the GPA-HPA mapping information 500 according to the first embodiment of this invention.
- In the GPA-HPA mapping information 500, one entry is registered for each storage area (for example, a page) in the GPA space managed by the guest OS 400.
- the entries registered in the GPA-HPA mapping information 500 include GPA 501, HPA 502, and size 503.
- GPA 501 is the top address of one storage area in the GPA space.
- the HPA 502 is a head address of one storage area in the HPA space that is assigned to the storage area corresponding to the GPA 501.
- the size 503 is the size of one storage area corresponding to the GPA 501.
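- The lookup this table supports could look like the following sketch; a linear scan is used for clarity, whereas a real EPT is a hardware-walked radix tree, and all names are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the GPA-HPA mapping information (illustrative). */
struct gpa_hpa_entry {
    uint64_t gpa;  /* GPA 501: start of a guest-physical area */
    uint64_t hpa;  /* HPA 502: start of the backing host-physical area */
    uint64_t size; /* size 503 */
};

/* Translate one guest-physical address to a host-physical address.
 * Returns 0 on success, -1 if the GPA is unmapped. */
int gpa_to_hpa(const struct gpa_hpa_entry *map, size_t n,
               uint64_t gpa, uint64_t *hpa)
{
    for (size_t i = 0; i < n; i++) {
        if (gpa >= map[i].gpa && gpa < map[i].gpa + map[i].size) {
            *hpa = map[i].hpa + (gpa - map[i].gpa);
            return 0;
        }
    }
    return -1; /* not mapped */
}
```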
- FIG. 6 is an explanatory diagram illustrating an example of the GVA-GPA mapping information 430 according to the first embodiment of this invention.
- In the GVA-GPA mapping information 430, one entry is registered for each storage area (for example, a page) in the GVA space managed by the guest OS 400.
- the entries registered in the GVA-GPA mapping information 430 include GVA 431, GPA 432, and size 433.
- GVA 431 is the head address of one storage area in the GVA space.
- GPA 432 is the head address of one storage area in the GPA space that is assigned to the storage area corresponding to GVA 431.
- the size 433 is the size of one storage area corresponding to the GVA 431.
- FIG. 7 is an explanatory diagram illustrating an example of the GOS_VCM GPA space layout information 440 according to the first embodiment of this invention.
- In the GOS_VCM GPA space layout information 440, one entry is registered for each storage area group (for example, a block) constituting the GOS_VCM 420.
- the storage area group indicates a plurality of storage areas (pages) having consecutive addresses.
- the entry registered in the GOS_VCM GPA space layout information 440 includes an ID 441, a GPA 442, and a size 443.
- the ID 441 is an identifier for uniquely identifying an entry registered in the GOS_VCM GPA space layout information 440.
- the GPA 442 is a head address of one storage area group constituting the GOS_VCM 420 in the GPA space managed by the guest OS 400.
- the size 443 is the size of one storage area group corresponding to the GPA 442.
- FIG. 8 is an explanatory diagram illustrating an example of the GOS_VCM HPA space layout information 600 according to the first embodiment of this invention.
- In the GOS_VCM HPA space layout information 600, one entry is registered for each storage area group (for example, a block) constituting the GOS_VCM 420.
- the GOS_VCM HPA space layout information 600 includes ID 601, HPA 602, and size 603.
- the ID 601 is an identifier for uniquely identifying an entry registered in the GOS_VCM HPA space layout information 600.
- the HPA 602 is a head address of one storage area group constituting the GOS_VCM 420 in the HPA space managed by the VMM.
- the size 603 is the size of one storage area group corresponding to the HPA 602.
- FIG. 9 is an explanatory diagram illustrating an example of the NVCM NVA management information 700 according to the first embodiment of this invention.
- In the NVCM NVA management information 700, one entry is registered for the HOS_NVCM 710 and for each GOS_NVCM 720.
- the entry registered in the NVCM NVA management information 700 includes an OS_ID 701, an NVA 702, and a size 703.
- OS_ID 701 is an identifier for uniquely identifying an entry registered in the NVCM NVA management information 700.
- NVA 702 is the start address of the storage area corresponding to HOS_NVCM 710 or GOS_NVCM 720.
- the size 703 is the size of the storage area corresponding to the NVA 702.
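- Illustratively (invented names, not the patent's actual layout), an entry of this table could be represented as:

```c
#include <stdint.h>

/* One entry of the NVCM NVA management information 700 (sketch):
 * records where in the NVRAM 130 the NVCM of one OS begins. */
struct nvcm_entry {
    uint32_t os_id; /* OS_ID 701: the host OS or one guest OS */
    uint64_t nva;   /* NVA 702: start of the HOS_NVCM or GOS_NVCM */
    uint64_t size;  /* size 703: size of that NVCM storage area */
};
```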
- FIG. 10 is a flowchart for explaining the outline of the startup process of the computer system according to the first embodiment of this invention.
- When the physical computer 10 is powered on, the first host OS 251 is first started (step S100). Specifically, the following processing is executed.
- After the power is turned on, one CPU core 111 reads the boot loader from the external storage device 20 or the like and loads it into the first divided VRAM 221. The CPU core 111 then executes the loaded boot loader.
- the boot loader sets the shared VRAM 223 in the VRAM 120 based on preset resource definition information (not shown), and writes the resource definition information in the shared VRAM 223.
- the boot loader writes the image of the first host OS 251 in the first divided VRAM 221 based on the resource definition information, and starts the first host OS 251.
- The first host OS 251 is activated using the first divided H/W 201 based on the resource definition information. Details of the startup process of the first host OS 251 will be described later with reference to FIG. 11.
- The startup method of the first host OS 251 is not limited to this; another known method may be used.
- the above is the description of the processing in step S100.
- Next, the first host OS 251 determines whether the current startup is a restart after a power shut-off (step S101).
- For example, a method is conceivable in which, when the power supply unit 150 detects power-off, information indicating that power-off has been detected is set in the NVRAM 130.
- In this case, the first host OS 251 determines whether that information is set in the NVRAM 130, as sketched below.
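- A minimal sketch of this flag-based check, assuming a memory-mapped NVRAM word reserved for the flag (the variable names and magic value are invented for illustration):

```c
#include <stdint.h>
#include <stdbool.h>

#define POWEROFF_MAGIC 0x504F4646u /* "POFF"; invented marker value */

/* Word in the NVRAM 130 reserved for the power-off flag (assumed
 * to be mapped by firmware; the mapping itself is not shown). */
static volatile uint32_t *poweroff_flag;

/* Called when the power cutoff detection unit 151 reports power-off. */
void mark_unexpected_poweroff(void)
{
    *poweroff_flag = POWEROFF_MAGIC;
}

/* Called at startup (step S101): was this a restart after power-off? */
bool is_restart_after_poweroff(void)
{
    bool restart = (*poweroff_flag == POWEROFF_MAGIC);
    if (restart)
        *poweroff_flag = 0; /* clear so the next boot is treated normally */
    return restart;
}
```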
- the determination method described above is an example, and the present invention is not limited to this.
- Alternatively, when backup data is stored in the NVRAM 130, the first host OS 251 may determine that the current startup is a restart after power-off.
- If it is not a restart after power-off, the first host OS 251 starts the startup process of the second host OS 252 (step S102).
- the first host OS 251 writes the image of the second host OS 252 in the second divided VRAM 222 based on the resource definition information and starts it.
- The second host OS 252 is activated using the second divided H/W 202 based on the resource definition information. Details of the startup process of the second host OS 252 will be described later with reference to FIG. 12.
- Next, the second host OS 252 starts the startup process of the guest OS 400 (step S103). Details of the startup process of the guest OS 400 will be described later with reference to FIG. 13.
- the first host OS 251 executes a normal operation and starts a power supply monitoring process.
- the first host OS 251 determines whether or not power-off is detected (step S109).
- When power shut-off is not detected, the first host OS 251 continues the power monitoring process. On the other hand, when power-off is detected, the first host OS 251 starts the backup process (step S110) and, after the backup process is completed, stops the power supply to the physical computer 10. Details of the backup process will be described later with reference to FIG. 15.
- When it is determined in step S101 that the current startup is a restart after power-off, the first host OS 251 executes the restore process of the first host OS 251 (step S105).
- the first host OS 251 restores the data stored in the HOS_NVCM 710 to the first divided VRAM 221 based on the NVCM NVA management information 700 and the newly generated HOS_VCM HPA space layout information 250.
- The restore process of the first host OS 251 will be described later with reference to FIG. 16.
- Next, the first host OS 251 starts the startup process of the second host OS 252 (step S106).
- the process in step S106 is the same as the process in step S102.
- Next, the second host OS 252 executes the startup process of the guest OS 400 (step S107).
- the process in step S107 is the same as the process in step S103.
- The guest OS 400 detects that it has been restarted and executes its restore process in cooperation with the first host OS 251 and the second host OS 252 (step S108). After the restore process of the guest OS 400 is completed, the first host OS 251 performs normal operation, starts the power supply monitoring process, and determines whether power-off is detected (step S109). Since the processing after step S109 has already been described, a description thereof is omitted.
- the data stored in the GOS_NVCM 720 is restored in the virtual VRAM 410.
- the restoration process of the guest OS 400 will be described later with reference to FIGS. 17A, 17B, and 17C.
- the guest OS 400 accesses the GOS_NVCM 720 and determines whether data is stored in the GOS_NVCM 720. When data is stored in GOS_NVCM 720, the guest OS 400 determines that the guest OS 400 has been restarted.
- Alternatively, the guest OS 400 may record a log identifying the cause of the shutdown and, after startup, determine based on that log whether it has been restarted.
- A method in which the second host OS 252, after starting the guest OS 400 in the guest OS startup process, notifies the guest OS 400 that it has been restarted is also conceivable.
- the determination method described above is an example, and the present invention is not limited to this.
- FIG. 11 is a flowchart illustrating details of the startup process of the first host OS 251 according to the first embodiment of this invention.
- the first host OS 251 executes the startup process of the first host OS 251 using the first divided processor 211.
- the first host OS 251 generates HVA-HPA mapping information 240 (step S200). Since the generation method of the HVA-HPA mapping information 240 is a known method, the description thereof is omitted.
- the first host OS 251 reserves a storage area for HOS_VCM on the first divided VRAM 221 (step S201). Specifically, the first host OS 251 reserves a storage area where addresses in the HVA space are continuous as a storage area for HOS_VCM.
- the first host OS 251 determines whether page-in and page-out are valid (step S202).
- If it is determined that page-in and page-out are not valid, the first host OS 251 proceeds to step S204.
- If it is determined that page-in and page-out are valid, the first host OS 251 pins the storage area reserved for the HOS_VCM so that the data stored there is not evicted to a swap file (step S203).
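- The pinning in step S203 plays the same role as page locking in a general-purpose OS. As a loose analogy only (the host OS here is not assumed to be POSIX), a user-space equivalent would be:

```c
#include <sys/mman.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t len = 64UL * 1024 * 1024; /* cache area size; arbitrary */
    void *vcm = malloc(len);         /* virtually contiguous area */
    if (vcm == NULL)
        return 1;
    /* Lock the area in RAM so it is never paged out to a swap
     * file, mirroring what step S203 does for the HOS_VCM. */
    if (mlock(vcm, len) != 0) {
        perror("mlock");
        return 1;
    }
    /* ... use the area as a cache memory ... */
    munlock(vcm, len);
    free(vcm);
    return 0;
}
```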
- the first host OS 251 generates the HOS_VCM HPA space arrangement information 250 (step S204) and ends the process. Specifically, the following processing is executed.
- the first host OS 251 refers to the HVA-HPA mapping information 240 based on the HVA of the storage area reserved for the HOS_VCM, and sets a plurality of storage area groups in the HPA space constituting the storage area reserved for the HOS_VCM. Identify.
- Next, the first host OS 251 adds, to the HOS_VCM HPA space layout information 250, an entry corresponding to each identified storage area group, and sets the start address (HPA) and size of each storage area group in that entry.
- the first host OS 251 rearranges the entries of the HOS_VCM HPA space layout information 250 according to the order of the HVA of each of the plurality of storage area groups. For example, the first host OS 251 rearranges the entries in ascending order or descending order of HVA.
- Next, the first host OS 251 assigns IDs to the entries of the HOS_VCM HPA space layout information 250 in ascending order from the top entry. As a result, information having the same structure as the GOS_VCM HPA space arrangement information 600 is generated. The above is the description of the process in step S204.
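- Pulling step S204 together, the arrangement information can be derived by walking the virtually contiguous VCM area page by page and coalescing physically contiguous pages into extents; since the walk proceeds in virtual-address order, the entries come out already ordered by HVA. This is a sketch under assumed types and an assumed 4 KiB page size; va_to_pa() stands in for a lookup of the HVA-HPA mapping information 240:

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096u /* assumed page granularity */

struct vcm_extent { uint64_t id, addr, size; }; /* as sketched above */

/* Page-table lookup, e.g. via the HVA-HPA mapping information 240;
 * assumed to be provided by the host OS. */
extern uint64_t va_to_pa(uint64_t va);

/* Walk the virtually contiguous VCM area [va, va+len) in address order
 * and emit one extent per physically contiguous run of pages (step
 * S204). Returns the number of extents written to out[]. */
size_t build_vcm_layout(uint64_t va, uint64_t len,
                        struct vcm_extent *out, size_t max)
{
    size_t n = 0;
    for (uint64_t off = 0; off < len; off += PAGE_SIZE) {
        uint64_t pa = va_to_pa(va + off);
        if (n > 0 && out[n - 1].addr + out[n - 1].size == pa) {
            out[n - 1].size += PAGE_SIZE; /* extend the current run */
        } else {
            if (n == max)
                break;     /* table full */
            out[n].id = n; /* IDs in ascending virtual-address order */
            out[n].addr = pa;
            out[n].size = PAGE_SIZE;
            n++;
        }
    }
    return n;
}
```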
- FIG. 12 is a flowchart for explaining the details of the startup process of the second host OS 252 according to the first embodiment of this invention.
- the second host OS 252 executes the startup process of the second host OS 252 using the second divided processor 212.
- the second host OS 252 first executes an initialization process (step S300). Since the VMM initialization process is a known process, it is omitted here.
- the second host OS 252 generates the VM 300 after the initialization process is completed (step S301). Since the VM generation process is a known process, a description thereof is omitted here.
- the second host OS 252 generates GPA-HPA mapping information 500 corresponding to the generated VM 300 (step S302). Since the generation process of the GPA-HPA mapping information 500 is a known process, description thereof is omitted. Note that the second host OS 252 manages the GPA-HPA mapping information 500 in association with the identification information of the VM 300.
- Next, the second host OS 252 activates the VM 300 (step S303). The activated VM 300 then starts the startup process of the guest OS 400, which will be described later with reference to FIG. 13.
- the second host OS 252 determines whether a notification indicating the storage location of the GOS_VCM GPA space layout information 440 has been received from the guest OS 400 (step S304).
- the second host OS 252 continues to wait until the notification is received from the guest OS 400.
- When it is determined that the notification has been received from the guest OS 400, the second host OS 252 generates the GOS_VCM HPA space arrangement information 600 corresponding to the VM 300 (step S305). Specifically, the following processing is executed.
- the second host OS 252 refers to the GPA-HPA mapping information 500 based on the GPA included in the notification received from the guest OS 400, and reads the GOS_VCM GPA space allocation information 440 stored in the virtual VRAM 410.
- the second host OS 252 selects one entry from the GOS_VCM GPA space layout information 440. Here, it is assumed that entries are selected in ascending order of ID 441.
- the second host OS 252 refers to the GPA-HPA mapping information 500 based on the GPA 442 of the selected entry, and searches for an entry in which the GPA 501 matches the GPA 442. That is, the address of the storage area constituting the GOS_VCM 420 in the HPA space managed by the second host OS 252 is specified.
- Next, the second host OS 252 generates an entry in the GOS_VCM HPA space layout information 600 and sets identification numbers in the ID 601 in ascending order. Further, the second host OS 252 sets the address stored in the HPA 502 of the found entry in the HPA 602 of the generated entry, and sets the value stored in the size 443 of the selected entry in the size 603.
- the second host OS 252 transmits a response to the notification from the guest OS 400 to the guest OS 400 (step S306), and ends the process.
- The processing from step S301 to step S305 is executed repeatedly for each guest OS 400. A sketch of the address translation performed in step S305 follows.
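- This sketch shows how step S305 could translate the guest's GPA-ordered entries into HPA entries while preserving entry order; for brevity it assumes each GPA block lies within a single mapping entry, although, as FIG. 14 shows, one GPA block may in fact map to several discontinuous HPA blocks. Types and names follow the earlier sketches:

```c
#include <stdint.h>
#include <stddef.h>

struct vcm_extent { uint64_t id, addr, size; };
struct gpa_hpa_entry { uint64_t gpa, hpa, size; };

/* gpa_to_hpa() as sketched earlier (GPA-HPA mapping information 500). */
extern int gpa_to_hpa(const struct gpa_hpa_entry *map, size_t n,
                      uint64_t gpa, uint64_t *hpa);

/* Step S305 (sketch): translate the GOS_VCM GPA space layout
 * information 440 into the GOS_VCM HPA space layout information 600,
 * preserving entry order so that it still follows the guest's GVA
 * order. Returns the number of HPA entries produced. */
size_t gpa_layout_to_hpa_layout(const struct vcm_extent *gpa_l,
                                size_t n_gpa,
                                const struct gpa_hpa_entry *map,
                                size_t n_map,
                                struct vcm_extent *hpa_l)
{
    size_t n = 0;
    for (size_t i = 0; i < n_gpa; i++) {
        uint64_t hpa;
        if (gpa_to_hpa(map, n_map, gpa_l[i].addr, &hpa) != 0)
            continue; /* unmapped block: skipped in this sketch */
        hpa_l[n].id = n;
        hpa_l[n].addr = hpa;
        hpa_l[n].size = gpa_l[i].size;
        n++;
    }
    return n;
}
```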
- FIG. 13 is a flowchart for explaining an example of the boot process of the guest OS 400 according to the first embodiment of the present invention.
- the guest OS 400 generates GVA-GPA mapping information 430 (step S400). Since the generation method of the GVA-GPA mapping information 430 is a known method, description thereof is omitted.
- the guest OS 400 reserves a storage area for GOS_VCM on the virtual VRAM 410 (step S401). Specifically, the guest OS 400 reserves a storage area where addresses in the GVA space are continuous as a storage area for GOS_VCM.
- the guest OS 400 determines whether page-in and page-out are valid (step S402).
- If it is determined that page-in and page-out are not valid, the guest OS 400 proceeds to step S404.
- If it is determined that page-in and page-out are valid, the guest OS 400 pins the storage area reserved for the GOS_VCM 420 so that the data stored there is not evicted to a swap file (step S403).
- the guest OS 400 generates GOS_VCM GPA space layout information 440 (step S404), and sends a notification indicating the storage location of the GOS_VCM GPA space layout information 440 to the second host OS 252 (step S405). Specifically, the following processing is executed.
- the guest OS 400 refers to the GVA-GPA mapping information 430 based on the GVA of the storage area reserved for GOS_VCM, and identifies a plurality of storage area groups in the GPA space constituting the storage area reserved for GOS_VCM.
- the guest OS 400 adds an entry corresponding to each storage area group specified in the GOS_VCM GPA space layout information 440.
- the guest OS 400 also sets the start address (GPA) and size of each storage area group in the GPA 442 and size 443 of each entry.
- Next, the guest OS 400 rearranges the entries of the GOS_VCM GPA space layout information 440 according to the order of the GVAs of the storage area groups; for example, the guest OS 400 rearranges the entries in ascending or descending order of GVA.
- Next, the guest OS 400 assigns IDs to the entries in ascending order from the top entry of the GOS_VCM GPA space layout information 440. As a result, the GOS_VCM GPA space layout information 440 is completed. The guest OS 400 then notifies the second host OS 252 of the address (GPA) at which the GOS_VCM GPA space layout information 440 is stored.
- the guest OS 400 determines whether a response is received from the second host OS 252 (step S406).
- If it is determined that a response has not been received from the second host OS 252, the guest OS 400 continues to wait until a response is received. When it is determined that a response has been received, the guest OS 400 ends the process and thereafter executes normal processing.
- FIG. 14 is an explanatory diagram illustrating an example of an arrangement in the GVA space, an arrangement in the GPA space, and an arrangement in the HPA space of the GOS_VCM 420 according to the first embodiment of this invention.
- Each time it starts, the first host OS 251 generates a mapping between the HPA space and the HVA space and secures, as the storage area to be allocated to the HOS_VCM 230, a storage area whose addresses in the HVA space are consecutive. Likewise, each time the guest OS 400 starts, it generates a mapping between the GPA space and the GVA space and secures, as the storage area to be allocated to the GOS_VCM 420, a storage area whose addresses in the GVA space are consecutive.
- The mapping relationship between the HPA space and the HVA space and the mapping relationship between the GPA space and the GVA space are likely to change every time the physical computer 10 is started. Therefore, the storage area of the HPA space allocated to the HOS_VCM 230 and the storage area of the GPA space allocated to the GOS_VCM 420 may change.
- In order to back up the data stored in the HOS_VCM 230 and the GOS_VCM 420 at high speed, the first host OS 251 needs to know the arrangement of the HOS_VCM 230 in the HPA space managed by the first host OS 251 and the arrangement of the GOS_VCM 420 in the HPA space managed by the second host OS 252.
- However, these arrangements may differ each time the physical computer 10 is started.
- Without arrangement information prepared in advance, when the first host OS 251 detects a power shut-off and starts the backup process, it would have to refer to the HVA-HPA mapping information 240 to determine the arrangement of the HOS_VCM 230 in its HPA space, and it would have to communicate with the second host OS 252 to determine the arrangement of the GOS_VCM 420 in the HPA space managed by the second host OS 252.
- the first host OS 251 generates HOS_VCM HPA space arrangement information 250 indicating the arrangement of the HOS_VCM 230 in the HPA space managed by the first host OS 251 at the time of activation.
- the second host OS 252 generates GOS_VCM HPA space arrangement information 600 indicating the arrangement of the GOS_VCM 420 in the HPA space managed by the second host OS 252 in cooperation with the guest OS 400 at the time of activation.
- Since the first host OS 251 can thus easily and quickly identify the storage areas holding the data to be backed up, the backup process can be completed in a short time with reduced power consumption.
- The GOS_VCM 420 shown in FIG. 14 is mapped to three storage area groups, block 1 (801), block 2 (802), and block 3 (803), in the GPA space managed by the guest OS 400. Block 1 (801) of that GPA space is mapped to block 1 (901) of the HPA space managed by the second host OS 252, block 2 (802) is mapped to block 2 (902) and block 3 (903) of that HPA space, and block 3 (803) is mapped to block 4 (904) of that HPA space. Thus, storage area groups whose addresses in the HPA space managed by the second host OS 252 are discontinuous are allocated to the GOS_VCM 420.
- In this case, the guest OS 400 generates GOS_VCM GPA space layout information 440 in which the entries for the three blocks of the GPA space are arranged in the order in which those blocks appear in the continuous GVA space (step S404). Further, the second host OS 252 generates GOS_VCM HPA space arrangement information 600 in which the entries for the four blocks of the HPA space are arranged in that same GVA order.
- Therefore, by reading data in the order of the entries in the GOS_VCM HPA space arrangement information 600 and writing it to the GOS_NVCM 720, the first host OS 251 stores the data in the GOS_NVCM 720 as a continuous image identical to the GOS_VCM 420 as seen in the GVA space managed by the guest OS 400.
- the HOS_VCM HPA space layout information 250 has the same technical characteristics.
- FIG. 15 is a flowchart illustrating an example of backup processing according to the first embodiment of this invention.
- the first host OS 251 acquires the address (NVA) of the HOS_NVCM 710 with reference to the NVCM NVA management information 700 when the power-off is detected (step S500). That is, the first host OS 251 specifies the head address of the HOS_NVCM 710 that stores the data stored in the HOS_VCM 230.
- the first host OS 251 refers to the HOS_VCM HPA space layout information 250 stored in the first divided VRAM 221 and backs up the data stored in the HOS_VCM 230 to the HOS_NVCM 710 (step S501). Specifically, the following processing is executed.
- the first host OS 251 selects the entries of the HOS_VCM HPA space layout information 250 in order from the top.
- Next, the first host OS 251 determines the address in the HOS_NVCM 710 at which the data corresponding to the selected entry is to be written. For the first entry, the NVA acquired in step S500 is used as the write address; for subsequent entries, the write address is determined from the previously determined address and the size of the previously written data.
- Next, the first host OS 251 reads data from the HOS_VCM 230 and stores it in the HOS_NVCM 710. The data is written from the HOS_VCM 230 to the HOS_NVCM 710 using DMA transfer, which uses the HPA stored in the selected entry and the determined address. The above is the description of the processing in step S501, a sketch of which follows.
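- A sketch of the backup loop of step S501, under the same assumed types; memcpy() stands in for the DMA transfer, and hpa_ptr()/nva_ptr() are hypothetical helpers that make a host-physical or NVRAM address CPU-addressable:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

struct vcm_extent { uint64_t id, addr, size; }; /* as before */

void *hpa_ptr(uint64_t hpa); /* hypothetical address-to-pointer helpers; */
void *nva_ptr(uint64_t nva); /* a real system would program a DMA engine */

/* Step S501 (sketch): copy the VCM to the NVCM, extent by extent, in
 * entry order. Because the entries are ordered by virtual address,
 * the NVCM receives one continuous image of the cache memory.
 * Returns the number of bytes written. */
uint64_t backup_vcm(const struct vcm_extent *layout, size_t n,
                    uint64_t nvcm_base)
{
    uint64_t nva = nvcm_base; /* first write goes to the NVA from S500 */
    for (size_t i = 0; i < n; i++) {
        /* next address = previous address + previously written size */
        memcpy(nva_ptr(nva), hpa_ptr(layout[i].addr), layout[i].size);
        nva += layout[i].size;
    }
    return nva - nvcm_base;
}
```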
- the first host OS 251 refers to the NVCM NVA management information 700 and selects the guest OS 400 to be processed (step S502).
- the first host OS 251 selects the target guest OS 400 in order from the entry on the NVCM NVA management information 700. At this time, the first host OS 251 acquires the NVA 702 of the selected entry. Also, the first host OS 251 acquires the GOS_VCM HPA space arrangement information 600 corresponding to the guest OS 400 to be processed from the shared VRAM 223 based on the OS_ID 701 of the selected entry.
- the first host OS 251 backs up the data stored in the selected GOS_VCM 420 to the GOS_NVCM 720 (step S503). Specifically, the following processing is executed.
- the first host OS 251 selects one entry of the acquired GOS_VCM HPA space arrangement information 600. Here, the entries are selected in order from the top entry.
- the first host OS 251 determines the address of the GOS_NVCM 720 to which data corresponding to the HPA 602 of the selected entry is written.
- the address determination method may be the same method as in step S501.
- the first host OS 251 reads data from the GOS_VCM 420 based on the HPA 602 and the size 603 of the selected entry, and stores the read data in the GOS_NVCM 720. At this time, data is written from GOS_VCM 420 to GOS_NVCM 720 using DMA transfer. In the DMA transfer, the HPA 602 and the size 603 stored in the selected entry and the determined address are used.
- Since the GOS_VCM 420 is included in the second divided VRAM 222, the first host OS 251 normally does not access it. However, when executing the backup process and the restore process, the first host OS 251 is temporarily allowed to access the second divided VRAM 222. As an alternative, a method in which the first host OS 251 issues a read command to the second host OS 252 is also conceivable.
- the first host OS 251 stores the data stored in the GOS_VCM 420 in the order of entries of the GOS_VCM HPA space arrangement information 600 in the GOS_NVCM 720. Therefore, as shown in FIG. 14, the data stored in the GOS_NVCM 720 is continuous image data like the GOS_VCM 420. The above is the description of the processing in step S503.
- the first host OS 251 determines whether or not the processing has been completed for all the guest OSs 400 (step S504). That is, it is determined whether or not the processing has been completed for all the guest OS 400 entries registered in the NVCM NVA management information 700.
- If processing has not been completed for all guest OSs 400, the first host OS 251 returns to step S502 and executes the same processing for the next guest OS 400.
- If processing has been completed for all guest OSs 400, the first host OS 251 ends the processing.
- FIG. 16 is a flowchart for explaining the restore processing of the first host OS 251 according to the first embodiment of this invention.
- the first host OS 251 starts the restore process after the startup process of the first host OS 251 is completed.
- the first host OS 251 refers to the NVCM NVA management information 700 and acquires the address (NVA) of the HOS_NVCM 710 (step S600). That is, the first host OS 251 specifies the head address of the HOS_NVCM 710 from which backup data is read.
- the first host OS 251 refers to the HOS_VCM HPA space layout information 250 stored in the first divided VRAM 221, restores the data stored in the HOS_NVCM 710 to the HOS_VCM 230 (step S601), and ends the process. Specifically, the following processing is executed.
- the first host OS 251 selects the entries of the HOS_VCM HPA space layout information 250 in order from the top. The first host OS 251 acquires the size of the selected entry. The first host OS 251 determines an address for reading data from the HOS_NVCM 710.
- for the first entry, the NVA acquired in step S600 is determined as the address of the HOS_NVCM 710 from which data is read.
- for the second and subsequent entries, the first host OS 251 determines the read address of the HOS_NVCM 710 from the previously determined address and the size of the previously read data.
- the first host OS 251 reads data corresponding to the size of the selected entry from the HOS_NVCM 710 and stores the data in the HOS_VCM 230. At this time, data is written from the HOS_NVCM 710 to the HOS_VCM 230 using DMA transfer. In the DMA transfer, the HPA and size stored in the selected entry and the determined address are used. The above is the description of the process in step S601.
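- a minimal sketch of this restore loop and its read-address rule (the first entry is read at the NVA, each subsequent entry at the previous address plus the previously read size); buffers, entries, and addresses are hypothetical:

```python
# Sketch: restore HOS_VCM from HOS_NVCM. The read address for the first
# entry is the NVA; each later entry is read at (previous address +
# previously read size). All values are invented for illustration.
nonvolatile_mem = bytearray(b"........AAAABBBBBB......")  # backup image at NVA = 8
volatile_mem = bytearray(16)

# Newly generated layout information: (HPA, size) per entry.
# The post-restart layout may differ from the pre-power-off one.
entries = [(1, 4), (10, 6)]
NVA = 8

read_addr = NVA
for hpa, size in entries:
    volatile_mem[hpa:hpa + size] = nonvolatile_mem[read_addr:read_addr + size]
    read_addr += size  # address rule: previous address + previously read size

print(volatile_mem)  # positions 1-4 hold AAAA, 10-15 hold BBBBBB; other bytes stay zero
```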
- the HOS_VCM HPA space layout information 250 referred to in step S601 is newly generated after the restart; therefore, its contents differ from the HOS_VCM HPA space layout information 250 referred to in the backup process.
- FIG. 17A, FIG. 17B, and FIG. 17C are flowcharts for explaining the restore processing of the guest OS 400 according to the first embodiment of this invention.
- FIG. 18 is an explanatory diagram illustrating an example of an arrangement in the GVA space, an arrangement in the GPA space, and an arrangement in the HPA space of the GOS_VCM 420 according to the first embodiment of this invention.
- the guest OS 400 transmits a data restore request for the GOS_VCM 420 to the second host OS 252 after the activation process of the guest OS 400 is completed (step S700). Thereafter, the guest OS 400 determines whether or not a response has been received from the second host OS 252 (step S701).
- if no response has been received, the guest OS 400 continues to wait until a response is received from the second host OS 252. When it is determined that a response has been received from the second host OS 252, the guest OS 400 ends the process.
- when the second host OS 252 receives a data restore request for the GOS_VCM 420 from the guest OS 400 (step S800), it transfers the restore request to the first host OS 251 (step S801).
- the restore request includes the guest OS 400 identifier.
- the second host OS 252 determines whether a response has been received from the first host OS 251 (step S802). If it is determined that a response has not been received from the first host OS 251, the second host OS 252 continues to wait until a response is received from the first host OS 251.
- when a response is received from the first host OS 251, the second host OS 252 transmits a response to the guest OS 400 (step S803) and ends the process.
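- purely as an illustration of this request and response chain, the sketch below models the three operating system roles as objects and the inter-OS messages as synchronous method calls; a real embodiment would use shared memory or inter-OS communication channels, and all class and method names are invented:

```python
# Sketch: the restore request path guest OS -> second host OS -> first
# host OS, and the response path back (steps S700-S701, S800-S803,
# S900-S903). Message passing is modeled as synchronous method calls.

class FirstHostOS:
    def handle_restore_request(self, guest_id: str) -> str:
        # S900-S902: look up the NVA for this guest and restore the
        # GOS_NVCM data to the GOS_VCM (elided); then respond (S903).
        print(f"first host OS: restoring GOS_VCM of {guest_id}")
        return "done"

class SecondHostOS:
    def __init__(self, first_host: FirstHostOS):
        self.first_host = first_host

    def handle_restore_request(self, guest_id: str) -> str:
        # S800-S801: forward the request, including the guest identifier.
        response = self.first_host.handle_restore_request(guest_id)
        # S802-S803: once the first host OS responds, answer the guest.
        return response

class GuestOS:
    def __init__(self, guest_id: str, second_host: SecondHostOS):
        self.guest_id = guest_id
        self.second_host = second_host

    def boot(self):
        # S700-S701: after startup, request a restore and wait for the reply.
        response = self.second_host.handle_restore_request(self.guest_id)
        print(f"{self.guest_id}: restore response = {response}")

GuestOS("guest-400-1", SecondHostOS(FirstHostOS())).boot()
```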
- when the first host OS 251 receives a data restore request for the GOS_VCM 420 from the second host OS 252 (step S900), it refers to the NVCM NVA management information 700 based on the identifier of the guest OS 400 included in the restore request, and acquires the address (NVA) of the GOS_NVCM 720 corresponding to that guest OS 400 (step S901).
- the first host OS 251 refers to the GOS_VCM HPA space allocation information 600 stored in the shared VRAM 223, restores the data stored in the GOS_NVCM 720 to the GOS_VCM 420 (step S902), then transmits a response to the second host OS 252 (step S903), and ends the process. Specifically, the following processing is executed.
- the first host OS 251 selects entries of the GOS_VCM HPA space layout information 600 in order from the top.
- the first host OS 251 acquires the size 703 of the selected entry.
- the first host OS 251 determines an address for reading data from the GOS_NVCM 720.
- for the first entry, the NVA acquired in step S901 is determined as the address of the GOS_NVCM 720 from which data is read.
- for the second and subsequent entries, the first host OS 251 determines the read address of the GOS_NVCM 720 from the previously determined address and the previously read size 703.
- the first host OS 251 reads data corresponding to the size 703 of the selected entry from the GOS_NVCM 720 and stores the data in the GOS_VCM 420. At this time, data is written from the GOS_NVCM 720 to the GOS_VCM 420 using DMA transfer. In the DMA transfer, the NVA 702 and the size 703 stored in the selected entry and the determined address are used. The above is the description of the process in step S902.
- the GOS_VCM HPA space layout information 600 referred to in step S902 is newly generated after the restart; therefore, its contents differ from the GOS_VCM HPA space layout information 600 referred to in the backup process.
- the entries of the GOS_VCM HPA space layout information 600 are rearranged in the order of GVA addresses managed by the guest OS 400. Therefore, when the first host OS 251 restores the data to the GOS_VCM 420 in the entry order of the GOS_VCM HPA space layout information 600, the data stored in the GOS_VCM 420 is restored as image data that is continuous in the GVA address space, the same as in the GOS_VCM 420 before the restart, as shown in FIG. 18. That is, the same memory state as before the power interruption can be restored.
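- a minimal sketch of this rearrangement step, assuming a hypothetical (gva, hpa, size) record format that is not taken from the embodiment:

```python
# Sketch: build arrangement information whose entries are sorted by GVA,
# so that entry-order processing walks the guest cache in GVA order.
# The (gva, hpa, size) record format is an assumed illustration.
raw_entries = [
    {"gva": 0x2000, "hpa": 0x9000, "size": 0x1000},
    {"gva": 0x0000, "hpa": 0x4000, "size": 0x1000},
    {"gva": 0x1000, "hpa": 0x7000, "size": 0x1000},
]

# Rearrange the entries in GVA address order.
arrangement_info = sorted(raw_entries, key=lambda e: e["gva"])

for entry in arrangement_info:
    print(f"GVA {entry['gva']:#07x} -> HPA {entry['hpa']:#07x}, size {entry['size']:#x}")
# Processing in this entry order fills GVA 0x0000, 0x1000, 0x2000 contiguously.
```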
- since the GOS_VCM 420 is included in the second divided VRAM 222, the first host OS 251 normally does not access the second divided VRAM 222. However, when executing the backup process and the restore process, the first host OS 251 is configured so that it can temporarily access the second divided VRAM 222. As another method, a method in which the first host OS 251 issues a command to the second host OS 252 can be considered; for example, a method of writing the data read from the GOS_NVCM 720 to the shared VRAM 223 and issuing a data write command to the second host OS 252 is also conceivable.
- as described above, when the physical computer 10 is started, the memory space address conversion processing necessary for the backup process and the restore process is executed. That is, the first host OS 251 generates the HOS_VCM HPA space layout information 250 indicating the layout of the HOS_VCM 230 in the HPA space managed by the first host OS 251, and the second host OS 252 (VMM) generates the GOS_VCM HPA space layout information 600 indicating the layout of the GOS_VCM 420 in the HPA space managed by the second host OS 252.
- the first host OS 251 can back up the data stored in the HOS_VCM 230 easily and quickly by referring to the HOS_VCM HPA space layout information 250 when power-off is detected. Further, by referring to the GOS_VCM HPA space arrangement information 600, it can back up the data stored in the GOS_VCM 420 of each guest OS 400 easily and quickly.
- therefore, the first host OS 251 does not need to perform complicated address conversion processing when performing the backup process on the power of the battery power supply 152, and the backup can be realized in a short time and with reduced power consumption.
- in the GOS_VCM HPA space arrangement information 600, entries are arranged so that the addresses of the GVA space managed by the guest OS 400 are continuous.
- the first host OS 251 backs up the data stored in the HOS_VCM 230 to the HOS_NVCM 710 in the entry order of the HOS_VCM HPA space arrangement information 250, and can thereby back up the data stored in the HOS_VCM 230, as laid out in the HVA space managed by the first host OS 251, in its state at the time the power is cut off. Further, the first host OS 251 backs up the data stored in the GOS_VCM 420 to the GOS_NVCM 720 in the entry order of the GOS_VCM HPA space allocation information 600, and can thereby back up the data stored in the GOS_VCM 420, as laid out in the GVA space managed by the guest OS 400, in its state at the time the power is cut off.
- the first host OS 251 can restore the data stored in the HOS_NVCM 710 to the HOS_VCM 230 easily and quickly by referring to the HOS_VCM HPA space allocation information 250. Likewise, by referring to the GOS_VCM HPA space allocation information 600, the data stored in the GOS_NVCM 720 can be restored to the GOS_VCM 420 easily and quickly.
- the first host OS 251 can restore the data of the HOS_VCM 230 to its state at power-off by restoring the data stored in the HOS_NVCM 710 to the HOS_VCM 230 in the entry order of the HOS_VCM HPA space allocation information 250. Similarly, the first host OS 251 can restore the data of the GOS_VCM 420 to its state at power-off by restoring the data stored in the GOS_NVCM 720 to the GOS_VCM 420 in the entry order of the GOS_VCM HPA space layout information 600.
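- to make this round-trip property concrete, the following sketch backs up a cache in entry order and restores it through a different post-restart layout; under these simplified assumptions (byte buffers instead of DMA, invented addresses), the per-GVA contents come back identical:

```python
# Sketch: backup in entry order, then restore through a *different*
# post-restart layout; the contents, read in GVA order, are identical.
def backup(mem, entries, nvm, nva):
    addr = nva
    for hpa, size in entries:
        nvm[addr:addr + size] = mem[hpa:hpa + size]
        addr += size

def restore(nvm, nva, entries, mem):
    addr = nva
    for hpa, size in entries:
        mem[hpa:hpa + size] = nvm[addr:addr + size]
        addr += size

old_mem = bytearray(b"..CCCCDDDD......")
old_entries = [(2, 4), (6, 4)]   # GVA-ordered layout before power-off
nvm = bytearray(16)
backup(old_mem, old_entries, nvm, nva=0)

new_mem = bytearray(16)
new_entries = [(8, 4), (0, 4)]   # different HPA layout after restart
restore(nvm, 0, new_entries, new_mem)

# Same GVA-order contents as before the power interruption:
assert new_mem[8:12] + new_mem[0:4] == old_mem[2:6] + old_mem[6:10]
print("restored image matches the pre-power-off cache")
```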
- the present invention is not limited to the above-described embodiment, and various modifications are included. For example, the above-described embodiments are described in detail for easy understanding of the present invention, and the invention is not necessarily limited to those having all the described configurations. Further, a part of the configuration of each embodiment can be added to, deleted from, or replaced with another configuration.
- each of the above-described configurations, functions, processing units, processing means, and the like may be realized in hardware by designing a part or all of them as, for example, an integrated circuit.
- the present invention can also be realized by software program code that implements the functions of the embodiments.
- a storage medium in which the program code is recorded is provided to the computer, and a processor included in the computer reads the program code stored in the storage medium.
- the program code itself read from the storage medium realizes the functions of the above-described embodiments, and the program code itself and the storage medium storing it constitute the present invention.
- examples of storage media for supplying such program code include flexible disks, CD-ROMs, DVD-ROMs, hard disks, SSDs (Solid State Drives), optical disks, magneto-optical disks, CD-Rs, magnetic tapes, non-volatile memory cards, and ROMs.
- program code for realizing the functions described in this embodiment can be implemented in a wide range of programming or scripting languages, such as assembler, C/C++, Perl, shell, PHP, and Java.
- the program code may be stored in a storage means such as a hard disk or a memory of a computer, or in a storage medium such as a CD-RW or CD-R, and a processor included in the computer may read and execute the program code stored in the storage means or the storage medium.
- control lines and information lines indicate those considered necessary for the explanation, and do not necessarily indicate all the control lines and information lines of a product. In practice, almost all the components may be considered to be connected to each other.
Claims (10)
- A computer on which a plurality of operating systems run, wherein:
the computer comprises, as physical resources, a processor, a volatile memory connected to the processor, a non-volatile memory connected to the processor, and an I/O device connected to the processor;
the plurality of operating systems include a first operating system and a second operating system that generates a plurality of virtual computers;
the first operating system
runs on a first logical resource including a first logical processor into which the processor is logically divided, a first logical volatile memory into which the volatile memory is logically divided, and a first logical I/O device into which the I/O device is logically divided,
and has a power-off detection unit that detects power-off of the computer;
the second operating system runs on a second logical resource including a second processor into which the processor is logically divided, a second logical volatile memory into which the volatile memory is logically divided, and a second I/O device into which the I/O device is logically divided;
a third operating system runs on each of the plurality of virtual computers;
the first operating system,
at startup of the first operating system, secures, in the first logical volatile memory, a first cache memory area for temporarily storing data,
and generates first arrangement information indicating the position of the first cache memory area in the physical address space of the first logical volatile memory managed by the first operating system;
the second operating system
generates at least one virtual computer and boots the third operating system on the at least one virtual computer;
the third operating system secures a second cache memory area in a virtual memory allocated to the at least one virtual computer;
the second operating system generates second arrangement information indicating the position of the second cache memory area in the physical address space of the second logical volatile memory managed by the second operating system; and
the first operating system,
when power-off of the computer is detected, acquires, based on the first arrangement information, first data stored in the first cache memory area,
stores the first data in the non-volatile memory,
acquires the second arrangement information,
acquires, based on the second arrangement information, second data stored in the second cache memory area from the second logical volatile memory,
and stores the second data in the non-volatile memory.
- The computer according to claim 1, wherein:
the first operating system,
when the computer is restarted, newly secures the first cache memory area in the first logical volatile memory,
newly generates the first arrangement information,
acquires the first data stored in the non-volatile memory,
and restores the first data to the newly secured first cache memory area based on the newly generated first arrangement information;
the second operating system,
when the computer is restarted, newly generates the at least one virtual computer and boots the third operating system on the newly generated at least one virtual computer;
the third operating system newly secures the second cache memory area in a virtual memory newly allocated to the newly generated at least one virtual computer;
the second operating system newly generates the second arrangement information,
and transmits, to the first operating system, a restore request for the second data of the third operating system running on the at least one virtual computer; and
the first operating system
acquires, from the non-volatile memory, the second data corresponding to the at least one virtual computer,
acquires the newly generated second arrangement information,
and restores the second data to the newly secured second cache memory area based on the newly generated second arrangement information.
- The computer according to claim 2, wherein:
the second operating system,
when generating the at least one virtual computer, generates first mapping information that manages the mapping relationship between the physical address space managed by the second operating system and the physical address space managed by the third operating system;
the third operating system
generates second mapping information that manages the correspondence relationship between the physical address space managed by the third operating system and the virtual address space managed by the third operating system,
secures, as the second cache memory area, storage areas that are continuous in the virtual address space managed by the third operating system,
specifies, based on the second mapping information, the positions of a plurality of storage areas constituting the second cache memory area in the physical address space of the virtual memory managed by the third operating system,
generates a plurality of entries each including the physical address of one of the specified plurality of storage areas in the physical address space of the virtual memory managed by the third operating system,
and generates third arrangement information indicating the position of the second cache memory area in the physical address space of the virtual memory managed by the third operating system by rearranging the generated plurality of entries in the address order of the second cache memory area in the virtual address space managed by the third operating system; and
the second operating system,
when generating the second arrangement information, specifies, based on the third arrangement information and the first mapping information, the positions of the plurality of storage areas constituting the second cache memory area in the physical address space of the second logical volatile memory managed by the second operating system,
generates a plurality of entries each including the physical address of one of the specified plurality of storage areas in the physical address space of the second logical volatile memory managed by the second operating system,
and generates the second arrangement information by rearranging the generated plurality of entries in the address order of the second cache memory area in the virtual address space managed by the third operating system.
- The computer according to claim 3, wherein:
the first operating system, when backing up the second data stored in the virtual memory, stores the second data in the non-volatile memory in the address order of the virtual address space of the virtual memory managed by the third operating system.
- The computer according to claim 4, wherein:
the non-volatile memory includes a first non-volatile cache memory area that stores the first data and is composed of storage areas with continuous addresses, and a second non-volatile cache memory area that stores the second data of the at least one virtual computer and is composed of storage areas with continuous addresses; and
the first operating system
reads the data stored in the storage areas constituting the first cache memory area in the entry order of the first arrangement information,
stores the data read from the storage areas constituting the first cache memory area in the address order of the first non-volatile cache memory area,
reads the data stored in the storage areas constituting the second cache memory area in the entry order of the second arrangement information,
and stores the data read from the storage areas constituting the second cache memory area in the address order of the first non-volatile cache memory area.
- A cache data management method for a computer on which a plurality of operating systems run, wherein:
the computer comprises, as physical resources, a processor, a volatile memory connected to the processor, a non-volatile memory connected to the processor, and an I/O device connected to the processor;
the plurality of operating systems include a first operating system and a second operating system that generates a plurality of virtual computers;
the first operating system
runs on a first logical resource including a first logical processor into which the processor is logically divided, a first logical volatile memory into which the volatile memory is logically divided, and a first logical I/O device into which the I/O device is logically divided,
and has a power-off detection unit that detects power-off of the computer;
the second operating system runs on a second logical resource including a second processor into which the processor is logically divided, a second logical volatile memory into which the volatile memory is logically divided, and a second I/O device into which the I/O device is logically divided;
a third operating system runs on each of the plurality of virtual computers; and
the cache data management method includes:
a first step in which the first operating system, at startup of the first operating system, secures, in the first logical volatile memory, a first cache memory area for temporarily storing data;
a second step in which the first operating system generates first arrangement information indicating the position of the first cache memory area in the physical address space of the first logical volatile memory managed by the first operating system;
a third step in which the second operating system generates at least one virtual computer and boots the third operating system on the at least one virtual computer;
a fourth step in which the third operating system secures a second cache memory area in a virtual memory allocated to the at least one virtual computer;
a fifth step in which the second operating system generates second arrangement information indicating the position of the second cache memory area in the physical address space of the second logical volatile memory managed by the second operating system;
a sixth step in which the first operating system, when power-off of the computer is detected, acquires, based on the first arrangement information, first data stored in the first cache memory area and stores the first data in the non-volatile memory;
a seventh step in which the first operating system acquires the second arrangement information; and
an eighth step in which the first operating system acquires, based on the second arrangement information, second data stored in the second cache memory area from the second logical volatile memory and stores the second data in the non-volatile memory.
- The cache data management method according to claim 6, wherein:
the cache data management method further includes:
a ninth step in which the first operating system, when the computer is restarted, newly secures the first cache memory area in the first logical volatile memory;
a tenth step in which the first operating system newly generates the first arrangement information;
an eleventh step in which the first operating system acquires the first data stored in the non-volatile memory;
a twelfth step in which the first operating system restores the first data to the newly secured first cache memory area based on the newly generated first arrangement information;
a thirteenth step in which the second operating system, when the computer is restarted, newly generates the at least one virtual computer and boots the third operating system on the newly generated at least one virtual computer;
a fourteenth step in which the third operating system newly secures the second cache memory area in a virtual memory newly allocated to the newly generated at least one virtual computer;
a fifteenth step in which the second operating system newly generates the second arrangement information;
a sixteenth step in which the second operating system transmits, to the first operating system, a restore request for the second data of the third operating system running on the at least one virtual computer;
a seventeenth step in which the first operating system acquires, from the non-volatile memory, the second data corresponding to the at least one virtual computer;
an eighteenth step in which the first operating system acquires the newly generated second arrangement information; and
a nineteenth step in which the first operating system restores the second data to the newly secured second cache memory area based on the newly generated second arrangement information.
- The cache data management method according to claim 7, wherein:
the third step and the thirteenth step each include a step in which the second operating system generates first mapping information that manages the mapping relationship between the physical address space managed by the second operating system and the physical address space managed by the third operating system;
the fourth step and the fourteenth step each include:
a step in which the third operating system generates second mapping information that manages the correspondence relationship between the physical address space managed by the third operating system and the virtual address space managed by the third operating system;
a step in which the third operating system secures, as the second cache memory area, storage areas that are continuous in the virtual address space managed by the third operating system;
a step in which the third operating system specifies, based on the second mapping information, the positions of a plurality of storage areas constituting the second cache memory area in the physical address space of the virtual memory managed by the third operating system;
a step in which the third operating system generates a plurality of entries each including the physical address of one of the specified plurality of storage areas in the physical address space of the virtual memory managed by the third operating system; and
a step in which the third operating system generates third arrangement information indicating the position of the second cache memory area in the physical address space of the virtual memory managed by the third operating system by rearranging the generated plurality of entries in the address order of the second cache memory area in the virtual address space managed by the third operating system; and
the fifth step and the fifteenth step each include:
a step in which the second operating system specifies, based on the third arrangement information and the first mapping information, the positions of the plurality of storage areas constituting the second cache memory area in the physical address space of the second logical volatile memory managed by the second operating system;
a step in which the second operating system generates a plurality of entries each including the physical address of one of the specified plurality of storage areas in the physical address space of the second logical volatile memory managed by the second operating system; and
a step in which the second operating system generates the second arrangement information by rearranging the generated plurality of entries in the address order of the second cache memory area in the virtual address space managed by the third operating system.
- The cache data management method according to claim 8, wherein:
in the tenth step, the first operating system stores the second data in the non-volatile memory in the address order of the virtual address space of the virtual memory managed by the third operating system.
- The cache data management method according to claim 9, wherein:
the non-volatile memory includes a first non-volatile cache memory area that stores the first data and is composed of storage areas with continuous addresses, and a second non-volatile cache memory area that stores the second data of the at least one virtual computer and is composed of storage areas with continuous addresses;
the sixth step includes:
a step in which the first operating system reads the data stored in the storage areas constituting the first cache memory area in the entry order of the first arrangement information; and
a step in which the first operating system stores the data read from the storage areas constituting the first cache memory area in the address order of the first non-volatile cache memory area; and
the eighth step includes:
a step in which the first operating system reads the data stored in the storage areas constituting the second cache memory area in the entry order of the second arrangement information; and
a step in which the first operating system stores the data read from the storage areas constituting the second cache memory area in the address order of the first non-volatile cache memory area.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/114,114 US9977740B2 (en) | 2014-03-07 | 2014-03-07 | Nonvolatile storage of host and guest cache data in response to power interruption |
JP2016506043A JP6165964B2 (ja) | 2014-03-07 | 2014-03-07 | Computer |
PCT/JP2014/055878 WO2015132941A1 (ja) | 2014-03-07 | 2014-03-07 | Computer |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2014/055878 WO2015132941A1 (ja) | 2014-03-07 | 2014-03-07 | Computer |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015132941A1 true WO2015132941A1 (ja) | 2015-09-11 |
Family
ID=54054776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/055878 WO2015132941A1 (ja) | 2014-03-07 | 2014-03-07 | Computer |
Country Status (3)
Country | Link |
---|---|
US (1) | US9977740B2 (ja) |
JP (1) | JP6165964B2 (ja) |
WO (1) | WO2015132941A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10623565B2 (en) | 2018-02-09 | 2020-04-14 | Afiniti Europe Technologies Limited | Techniques for behavioral pairing in a contact center system |
JP2023037883A (ja) | 2021-09-06 | 2023-03-16 | Kioxia Corporation | Information processing device |
US11755496B1 (en) | 2021-12-10 | 2023-09-12 | Amazon Technologies, Inc. | Memory de-duplication using physical memory aliases |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3546678B2 (ja) | 1997-09-12 | 2004-07-28 | Hitachi, Ltd. | Multi-OS configuration method |
US20060136765A1 (en) * | 2004-12-03 | 2006-06-22 | Poisner David L | Prevention of data loss due to power failure |
US8060683B2 (en) * | 2004-12-17 | 2011-11-15 | International Business Machines Corporation | System, method and program to preserve a cache of a virtual machine |
US8375386B2 (en) * | 2005-06-29 | 2013-02-12 | Microsoft Corporation | Failure management for a virtualized computing environment |
US8607009B2 (en) * | 2006-07-13 | 2013-12-10 | Microsoft Corporation | Concurrent virtual machine snapshots and restore |
JP2008276646A (ja) * | 2007-05-02 | 2008-11-13 | Hitachi Ltd | Storage device and data management method in storage device |
US8209686B2 (en) * | 2008-02-12 | 2012-06-26 | International Business Machines Corporation | Saving unsaved user process data in one or more logical partitions of a computing system |
KR101288700B1 (ko) | 2008-03-14 | 2013-08-23 | Mitsubishi Denki Kabushiki Kaisha | Multi-operating-system (OS) boot device, computer-readable recording medium, and multi-OS boot method |
JP5474762B2 (ja) * | 2008-03-19 | 2014-04-16 | Asahi Kasei E-materials Corp. | Polymer electrolyte and method for producing the same |
US8671258B2 (en) * | 2009-03-27 | 2014-03-11 | Lsi Corporation | Storage system logical block address de-allocation management |
JP5484117B2 (ja) * | 2010-02-17 | 2014-05-07 | Hitachi, Ltd. | Hypervisor and server device |
US9804874B2 (en) * | 2011-04-20 | 2017-10-31 | Microsoft Technology Licensing, Llc | Consolidation of idle virtual machines on idle logical processors |
US9875115B2 (en) * | 2013-12-20 | 2018-01-23 | Microsoft Technology Licensing, Llc | Memory-preserving reboot |
- 2014-03-07 US US15/114,114 patent/US9977740B2/en active Active
- 2014-03-07 WO PCT/JP2014/055878 patent/WO2015132941A1/ja active Application Filing
- 2014-03-07 JP JP2016506043A patent/JP6165964B2/ja not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009075759A (ja) * | 2007-09-19 | 2009-04-09 | Hitachi Ltd | Storage device and data management method in storage device |
US20110202728A1 (en) * | 2010-02-17 | 2011-08-18 | Lsi Corporation | Methods and apparatus for managing cache persistence in a storage system using multiple virtual machines |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2017111146A (ja) * | 2015-12-18 | 2017-06-22 | F. Hoffmann-La Roche AG | Method for restoring the settings of an instrument for processing a sample or a reagent, and system including an instrument for processing a sample or a reagent |
US11200326B2 (en) | 2015-12-18 | 2021-12-14 | Roche Diagnostics Operations, Inc. | Method of restoring settings of an instrument for processing a sample or a reagent and a system for processing a sample or reagent |
WO2018154967A1 (ja) * | 2017-02-24 | 2018-08-30 | Kabushiki Kaisha Toshiba | Control device |
JPWO2018154967A1 (ja) * | 2017-02-24 | 2019-11-07 | Kabushiki Kaisha Toshiba | Control device |
US11334379B2 (en) | 2017-02-24 | 2022-05-17 | Kabushiki Kaisha Toshiba | Control device |
WO2020157950A1 (ja) | 2019-02-01 | 2020-08-06 | Mitsubishi Electric Corporation | Information processing device, backup method, restore method, and program |
JP6762452B1 (ja) * | 2019-02-01 | 2020-09-30 | Mitsubishi Electric Corporation | Information processing device, backup method, restore method, and program |
US11281395B2 (en) | 2019-02-01 | 2022-03-22 | Mitsubishi Electric Corporation | Information processing device, backup method, restore method, and program |
Also Published As
Publication number | Publication date |
---|---|
US20170004081A1 (en) | 2017-01-05 |
US9977740B2 (en) | 2018-05-22 |
JP6165964B2 (ja) | 2017-07-19 |
JPWO2015132941A1 (ja) | 2017-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10289564B2 (en) | Computer and memory region management method | |
US9606745B2 (en) | Storage system and method for allocating resource | |
JP6165964B2 (ja) | 2017-07-19 | Computer | |
US10152409B2 (en) | Hybrid in-heap out-of-heap ballooning for java virtual machines | |
JP4961319B2 (ja) | 2012-06-27 | Storage system that dynamically allocates real areas to virtual areas in a virtual volume | |
JP6029550B2 (ja) | 2016-11-24 | Computer control method and computer | |
JP5484117B2 (ja) | 2014-05-07 | Hypervisor and server device | |
US20120246644A1 (en) | Virtual computer system and controlling method of virtual computer | |
US8954706B2 (en) | Storage apparatus, computer system, and control method for storage apparatus | |
KR20070100367A (ko) | 2007-10-10 | Method, apparatus, and system for dynamically reallocating memory from one virtual machine to another | |
US10289563B2 (en) | Efficient reclamation of pre-allocated direct memory access (DMA) memory | |
US11593170B2 (en) | Flexible reverse ballooning for nested virtual machines | |
US10956189B2 (en) | Methods for managing virtualized remote direct memory access devices | |
WO2012155555A1 (zh) | 2012-11-22 | Method and system for running multiple virtual machines | |
JP2017037665A (ja) | 2017-02-16 | Techniques for making the capacity of host-side flash storage devices available to virtual machines | |
US8566541B2 (en) | Storage system storing electronic modules applied to electronic objects common to several computers, and storage control method for the same | |
JP2011227766A (ja) | 2011-11-10 | Storage means management method, virtual computer system, and program | |
US11256585B2 (en) | Storage system | |
JP7125964B2 (ja) | 2022-08-25 | Computer system and management method | |
WO2014061068A1 (en) | Storage system and method for controlling storage system | |
WO2015122007A1 (ja) | 2015-08-20 | Computer and resource scheduling method by hypervisor | |
WO2024051292A1 (zh) | 2024-03-14 | Data processing system, memory mirroring method, apparatus, and computing device | |
US20190004956A1 (en) | Computer system and cache management method for computer system | |
JP2023102641A (ja) | 2023-07-25 | Computer system and scale-up management method | |
JP2013206454A (ja) | 2013-10-07 | Information processing device, device management method, and device management program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14884725; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2016506043; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 15114114; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 14884725; Country of ref document: EP; Kind code of ref document: A1 |