WO2018127948A1 - Computer system - Google Patents

Computer system

Info

Publication number
WO2018127948A1
WO2018127948A1 (PCT/JP2017/000054)
Authority
WO
WIPO (PCT)
Prior art keywords
heap
area
nvram
memory
computer system
Prior art date
Application number
PCT/JP2017/000054
Other languages
English (en)
Japanese (ja)
Inventor
Akio Shimada (島田 明男)
Abhishek Johri (アビシェク ジョーリ)
Mitsuo Hayasaka (早坂 光雄)
Original Assignee
Hitachi, Ltd. (株式会社日立製作所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi, Ltd.
Priority to PCT/JP2017/000054
Publication of WO2018127948A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation

Definitions

  • the present invention relates to heap management of nonvolatile random access memory.
  • NVRAM: Non-volatile Random Access Memory
  • HDD: Hard Disk Drive
  • SSD: Solid State Drive
  • an example of an NVRAM management technique is disclosed in Patent Document 1.
  • an NVRAM heap can be used as a method for managing NVRAM.
  • the NVRAM heap is a virtual address space to which the NVRAM physical memory area is mapped.
  • a program executed on the computer acquires and releases the NVRAM memory area via the NVRAM heap.
  • discontinuous used areas in the heap are moved to form a continuous used area, and a large continuous free area is created in the NVRAM heap. This eliminates fragmentation of the NVRAM heap.
  • a typical example of the present invention is a computer system including a processor and a nonvolatile random access memory. The processor uses a first heap for a program to use memory areas of the nonvolatile random access memory, allocates and releases used areas in the first heap, executes a process of collecting separated used areas into one continuous area in the first heap, and updates, in accordance with the allocation, the release, and that process, first management information indicating the relationship between each used area and the key identifying the memory area allocated to that used area. In the first management information, a search for the address in the first heap of a target memory area is executed by the key of the target memory area.
  • the program can access a desired NVRAM memory area.
  • FIG. 1 illustrates a hardware configuration of the storage apparatus according to the first embodiment.
  • FIG. 2 illustrates a logical configuration of the storage apparatus according to the first embodiment.
  • FIG. 3A illustrates the structure of the NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 3B illustrates a method in which the memory management library constructs the NVRAM heap in the storage apparatus according to the first embodiment.
  • FIG. 4 illustrates an API (Application Programmable Interface) provided by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 5 is a conceptual diagram of memory acquisition and release processing from the NVRAM heap via the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 6 illustrates a data structure of the NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 7 shows a structure of a chunk management area in the NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 8 shows the structure of the search index in the NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 9A illustrates a memory acquisition process in the storage apparatus according to the first embodiment.
  • FIG. 9B illustrates a memory release process in the storage apparatus according to the first embodiment.
  • FIG. 9C illustrates a memory search process in the storage apparatus according to the first embodiment.
  • FIG. 10A illustrates a write process of the storage apparatus according to the first embodiment.
  • FIG. 10B illustrates a read process of the storage apparatus according to the first embodiment.
  • FIG. 11A illustrates a method in which the memory management library allocates NVRAM memory pages to the second half of the standby NVRAM heap in the storage apparatus according to the first embodiment.
  • FIG. 11B shows the regions of the chunk file that the memory management library maps to the NVRAM heap and the standby NVRAM heap.
  • FIG. 12 illustrates a data structure of a standby NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 13 shows the structure of the search index in the standby NVRAM heap managed by the memory management library in the storage apparatus according to the first embodiment.
  • FIG. 14 shows the switching process between the NVRAM heap and the standby NVRAM heap.
  • FIG. 15 illustrates a hardware configuration of a system according to the second embodiment.
  • FIG. 16 illustrates a logical configuration of the system according to the second embodiment.
  • FIG. 17 illustrates a data structure of the NVRAM heap managed by the memory management library in the storage apparatus according to the second embodiment.
  • FIG. 18 illustrates a structure of a chunk management area in the NVRAM heap managed by the memory management library in the storage apparatus according to the second embodiment.
  • FIG. 19 shows the structure of the search index for the NVRAM heap managed by the memory management library in the storage apparatus according to the second embodiment.
  • FIG. 20 illustrates a data structure of the NVRAM heap managed by the memory management library in the standby storage apparatus according to the second embodiment.
  • FIG. 21 illustrates a structure of a chunk management area in the NVRAM heap managed by the memory management library in the standby storage apparatus according to the second embodiment.
  • FIG. 22 shows a structure of a search index for NVRAM heap managed by the memory management library in the standby storage apparatus according to the second embodiment.
  • FIG. 23 illustrates an API of a mirroring function provided by the memory management library in the storage apparatus according to the second embodiment.
  • FIG. 24 illustrates mirroring processing in the storage apparatus and standby storage apparatus according to the second embodiment.
  • FIG. 25 illustrates processing from when the storage apparatus according to the second embodiment is activated until the storage program starts providing the storage service.
  • FIG. 26 illustrates a method by which the memory management library moves the used area on the NVRAM heap to a continuous area in the storage apparatus according to the second embodiment.
  • FIG. 27 shows a state before data movement in the chunk management area and a state after data movement in the storage apparatus according to the second embodiment.
  • FIG. 28 illustrates a state before the data movement of the search index and a state after the data movement in the storage apparatus according to the second embodiment.
  • in the following description, processing may be described with a "program" as the subject. Since a program performs predetermined processing by being executed by a processor (for example, a CPU (Central Processing Unit)) using storage resources (for example, memory) and/or communication interface devices (for example, ports), the subject of the processing may be the program.
  • the processing described with the program as the subject may be processing performed by a processor or a computer having the processor (for example, a management computer, a host computer, a storage device, etc.).
  • a “library” is executed by a processor in the same way as a program. Therefore, the process may be described using the library as the subject.
  • a library is program code that is combined with a program as part of the program when the program is executed.
  • This disclosure relates to a technique for efficiently managing an NVRAM heap for acquiring a memory area of NVRAM (Non-volatile Random access memory).
  • the NVRAM heap is a virtual address space for the NVRAM memory area.
  • when an NVRAM memory area is acquired via the NVRAM heap, the program assigns a key to the acquired memory area.
  • the program refers to information associating the key with an address in the NVRAM heap, and uses the key to search for the address in the NVRAM heap assigned (mapped) to the acquired NVRAM memory area. Thereby, even when the address in the NVRAM heap of an acquired memory area is changed by the defragmentation process, the program can find the changed address.
  • the present embodiment discloses a computer system using NVRAM.
  • a storage apparatus which is an example of a computer system, executes a program that provides a storage service (hereinafter, a storage program).
  • the storage program uses the NVRAM heap to acquire and release the NVRAM memory area.
  • a standby NVRAM heap (NVRAM heap in standby mode) is prepared separately from the NVRAM heap (current NVRAM heap) used by the storage program.
  • the address area (use area) used in the current NVRAM heap is reflected in the standby NVRAM heap.
  • the discontinuous (separated) use areas in the standby NVRAM heap are combined (defragmented) into one continuous use area at a predetermined time.
  • the memory area used by the storage program in the NVRAM memory is mapped to a new continuous area of the standby NVRAM heap.
  • defragmentation of the standby NVRAM heap forms a single used area and a single free area in the standby NVRAM heap. This is the state with the least fragmentation. If fragmentation of the standby NVRAM heap is reduced, a plurality of used areas and / or a plurality of free areas may exist.
  • the standby NVRAM heap is defragmented each time a plurality of spaced use areas and / or a plurality of spaced free areas are formed.
  • the discontinuous use area or the discontinuous free area can be formed by the storage program acquiring or releasing the NVRAM memory area via the current NVRAM heap (the NVRAM heap in the current mode).
  • the storage program switches between the active NVRAM heap and the standby NVRAM heap at a predetermined timing. This eliminates the need to temporarily stop the execution of the storage program that uses the NVRAM heap in order to execute the defragmentation process, so the influence of the defragmentation process on the service provided by the storage program can be reduced.
  • FIG. 1 illustrates a hardware configuration example of a system including the storage apparatus 100 according to the first embodiment.
  • the storage apparatus 100 is an example of a computer system that uses the NVRAM heap of the present disclosure.
  • the storage apparatus 100 provides a storage service to a plurality of hosts 110.
  • the host 110 is connected to the network 130 via the communication cable 121, and is connected to the storage apparatus 100 via the network 130.
  • the host 110 issues an I / O request to the storage apparatus 100.
  • the network 130 includes a network switch 120 and communication cables 121 and 122.
  • the storage apparatus 100 includes a CPU 101 that is a processor, a VRAM (Volatile Random Access Memory) 102, an NVRAM 103, a disk device 104, and a bus 107 that interconnects them.
  • the CPU 101 operates as a predetermined functional unit by executing a program on the storage apparatus 100.
  • the CPU 101 is composed of one or a plurality of chips or modules.
  • the VRAM 102 is used for storing volatile data.
  • a program executed by the CPU 101 and data used by the program are stored in the VRAM 102.
  • the VRAM 102 is composed of one or a plurality of chips or modules.
  • NVRAM 103 is used for storing non-volatile data.
  • Write data issued from the host 110 to the storage apparatus 100 is stored (cached) in the NVRAM 103.
  • the NVRAM 103 is composed of one or a plurality of chips or modules. By caching the write data from the host 110 in the NVRAM 103, high fault tolerance, low latency, and high throughput can be realized.
  • the disk device 104 is used for storing nonvolatile data.
  • the data on the NVRAM 103 is moved to the disk device 104, and an empty area is created in the NVRAM 103.
  • the disk device 104 is configured by one or a plurality of storage drives, and the storage drive is, for example, an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
  • the storage apparatus 100 is connected to the host 110 by a NIC (Network Interface Card) 105.
  • the storage apparatus 100 is connected to the network 130 from the port 106 of the NIC 105 via the cable 122 and is connected to the host 110 via the network 130.
  • the number of NICs 105 depends on the design.
  • FIG. 2 shows a logical configuration of the system according to the first embodiment.
  • An OS (Operating System) 210 is a program executed by the CPU 101 of the storage apparatus 100.
  • the OS 210 manages the allocation of hardware resources to the storage program 200 executed by the storage apparatus 100.
  • the memory management library 201 is a program for managing the memory used by the storage program 200 executed by the storage apparatus 100.
  • the memory management library 201 reserves an area for the VRAM heap 202 in the address space of the storage program 200 via the OS 210.
  • the memory management library 201 allocates the memory area of the VRAM 102 to the VRAM heap 202 via the OS 210.
  • the storage program 200 acquires the VRAM memory area from the VRAM heap 202 via the memory management library 201.
  • the memory management library 201 reserves areas for the current NVRAM heap 203 and the standby NVRAM heap 204 in the address space of the storage program 200 via the OS 210.
  • the memory management library 201 allocates the memory area of the NVRAM 103 to the active NVRAM heap 203 and the standby NVRAM heap 204 via the OS 210.
  • the storage program 200 acquires the NVRAM memory area from the NVRAM heap 203 via the memory management library 201.
  • the storage program 200 processes an I / O request issued from the host 110 to the storage apparatus 100.
  • the storage program 200 acquires the NVRAM memory area from the NVRAM heap 203 via the memory management library 201, and stores the write data received from the host 110 in the NVRAM memory area.
  • the storage program 200 notifies the host 110 of completion of the I / O processing.
  • the storage program 200 periodically moves the data stored in the NVRAM memory area to the disk device 104.
  • the storage program 200 releases the NVRAM memory area storing the data moved to the disk device 104 via the memory management library 201.
  • FIG. 3A shows an example of the structure of the active NVRAM heap 203 in the storage apparatus according to the first embodiment.
  • the working NVRAM heap 203 is managed by the memory management library 201.
  • the OS 210 manages the storage area of the NVRAM 103 by dividing it into fixed-size blocks (hereinafter referred to as memory pages).
  • the memory management library 201 secures an area for the NVRAM heap 203 in the address space of the storage program 200 via the OS 210.
  • the memory management library 201 allocates memory pages to the area of the current NVRAM heap 203 via the OS 210. Although not shown, the standby NVRAM heap 204 is also assigned memory pages by the memory management library 201 in the same manner as the NVRAM heap 203.
  • FIG. 3B illustrates a method in which the memory management library 201 allocates memory pages to the active NVRAM heap 203 and the standby NVRAM heap 204 via the OS 210 in the storage apparatus according to the first embodiment.
  • the memory management library 201 uses the functions provided by the OS 210 to construct the file system 301 on the NVRAM Ramdisk 300.
  • the NVRAM Ramdisk 300 is configured by the memory pages of the NVRAM 103.
  • the NVRAM Ramdisk 300 is created by the OS 210 when the storage apparatus 100 is activated.
  • the memory management library 201 creates a management file 302, a management file 303, and a chunk file 304 on the file system 301 by using the function of the OS 210.
  • the memory management library 201 registers a memory page constituting the NVRAM Ramdisk 300 as a device for storing data written in the management file 302, the management file 303, and the chunk file 304 using the function provided by the OS 210.
  • the memory management library 201 uses the function provided by the OS 210 to map the management file 302 to the first half of the address space of the NVRAM heap 203 and the chunk file 304 to the second half. As a result, a memory page registered as a device for storing data written in the management file 302 and the chunk file 304 is allocated to the NVRAM heap 203.
  • the memory management library 201 maps the management file 303 to the first half of the address space of the standby NVRAM heap 204 using the function provided by the OS 210. As a result, a memory page registered as a device for storing data written in the management file 303 is allocated to the standby NVRAM heap 204.
  • the management file data can be held as non-volatile data like the chunk file data, and high-speed access to the management file 303 is possible.
  • FIG. 4 illustrates an example of an API (Application Programmable Interface) 400 provided by the memory management library 201 to the storage program 200 in the storage apparatus according to the first embodiment.
  • although FIG. 4 shows the API 400 as C language functions, the API 400 may be provided in another language.
  • the memory management library 201 provides nvmalloc as a function for acquiring a memory area. nvmalloc takes the size of the memory area to be acquired and the key associated with the acquired memory as arguments. The key is a numerical value and may be any numerical value as long as the NVRAM memory area (or data stored therein) can be identified.
  • the memory management library 201 secures a memory area of the specified size from the NVRAM heap 203 and passes the start address of the memory to the storage program 200 as a return value of nvmalloc.
  • the memory management library 201 provides nvfree as a function for releasing the memory area. nvfree takes a key associated with a memory area to be released as an argument. When the storage program 200 calls nvfree, the memory management library 201 sets the memory associated with the key as a free area on the NVRAM heap 203.
  • the memory management library 201 provides nvlookup as a memory area search function.
  • nvlookup takes as arguments a key associated with the memory area to be searched, a pointer to a variable that stores the start address of the memory area to be searched, and a pointer to a variable that stores the size of the memory area to be searched.
  • the memory management library 201 searches the memory area associated with the key passed as an argument, and stores its address and size in the variable indicated by the pointer passed as the argument. A method for searching the memory area associated with the key by the memory management library 201 will be described later.
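  • as a concrete illustration, the three functions could be declared as follows in C. This is a hedged sketch based only on the description above; FIG. 4 itself is not reproduced here, so the parameter and return types (size_t, uint64_t) are assumptions.

      #include <stddef.h>   /* size_t */
      #include <stdint.h>   /* uint64_t */

      /* Acquire `size` bytes from the NVRAM heap and associate the area
         with `key`; returns the start address, or NULL on failure. */
      void *nvmalloc(size_t size, uint64_t key);

      /* Mark the memory area associated with `key` as a free area on the
         NVRAM heap. */
      void nvfree(uint64_t key);

      /* Search for the memory area associated with `key`; on success the
         start address and size are stored through `addr` and `size`, and
         the address is also returned. Returns NULL if no area matches. */
      void *nvlookup(uint64_t key, void **addr, size_t *size);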
  • FIG. 5 is a conceptual diagram of acquisition and release of a memory area from the working NVRAM heap 203 via the memory management library 201 in the storage apparatus according to the first embodiment.
  • the memory management library 201 manages the NVRAM heaps 203 and 204 by dividing them into fixed-size blocks (hereinafter referred to as chunks) 205.
  • the NVRAM heaps 203 and 204 can be efficiently managed by managing in units of areas of a predetermined size.
  • the chunk size is a multiple (possibly 1) of the memory page size.
  • the chunk 205 used by the storage program 200 in the working NVRAM heap 203 is represented by diagonal lines.
  • the memory management library 201 allocates a number of consecutive chunks suitable for the requested memory size to the storage program 200. For example, when the chunk size is 64 bytes and the request size is 112 bytes, the memory management library 201 allocates two consecutive chunks to the storage program 200.
  • the chunk assigned to the storage program 200 is managed by the memory management library 201 as a chunk in use.
  • the memory management library 201 uses the start address of the first chunk of the allocated continuous chunk as the start address of the allocated NVRAM memory area, and passes it to the storage program 200 as a return value of nvmalloc.
  • the chunk assigned to the NVRAM memory area to be released is returned to the current NVRAM heap 203 and managed by the memory management library 201 as an unused chunk.
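  • the number of chunks allocated for a request follows from rounding the requested size up to a whole number of chunks; a minimal sketch, assuming the 64-byte chunk size of the example above:

      #include <stddef.h>

      #define CHUNK_SIZE 64  /* example value from the text */

      /* Round a requested size up to a whole number of chunks. */
      static size_t chunks_needed(size_t request)
      {
          return (request + CHUNK_SIZE - 1) / CHUNK_SIZE;
      }
      /* chunks_needed(112) == 2, matching the example above. */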
  • FIG. 6 shows an example of the data structure of the working NVRAM heap 203 managed by the memory management library 201 in the storage apparatus according to the first embodiment.
  • the management file 302 is mapped to the first half of the working NVRAM heap 203.
  • the first half part includes a flag 603, a chunk management area 600, a search index area 601, and a pointer (PTR) 602.
  • the chunk file 304 is mapped to the latter half of the NVRAM heap 203.
  • the latter half is divided into fixed-size chunks 605 and used for allocation of NVRAM memory areas to the storage program 200.
  • FIG. 7 shows a structural example of the chunk management area 600 in the NVRAM heap 203 managed by the memory management library 201 in the storage apparatus according to the first embodiment.
  • the chunk management area 600 stores information for managing chunks in the NVRAM heap 203.
  • Each entry in the chunk management area 600 manages information related to the corresponding chunk.
  • Each entry has a chunk number, a state, and a chunk index of a chunk to be managed as fields.
  • the chunk number field stores the chunk number of the chunk managed by the entry.
  • the chunk number is one example of an address in the NVRAM heap 203; it is determined by the position of the chunk in the NVRAM heap 203.
  • the chunk number of the chunk adjacent to the pointer 602 is determined to be 0, and thereafter the chunk number is incremented by 1.
  • the status field stores a value indicating the status of the chunk corresponding to the entry: for a chunk in use the status field indicates "USED", and for an unused chunk it indicates "FREE".
  • the chunk index field stores an index indicating the element number of the chunk managed by the entry in the continuous chunk assigned to the storage program 200. For example, when consecutive chunks having chunk numbers 2, 3, 4 are allocated to the storage program 200, the chunk index fields of the entries of each chunk indicate 0, 1, 2 respectively.
  • the chunk index field and the key field of an entry whose status field is "FREE" are blank.
  • the key field stores the key assigned by the storage program 200 to the NVRAM memory area (or the stored data). For example, when the value of the key argument of nvmalloc is 100 and consecutive chunks with chunk numbers 2, 3, and 4 are allocated to the storage program 200, the key fields of the entries of those chunks each indicate 100.
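  • put together, an entry of the chunk management area 600 could be represented by a C struct like the following; FIG. 7 only names the fields, so the field types are assumptions:

      #include <stdint.h>

      enum chunk_status { CHUNK_FREE, CHUNK_USED };

      /* One entry of the chunk management area 600 (hypothetical layout). */
      struct chunk_entry {
          uint64_t          chunk_number; /* position of the chunk in the heap    */
          enum chunk_status status;       /* USED or FREE                         */
          uint64_t          chunk_index;  /* element number within the allocation */
          uint64_t          key;          /* key assigned by the storage program  */
      };
      /* Example: nvmalloc with key 100 served by chunks 2, 3, 4 yields the
         entries {2, USED, 0, 100}, {3, USED, 1, 100}, {4, USED, 2, 100}. */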
  • FIG. 8 shows a structural example of the search index 800 for the NVRAM heap 203 managed by the memory management library 201 in the storage apparatus according to the first embodiment.
  • the search index 800 is used for searching the NVRAM memory area acquired by the storage program 200.
  • Each element constituting the search index 800 (for example, only the root element is indicated by 801) stores information on the NVRAM memory area acquired by the storage program 200.
  • each element stores the key assigned by the storage program 200 to the NVRAM memory area, and the head address and size in the NVRAM heap 203 of the NVRAM memory area.
  • the memory management library 201 searches the search index 800 based on the key, and can search the NVRAM heap 203 for the address of the NVRAM memory area associated with the key.
  • the search index example of FIG. 8 has a binary tree structure sorted by the numerical value of the key. However, the search index may have another structure as long as it can be sorted by the key.
  • each element 801 of the search index 800 stores pointers to the left and right leaf elements.
  • the left leaf element has a key whose value is smaller than the key of the element.
  • the right leaf element has a key whose value is larger than the key of the element.
  • the key value of the root element 801 from which the memory management library 201 starts searching the search index is the median value of the key of the element registered in the search index.
  • when the memory management library 201 searches for a specific NVRAM memory area using the search index 800, it first refers to the root element 801 of the search index 800.
  • if the key value of the NVRAM memory area to be searched for is smaller than the key value registered in the element, the memory management library 201 next refers to the left leaf element; if it is larger, the right leaf element. This process is repeated until the memory management library 201 finds an element whose key matches the key of the NVRAM memory area to be searched for.
  • the pointer to the root element 801 of the search index 800 is stored in the pointer 602 in the NVRAM heap 203.
  • the area for each element of the search index 800 is obtained from the search index area 601 in the NVRAM heap 203. Therefore, the size of the search index 800 is limited by the size of the search index area 601. Any algorithm may be used for allocation of the memory area from the search index area 601.
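  • a minimal sketch of the search index element and the key search described above, assuming the binary-tree variant of FIG. 8 (field names and types are assumptions):

      #include <stddef.h>
      #include <stdint.h>

      /* Element of the search index 800 (hypothetical layout). */
      struct index_elem {
          uint64_t key;              /* key assigned to the NVRAM memory area   */
          void    *addr;             /* head address of the area in the heap    */
          size_t   size;             /* size of the area                        */
          struct index_elem *left;   /* subtree with keys smaller than this key */
          struct index_elem *right;  /* subtree with keys larger than this key  */
      };

      /* Walk the tree from the root, going left for smaller keys and right
         for larger ones, until a matching element is found. */
      static struct index_elem *index_search(struct index_elem *root,
                                             uint64_t key)
      {
          struct index_elem *e = root;
          while (e != NULL && e->key != key)
              e = (key < e->key) ? e->left : e->right;
          return e;  /* NULL when no element matches the key */
      }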
  • FIG. 9A shows NVRAM memory area acquisition processing (nvmalloc processing) in the storage apparatus 100 according to the first embodiment.
  • the memory management library 201 refers to the chunk management area 600 and searches for an unused chunk whose status field indicates “FREE” (S101).
  • when unused continuous chunks are found, the memory management library 201 sets the value of the status field of each chunk to "USED" and updates the value of the chunk index field to the appropriate value. Further, the key value passed from the storage program 200 is stored in the key field (S103).
  • the head address of the first chunk in the continuous chunk is the head address in the current NVRAM heap 203 of the NVRAM memory area allocated to the storage program 200.
  • the start address of a chunk can be calculated from its chunk number: add the sizes of the flag 603, the chunk management area 600, and the search index area 601 to the start address of the working NVRAM heap 203, and then add the product of the chunk number and the chunk size (see the sketch after this process description).
  • if no suitable unused continuous chunks are found, the memory management library 201 passes the address NULL to the storage program 200 as the return value of nvmalloc and ends the nvmalloc processing.
  • the memory management library 201 creates an element to be registered in the search index 800 and inserts it in the search index 800 (S105).
  • in the element, the memory management library 201 registers the head address of the NVRAM memory area allocated to the storage program 200, the memory size requested by the storage program 200, and the key passed from the storage program 200. The position where the element is inserted is determined by the value of the element's key. At this time, the contents of the pointer 602 are updated if necessary.
  • the memory management library 201 passes the head address of the NVRAM memory area to be assigned to the storage program 200 to the storage program 200 as a return value of nvmalloc, and ends the processing related to nvmalloc.
  • if the memory area for the element cannot be acquired from the search index area 601 (S104: NO), the memory management library 201 returns the chunk management area 600 to the state before the storage program 200 called nvmalloc (S106), passes the address NULL to the storage program 200 as the return value of nvmalloc, and ends the nvmalloc processing.
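  • the chunk start-address calculation used in the above processing can be sketched as follows; the layout sizes are taken as parameters because FIG. 6 defines the heap layout but not their concrete values:

      #include <stddef.h>
      #include <stdint.h>

      /* Start address of a chunk in the working NVRAM heap (see FIG. 6):
         heap start + flag 603 + chunk management area 600 + search index
         area 601 + chunk_number * chunk_size. */
      static void *chunk_address(char *heap_base, uint64_t chunk_number,
                                 size_t flag_size, size_t mgmt_size,
                                 size_t index_size, size_t chunk_size)
      {
          return heap_base + flag_size + mgmt_size + index_size
                           + chunk_number * chunk_size;
      }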
  • FIG. 9B shows NVRAM memory area release processing (nvfree processing) in the storage apparatus 100.
  • the memory management library 201 searches the search index 800 for the start address of the NVRAM memory area to be released based on the key passed as an argument (S131).
  • if the corresponding NVRAM memory area is not found, the memory management library 201 ends the processing related to nvfree.
  • the memory management library 201 deletes the element storing the head address of the NVRAM memory from the search index 800 (S133). At this time, the value of the element pointer 602 is updated if necessary.
  • the memory management library 201 converts the start address in the current NVRAM heap 203 of the NVRAM memory area to be released, obtained by searching the search index 800, into a chunk number, and refers to that chunk's entry in the chunk management area 600.
  • the chunk number is obtained by subtracting the sizes of the flag 603, the chunk management area 600, and the search index area 601 from the acquired address and dividing the result by the chunk size (see the sketch after this release process description).
  • the memory management library 201 changes the value of the status field of the entry of the chunk to “FREE”. Also, the chunk index field and key field of the entry are left blank. Then, the next entry in the chunk management area 600 is referred to.
  • if the chunk index field of the next entry is neither 0 nor blank, the memory management library 201 likewise sets the status field of that entry to "FREE", sets the chunk index field and the key field to blank, and moves on to the following entry. This process is repeated until an entry whose chunk index field value is 0 or blank is found (S134).
  • the memory management library 201 deletes the element corresponding to the NVRAM memory area to be released from the search index 800 (S135). At this time, the value of the element pointer 602 is updated if necessary. Then, the NVRAM memory area release process (nvfree process) ends.
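  • the address-to-chunk-number conversion used in this release processing is the inverse of the calculation sketched after the acquisition processing:

      #include <stddef.h>
      #include <stdint.h>

      /* Convert the head address of a released area back to its chunk
         number: subtract the sizes of the flag 603, the chunk management
         area 600, and the search index area 601 from the offset within the
         heap, then divide by the chunk size. */
      static uint64_t address_to_chunk(char *heap_base, char *addr,
                                       size_t flag_size, size_t mgmt_size,
                                       size_t index_size, size_t chunk_size)
      {
          size_t offset = (size_t)(addr - heap_base);
          return (offset - flag_size - mgmt_size - index_size) / chunk_size;
      }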
  • FIG. 9C shows a memory search process (nvlookup process) in the storage apparatus 100.
  • the memory management library 201 searches the search index 800 for the start address of the NVRAM memory area to be searched based on the key passed as an argument (S151).
  • if the start address is found, the memory management library 201 passes the address to the storage program 200 as the return value of nvlookup and ends the nvlookup processing. If the NVRAM memory area corresponding to the key is not found even after searching the search index 800, the memory management library 201 returns NULL.
  • FIG. 10A shows I / O processing (Write processing) of the storage apparatus 100.
  • the storage program 200 operating on the storage apparatus 100 receives a write request and write data from the host 110 (S201).
  • the storage program 200 acquires an NVRAM memory area for copying Write data from the NVRAM heap 203 via the memory management library 201 (using nvmalloc) (S202).
  • the key is the address on the disk device 104 at which the write data is to be stored. It is assumed that this address is included in the write request issued by the host 110. The address on the disk device 104 that stores the write data is recorded somewhere in the storage apparatus 100 (for example, in an area on the disk device 104).
  • the storage program 200 copies the write data to the acquired NVRAM memory area (S203), and notifies the host 110 that the I / O processing has been completed (S204).
  • the storage program 200 periodically stores the data stored in the NVRAM 103 in the disk device 104.
  • the NVRAM memory area is searched for via the memory management library 201 (by calling nvlookup), using as the key the address on the disk device 104 of the write data, recorded somewhere in the storage apparatus 100 (for example, on the disk device 104).
  • the storage program 200 writes the acquired data on the NVRAM 103 to the disk device 104.
  • the address at which data is written is the address of the recorded write data on the disk device 104.
  • the storage program 200 releases the NVRAM memory area via the memory management library 201 (by calling nvfree). The release of the NVRAM memory area is executed in a timely manner after the write to the disk device 104.
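  • a hedged sketch of how the storage program might drive the API in this write path; write_to_disk and notify_host are hypothetical helpers standing in for the storage program's own I/O and protocol code, not functions from the patent:

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      void *nvmalloc(size_t size, uint64_t key);
      void  nvfree(uint64_t key);
      void *nvlookup(uint64_t key, void **addr, size_t *size);
      void  write_to_disk(uint64_t disk_addr, const void *buf, size_t len); /* hypothetical */
      void  notify_host(void);                                              /* hypothetical */

      /* S201-S204: cache write data in NVRAM, keyed by its disk address. */
      void handle_write(uint64_t disk_addr, const void *data, size_t len)
      {
          void *nv = nvmalloc(len, disk_addr);   /* S202: acquire NVRAM area  */
          if (nv != NULL) {
              memcpy(nv, data, len);             /* S203: copy the write data */
              notify_host();                     /* S204: report completion   */
          }
      }

      /* Periodic destaging: find the cached data by key, write it to the
         disk device, then release the NVRAM memory area. */
      void destage(uint64_t disk_addr)
      {
          void *nv; size_t len;
          if (nvlookup(disk_addr, &nv, &len) != NULL) {
              write_to_disk(disk_addr, nv, len);
              nvfree(disk_addr);
          }
      }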
  • FIG. 10B illustrates an I / O process (Read process) of the storage apparatus 100 according to the first embodiment.
  • the storage program 200 receives a Read request from the host 110 (S231).
  • the storage program 200 searches the NVRAM memory area via the memory management library 201 (by calling nvlookup), using the address on the disk device 104 storing Read data as the key (S232).
  • if the corresponding NVRAM memory area is found (S233: YES), the storage program 200 returns the data of the NVRAM memory area to the host 110 as the Read data (S234). It is assumed that the address on the disk device 104 that stores the Read data is included in the Read request issued by the host 110.
  • if the corresponding NVRAM memory area is not found, the storage program 200 reads the data from the disk device 104 (S235), returns it to the host 110 as the Read data (S236), and ends the I/O processing.
  • the address from which data is read is the address on the disk device 104 that stores the Read data included in the Read request.
  • FIG. 11A schematically shows an example of a method in which the memory management library 201 allocates the memory page of the NVRAM 103 to the latter half of the standby NVRAM heap 204.
  • FIG. 11A shows the corresponding use area 1 (UA1) and use area 2 (UA2) in the working NVRAM heap 203, the standby NVRAM heap 204, and the NVRAM Ramdisk 300.
  • the memory management library 201 maps, among the memory pages allocated to the current NVRAM heap 203 via the OS 210, the areas UA1 and UA2 used by the storage program 200 to a continuous area in the latter half (the area consisting of chunks) of the standby NVRAM heap 204.
  • the memory management library 201 maps an area that is not used by the storage program 200 among the memory pages allocated to the current NVRAM heap 203 to the remaining area in the second half of the standby NVRAM heap 204. Thereby, a continuous unused area can be formed in the standby NVRAM heap 204. In the current NVRAM heap 203, the area used for the storage program 200 and the area not used can be confirmed with reference to the chunk management area 600.
  • the memory management library 201 maps the area on the chunk file 304 corresponding to the area used (or not used) on the current NVRAM heap 203 to the area of the standby NVRAM heap 204. As a result, the memory page used (or not used) in the storage program 200 is mapped to the area of the standby NVRAM heap 204.
  • the memory management library 201 manages the areas of the chunk file 304 mapped to the active NVRAM heap 203 and the standby NVRAM heap 204 using the mapping management table 1100 shown in FIG. 11B. One table is prepared for each of the active NVRAM heap 203 and the standby NVRAM heap 204.
  • the table 1100 manages the mapping between addresses (offsets) in the chunk file and addresses in the NVRAM heap.
  • the offset field of the table 1100 indicates the offset of the area of the chunk file 304 that is mapped to the NVRAM heap.
  • the size field indicates the size of the area.
  • the address field indicates the start address of the area on the NVRAM heap to which the area is mapped.
  • the table 1100 may be stored in the NVRAM 103 as a file on the file system 301 or may be stored in the disk device 104.
  • the table 1100 is updated by the memory management library 201 every time the area on the chunk file 304 is mapped to the NVRAM heap area.
  • the memory management library 201 refers to the table 1100 and identifies the correspondence between the area on the NVRAM heap and the area on the chunk file 304.
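  • a row of the mapping management table 1100 could look like this in C; the names and types are assumptions based on the three fields described above:

      #include <stddef.h>
      #include <stdint.h>

      /* One row of the mapping management table 1100 (hypothetical layout). */
      struct mapping_entry {
          uint64_t offset;   /* offset of the region within the chunk file 304 */
          size_t   size;     /* size of the region                             */
          void    *address;  /* start address of the region in the NVRAM heap  */
      };
      /* One such table is kept for the current NVRAM heap 203 and another
         for the standby NVRAM heap 204. */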
  • the memory management library 201 updates the mapping of the standby NVRAM heap 204 to match the current NVRAM heap 203 each time nvmalloc or nvfree is executed for the current NVRAM heap 203.
  • in response to the execution of nvmalloc, the memory management library 201 newly maps the memory page mapped to the newly allocated used area in the current NVRAM heap 203 to the second half of the standby NVRAM heap 204. As described above, a memory page is identified by a key. The memory management library 201 reflects in the mapping management table 1100 of the standby NVRAM heap 204 the memory pages (or their keys) that were added to or deleted from the mapping management table 1100 of the current NVRAM heap 203.
  • the memory management library 201 maps the memory page to the area immediately after the last used area in the standby NVRAM heap 204.
  • the memory management library 201 may map the memory page to an unused area at an arbitrary position having the size of the memory page.
  • the memory management library 201 releases the used area in the standby NVRAM heap 204 corresponding to the area newly released from the current NVRAM heap 203 in accordance with the execution of nvfree. Specifically, the memory management library 201 releases the area mapped in the standby NVRAM heap 204 to the memory page mapped in the area newly released from the NVRAM heap 203.
  • the memory management library 201 executes defragmentation of the standby NVRAM heap 204 at a predetermined timing.
  • the defragmentation algorithm is arbitrary.
  • the memory management library 201 forms one area by packing the used areas so as to eliminate the unused area before each used area.
  • the memory management library 201 executes only the update of the chunk management area 600 and the search index 800 without moving the data in the NVRAM 103 during the defragmentation.
  • the memory management library 201 moves the entry in the chunk management area 600 and changes the address and size in the element in the search index 800. Since the standby NVRAM heap 204 is not used by the storage program 200, defragmentation does not affect the processing of the storage program 200.
  • whenever releasing an area in the standby NVRAM heap 204 (executing nvfree) forms a discontinuous free area, the memory management library 201 executes defragmentation.
  • the memory management library 201 may perform defragmentation whenever a discontinuous use area is formed.
  • the memory management library 201 moves the used area subsequent to the released area forward so as to fill the formed empty area.
  • the standby NVRAM heap 204 can be maintained in a non-fragmented state by newly placing an area mapped to the NVRAM memory area immediately after the use area.
  • the memory management library 201 may perform defragmentation less frequently. For example, defragmentation of the standby NVRAM heap 204 may be executed every time nvfree has been executed a predetermined number of times for the current NVRAM heap 203. The memory management library 201 may also determine the timing of defragmentation based on the fragmentation state of the standby NVRAM heap 204; for example, defragmentation may be executed when the maximum unused area size falls below a threshold, or when the number of unused areas smaller than a threshold size reaches a threshold count.
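  • the threshold-based triggers named above could be expressed as a simple predicate; a sketch, with all thresholds as illustrative parameters rather than values from the patent:

      #include <stddef.h>

      /* Defragment the standby heap when the largest unused area has become
         too small, or when too many small unused areas exist. */
      static int should_defragment(size_t max_free_size, size_t size_threshold,
                                   size_t small_free_count, size_t count_threshold)
      {
          return max_free_size < size_threshold
              || small_free_count >= count_threshold;
      }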
  • FIG. 12 shows an example of the data structure of the standby NVRAM heap 204 managed by the memory management library 201.
  • a management file 303 is mapped to the first half of the standby NVRAM heap 204.
  • the first half includes a flag 1203, a chunk management area 1200, a search index area 1201, and a pointer (PTR) 1202.
  • the chunk file 304 is mapped to the second half of the standby NVRAM heap 204.
  • the latter half is divided into fixed-size chunks 1205 and used for allocation of NVRAM memory areas to the storage program 200.
  • FIG. 13 shows a structural example of the search index 1300 in the standby NVRAM heap 204 managed by the memory management library 201 in the storage apparatus 100 according to the first embodiment.
  • the search index 1300 is used for searching the NVRAM memory area acquired by the storage program 200.
  • the structure of the search index 1300 is the same as the structure of the search index 800.
  • a pointer to the root element 1301 of the search index 1300 is stored in the pointer 1202.
  • the memory area for each element constituting the search index 1300 is acquired from the search index area 1201. Any algorithm may be used for allocation of the memory area from the search index area 1201.
  • the latter half of the standby NVRAM heap 204 (the portion mapping the chunk file 304) is divided into fixed-size chunks 1205 and managed.
  • the chunk management area 1200 is used for management of the chunk 1205.
  • the memory management library 201 updates the chunk entry corresponding to the mapping area in the chunk management area 1200.
  • the memory management library 201 copies information on the used area in the chunk management area 600 to the chunk management area 1200 when a new used area is allocated in the current NVRAM heap 203 or when the used area is released.
  • specifically, the memory management library 201 copies the contents (status, chunk index, key) of the entries in the chunk management area 600 corresponding to the chunks constituting a new used area or unused area to the entries in the chunk management area 1200 corresponding to the chunks constituting that used area or unused area on the standby NVRAM heap 204.
  • when the memory management library 201 maps a memory page of the NVRAM 103 to the standby NVRAM heap 204, the search index 1300 needs to be updated; similarly, when the memory management library 201 cancels the mapping of a memory page of the NVRAM 103 from the standby NVRAM heap 204, the search index 1300 needs to be updated.
  • the memory management library 201 refers to the updated entry in the chunk management area 1200, and examines the start address, size, and key value of the NVRAM memory area in use that has been moved to the continuous area. Then, the element in the corresponding search index 1300 is searched from the key, and the address field of the corresponding element is rewritten with the destination address.
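  • a minimal sketch of that index update, reusing the index_elem and index_search sketches given for FIG. 8 (those are assumptions, not the patent's definitions); new_addr is the destination address of the moved area:

      /* After a used area is moved during defragmentation, find its element
         in the search index by key and rewrite the address field with the
         destination address; the size is unchanged by the move.
         Uses struct index_elem and index_search() as sketched earlier. */
      static void update_index_after_move(struct index_elem *root,
                                          uint64_t key, void *new_addr)
      {
          struct index_elem *e = index_search(root, key);
          if (e != NULL)
              e->addr = new_addr;
      }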
  • the memory management library 201 provides the storage program 200 with a function of switching the NVRAM heap 203 for acquiring memory and the standby NVRAM heap 204 at an arbitrary timing.
  • FIG. 14 shows NVRAM heap switching processing.
  • the memory management library 201 rewrites the flag 1203 of the new working NVRAM heap 204 to 0 (S301), and rewrites the flag 603 of the new standby NVRAM heap 203 to 1 (S302).
  • when the storage program 200 uses this function to switch the NVRAM heap, the subsequent NVRAM memory area acquisition, release, and search processes are performed on the new current NVRAM heap 204 (the NVRAM heap whose flag is 0).
  • the NVRAM heap 203 that was the target of memory acquisition, release, and search processing before switching is changed to a standby NVRAM heap. Whether or not the NVRAM heap is a standby NVRAM heap is determined by the value stored in the flag area (flag 603 and flag 1203) at the head of the NVRAM heap.
  • the NVRAM heap having a value of 0 is used as the current NVRAM heap to be acquired, released, and searched, and the NVRAM heap having a value of 1 is used as the standby NVRAM heap.
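  • the switch itself only rewrites the two flag words at the head of the heaps; a minimal sketch, assuming each flag is a single integer as described:

      #include <stdint.h>

      /* FIG. 14: make the former standby heap current (flag 0) and the
         former current heap standby (flag 1). */
      static void switch_heaps(volatile uint32_t *new_current_flag,  /* e.g. flag 1203 */
                               volatile uint32_t *new_standby_flag)  /* e.g. flag 603  */
      {
          *new_current_flag = 0;  /* S301 */
          *new_standby_flag = 1;  /* S302 */
      }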
  • the address in the NVRAM heap mapped to the NVRAM memory area is changed by the defragmentation process. Even in this case, it is possible to retrieve the changed address in the NVRAM heap.
  • the NVRAM heap and related processing of this embodiment can be applied to a computer system that provides a service different from the storage service.
  • the defragmentation process is executed while the service of the apparatus is stopped.
  • the storage program 200 executes the NVRAM heap defragmentation process while the service to the host is stopped. By executing the NVRAM heap defragmentation process while the service is stopped, the influence of the defragmentation on the service can be avoided.
  • An example of a period during which the service is stopped is a period from when the storage device is activated to when the service is started.
  • Another example of the period during which the service is stopped is a period in which a storage program operating on the standby node is waiting in a storage system having a two-node cluster configuration.
  • FIG. 15 illustrates a hardware configuration example of the system according to the second embodiment.
  • the active storage device 1500 provides a storage service to a plurality of hosts 1510.
  • the standby storage device 1540 provides a storage service to the host 1510 in place of the active storage device 1500 when a failure occurs in the active storage device 1500.
  • the host 1510 is connected to the network 1530 via the communication cable 1521, and is connected to the active storage device 1500 via the network 1530.
  • the host 1510 issues an I / O request to the active storage apparatus 1500.
  • the host 1510 issues an I / O request to the standby storage device 1540.
  • the network 1530 includes a network switch 1520, a communication cable 1521, a communication cable 1522, a communication cable 1523, and a communication cable 1524.
  • the active storage device 1500 includes a CPU 1501, which is a processor, a VRAM 1502, an NVRAM 1503, a disk device 1504, and a bus 1508 that interconnects them.
  • the CPU 1501 operates as a predetermined functional unit by executing a program on the active storage device 1500.
  • the CPU 1501 is composed of one or a plurality of chips or modules.
  • VRAM 1502 is used to store volatile data.
  • a program executed by the CPU 1501 and data used by the program are stored in the VRAM 1502.
  • the VRAM 1502 is composed of one or a plurality of chips or modules.
  • NVRAM 1503 is used for storing non-volatile data.
  • Write data issued from the host 1510 to the active storage device 1500 is stored in the NVRAM 1503.
  • the NVRAM 1503 is composed of one or a plurality of chips or modules.
  • the disk device 1504 is used for storing nonvolatile data.
  • the data on the NVRAM 1503 is moved to the disk device 1504, and an empty area is created in the NVRAM 1503.
  • the disk device 1504 is configured by one or a plurality of storage drives, and the storage drive is, for example, an HDD or an SSD.
  • the active storage device 1500 is connected to the host 1510 by the NIC 1505.
  • the active storage apparatus 1500 is connected from the port 1506 of the NIC 1505 to the network 1530 via the cable 1522 and is connected to the host 1510 via the network 1530. Further, the standby storage apparatus 1540 is connected from the port 1507 via the cable 1524.
  • the number of NICs 1505 depends on the design.
  • the standby storage device 1540 includes a CPU 1541 as a processor, a VRAM 1542, an NVRAM 1543, a disk device 1544, and a bus 1548 that interconnects them.
  • the CPU 1541 operates as a predetermined functional unit by executing a program on the standby storage device 1540.
  • the CPU 1541 is composed of one or a plurality of chips or modules.
  • VRAM 1542 is used to store volatile data. A program executed by the CPU 1541 and data used by the program are stored in the VRAM 1542.
  • the VRAM 1542 is composed of one or a plurality of chips or modules.
  • NVRAM 1543 is used for storing nonvolatile data.
  • Write data issued from the host 1510 to the storage apparatus 1540 is stored in the NVRAM 1543.
  • the NVRAM 1543 is composed of one or a plurality of chips or modules.
  • the disk device 1544 is used for storing nonvolatile data.
  • the data on the NVRAM 1543 is moved to the disk device 1544, and an empty area is created in the NVRAM 1543.
  • the disk device 1544 is configured by one or a plurality of storage drives, and the storage drive is, for example, an HDD or an SSD.
  • the standby storage device 1540 is connected to the host 1510 by the NIC 1545.
  • the standby storage device 1540 is connected from the port 1546 of the NIC 1545 to the network 1530 via the cable 1522, and is connected to the host 1510 via the network 1530.
  • the port 1547 is connected to the active storage apparatus 1500 via the cable 1524.
  • the number of NICs 1545 depends on the design.
  • FIG. 16 illustrates a logical configuration example of the system according to the second embodiment.
  • the OS 1610 is a program executed by the CPU 1501 of the storage apparatus 1500.
  • the OS 1610 manages allocation of hardware resources to the storage program 1600 executed by the storage apparatus 1500.
  • the memory management library 1601 manages the memory used by the storage program 1600 executed by the storage apparatus 1500.
  • the memory management library 1601 reserves an area of the VRAM heap 1602 in the address space of the storage program 1600 via the OS 1610.
  • the memory management library 1601 allocates the memory area of the VRAM 1502 to the VRAM heap 1602 via the OS 1610.
  • the storage program 1600 acquires the VRAM memory area from the VRAM heap 1602 via the memory management library 1601.
  • the memory management library 1601 reserves an area of the NVRAM heap 1603 in the address space of the storage program 1600 via the OS 1610.
  • the memory management library 1601 allocates the memory area of the NVRAM 1503 to the NVRAM heap 1603 via the OS 1610.
  • the storage program 1600 acquires the NVRAM memory area from the NVRAM heap 1603 via the memory management library 1601.
  • the storage program 1600 processes an I / O request issued from the host 1510 to the storage apparatus 1500.
  • the storage program 1600 acquires the NVRAM memory area from the NVRAM heap 1603 via the memory management library 1601, and stores the write data received from the host 1510 in the NVRAM memory area. Then, the host 1510 is notified that the I/O processing has been completed.
  • the OS 1630 is a program executed by the CPU 1541 of the standby storage apparatus 1540.
  • the OS 1630 manages the allocation of hardware resources to the storage program 1620 executed by the standby storage device 1540.
  • the memory management library 1621 manages the memory used by the storage program 1620 executed by the standby storage device 1540.
  • the memory management library 1621 reserves an area of the VRAM heap 1622 in the address space of the storage program 1620 via the OS 1630.
  • the memory management library 1621 allocates the memory area of the VRAM 1542 to the VRAM heap 1622 via the OS 1630.
  • the storage program 1620 acquires the VRAM memory area from the VRAM heap 1622 via the memory management library 1621.
  • the memory management library 1621 reserves an NVRAM heap 1623 area in the address space of the storage program 1620 via the OS 1630.
  • the memory management library 1621 allocates the memory area of the NVRAM 1543 to the NVRAM heap 1623 via the OS 1630.
  • the storage program 1620 acquires the NVRAM memory area from the NVRAM heap 1623 via the memory management library 1621.
  • the storage program 1620 processes an I / O request issued from the host 1510 to the storage apparatus 1540 when the storage apparatus 1500 is stopped due to a failure.
  • the storage program 1620 acquires the NVRAM memory area from the NVRAM heap 1623 via the memory management library 1621, and stores the write data received from the host 1510 in the NVRAM memory area. Then, the host 1510 is notified that the I/O processing has been completed.
  • the cluster program 1624 cooperates with the cluster program 1604 to switch the storage program 1620 from the standby mode to the service mode.
  • the cluster program 1624 periodically transmits a heartbeat to the cluster program 1604.
  • when the heartbeat is no longer answered, the cluster program 1624 determines that a failure has occurred in the storage apparatus 1500 and switches the storage program 1620 from the standby mode to the service mode. When the mode is switched to the service mode, the storage program 1620 starts providing the storage service to the host 1510 (failover).
  • the memory management library 1601 allocates the memory page of the NVRAM 1503 to the NVRAM heap 1603 in the same manner as the memory management library 201 of the first embodiment.
  • the memory management library 1621 allocates the memory page of the NVRAM 1543 to the NVRAM heap 1623 in the same manner as the memory management library 201 of the first embodiment.
  • the memory management library 1601 provides the storage program 1600 with the same API as the memory management library 201 of the first embodiment.
  • the memory management library 1621 provides the storage program 1620 with the same API as the memory management library 201 of the first embodiment.
  • like the memory management library 201 of the first embodiment, the memory management library 1601 divides the NVRAM heap 1603 into chunks (see chunk 1705 in FIG. 17) and manages the allocation and release of memory areas for the storage program 1600. Similarly, the memory management library 1621 divides the NVRAM heap 1623 into chunks and manages the allocation and release of memory areas for the storage program 1620.
  • FIG. 17 shows an example of the data structure of the NVRAM heap 1603 managed by the memory management library 1601 in the storage apparatus 1500 according to the second embodiment.
  • the NVRAM heap 1603 has the same structure as the NVRAM heap 203 of the first embodiment except that there is no flag area at the head.
  • FIG. 18 shows a structural example of the chunk management area 1700 in the NVRAM heap 1603 managed by the memory management library 1601 in the storage apparatus 1500 according to the second embodiment.
  • the chunk management area 1700 has the same structure as the chunk management area 600 of the first embodiment.
  • FIG. 19 shows a structural example of the search index 1900 of the NVRAM heap 1603 managed by the memory management library 1601 in the storage apparatus 1500 according to the second embodiment.
  • the search index 1900 has the same structure as the search index 800 in the first embodiment.
  • a pointer to the root element 1901 of the search index 1900 is stored in the pointer 1702.
  • the memory for each element of the search index 1900 is acquired from the search index area 1701.
  • the memory management library 1601 updates the chunk management area 1700 in the same manner as the memory management library 201 in the first embodiment, and inserts elements into the search index 1900 in the same manner.
  • the memory management library 1601 updates the chunk management area 1700 in the same manner as the memory management library 201 in the first embodiment, and deletes elements from the search index 1900 in the same manner.
  • the memory management library 1601 searches the NVRAM memory area in the search index 1900 in the same manner as the memory management library 201 in the first embodiment when the storage program 1600 executes nvlookup.
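  • The layout of the search index follows the first embodiment's search index 800 and is not reproduced here; as a hedged example only, a lookup over a binary tree of (key, address, size) elements, consistent with the root-element pointer and per-element allocation described above, could look like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative element layout only: the text states that an element
 * records a key, the area's first address, and its size; the binary
 * tree shape is an assumption, not the patent's actual structure. */
typedef struct nv_index_elem {
    uint64_t key;                  /* key of the NVRAM memory area */
    void    *addr;                 /* first address in the heap    */
    size_t   size;                 /* size of the area             */
    struct nv_index_elem *left, *right;
} nv_index_elem;

/* Search the index for `key`, as nvlookup does; returns the matching
 * element, or NULL when no area was registered under the key. */
static nv_index_elem *index_search(nv_index_elem *root, uint64_t key)
{
    while (root != NULL) {
        if (key == root->key)
            return root;
        root = (key < root->key) ? root->left : root->right;
    }
    return NULL;
}
```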
  • FIG. 20 shows an example of the data structure of the NVRAM heap 1623 managed by the memory management library 1621 in the standby storage device 1540 according to the second embodiment.
  • the NVRAM heap 1623 includes a plurality of chunks 2005.
  • the NVRAM heap 1623 has the same structure as the NVRAM heap 203 of the first embodiment except that there is no flag area at the head.
  • FIG. 21 shows a structural example of the chunk management area 2000 in the NVRAM heap 1623 managed by the memory management library 1621 in the standby storage apparatus 1540 according to the second embodiment.
  • the chunk management area 2000 has the same structure as the chunk management area 600 of the first embodiment.
  • FIG. 22 shows a structure example of the search index 2200 for the NVRAM heap 1623 managed by the memory management library 1621 in the standby storage device 1540 according to the second embodiment.
  • the search index 2200 has the same structure as the search index 800 in the first embodiment.
  • a pointer to the root element 2201 of the search index 2200 is stored in the pointer 2002.
  • the memory for each element of the search index 2200 is acquired from the search index area 2001.
  • the memory management library 1621 updates the chunk management area 2000 by the same method as the memory management library 201 in the first embodiment, and inserts elements into the search index 2200 by the same method.
  • the memory management library 1621 updates the chunk management area 2000 in the same manner as the memory management library 201 in the first embodiment, and deletes elements from the search index 2200 in the same manner.
  • the memory management library 1621 searches the NVRAM memory area in the search index 2200 by the same method as the memory management library 201 in the first embodiment when the storage program 1620 executes nvlookup.
  • the storage program 1600 and the storage program 1620 process I/O requests from the host 1510 in the same manner as the storage program 200 in the first embodiment.
  • the memory management library 1601 provides the storage program 1600 with a function of mirroring the data on the NVRAM heap 1603 to the NVRAM heap 1623 in cooperation with the memory management library 1621.
  • FIG. 23 shows an API of a mirroring function provided by the memory management library 1601 in the storage apparatus 1500 according to the second embodiment.
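  • FIG. 23 itself is not reproduced here; from the processing in FIG. 24, where the key of the area to mirror is passed as an argument of nvmirror, a plausible prototype is the sketch below. The int status return and the nv_key_t type are assumptions.

```c
#include <stdint.h>

typedef uint64_t nv_key_t;   /* assumed key type, as in earlier sketches */

/* Hypothetical prototype of the mirroring API: mirror the NVRAM memory
 * area registered under `key` to the standby device's NVRAM heap.
 * Only the key argument is confirmed by the text. */
int nvmirror(nv_key_t key);
```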
  • FIG. 24 shows mirroring processing in the active storage device 1500 and the standby storage device 1540 according to the second embodiment. The mirroring process is executed for all the write data received from the host 1510, for example, every time the write data is received from the host 1510.
  • the memory management library 1601 of the active storage device 1500 searches the search index 1900 for the NVRAM memory area corresponding to the key passed as an argument of nvmirror (S401). If the corresponding NVRAM memory area is not found (S402: NO), the memory management library 1601 ends this process.
  • the memory management library 1601 transmits the data of the NVRAM memory area, the size of the NVRAM memory area, and the key of the NVRAM memory area to the memory management library 1621 of the standby storage device 1540 (S403).
  • the memory management library 1621 refers to the chunk management area 2000 of the NVRAM heap 1623 and searches for a continuous free chunk that can store the received data.
  • the memory management library 1621 writes the received data in continuous empty chunks found in the NVRAM heap 1623 (S404).
  • the memory management library 1621 updates the entries of the chunk management area 2000 for the continuous chunks into which the data has been written, and marks those chunks as a used area by the same method as the nvmalloc processing described in the first embodiment (S405).
  • the memory management library 1621 creates an element to be registered in the search index 2200, and records in it the first address of the first of the consecutive chunks, the received memory size, and the received key.
  • the memory management library 1621 inserts the created element into the search index 2200 (S406). Thus, the mirroring process ends.
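  • Steps S403 to S406 on the standby side amount to: find contiguous free chunks, copy the received data in, mark the chunks used, and register an index element. A condensed C sketch follows, with hypothetical helper names standing in for the chunk management and index operations.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helpers standing in for the chunk management area and
 * search index operations of S404-S406; names and signatures are
 * assumptions for illustration. */
void *find_free_contiguous_chunks(size_t size);             /* S404 */
void  mark_chunks_used(void *first_chunk, size_t size);     /* S405 */
int   index_insert(uint64_t key, void *addr, size_t size);  /* S406 */

/* Standby-side handling of one mirror request: `data`, `size`, and
 * `key` are what the active side transmitted in S403. */
int handle_mirror_request(const void *data, size_t size, uint64_t key)
{
    /* S404: locate contiguous empty chunks large enough for the data
     * and write the received data into them. */
    void *dst = find_free_contiguous_chunks(size);
    if (dst == NULL)
        return -1;                 /* no contiguous free space */
    memcpy(dst, data, size);

    /* S405: mark the written chunks as a used area in the chunk
     * management entries, as in the nvmalloc processing. */
    mark_chunks_used(dst, size);

    /* S406: register (first chunk address, size, key) in the index. */
    return index_insert(key, dst, size);
}
```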
  • the standby mode storage program 1620 stores the data stored in the NVRAM 1543 in the disk device 1544 at a predetermined timing, for example, periodically.
  • the storage program 1620 writes the data to the disk device 1544 and then releases the NVRAM memory area via the memory management library 1621. Release is performed in a timely manner after writing to the disk device 1544. For example, when the total size of the write data stored in the NVRAM 1543 exceeds a specified value, the standby storage device 1540 may store the oldest write data in the disk device 1544.
  • the standby storage device 1540 may store the data stored in the NVRAM 1543 in the disk device 1544 in accordance with an instruction from the active storage device 1500.
  • the active storage program 1600 transmits an instruction identifying the data to the standby storage device 1540 in accordance with the storing of the data from the NVRAM 1503 to the disk device 1504.
  • the standby storage program 1620 stores the instructed data in the disk device 1544.
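  • The ordering here matters: the standby side writes the mirrored data to the disk device first and releases the NVRAM area only afterwards, so an interrupted destage loses nothing. A minimal sketch of that sequence, assuming a hypothetical write_to_disk helper and the nvfree/nv_key_t names from the earlier sketch, is shown below.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t nv_key_t;

int nvfree(nv_key_t key);                            /* assumed API    */
int write_to_disk(const void *addr, size_t size);    /* assumed helper */

/* Destage one mirrored area (already looked up to `addr`/`size`) to
 * the disk device, then release the NVRAM area via the library. */
int destage_and_release(nv_key_t key, const void *addr, size_t size)
{
    if (write_to_disk(addr, size) != 0)
        return -1;      /* keep the NVRAM copy if the write failed */
    return nvfree(key); /* release only after the data is on disk  */
}
```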
  • the memory management library 1621 in the standby mode executes the defragmentation of the NVRAM heap 1623 at a predetermined timing.
  • the defragmentation of the NVRAM heap 1623 relocates the data within the NVRAM 1543 together with updating the chunk management area 2000 and the search index 2200.
  • the memory management library 1621 can execute defragmentation by the same method as in the first embodiment.
  • the memory management library 1621 may periodically perform defragmentation.
  • the memory management library 1621 performs defragmentation each time an area is released in the NVRAM heap 1623 and a discontinuous free area is formed. Thereby, an unfragmented NVRAM heap 1623 can always be provided at the time of failover from the storage apparatus 1500 to the storage apparatus 1540.
  • the memory management library 1621 may perform defragmentation whenever a discontinuous use area is formed.
  • the memory management library 1621 moves the used areas that follow the released area forward. For example, the memory management library 1621 places a newly allocated NVRAM memory area immediately after the continuous use area (see the sketch below). Thereby, the NVRAM heap 1623 can be maintained in an unfragmented state.
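  • Placing every new allocation immediately after the existing continuous use area behaves like a bump allocator over the chunk array, which is one way the heap can be kept hole-free; the following sketch illustrates only that placement policy, with illustrative names and without the capacity checks and chunk bookkeeping a real implementation needs.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the placement policy above: a new area always goes right
 * after the continuous used area, so no holes form between areas.
 * CHUNK_SIZE, heap_base, and used_bytes are illustrative. */
#define CHUNK_SIZE 4096u

static uint8_t *heap_base;    /* start of the heap's chunk array */
static size_t   used_bytes;   /* end of the continuous used area */

static void *alloc_after_used_area(size_t size)
{
    /* round up to whole chunks, since the heap is chunk-granular */
    size_t chunks = (size + CHUNK_SIZE - 1) / CHUNK_SIZE;
    void  *addr   = heap_base + used_bytes;
    used_bytes += chunks * CHUNK_SIZE;
    return addr;   /* heap-capacity check omitted in this sketch */
}
```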
  • FIG. 25 shows the processing (failback) from when the storage apparatus 1500 is started (restarted) after the failover from the storage apparatus 1500 to the storage apparatus 1540 until the storage program 1600 starts providing the storage service.
  • the storage device 1500 executes the OS 1610 (S451), and the storage program 1600 is executed by the OS 1610 (S452).
  • the OS 1610 links the memory management library 1601 to the storage program 1600 (S453).
  • the memory management library 1601 performs processing (defragmentation) for moving the used area on the NVRAM heap 1603 to a continuous area (S454).
  • in the failback, the storage program 1600 receives from the storage device 1540 the data updated in the disk device 1544 after the failover, and stores it in the disk device 1504.
  • the memory management library 1601 performs a process of moving the used area on the NVRAM heap 1603 to a continuous area.
  • FIG. 26 shows a method (defragmentation method) of moving the used area on the NVRAM heap 1603 to the continuous area, that is, moving the used area to form one continuous area. The same applies to the method of moving the used area on the NVRAM heap 1623 to the continuous area.
  • the memory management library 1601 moves the used NVRAM memory areas 2600 and 2601 to a continuous area.
  • the memory management library 1601 temporarily copies the data to the intermediate buffers 2602 and 2603 secured from the NVRAM heap 1603, and then copies the data from each intermediate buffer to the continuous area on the NVRAM heap 1603.
  • the memory management library 1601 updates each entry in the chunk management area 1700 as the data moves.
  • FIG. 27 shows a state of the chunk management area 1700 before the data movement and a state after the data movement.
  • the memory management library 1601 updates the chunk management area 1700 from the state of the chunk area 2700 before movement to the state of the chunk area 2701 after movement.
  • the memory management library 1601 updates the address field of the element of the search index 1900 as the data moves.
  • FIG. 28 shows a state of the search index 1900 before data movement and a state after data movement.
  • the memory management library 1601 updates the search index 1900 from the state of the search index 2800 before movement to the state of the search index 2801 after movement.
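  • Taken together, FIGS. 26 to 28 describe one relocation step: stage the used area in an intermediate buffer, copy it to its compacted position, then fix up the chunk management entry and the address field of the search index element. A hedged C sketch of that step, with assumed helper names for the two bookkeeping updates, follows.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed helpers for the two bookkeeping updates of FIGS. 27 and 28;
 * their names and signatures are illustrative, not the patent's. */
void chunk_mgmt_update(void *old_addr, void *new_addr, size_t size);
void index_update_address(uint64_t key, void *new_addr);

/* One relocation step of the defragmentation: `staging` is an
 * intermediate buffer (e.g. 2602/2603), which also makes the move
 * safe when the source and destination ranges overlap. */
void move_used_area(uint64_t key, void *src, void *dst, size_t size,
                    void *staging)
{
    memcpy(staging, src, size);   /* stage the data first          */
    memcpy(dst, staging, size);   /* then place it in the free gap */

    chunk_mgmt_update(src, dst, size);  /* FIG. 27: entry update   */
    index_update_address(key, dst);     /* FIG. 28: address fix-up */
}
```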
  • the storage apparatuses 1500 and 1540 may use the standby NVRAM heap described in the first embodiment.
  • the NVRAM heap and related processing of this embodiment can be applied to a computer system that provides a service different from the storage service.
  • the NVRAM heap is defragmented when the storage device is restarted in failback.
  • the storage apparatus may perform defragmentation of the NVRAM heap in restart without failback. In this case, data copy from the opposite storage device is not necessary, and the storage device moves the discrete use area of the NVRAM heap at the time of startup to form one continuous use area.
  • the cluster program in the service mode may decide to switch the storage device that provides the service based on the fragmented state of the NVRAM heap.
  • the cluster program monitors the state (fragmentation state) of the NVRAM heap and, when the fragmentation state satisfies a specified condition, stops the service of its own storage device and instructs the other, standby-mode storage device to start the service.
  • the method for determining the fragmentation state may be the same as in the first embodiment.
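  • The concrete condition is left to the first embodiment; purely as an example of such a test, a switchover could be triggered when the largest contiguous free run becomes a small fraction of the total free space, as in the sketch below (the 25% threshold is an assumption).

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative fragmentation test only; the actual condition follows
 * the first embodiment and is not specified in this section. */
static bool should_switch_over(size_t total_free, size_t largest_free_run)
{
    if (total_free == 0)
        return false;          /* nothing free, nothing fragmented */
    /* heavily fragmented: largest run under 25% of all free space */
    return largest_free_run * 4 < total_free;
}
```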
  • the present invention is not limited to the above-described embodiments, and various modifications are included.
  • the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • each of the above-described configurations, functions, processing units, and the like may be realized by hardware by designing a part or all of them with, for example, an integrated circuit.
  • Each of the above-described configurations, functions, and the like may be realized by software by having the processor interpret and execute a program that realizes each function.
  • Information such as programs, tables, and files for realizing each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card or an SD card.
  • control lines and information lines indicate what is considered necessary for the explanation, and not all control lines and information lines on the product are necessarily shown. In practice, it may be considered that almost all the components are connected to each other.


Abstract

The invention relates to a computer system that comprises a processor and a nonvolatile random access memory. The processor: sets a first heap for a program to use a memory area of the nonvolatile random access memory; executes allocation and release of used areas in the first heap; executes processing to consolidate the separated used areas into a single continuous area in the first heap; updates first management information indicating the relationship between a used area in the first heap and a key that identifies the memory area allocated to the used area, this update being made in accordance with the allocation, the release, and the processing; and searches the first heap for the address of a target memory area using the key of the target memory area in the first management information.
PCT/JP2017/000054 2017-01-04 2017-01-04 Computer system WO2018127948A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/000054 WO2018127948A1 (fr) 2017-01-04 2017-01-04 Computer system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/000054 WO2018127948A1 (fr) 2017-01-04 2017-01-04 Computer system

Publications (1)

Publication Number Publication Date
WO2018127948A1 true WO2018127948A1 (fr) 2018-07-12

Family

ID=62789225

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/000054 WO2018127948A1 (fr) Computer system

Country Status (1)

Country Link
WO (1) WO2018127948A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11436256B2 (en) 2020-01-20 2022-09-06 Fujitsu Limited Information processing apparatus and information processing system
US11846003B2 (en) 2018-10-31 2023-12-19 Jfe Steel Corporation High-strength steel sheet and method for manufacturing the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6282605B1 (en) * 1999-04-26 2001-08-28 Moore Computer Consultants, Inc. File system for non-volatile computer memory
JP2010277268A * 2009-05-27 2010-12-09 Kyocera Mita Corp Memory management device, one-chip microcomputer equipped with the same, and embedded system
JP2013222310A * 2012-04-17 2013-10-28 Hitachi Ltd Business continuity method



Similar Documents

Publication Publication Date Title
CN109542332B (zh) Memory system and control method for controlling nonvolatile memory
US10852959B2 (en) Data storage system, process and computer program for such data storage system for reducing read and write amplifications
JP7312251B2 (ja) Improving available storage space in a system with varying data redundancy schemes
US10133511B2 (en) Optimized segment cleaning technique
US9367241B2 (en) Clustered RAID assimilation management
CN111587428B (zh) Metadata journal in a distributed storage system
EP3036616B1 (fr) Domain-based metadata management with dense tree structures in a distributed storage architecture
US9135123B1 (en) Managing global data caches for file system
US10678452B2 (en) Distributed deletion of a file and directory hierarchy
US20160070644A1 (en) Offset range operation striping to improve concurrency of execution and reduce contention among resources
US11029862B2 (en) Systems and methods for reducing write tax, memory usage, and trapped capacity in metadata storage
US20120011340A1 (en) Apparatus, System, and Method for a Virtual Storage Layer
US9307024B2 (en) Efficient storage of small random changes to data on disk
US11409454B1 (en) Container ownership protocol for independent node flushing
US8856443B2 (en) Avoiding duplication of data units in a cache memory of a storage system
WO2015105666A1 (fr) Flash-optimized, log-structured layer of a file system
JP2019079113A (ja) Storage device, data management method, and data management program
WO2018154667A1 (fr) Scale-out type storage system
US10394484B2 (en) Storage system
WO2018127948A1 (fr) Computer system
US11366700B1 (en) Hierarchical workload allocation in a storage system
KR20220006458A (ko) Key-value storage device and key sorting method
US11886427B1 (en) Techniques for efficient journal space handling and recovery processing with multiple logs
US20230409218A1 (en) Container flush ownership assignment
JP5334048B2 (ja) Memory device and computer

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17890448

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17890448

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP