WO2016088372A1 - Access device, migration device, distributed storage system, access method, and computer-readable recording medium - Google Patents

Access device, migration device, distributed storage system, access method, and computer-readable recording medium

Info

Publication number
WO2016088372A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
storage
hold
notification
access
Prior art date
Application number
PCT/JP2015/005987
Other languages
French (fr)
Japanese (ja)
Inventor
真樹 菅
鈴木 順
佑樹 林
Original Assignee
日本電気株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電気株式会社
Priority to JP2016562308A (JPWO2016088372A1)
Publication of WO2016088372A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/10: Program control for peripheral devices

Definitions

  • The present invention relates to an access device, a migration device, a distributed storage system, an access method, and a computer-readable recording medium, and in particular to a distributed storage system in which the number of storage units can increase or decrease, an access device that accesses such a distributed storage system, a migration device that performs data migration as the number of storage units increases or decreases, an access method using the access device, and a computer-readable recording medium.
  • A data store system (for example, a database system, a file system, or a cache system) configured by a single computer or a plurality of computers is known.
  • the distributed storage system includes a plurality of general-purpose computers connected via a network.
  • the distributed storage system stores data and provides data using a storage unit (or storage device) mounted on these computers.
  • the storage unit is, for example, a hard disk (HDD: Hard Disk Drive), main memory (for example, DRAM: Dynamic Random Access Memory), or the like.
  • In such a distributed storage system, software or special hardware determines the computer to which each piece of data is allocated and the computer that processes it.
  • As a method for determining the computer responsible for a piece of data, for example, a method using a random number or a hash value is adopted.
  • Cluster-based distributed storage and distributed database technologies have been developed on the premise of a server architecture.
  • In a server architecture, in order to access the resources of a server other than itself, it is necessary to go through the corresponding server.
  • In a resource separation architecture, by contrast, each resource (memory, storage, etc.) is connected via an interconnect network, so each CPU (Central Processing Unit) can physically share each resource without going through another server.
  • Such architectural changes bring innovation to distributed storage and distributed database technologies.
  • a distributed storage system it is necessary to allow a dynamic change in the number of storage units due to a failure or performance expansion of a computer or storage device that is a component of the system. For example, when a method of determining a responsible storage unit based on a hash value is used, when the number of storage units is changed, the data distributed arrangement state is changed, so that a data rebalancing process is required. In general, the determination of the responsible storage unit during the data rebalancing process is more complicated than normal.
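  • As a non-limiting illustration (the placement rule below is an assumption for this sketch, not part of the disclosure), the following Python fragment shows why a change in the number of storage units forces data rebalancing when the responsible unit is derived from a hash of the key:

```python
from zlib import crc32

def owner(key: str, num_units: int) -> int:
    # Hypothetical placement rule: the storage unit responsible for a key
    # is derived from a hash of the key.
    return crc32(key.encode()) % num_units

keys = [f"key-{i}" for i in range(1000)]
before = {k: owner(k, 4) for k in keys}   # 4 storage units
after  = {k: owner(k, 5) for k in keys}   # one unit added

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys change their responsible unit")
# Most keys move under a plain modulo rule; consistent hashing (Patent
# Document 1, and FIG. 6 below) keeps this movement much smaller, but some
# rebalancing is still unavoidable whenever units are added or removed.
```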
  • Patent Document 1 discloses a technique for constructing a distributed storage by consistent hashing and minimizing data rebalancing processing.
  • Patent Document 2 discloses a technique for quickly recovering the number of replications when the number of information storage nodes storing data is reduced.
  • The entire disclosures of the above Patent Documents 1 and 2 are incorporated herein by reference. The following analysis was made by the present inventors.
  • An object of the present invention is to provide an access device, a migration device, a distributed storage system, an access method, and a program that contribute to solving such a problem.
  • According to a first aspect, an access device is provided that includes receiving means for receiving, from a migration device that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed.
  • The access device further includes access means which, when the storage means that should hold data to be accessed changes, accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and which, once the notification of completion is received, accesses the storage means that should hold the data after the change.
  • According to a second aspect, a migration apparatus is provided that includes data movement means for performing data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process.
  • The migration apparatus further includes notification means for notifying an access device that accesses the storage means included in the storage apparatus that the addition process has been completed.
  • According to a third aspect, a distributed storage system including a migration device and an access device is provided.
  • The migration device performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, and notifies the access device that the addition process has been completed. When the storage means that should hold data to be accessed changes, the access device accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, once the notification of completion is received, accesses the storage means that should hold the data after the change.
  • According to a fourth aspect, an access method by an access device is provided. In the access method, the access device receives, from a migration device that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed.
  • When the storage means that should hold data to be accessed changes, the access device accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, once the notification of completion is received, accesses the storage means that should hold the data after the change.
  • According to a fifth aspect, a program is provided that causes a computer to execute a process of receiving, from a migration apparatus that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed. The program also causes the computer to execute, when the storage means that should hold data to be accessed changes, a process of accessing the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, once the notification of completion is received, a process of accessing the storage means that should hold the data after the change.
  • The program can also be provided as a program product recorded on a non-transitory computer-readable storage medium.
  • According to the present invention, a decrease in access performance during the data rebalancing process when the number of storage units increases or decreases in a distributed storage system can be prevented.
  • FIG. 5 is a sequence diagram illustrating the operation during data migration processing in the distributed storage system according to the first embodiment of the invention. FIG. 6 is a diagram for explaining the data access destination when a storage unit is removed in that distributed storage system, and FIG. 7 is a diagram for explaining the data access destination when a storage unit is added.
  • FIG. 1 is a block diagram illustrating the configuration of an access device 2 according to an embodiment.
  • the access device 2 includes a receiving unit 25 and an access unit 26.
  • The receiving unit 25 receives, from a migration apparatus that performs data rebalancing by executing data addition processing on the storage units included in the storage apparatus in accordance with a change in the number of storage units and then executing data deletion processing, a notification that the addition processing has been completed.
  • Until it receives the notification that the addition processing has been completed, the access unit 26 accesses the storage units included in the storage apparatus based on information indicating the storage units that should hold the data before and after the change. Once the access unit 26 receives the notification that the addition processing has been completed, it accesses the storage units that should hold the data after the change.
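  • The behaviour of the access unit 26 can be sketched as follows (a minimal illustration with hypothetical helper functions `placement_before` and `placement_after`; it is not the claimed implementation):

```python
class AccessUnit:
    """Sketch of access unit 26 (hypothetical names, not the claimed API)."""

    def __init__(self, placement_before, placement_after):
        self.placement_before = placement_before  # key -> storage units (pre-change)
        self.placement_after = placement_after    # key -> storage units (post-change)
        self.addition_complete = False            # set via receiving unit 25

    def on_completion_notice(self):
        # Called when the migration device reports that all data
        # addition processing has finished.
        self.addition_complete = True

    def targets(self, key):
        if self.addition_complete:
            # After the notification: use only the post-change placement.
            return set(self.placement_after(key))
        # Before the notification: consult both the pre- and post-change
        # placement information.
        return set(self.placement_before(key)) | set(self.placement_after(key))
```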
  • FIG. 2 is a block diagram illustrating the configuration of the migration apparatus 1 according to an embodiment.
  • the migration apparatus 1 includes a data moving unit 11 and a notification unit 14.
  • The data moving unit 11 rebalances data by executing data addition processing on the storage units included in the storage apparatus in accordance with a change in the number of storage units and then executing data deletion processing.
  • the notification unit 14 notifies the access device that accesses the storage unit included in the storage device that the addition process has been completed.
  • With the access device 2 and the migration device 1, it is possible to prevent a decrease in access performance during data rebalancing processing when the number of storage units increases or decreases in the distributed storage system.
  • FIG. 4 is a block diagram illustrating the configuration of a distributed storage system according to an embodiment.
  • In the first embodiment, the access device 2 switches its data read/write algorithm, while the migration device 1 executes only the data addition processing necessary for data rearrangement.
  • After the addition processing, the migration device 1 issues an algorithm switching command to the group of access devices 2, and performs only the data deletion processing for data relocation after the switching is completed. Furthermore, while the external storage units 200 to 20N are being increased or decreased, the access devices 2 perform data access using both the old and new data arrangement information, and after receiving the switching command from the migration device they operate with the normal data access method using only the new data arrangement information.
  • As a result, the access performance does not deteriorate during the data rebalancing process when the external storage units 200 to 20N increase or decrease.
  • The reason is that the migration apparatus 1 notifies the access apparatus 2 of the migration transition, so that the access apparatus 2 does not need to keep accessing data based on the old data arrangement information.
  • FIG. 3 is a diagram illustrating a schematic configuration of the distributed storage system according to the present embodiment.
  • the distributed storage system of this embodiment includes CPUs (Central Processing Units) 100 to 10M, external storage units 200 to 20N, and an interconnect network 300 that couples them.
  • a computer server is constructed by coupling the CPUs 100 to 10M and the external storage units 200 to 20N (and other resources) via the interconnect network 300.
  • the CPUs 100 to 10M are devices including an arithmetic processing device (for example, a CPU) constituting a server, an interface (IF) for connecting to the interconnect network 300, a high-speed storage circuit (register), a CPU cache, and the like.
  • The external storage units 200 to 20N each include an interface for coupling with the CPUs 100 to 10M (that is, an IF to the interconnect network 300), a storage device (flash memory, DRAM (Dynamic Random Access Memory), MRAM (Magnetoresistive Random Access Memory), HDD (Hard Disk Drive), etc.), a function for performing access processing to the storage device, power supply means, and the like.
  • the interconnect network 300 connects the CPUs 100 to 10M and the external storage units 200 to 20N, and exchanges data, control messages, and other messages.
  • the interconnect network 300 can be realized by, for example, an optical cable and a switch.
  • The interconnect network 300 can also be realized by a PCI-e (Peripheral Component Interconnect Express) cable or the like.
  • a computer based on a conventional architecture may be handled as a CPU in this embodiment.
  • In that case, the interconnect network 300 is realized by Ethernet, a PCI-e network, or the like. For example, with ExpEther, the PCI-e network of a computer can be extended over Ethernet. That is, an architecture similar to the resource separation architecture can be realized by equipping each CPU 10X with an IF having the ExpEther function.
  • In that configuration, the external storage unit 20X includes a card having the ExpEther function and an arbitrary PCI-e device. The PCI-e device only needs to include some storage device.
  • For example, PCI-e flash, a RAID (Redundant Arrays of Inexpensive Disks) card (with multiple HDDs or SSDs (Solid State Drives) attached), a GPGPU (General-Purpose computing on Graphics Processing Units) board, or an arithmetic board based on a MIC (Many Integrated Core) architecture such as Intel Xeon Phi can be used as the PCI-e device, as long as it includes storage means.
  • Alternatively, the interconnect network 300 may be realized by Fibre Channel or FCoE (Fibre Channel over Ethernet (registered trademark)).
  • In that case, the CPUs 100 to 10M are realized as conventional computers equipped with host bus adapters (or Ethernet cards), and the external storage units 200 to 20N are realized as storage apparatuses/systems provided with IFs for these networks.
  • In such a configuration, a separate network between the CPUs 100 to 10M is often prepared. For example, a form in which servers are connected by Ethernet (TCP/IP (Transmission Control Protocol / Internet Protocol)) and servers and storages are connected by Fibre Channel is used.
  • FIG. 4 is a block diagram illustrating a more detailed configuration of the distributed storage system built on the CPUs 100 to 10M.
  • Hereinafter, a description will be given mainly of a method of performing data access during data migration processing, assuming that a distributed storage/database system is realized by the computer system shown in FIG. 3.
  • FIG. 4 illustrates a configuration in which a plurality of computers and external storage units exist.
  • the migration device 1 and the access device 2 in FIG. 4 are assumed to operate as software that operates on the CPUs 100 to 10M in FIG. 3 or arbitrary hardware connected to the interconnect network 300, respectively.
  • The participating host storage unit 3 and the participating device storage unit 4 in FIG. 4 are realized as arbitrary storage means and the software that implements them, running on any one or more of the CPUs 100 to 10M in FIG. 3.
  • the participating host storage unit 3 and the participating device storage unit 4 in FIG. 4 may also be realized by a computer and storage means connected to the interconnect network 300 in FIG. 3 or other arbitrary network.
  • The distributed storage system includes a migration device 1, one or more access devices 2, a participating host storage unit 3, a participating device storage unit 4, an interconnect network 300, and external storage units 200 to 20N.
  • the access device 2 is software that implements a data store (database, KVS (Key Value Store), file system, etc.) realized in the present embodiment and provides a data access function.
  • the CPU 10X that implements the access device 2 has hardware and software that perform communication processing according to an arbitrary communication protocol via the interconnect network 300.
  • the migration apparatus 1 is software that executes data movement for correcting the data arrangement state when a change occurs in the number, capacity, etc. of the external storage units 200 to 20N constituting the distributed storage system.
  • the CPU 10X that implements the migration apparatus 1 also has hardware and software that perform communication processing according to an arbitrary communication protocol via the interconnect network 300.
  • The migration apparatus 1 needs to know which access devices 2 access the distributed storage system, as the destinations to be notified that the switching has been performed.
  • the participating host storage unit 3 holds information related to the access device 2 that accesses the distributed storage system.
  • the migration device 1 refers to the participating host storage unit 3 and grasps the access device 2 that accesses the distributed storage system.
  • As a specific example of the information held in the participating host storage unit 3, a list of the IP addresses and process port numbers of the servers (CPUs 100 to 10M) that run the access devices 2 can be considered. As a specific embodiment, the participating host storage unit 3 may be operated as database software, with the participation notification unit 21 and the transition notification unit 13 each accessing it by SQL commands or the like. Alternatively, a shared file system or the like may be used as the participating host storage unit 3.
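  • A minimal sketch of such a realisation, assuming an SQL table named `participating_hosts` (the table, column, and function names are illustrative, not part of the disclosure):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the shared participating host storage unit 3
conn.execute("CREATE TABLE IF NOT EXISTS participating_hosts (ip TEXT, port INTEGER)")

def register_access_device(ip: str, port: int) -> None:
    # Participation notification unit 21: join the system.
    conn.execute("INSERT INTO participating_hosts (ip, port) VALUES (?, ?)", (ip, port))
    conn.commit()

def unregister_access_device(ip: str, port: int) -> None:
    # Participation notification unit 21: leave the system.
    conn.execute("DELETE FROM participating_hosts WHERE ip = ? AND port = ?", (ip, port))
    conn.commit()

def notification_targets() -> list:
    # Transition notification unit 13: enumerate the access devices 2 to notify.
    return conn.execute("SELECT ip, port FROM participating_hosts").fetchall()

register_access_device("192.0.2.10", 7000)
print(notification_targets())   # [('192.0.2.10', 7000)]
```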
  • the participating device storage unit 4 is a storage unit that holds information necessary for realizing a distributed storage system that determines a data arrangement location based on a hash or a random number. In such a distributed storage system, it is necessary to hold list information of storage means (a computer when a computer is a storage means) constituting the distributed storage system.
  • The participating device storage unit 4 can be realized, for example, by holding the data on any one or more computers, by having each CPU 10X hold the data and update it synchronously, or by holding the data in a distributed storage system realized across the CPUs 100 to 10M.
  • the migration apparatus 1 includes a data movement unit 11, a device selection unit 12, and a transition notification unit 13.
  • the data moving unit 11 uses the data arrangement destination information determined by the device selection unit 12 and executes necessary data movement between the external storage units 200 to 20N in order to reflect the change of the data arrangement state.
  • the data movement processing includes data read from the migration source external storage unit 20Y, data write to the migration destination external storage unit 20Z, and deletion of the original data from the migration source external storage unit 20Y.
  • The device selection unit 12 calculates, from the list information of the storage units constituting the distributed storage system held by the participating device storage unit 4, information for determining which external storage unit 20Y stores a specific data range, and holds that information. In the present embodiment, when the state of the participating device storage unit 4 changes, the device selection unit 12 holds both the state information before the change and the state information after the change.
  • Using these pieces of information held by the device selection unit 12, the external storage unit 20Y serving as the storage destination for a given key can be identified, and furthermore the physical address to be accessed within that external storage unit 20Y can be identified.
  • Since the storage destination of each piece of data changes according to the change in the number of external storage units 20X, the device selection unit 12 operating in the migration apparatus 1 calculates the changed state.
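  • A sketch of a device selection unit that keeps both the pre-change and post-change membership of the external storage units (the names and the simple placement rule are assumptions; the embodiment actually uses consistent hashing, described with FIG. 6 below):

```python
class DeviceSelector:
    """Sketch of device selection units 12/24 (illustrative, not the claimed API)."""

    def __init__(self, units):
        self.old_units = list(units)   # membership before the change
        self.new_units = list(units)   # membership after the change

    def apply_change(self, added=(), removed=()):
        # When the participating device storage unit 4 reports a change,
        # keep the pre-change state and also compute the post-change state.
        self.new_units = [u for u in self.old_units if u not in removed] + list(added)

    def commit_change(self):
        # After migration finishes, only the post-change state is needed.
        self.old_units = list(self.new_units)

    def storage_units_for(self, key, units, replicas=3):
        # Placeholder placement rule for the sketch; a real implementation
        # would also derive the physical address within the chosen unit.
        start = hash(key) % len(units)
        return [units[(start + i) % len(units)] for i in range(min(replicas, len(units)))]
```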
  • The transition notification unit 13 notifies the access devices 2 of the migration transition.
  • The transition notification unit 13 notifies the access devices 2 after the data write processing in the migration is completed.
  • Specifically, the transition notification unit 13 receives the migration transition information from the data moving unit 11 and refers to the information in the participating host storage unit 3 to identify and notify the access devices 2.
  • the access device 2 includes a participation notification unit 21, a device selection unit 24, an algorithm switching unit 23, and a read / write execution unit 22.
  • the access device 2 receives a storage access request from a higher-level application program.
  • the access request varies depending on the function of the storage software.
  • For example, in the case of a database, the access request is a data operation instruction such as one specified in SQL.
  • In the case of a KVS (Key Value Store), the access request is a processing request for obtaining, registering, or updating the value corresponding to a key.
  • The participation notification unit 21 notifies the system of the participation or leaving of the access device 2 so that the access device 2 can realize the function of accessing the distributed storage. Specifically, the participation notification unit 21 registers or deletes its own management IP address and port number in the participating host storage unit 3. The information registered by the participation notification unit 21 may be any information that allows the transition notification unit 13 to identify the access device 2. When the influence on the system is considered to be small, the deletion (leaving) process may be omitted; however, it is preferable to later perform processing such as deleting the unused information.
  • the device selection unit 24 of the access device 2 has the same function as the device selection unit 12 of the migration device 1.
  • The device selection unit 24 in the access device 2 selects the external storage unit 20Y to be accessed in response to an access request from a higher-level application, and calculates the location (that is, the address) of the required data on that external storage unit 20Y.
  • the read / write execution unit 22 issues a data access command to the access destination selected by the device selection unit 24.
  • The read/write execution unit 22 issues commands to a plurality of external storage units 20Y in order to provide redundancy and ensure reliability. Further, the read/write execution unit 22 changes the algorithm used for executing reads and writes in accordance with instructions from the algorithm switching unit 23. Details of the algorithms will be described later.
  • the algorithm switching unit 23 issues an algorithm change request to the read / write execution unit 22 based on the information notified from the transition notification unit 13.
  • the data moving unit 11 performs rebalancing processing of data generated by changing the number of external storage units (devices) 200 to 20N according to the following procedure.
  • the transition notification unit 13 notifies the access device 2 of the start of migration.
  • the notification of migration start may be implicitly performed by reflecting the increase / decrease in devices.
  • Next, the device selection unit 12 calculates the data arrangement change state, and the data moving unit 11 executes all necessary data addition processing. Thereafter, the transition notification unit 13 notifies the algorithm switching unit 23 of the algorithm change, which switches the algorithm of the read/write execution unit 22. After that, the data moving unit 11 deletes all data that is no longer necessary due to the change in the data arrangement.
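  • The add-then-switch-then-delete ordering described above can be sketched as follows (the helper callables are assumptions standing in for the device selection unit 12, the data moving unit 11, and the transition notification unit 13):

```python
def rebalance(keys, old_placement, new_placement, copy, delete, notify_switch):
    # 1. Addition phase: copy each key to every storage unit that should
    #    hold it after the change but does not hold it yet.
    for key in keys:
        for unit in set(new_placement(key)) - set(old_placement(key)):
            copy(key, unit)

    # 2. Switch phase: tell every access device 2 to use only the new
    #    data placement information from now on (steps S4/S5 in FIG. 5).
    notify_switch()

    # 3. Deletion phase: only after the switch, remove the copies that are
    #    no longer needed under the new placement (step S6).
    for key in keys:
        for unit in set(old_placement(key)) - set(new_placement(key)):
            delete(key, unit)
```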
  • FIG. 5 is a sequence diagram illustrating the operation of the migration process in the distributed storage system according to this embodiment.
  • When an external storage unit 20Y is added to or deleted from the distributed storage system due to a failure of the external storage units 200 to 20N or a performance expansion of the system, the migration device 1 (running, for example, on a CPU 10X) starts the migration process.
  • After the configuration of the external storage units 200 to 20N participating in the distributed storage system has changed, each access device 2 holds information on both the data allocation destination determination method before the configuration change and the one after the configuration change, and performs read (READ) / write (WRITE) processing using them (step S2). For example, in the case of a distributed storage system based on the consistent hashing method, the access device 2 holds two consistent hash rings, one for the old state and one for the new state. The specific algorithms for the read (READ) / write (WRITE) accesses will be described later.
  • After each access device 2 has switched to the method that uses both the old and new data location determination information, the migration device 1 starts the data addition processing accompanying the migration (step S3). Because the data arrangement destinations change, there are areas to which new data must be copied and areas from which data must be deleted. At this stage, the migration apparatus 1 executes only the new copy processing (addition processing).
  • After the data addition processing is complete, the migration device 1 issues a request to the group of access devices 2 to switch the data placement destination determination information (step S4).
  • When an access device 2 receives the switching request and completes the switching process, it sends a switching completion notification to the migration device 1 (step S5).
  • the migration apparatus 1 executes a data deletion process associated with the migration (step S6).
  • After the switching is complete, the access devices 2 execute normal read/write access processing using only the new data location determination information (step S7).
  • In the following description, the period from the increase or decrease of the external storage units 20Y until the data moving unit 11 completes the data addition processing is referred to as “before switching notification reception”.
  • The period after the data moving unit 11 has completed the data addition processing is referred to as “after switching notification reception”.
  • Different data access algorithms are used in these two periods.
  • FIG. 6 is an image diagram of a consistent hash ring in the consistent hash method.
  • In the consistent hashing method, the external storage unit 20Y serving as the placement destination is determined using the hash value of a key. When hash values are mapped onto a circle, the hash value of a key corresponds to a point on the circumference. Similarly, the external storage units 200 to 20N can be mapped onto the circumference.
  • the external storage units 200 to 20N are mapped on the circumference as nodes D0 to D7.
  • a single external storage unit can be mapped to a plurality of nodes using a mechanism called a virtual node.
  • Here, the external storage units 200 to 20N and the nodes D0 to D7 correspond 1:1.
  • the external storage unit 20Y corresponding to the key can be selected as follows. As shown in FIG. 6, when the hash value of the key is between the node D0 and the node D7, the read / write execution unit 22 selects the external storage unit 200 corresponding to the node D0 as the storage unit. Further, assuming that a total of three copies are held for reliability, the read / write execution unit 22 selects the external storage units 201 and 202 corresponding to the next nodes D1 and D2 as the copy destination. In the description of the present embodiment, a case will be described in which such a method is used as an algorithm for selecting an external storage unit in the read / write execution unit 22. However, the duplication destination selection algorithm in the present invention is not limited to this.
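  • A minimal consistent-hash-ring sketch corresponding to FIG. 6 (the node names D0 to D7 and the replica count of three follow the description above; the hash function and the 1:1 node mapping are simplifying assumptions):

```python
import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    # Position on the ring, taken from a hash of the value.
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        # Each external storage unit is mapped onto the circle; with virtual
        # nodes, one unit would be inserted at several positions instead.
        self.ring = sorted((h(n), n) for n in nodes)

    def storage_units_for(self, key: str):
        positions = [p for p, _ in self.ring]
        start = bisect_right(positions, h(key)) % len(self.ring)
        # The unit clockwise from the key's hash holds the primary copy;
        # the next replicas-1 units on the ring hold the additional copies.
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.replicas, len(self.ring)))]

ring = ConsistentHashRing([f"D{i}" for i in range(8)])
print(ring.storage_units_for("some-key"))   # three adjacent nodes, depending on the hash
```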
  • the read / write execution unit 22 accesses the data by issuing a read or write command to the external storage unit 20Y at the placement destination.
  • The read/write execution unit 22 may access the plurality of external storage units 20Y and return a response to the client once it has read a predetermined number of copies and found no inconsistency among them. For example, in a majority-voting scheme, the read/write execution unit 22 considers the processing successful once two of the three replicas have been read.
  • Alternatively, the read/write execution unit 22 may return a response only after reading all three copies, or may return a response after reading just one copy.
  • The read/write execution unit 22 performs similar processing for write processing in order to ensure reliability.
  • the number of accesses required for the read / write execution unit 22 to regard an access (read or write) as a success is referred to as a “minimum access number”.
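  • The “minimum access number” can be understood as a quorum-style success criterion; a rough sketch (with a hypothetical `read_replica` callable, and without the parallel issuing and consistency checks a real implementation would need):

```python
def quorum_read(key, storage_units, read_replica, min_access=3):
    """Return a value once `min_access` replies have been collected.

    Sketch only: real code would issue the reads in parallel and resolve
    inconsistencies between replicas (e.g. by versioning).
    """
    replies = []
    for unit in storage_units:
        value = read_replica(unit, key)
        if value is not None:
            replies.append(value)
        if len(replies) >= min_access:
            return replies[0]          # enough copies were read for this sketch
    raise IOError("fewer than min_access replicas could be read")
```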
  • For example, when the external storage units 201 and 202 corresponding to the nodes D1 and D2 are removed, the migration apparatus 1 needs to copy the data in the external storage unit 200 corresponding to the node D0 to the external storage units 203 and 204 corresponding to the nodes D3 and D4.
  • the read / write execution unit 22 executes read processing based on the data arrangement information before the storage unit reduction. At this time, the read / write execution unit 22 does not access the reduced external storage unit 20Y. That is, the read / write execution unit 22 excludes the external storage units 201 and 202 corresponding to the nodes D1 and D2 from the access destination.
  • In that case, the read/write execution unit 22 performs the processing with the minimum access number reduced by the number of removed storage units (counting only those that store the data in question).
  • In the above example, two of the three replicas are lost, so the read/write execution unit 22 returns a response once it has read the data in the external storage unit 200 corresponding to the node D0.
  • If only one of the three external storage units 20Y holding a replica is removed, the read/write execution unit 22 subtracts only 1 from the minimum access number. The minimum value of the number of external storage units actually accessed by the read/write execution unit 22 is 1. Note that in a system that holds three replicas, data loss cannot be avoided if three external storage units fail simultaneously.
  • the read / write execution unit 22 After receiving the switching notification, the read / write execution unit 22 issues a read (READ) command based on the state after the external storage unit is reduced. That is, the read / write execution unit 22 accesses the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4.
  • the minimum access number used by the read / write execution unit 22 remains the system default value (3 in the case of FIG. 6).
  • When an external storage unit is added, the read/write execution unit 22 determines the read access destination based on the old data arrangement state (before the addition) until the switching notification is received, and based on the new data arrangement state (after the addition) after the switching notification has been received.
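  • The read-side rule summarised above can be sketched as follows (helper names are assumptions; `old_placement`/`new_placement` stand in for the device selection unit 24):

```python
def read_plan(key, old_placement, new_placement, removed_units,
              switched: bool, default_min_access=3):
    if switched:
        # After the switching notification: plain access under the new placement.
        return list(new_placement(key)), default_min_access

    # Before the switching notification: use the old (pre-change) placement,
    # never touch removed units, and lower the required number of successful
    # reads by the number of removed replicas of this key.
    old = list(old_placement(key))
    targets = [u for u in old if u not in removed_units]
    min_access = max(1, default_min_access - (len(old) - len(targets)))
    return targets, min_access
```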
  • FIG. 7 is an image diagram when one external storage unit 20Y is added (here, the external storage unit 201 corresponding to the node D1 is added). Again, the number of replicas is three.
  • the state of the storage destination of a certain key is changed from the nodes D0, D2, and D3 to the nodes D0, D1, and D2.
  • Before the switching notification is received, the read/write execution unit 22 writes data according to both the data arrangement state before the change and the data arrangement state after the change. That is, the read/write execution unit 22 writes the data to the external storage units 200 to 203 corresponding to the four nodes D0 to D3. The read/write execution unit 22 therefore operates so as to temporarily increase the number of replicas by the number of added external storage units. On the other hand, after receiving the switching notification, the read/write execution unit 22 writes only according to the post-addition state. That is, the read/write execution unit 22 writes the data to the external storage units 200 to 202 corresponding to the nodes D0 to D2.
  • The reason why the read/write execution unit 22 also writes to the external storage unit to which data is written only under the pre-change data arrangement (that is, the external storage unit 203 corresponding to the node D3) is to prevent inconsistency between the read algorithm and the data migration algorithm.
  • The read/write execution unit 22 may omit the write to the external storage unit 203 corresponding to the node D3. However, if this write is omitted, version management is required so that old data is not written back during the migration process.
  • The reason why the read/write execution unit 22 also writes to the external storage unit 201 corresponding to the newly added storage destination node D1 before receiving the switching notification is to maintain consistency with the migration apparatus 1.
  • By doing so, the write can be processed regardless of whether or not the migration apparatus 1 has already completed its processing for the corresponding key.
  • Since checking the migration history of the migration apparatus 1 would impose a load on the computer, it is considered faster to simply execute the write processing.
  • When the number of external storage units decreases, the read/write execution unit 22 performs the same kind of processing as when the number of external storage units increases. That is, before receiving the switching notification, the read/write execution unit 22 writes to the external storage units 20Y of both the pre-reduction and post-reduction arrangements. On the other hand, after receiving the switching notification, the read/write execution unit 22 writes only to the external storage units 20Y of the post-reduction arrangement.
  • For example, suppose the external storage units that store the data before the reduction are the external storage units 200 to 202 corresponding to the nodes D0 to D2, and the external storage units that store the data after the reduction are the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4.
  • In that case, before receiving the switching notification, the read/write execution unit 22 writes to the external storage units 20Y of both the pre-reduction and post-reduction arrangements. Based on the information on both arrangements, the write destinations would be the external storage units 200 to 204 corresponding to the nodes D0 to D4; however, since the external storage units 201 and 202 corresponding to the nodes D1 and D2 have been removed, they are excluded from the write destinations. As a result, the read/write execution unit 22 writes to the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4.
  • After receiving the switching notification, the read/write execution unit 22 writes only according to the post-reduction arrangement, that is, only to the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4.
  • In this example, therefore, the write destinations are the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4 both before and after the switching notification, and do not change.
  • As another example, suppose the external storage units to be accessed before the reduction are the external storage units 207, 201, and 202 corresponding to the nodes D7, D1, and D2, the external storage units 201 and 202 corresponding to the nodes D1 and D2 are removed, and the external storage units to be accessed after the reduction are the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4. In that case, before receiving the switching notification, the read/write execution unit 22 writes to the external storage units 207, 200, 203, and 204 corresponding to both sets of nodes (D7, D0, D3, and D4). On the other hand, after receiving the switching notification, the read/write execution unit 22 writes only to the external storage units 200, 203, and 204 corresponding to the nodes D0, D3, and D4.
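  • The corresponding write-side rule can be sketched in the same style (again with assumed helper names): before the switching notification, write to the union of the pre-change and post-change storage units, excluding removed units; afterwards, write only to the post-change units.

```python
def write_targets(key, old_placement, new_placement, removed_units,
                  switched: bool):
    new = list(new_placement(key))
    if switched:
        # After the switching notification: write only under the new placement.
        return new

    # Before the switching notification: write under both placements so that
    # the result is consistent regardless of how far the migration device 1
    # has progressed, but skip units that no longer exist.
    old = [u for u in old_placement(key) if u not in removed_units]
    return list(dict.fromkeys(old + new))   # union, preserving order
```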
  • Examples of utilization of the present invention include a database system using a storage unit outside a computer, a key-value store system, and a shared distributed data store system sharing a common storage unit among a plurality of computers.
  • the present invention can also be used when a storage system is operated on a server realized by a resource separation architecture.
  • the application destination of the present invention is not limited to these.
  • [Form 1] This is the access device according to the first aspect.
  • [Form 2] In the access device, when a storage unit that should hold data to be read is reduced, the access unit may, until the notification of completion is received, read from the storage units that should hold the data before the reduction, excluding the reduction target, and may, once the notification of completion is received, read from the storage units that should hold the data after the reduction.
  • [Form 3] In the access device, when a storage unit that should hold data to be read is added, the access unit may, until the notification of completion is received, read from the storage units that should hold the data before the addition, and may, once the notification of completion is received, read from the storage units that should hold the data after the addition.
  • [Form 4] In the access device, when a storage unit that should hold data to be written is added, the access unit may, until the notification of completion is received, write the data to the storage units that should hold the data before and after the addition, and may, once the notification of completion is received, write the data to the storage units that should hold the data after the addition.
  • [Form 5] In the access device, when a storage unit that should hold data to be written is reduced, the access unit may, until the notification of completion is received, write to the storage units that should hold the data before and after the reduction, excluding the reduction target, and may, once the notification of completion is received, write to the storage units that should hold the data after the reduction.
  • [Form 6] In the access device, the storage unit that should hold data may be determined based on a consistent hashing method.
  • [Form 7] This is the migration apparatus according to the second aspect.
  • [Form 8] This is the distributed storage system according to the third aspect.
  • [Form 9] In the distributed storage system, when a storage unit that should hold data to be read is reduced, the access device may, until the notification of completion is received, read from the storage units that should hold the data before the reduction, excluding the reduction target, and may, once the notification of completion is received, read from the storage units that should hold the data after the reduction.
  • [Form 10] In the distributed storage system, when a storage unit that should hold data to be read is added, the access device may, until the notification of completion is received, read from the storage units that should hold the data before the addition, and may, once the notification of completion is received, read from the storage units that should hold the data after the addition.
  • [Form 11] In the distributed storage system, when a storage unit that should hold data to be written is added, the access device may, until the notification of completion is received, write the data to the storage units that should hold the data before and after the addition, and may, once the notification of completion is received, write the data to the storage units that should hold the data after the addition.
  • [Form 12] In the distributed storage system, when a storage unit that should hold data to be written is reduced, the access device may, until the notification of completion is received, write to the storage units that should hold the data before and after the reduction, excluding the reduction target, and may, once the notification of completion is received, write to the storage units that should hold the data after the reduction.
  • [Form 13] In the distributed storage system, the storage unit that should hold data may be determined based on a consistent hashing method.
  • [Form 14] This is the access method according to the fourth aspect.
  • [Form 15] In the access method, when a storage unit that should hold data to be read is reduced, the access device may, until the notification of completion is received, read from the storage units that should hold the data before the reduction, excluding the reduction target, and may, once the notification of completion is received, read from the storage units that should hold the data after the reduction.
  • [Form 16] In the access method, when a storage unit that should hold data to be read is added, the access device may, until the notification of completion is received, read from the storage units that should hold the data before the addition, and may, once the notification of completion is received, read from the storage units that should hold the data after the addition.
  • [Form 17] In the access method, when a storage unit that should hold data to be written is added, the access device may, until the notification of completion is received, write the data to the storage units that should hold the data before and after the addition, and may, once the notification of completion is received, write the data to the storage units that should hold the data after the addition.
  • [Form 18] In the access method, when a storage unit that should hold data to be written is reduced, the access device may, until the notification of completion is received, write to the storage units that should hold the data before and after the reduction, excluding the reduction target, and may, once the notification of completion is received, write to the storage units that should hold the data after the reduction.
  • [Form 19] In the access method, the storage unit that should hold data may be determined based on a consistent hashing method.
  • [Form 20] According to a further aspect, a migration method is provided that includes a step in which a migration device rebalances data by executing data addition processing on the storage units included in a storage device in accordance with a change in the number of storage units and then executing data deletion processing, and a step of notifying an access device that accesses the storage units included in the storage device that the addition processing has been completed.
  • [Form 21] A program is provided for causing a computer to execute each step of the access method according to any one of Forms 14 to 19 or the migration method according to Form 20.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention prevents access performance in a distributed storage system from declining during a data re-balance process when the number of storage units has increased. An access device provided with: a reception unit for receiving, from a migration device for re-balancing data in accordance with a change in the number of storage units included in a storage device by executing a data addition process on the storage units included in the storage device and then executing a data deletion process, a notification that the addition process has been completed; and an access unit which, when the storage unit that is to hold the data to be accessed changes, accesses the storage unit included in the storage device, on the basis of information indicating the storage unit that is to hold the data before and after the change, until the notification that the addition process has been completed is received, and which, when the notification that the addition process has been completed is received, accesses the storage unit that is to hold the data after change.

Description

Access device, migration device, distributed storage system, access method, and computer-readable recording medium
The present invention relates to an access device, a migration device, a distributed storage system, an access method, and a computer-readable recording medium, and in particular to a distributed storage system in which the number of storage units can increase or decrease, an access device that accesses such a distributed storage system, a migration device that performs data migration as the number of storage units increases or decreases, an access method using the access device, and a computer-readable recording medium.
Various technologies have been developed for controlling data access to storage devices and storage systems. For example, data store systems (for example, database systems, file systems, and cache systems) configured by a single computer or a plurality of computers are known.
In recent years, distributed storage systems have mainly been used as such systems. A distributed storage system includes a plurality of general-purpose computers connected via a network, and stores and provides data using the storage units (or storage devices) mounted on these computers. Here, a storage unit is, for example, a hard disk (HDD: Hard Disk Drive) or main memory (for example, DRAM: Dynamic Random Access Memory).
In a distributed storage system as described above, software or special hardware determines the computer to which each piece of data is allocated and the computer that processes it. As a method for determining the computer corresponding to a piece of data, for example, a method using a random number or a hash value is adopted.
Cluster-based distributed storage and distributed database technologies have been developed on the premise of a server architecture. In a server architecture, in order to access the resources of a server other than itself, it is necessary to go through the corresponding server. In a resource separation architecture, on the other hand, each resource (memory, storage, etc.) is connected via an interconnect network. That is, with the resource separation architecture, each CPU (Central Processing Unit) can physically share each resource, and it is no longer necessary to go through a server. Such architectural changes bring innovation to distributed storage and distributed database technologies.
In a distributed storage system, it is necessary to allow dynamic changes in the number of storage units due to failures of, or performance expansion of, the computers and storage devices that make up the system. For example, when a method of determining the responsible storage unit based on a hash value is used, changing the number of storage units changes the distributed arrangement of the data, so a data rebalancing process is required. In general, determining the responsible storage unit during such a data rebalancing process is more complicated than during normal operation.
As a related technique, Patent Document 1 discloses a technique for constructing distributed storage by consistent hashing and minimizing the data rebalancing process.
Patent Document 2 discloses a technique for quickly recovering the number of replicas when the number of information storage nodes storing data decreases.
Patent Document 1: JP 2014-142945 A; Patent Document 2: JP 2012-221419 A
The entire disclosures of the above Patent Documents 1 and 2 are incorporated herein by reference. The following analysis was made by the present inventors.
The distributed storage system described in Patent Document 1 has the problem that data cannot be accessed, or performance deteriorates significantly, during the data migration (rebalancing) process. In that system the data migration (rebalancing) process takes time, and during that time the responsible data does not yet exist on the computer or storage device determined by the data arrangement algorithm. As a result, data cannot be read, and write processing cannot be executed until the migration is completed.
In addition, when data is stored in a plurality of storage units to improve reliability, the data must be read from and written to all of these storage units. When the number of storage units changes, the placement destination of the data changes, and the storage units to be accessed must also be changed depending on whether or not the data rebalancing has been completed.
In distributed storage built on an ordinary distributed cluster, an access can be forwarded to the storage unit at the new location. In a resource separation architecture, however, it is difficult to provide such a function in the storage unit. Even in an ordinary distributed cluster, access performance decreases because of the communication latency required to forward the access.
Further, according to the technique described in Patent Document 2, the flags (alive/dead information) indicating the status of data migration are centrally managed in a node management table on a management node, so the processing on the management node may become a bottleneck when clients access the information storage nodes during data migration.
Therefore, in a distributed storage system (for example, one operating on an ordinary distributed cluster or on a resource separation architecture), the problem is how to prevent a decrease in access performance during the data rebalancing process when the number of storage units increases or decreases. An object of the present invention is to provide an access device, a migration device, a distributed storage system, an access method, and a program that contribute to solving this problem.
According to a first aspect of the present invention, an access device is provided that includes receiving means for receiving, from a migration device that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed. The access device further includes access means which, when the storage means that should hold data to be accessed changes, accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and which, once the notification of completion is received, accesses the storage means that should hold the data after the change.
According to a second aspect of the present invention, a migration apparatus is provided that includes data movement means for performing data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process. The migration apparatus further includes notification means for notifying an access device that accesses the storage means included in the storage apparatus that the addition process has been completed.
According to a third aspect of the present invention, a distributed storage system including a migration device and an access device is provided. The migration device performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, and notifies the access device that the addition process has been completed. When the storage means that should hold data to be accessed changes, the access device accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, once the notification of completion is received, accesses the storage means that should hold the data after the change.
According to a fourth aspect of the present invention, an access method by an access device is provided. In the access method, the access device receives, from a migration device that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed. When the storage means that should hold data to be accessed changes, the access device accesses the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received. Once the notification of completion is received, the access device accesses the storage means that should hold the data after the change.
According to a fifth aspect of the present invention, a program is provided that causes a computer to execute a process of receiving, from a migration apparatus that performs data rebalancing by executing a data addition process on the storage means included in a storage apparatus in accordance with a change in the number of storage means and then executing a data deletion process, a notification that the addition process has been completed. The program also causes the computer to execute, when the storage means that should hold data to be accessed changes, a process of accessing the storage means included in the storage apparatus based on information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, once the notification of completion is received, a process of accessing the storage means that should hold the data after the change.
The program can also be provided as a program product recorded on a non-transitory computer-readable storage medium.
According to the access device, the migration device, the distributed storage system, the access method, and the computer-readable recording medium of the present invention, a decrease in access performance during the data rebalancing process when the number of storage units increases or decreases in a distributed storage system can be prevented.
FIG. 1 is a block diagram illustrating the configuration of an access device according to an embodiment of the present invention. FIG. 2 is a block diagram illustrating the configuration of a migration device according to an embodiment of the present invention. FIG. 3 is a diagram illustrating the configuration of a distributed storage system according to a first embodiment of the present invention. FIG. 4 is a block diagram illustrating the detailed configuration of the distributed storage system according to the first embodiment of the present invention. FIG. 5 is a sequence diagram illustrating the operation of the distributed storage system according to the first embodiment of the present invention during data migration processing. FIG. 6 is a diagram for explaining data access destinations when storage units are removed in the distributed storage system according to the first embodiment of the present invention. FIG. 7 is a diagram for explaining data access destinations when storage units are added in the distributed storage system according to the first embodiment of the present invention.
 First, an overview of one embodiment is described. The drawing reference signs appended to this overview are merely examples to aid understanding and are not intended to limit the present invention to the illustrated modes. The arrows shown in the figures indicate one example of the flow of data and the like; the flow of data and the like is not limited to this example.
 FIG. 1 is a block diagram illustrating the configuration of an access device 2 according to one embodiment. Referring to FIG. 1, the access device 2 includes a receiving unit 25 and an access unit 26. The receiving unit 25 receives, from a migration device that rebalances data by executing a data addition process and then a data deletion process on the storage units included in a storage apparatus in response to a change in the number of storage units included in the storage apparatus, a notification that the addition process has been completed. When the storage unit that should hold the data to be accessed changes, the access unit 26 accesses the storage units included in the storage apparatus on the basis of information indicating the storage units that should hold the data before and after the change, until it receives the notification that the addition process has been completed. Once it receives that notification, the access unit 26 accesses the storage unit that should hold the data after the change.
 FIG. 2 is a block diagram illustrating the configuration of a migration device 1 according to one embodiment. Referring to FIG. 2, the migration device 1 includes a data movement unit 11 and a notification unit 14. The data movement unit 11 rebalances data by executing a data addition process and then a data deletion process on the storage units included in the storage apparatus in response to a change in the number of storage units included in the storage apparatus. The notification unit 14 notifies an access device that accesses the storage units included in the storage apparatus that the addition process has been completed.
 With the access device 2 and the migration device 1 described above, degradation of access performance during data rebalancing processing when the number of storage units increases or decreases in a distributed storage system can be prevented.
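 As a concrete illustration of this division of roles only, the following minimal Python sketch models the two components; the class and method names (for example, AccessDevice.on_add_complete and Migrator.rebalance) are illustrative assumptions and are not identifiers defined in this disclosure.

```python
class AccessDevice:
    """Accesses storage units and switches placement information when notified."""

    def __init__(self, old_placement, new_placement):
        self.old_placement = old_placement   # placement rule before the change
        self.new_placement = new_placement   # placement rule after the change
        self.add_complete = False            # set when the migrator notifies us

    def on_add_complete(self):
        # Role of the receiving unit 25: record that the addition phase ended.
        self.add_complete = True

    def placements_for(self, key):
        # Role of the access unit 26: use both placements until the
        # notification arrives, and only the new placement afterwards.
        if self.add_complete:
            return [self.new_placement(key)]
        return [self.old_placement(key), self.new_placement(key)]


class Migrator:
    """Adds data to the new locations first, notifies, then deletes."""

    def __init__(self, access_devices):
        self.access_devices = access_devices

    def rebalance(self, add_all, delete_all):
        add_all()                      # data movement unit 11: addition phase
        for dev in self.access_devices:
            dev.on_add_complete()      # notification unit 14
        delete_all()                   # deletion phase runs only after the switch
```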
 Further details are described below with reference to the drawings. FIG. 4 is a block diagram illustrating the configuration of a distributed storage system according to one embodiment. Referring to FIG. 4, when external storage units 200 to 20N are added or removed, the access devices 2 switch their data read/write algorithm, while the migration device 1 first executes all of, and only, the data addition processing required for the data rearrangement.
 After executing all of the data addition processing, the migration device 1 issues an algorithm switching command to the group of access devices 2, and performs only the deletion processing of the data rearrangement after the switch is complete. While the external storage units 200 to 20N are being added or removed, the access devices 2 perform data access using both the old and the new data placement information; after receiving the switching command from the migration device, they operate in the normal data access mode using only the new data placement information.
 With such a distributed storage system, access performance does not deteriorate during the data rebalancing processing performed when external storage units 200 to 20N are added or removed.
 This is because there is no need to block read (READ) processing at a point in time when it is unknown whether the migration processing has completed. Furthermore, with the operations of the migration device 1 and the access devices 2 described above, no data inconsistency arises during the migration processing. That is, no inconsistency (version mismatch) arises between replicas of data during update processing while migration is in progress, and no inconsistency arises between replicas after the migration processing has completed either. Consequently, no inconsistency repair processing is required at read (READ) time.
 Furthermore, with such a distributed storage system, the communication latency that would be incurred by accessing an external storage unit that does not hold the required data and retrying the communication because of data migration can be avoided, which improves access performance.
 This is because the migration device 1 notifies the access devices 2 of the progress of the migration, so that it is not necessary to access data locations derived from the old data placement information.
<Embodiment 1>
 Next, a distributed storage system according to the first embodiment of the present invention is described in detail with reference to the drawings.
 [Configuration]
 FIG. 3 is a diagram illustrating the schematic configuration of the distributed storage system according to the present embodiment. Referring to FIG. 3, the distributed storage system of this embodiment includes CPUs (Central Processing Units) 100 to 10M, external storage units 200 to 20N, and an interconnect network 300 that couples them.
 One or more CPUs 100 to 10M and one or more external storage units 200 to 20N are provided. In the resource-disaggregated architecture, a computer server is constructed by coupling the CPUs 100 to 10M and the external storage units 200 to 20N (and other resources) via the interconnect network 300.
 Each of the CPUs 100 to 10M is a device including an arithmetic processing unit (for example, a CPU) constituting a server, an interface (IF) for connecting to the interconnect network 300, high-speed storage circuits (registers), a CPU cache, and the like. The CPUs 100 to 10M may, of course, also include other storage devices such as memory.
 Each of the external storage units 200 to 20N is a device including an interface for coupling with the CPUs 100 to 10M (that is, an IF to the interconnect network 300), a storage device (flash memory, DRAM (Dynamic Random Access Memory), MRAM (Magnetoresistive Random Access Memory), HDD (Hard Disk Drive), or the like), a function for processing access to the storage device, power supply means, and the like.
 The interconnect network 300 connects the CPUs 100 to 10M and the external storage units 200 to 20N to one another and carries data, control messages, and other messages. The interconnect network 300 can be realized, for example, by optical cables and switches, or by PCI-e (Peripheral Component Interconnect Express) cables or the like.
 A computer based on a conventional architecture may also be treated as a CPU in the present embodiment. In that case, the interconnect network 300 is realized by Ethernet, a PCI-e network, or the like. ExpEther makes it possible to extend PCI-e, the interconnect network of a computer; that is, by equipping the CPU 10X with an IF that has the ExpEther function, an architecture similar to the resource-disaggregated architecture can be realized. In this case, the external storage unit 20X includes a card having the ExpEther function, an arbitrary PCI-e device, and the like. The PCI-e device only needs to have some kind of storage device. For example, PCI-e flash, a RAID (Redundant Arrays of Inexpensive Disks) card (through which a plurality of HDDs or SSDs (Solid State Drives) are connected), a card with GPGPU (General Purpose computing on Graphics Processing Units) capability (which includes storage means), or a compute board based on a MIC (Many Integrated Core) architecture such as the Intel Xeon Phi can be used.
 If the resources to be disaggregated are limited to conventional storage systems, the interconnect network 300 is realized by Fibre Channel or FCoE (Fibre Channel over Ethernet (registered trademark)). In that case, the CPUs 100 to 10M are realized as conventional computers equipped with host bus adapters (or Ethernet cards), and the external storage units 200 to 20N are realized as storage apparatuses or systems equipped with IFs to these networks. In such an architecture, a separate network between the CPUs 100 to 10M is often provided; for example, a configuration is used in which servers are connected to one another by Ethernet (TCP/IP (Transmission Control Protocol/Internet Protocol)) and servers and storage are connected by Fibre Channel.
 As described above, various modifications are possible for the hardware architecture that realizes the present embodiment.
 FIG. 4 is a block diagram illustrating a more detailed configuration of the CPUs 100 to 10M. The following description focuses on a method of performing data access during data migration processing in a distributed storage/database system realized on the computers shown in FIG. 3.
 FIG. 4 illustrates a configuration in which a plurality of computers and a plurality of external storage units exist. The migration device 1 and the access devices 2 in FIG. 4 each operate as software running on the CPUs 100 to 10M in FIG. 3, or as arbitrary hardware connected to the interconnect network 300.
 The participating host storage unit 3 and the participating device storage unit 4 in FIG. 4 operate as arbitrary storage means and the software realizing such storage means, and are realized by any one or more of the CPUs 100 to 10M in FIG. 3. The participating host storage unit 3 and the participating device storage unit 4 in FIG. 4 may also be realized by computers and storage means connected to the interconnect network 300 in FIG. 3 or to any other network.
 Referring to FIG. 4, the distributed storage system according to the present embodiment includes the migration device 1, one or more access devices 2, the participating host storage unit 3, the participating device storage unit 4, the interconnect network 300, and the external storage units 200 to 20N.
 The access device 2 is software that implements the data store realized in the present embodiment (a database, a KVS (Key Value Store), a file system, or the like) and provides a data access function. The CPU 10X that implements the access device 2 has hardware and software for performing communication processing via the interconnect network 300 according to an arbitrary communication protocol.
 The migration device 1 is software that, when the number, capacity, or the like of the external storage units 200 to 20N constituting the distributed storage system changes, executes the data movement needed to correct the data placement. Likewise, the CPU 10X that implements the migration device 1 has hardware and software for performing communication processing via the interconnect network 300 according to an arbitrary communication protocol.
 In the present embodiment, the method of determining data placement destinations must be switched according to the progress of the migration. The migration device 1 therefore needs to know which access devices 2 access the distributed storage system, since they are the destinations of the notification that the switch has taken place. The participating host storage unit 3 holds information about the access devices 2 that access the distributed storage system, and the migration device 1 refers to it to identify those access devices 2.
 The information about the access devices 2 that access the distributed storage system does not necessarily have to be managed centrally in a single storage means; in FIG. 4, however, the participating host storage unit 3 is provided for simplicity. Besides such centralized management, a method of dynamically searching the network for participating hosts when needed is also conceivable.
 As a concrete example of the information held in the participating host storage unit 3, a list of the IP addresses and process port numbers of the servers of the CPUs 100 to 10M on which the access devices 2 operate is conceivable.
 As a more concrete implementation, the participating host storage unit 3 may be operated as database software, with the participation notification unit 21 and the transition notification unit 13 each accessing it via SQL commands or the like. Alternatively, a shared file system or the like may be used as the participating host storage unit 3.
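 As a rough illustration of such a host registry only, the following Python sketch keeps (IP address, port) pairs in a small SQLite table; the table name, file name, and function names are assumptions made for this sketch and are not part of this disclosure.

```python
import sqlite3

def open_registry(path="participating_hosts.db"):
    # Models the participating host storage unit 3 as a single SQL table.
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS hosts ("
        "ip TEXT, port INTEGER, PRIMARY KEY (ip, port))"
    )
    return conn

def register_host(conn, ip, port):
    # What the participation notification unit 21 would do when joining.
    conn.execute("INSERT OR IGNORE INTO hosts VALUES (?, ?)", (ip, port))
    conn.commit()

def list_hosts(conn):
    # What the transition notification unit 13 would do to find the
    # access devices to notify.
    return list(conn.execute("SELECT ip, port FROM hosts"))
```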
 The participating device storage unit 4 is a storage means that holds the information needed to realize a distributed storage system in which data placement is determined by hashing or random numbers. Such a distributed storage system needs to hold list information of the storage means constituting it (or of the computers, when computers serve as the storage means).
 The participating device storage unit 4 can be realized, for example, by holding the data on any one or more computers, by holding the data on each CPU 10X and updating the copies synchronously, or by realizing a distributed storage system across the CPUs 100 to 10M and holding the data there.
 The operation of each unit in FIG. 4 is described in detail below.
 Referring to FIG. 4, the migration device 1 includes a data movement unit 11, a device selection unit 12, and a transition notification unit 13.
 The data movement unit 11 uses the data placement destination information determined by the device selection unit 12 and executes the data movement between the external storage units 200 to 20N that is needed to reflect the change in the data placement. This data movement processing consists of reading the data from the source external storage unit 20Y, writing the data to the destination external storage unit 20Z, and deleting the original data from the source external storage unit 20Y.
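 The per-item movement just described can be summarized by the minimal sketch below; read, write, and delete stand in for whatever access primitives the external storage units actually expose and are assumptions made for illustration.

```python
def move_item(key, source, destination):
    """Move one data item from one external storage unit to another.

    In this embodiment the three steps are not executed back to back for
    each item: all additions (writes) of the rebalance are performed first,
    the access devices are switched, and only then are the deletions run.
    """
    value = source.read(key)        # read from the source unit 20Y
    destination.write(key, value)   # write to the destination unit 20Z
    source.delete(key)              # delete the original data from 20Y
```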
 The device selection unit 12 computes and holds, from the list of storage units constituting the distributed storage system held by the participating device storage unit 4, the information used to determine in which external storage unit 20Y a particular range of data is stored. In the present embodiment, when the state of the participating device storage unit 4 changes, the device selection unit 12 holds both the state information before the change and the state information after the change.
 For example, in a key-value store (KVS: Key Value Store) system, the information held by the device selection unit 12 can be used to identify the external storage unit 20Y that stores the data for a given key, and further to identify the physical address within the external storage unit 20Y to be accessed. Because the storage destination of each piece of data changes when, for example, the number of external storage units 20X changes, the device selection unit 12 operating in the migration device 1 computes that change.
 The transition notification unit 13 notifies the access devices 2 registered in the participating host storage unit 3 of the progress of the migration. It does so after the data write processing of the migration has completed. The transition notification unit 13 receives migration progress information from the data movement unit 11, refers to the information in the participating host storage unit 3, and notifies the access devices 2.
 Referring to FIG. 4, the access device 2 includes a participation notification unit 21, a device selection unit 24, an algorithm switching unit 23, and a read/write execution unit 22. The access device 2 accepts storage access requests from higher-level application programs.
 What constitutes an access request depends on the functionality of the storage software. For a database, an access request is a data manipulation command such as one specified in SQL. For a KVS (Key Value Store), an access request is a processing request such as retrieving, registering, or updating the value corresponding to a key.
 The participation notification unit 21 notifies the system when the access device 2 joins or leaves, so that the access device 2 can provide access to the distributed storage. Specifically, the participation notification unit 21 registers or deletes its own management IP address and port number in the participating host storage unit 3. The information registered by the participation notification unit 21 may be any information that allows the transition notification unit 13 to identify the access device 2. When the impact on the system is considered small, the deletion (leave) processing may be omitted, although it is preferable to clean up unused information later.
 The device selection unit 24 of the access device 2 has the same function as the device selection unit 12 of the migration device 1. In particular, in response to an access request from a higher-level application, the device selection unit 24 of the access device 2 selects the external storage unit 20Y to be accessed and computes the location (that is, the address) of the required data within that external storage unit 20Y.
 The read/write execution unit 22 issues data access commands to the access destinations selected by the device selection unit 24. In the distributed storage system, the read/write execution unit 22 issues commands to a plurality of external storage units 20Y in order to provide redundancy and ensure reliability. The read/write execution unit 22 also changes the algorithm it uses for reads and writes in accordance with instructions from the algorithm switching unit 23. Details of the algorithms are described later.
 The algorithm switching unit 23 issues an algorithm change request to the read/write execution unit 22 based on the information notified by the transition notification unit 13.
 [Operation]
 Next, the operation of the distributed storage system according to the present embodiment is described with reference to the drawings.
 {Migration processing procedure}
 The data movement unit 11 performs the data rebalancing processing caused by a change in the number of external storage units (devices) 200 to 20N according to the following procedure. First, upon the device change, the transition notification unit 13 notifies the access devices 2 that migration has started. The migration start notification may also be made implicitly by reflecting the addition or removal of devices.
 Next, the device selection unit 12 computes the change in the data placement. The data movement unit 11 then executes all of the required data addition processing. After that, the transition notification unit 13 notifies the algorithm switching unit 23 of the algorithm change, and the algorithm of the read/write execution unit 22 is switched. Finally, the data movement unit 11 executes all of the deletion processing for the data made unnecessary by the change in the data placement.
 FIG. 5 is a sequence diagram illustrating the operation of the migration processing in the distributed storage system according to the present embodiment.
 Referring to FIG. 5, first, an external storage unit 20Y is added to or removed from the distributed storage system because of a failure of one of the external storage units 200 to 20N or in order to expand system performance. The migration device 1 (for example, the CPU 10X) notifies the plurality of access devices 2 of the addition or removal of the external storage unit 20Y (step S1).
 After the configuration of the external storage units 200 to 20N participating in the distributed storage system has changed, each access device 2 holds the information of both the data placement determination method before the configuration change and the data placement determination method after the change, and performs read (READ)/write (WRITE) processing using both (step S2). For example, in a distributed storage system based on consistent hashing, the access device 2 holds two consistent hash rings, one for the old state and one for the new state. Specific algorithms for read (READ)/write (WRITE) access are described later.
 After each access device 2 has switched to the mode that uses both the old and the new data placement determination information, the migration device 1 starts the data addition processing required by the migration (step S3). The change in data placement produces both regions to which data must be newly copied and regions from which data will be deleted; at this stage, the migration device 1 executes only the new copy processing (addition processing).
 After the data addition processing is complete, the migration device 1 issues a request to switch the data placement determination information to the group of access devices 2 (step S4).
 When an access device 2 receives the switching request and completes the switching processing, it sends a switching completion notification to the migration device 1 (step S5).
 When the migration device 1 has received the completion notifications for the switching processing, it executes the data deletion processing associated with the migration (step S6).
 After the switch is complete, the access devices 2 perform normal read/write access processing using only the new data placement determination information (step S7).
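 The sequence of steps S1 to S7 can be condensed into the following coordinator-side Python sketch; the placement objects, the notify/request/wait calls, and the storage-unit API are illustrative assumptions rather than interfaces defined in this disclosure.

```python
def run_migration(units, access_devices, old_placement, new_placement, keys):
    """Coordinator-side sketch of steps S1 to S7 in FIG. 5."""
    # S1/S2: announce the device change; each access device starts using
    # both the old and the new placement information.
    for dev in access_devices:
        dev.notify_device_change(old_placement, new_placement)

    # S3: addition phase only -- copy every item to the locations that are
    # new placement targets but were not old ones.
    for key in keys:
        old_ids = old_placement.replicas(key)   # ordered, primary first
        new_ids = new_placement.replicas(key)
        value = units[old_ids[0]].read(key)
        for unit_id in new_ids:
            if unit_id not in old_ids:
                units[unit_id].write(key, value)

    # S4/S5: ask every access device to switch to the new placement and
    # wait until each one reports completion.
    for dev in access_devices:
        dev.request_switch()
    for dev in access_devices:
        dev.wait_switch_complete()

    # S6: deletion phase -- remove copies that are no longer placement targets.
    for key in keys:
        for unit_id in old_placement.replicas(key):
            if unit_id not in new_placement.replicas(key):
                units[unit_id].delete(key)
    # S7: from here on the access devices use only the new placement.
```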
 {Data access algorithm}
 Next, the access algorithms of the read/write execution unit 22 are described. Here, the method of determining the external storage unit 20X is explained using, as an example, a key-value store system realized with consistent hashing. The present invention, however, is not limited to key-value store systems or to consistent hashing.
 In the following, the period after external storage units 20Y have been added or removed and before the data movement unit 11 has completed the data addition processing is referred to as "before the switching notification is received", and the period after the data movement unit 11 has completed the data addition processing is referred to as "after the switching notification is received". In the present embodiment, different data access algorithms are used in these two periods.
 "Read processing"
 First, the read processing algorithm is described. FIG. 6 is a conceptual diagram of a consistent hash ring. In consistent hashing, the external storage unit 20Y at which data is placed is determined from the hash value of the key. When hash values are mapped onto a circle, the hash value of a key falls somewhere on the circumference, and the external storage units 200 to 20N can likewise be mapped onto the circumference.
 In FIG. 6, as an example, the external storage units 200 to 20N are mapped onto the circumference as nodes D0 to D7. In an actual system, a mechanism called virtual nodes can be used to map a single external storage unit to a plurality of nodes; here, however, for simplicity of explanation, the external storage units 200 to 20N and the nodes D0 to D7 are assumed to correspond one to one.
 The external storage unit 20Y corresponding to a key can then be selected as follows. As shown in FIG. 6, when the hash value of the key lies between node D0 and node D7, the read/write execution unit 22 selects the external storage unit 200 corresponding to node D0 as the storage destination. If a total of three replicas are kept for reliability, the read/write execution unit 22 selects the external storage units 201 and 202 corresponding to the next nodes D1 and D2 as the replica destinations. The description of this embodiment uses this method as the algorithm by which the read/write execution unit 22 selects external storage units; the replica destination selection algorithm of the present invention, however, is not limited to this.
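 A minimal Python sketch of this placement rule is given below; the choice of MD5 as the hash function and the class and method names are assumptions made for this sketch and are not prescribed by the embodiment.

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring mapping each key to an ordered list of nodes."""

    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        # One ring position per node (no virtual nodes, matching the text).
        self.ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def replicas_for(self, key):
        """Clockwise successor of the key plus the next (replicas - 1) nodes."""
        start = bisect.bisect_left(self.ring, (self._hash(key),))
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(self.replicas)]

# Usage: a key is stored on its clockwise successor node and replicated on
# the following two nodes of the ring.
ring = HashRing([f"D{i}" for i in range(8)])
print(ring.replicas_for("some-key"))
```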
 In this way, the read/write execution unit 22 accesses data by issuing read or write commands to the external storage units 20Y at the placement destinations. The read/write execution unit 22 may access the plurality of external storage units 20Y and return a response to the client once it has read at least a prescribed number of copies and found no inconsistency among them. Under a majority (quorum) scheme, for example, the read/write execution unit 22 regards the processing as successful once two of the three replicas have been read.
 Depending on the reliability the system requires, however, the read/write execution unit 22 may instead return a response only after all three copies have been read, or as soon as a single copy has been read. For write processing as well, the read/write execution unit 22 performs processing similar to read processing in order to ensure reliability. The number of accesses required for the read/write execution unit 22 to regard an access (read or write) as successful is referred to here as the "minimum access count".
 Now consider the case, shown in FIG. 6, in which the external storage units 201 and 202 corresponding to nodes D1 and D2 leave the system because of a failure. The storage units holding the data after the failure become the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4, so two of the storage units change. Copying is therefore needed to rearrange the data: specifically, the migration device 1 must copy the data in the external storage unit 200 corresponding to node D0 to the external storage units 203 and 204 corresponding to nodes D3 and D4.
 In the read processing algorithm of the present embodiment, before the switching notification from the migration device 1 is received, the read/write execution unit 22 executes read processing based on the data placement information from before the storage units were removed. In doing so, it does not access the removed external storage units 20Y; that is, it excludes the external storage units 201 and 202 corresponding to nodes D1 and D2 from the access destinations.
 In read processing, data must be read from at least the minimum access count of storage units. The read/write execution unit 22 therefore performs the processing with the minimum access count reduced by the number of removed storage units (counting only those that were placement destinations of the data).
 In the example of FIG. 6, nodes D1 and D2 have been removed from the placement consisting of nodes D0, D1, and D2, so the read/write execution unit 22 returns a response as soon as the data in the external storage unit 200 corresponding to node D0 has been read.
 Even when two external storage units 20Y leave the system, if only one of the three external storage units 20Y holding the replicas of a given item is removed, the read/write execution unit 22 reduces the minimum access count by only one. The minimum number of external storage units that the read/write execution unit 22 actually accesses is one. Note that in a system holding three replicas, data loss cannot be avoided if three external storage units fail simultaneously.
 After the switching notification is received, the read/write execution unit 22 issues read (READ) commands based on the state after the storage units were removed; that is, it accesses the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4. The minimum access count used by the read/write execution unit 22 is then the system default value (3 in the case of FIG. 6).
 Conversely, when an external storage unit 20Y is added, the read/write execution unit 22 determines the access destinations based on the old data placement (before the addition) until the switching notification is received, and based on the new data placement (after the addition) once the switching notification has been received.
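 The read-side rule described above (old placement with removed units excluded and a correspondingly reduced minimum access count before the switch; new placement with the default count afterwards) can be sketched as follows; the HashRing class is the one sketched earlier, and the storage-unit API is again an assumption.

```python
def read(key, old_ring, new_ring, removed_units, units,
         switch_received, min_access=3):
    if switch_received:
        # After the switching notification: only the new placement,
        # with the system-default minimum access count.
        targets = new_ring.replicas_for(key)
        quorum = min_access
    else:
        # Before the switching notification: old placement, skipping removed
        # units and lowering the count by the number of removed units that
        # were placement targets for this key (never below one).
        old_targets = old_ring.replicas_for(key)
        targets = [u for u in old_targets if u not in removed_units]
        quorum = max(1, min_access - (len(old_targets) - len(targets)))

    # If every replica of the key was removed, the data is lost, as noted
    # in the text; this sketch does not handle that case.
    values = [units[u].read(key) for u in targets[:quorum]]
    # A full implementation would also check the copies for consistency.
    return values[0]
```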
 "Write processing"
 Next, the write algorithm is described. FIG. 7 is a conceptual diagram of the case in which one external storage unit 20Y is added (here, the external storage unit 201 corresponding to node D1 is added). The number of replicas is again three. As a result of the added storage unit, the storage destinations of a given key change from nodes D0, D2, and D3 to nodes D0, D1, and D2.
 Before the switching notification is received, the read/write execution unit 22 writes the data according to both the data placement before the change and the data placement after the change; that is, it writes the data to the four external storage units 200 to 203 corresponding to nodes D0 to D3. The read/write execution unit 22 thus operates so as to temporarily increase the number of replicas by the number of added external storage units. After the switching notification is received, the read/write execution unit 22 writes only according to the placement after the addition; that is, it writes to the external storage units 200 to 202 corresponding to nodes D0 to D2.
 The reason the read/write execution unit 22 also writes to the external storage unit that is a write destination only under the pre-change placement (that is, the external storage unit 203 corresponding to node D3) is to avoid inconsistencies with the read (READ) algorithm and with the data migration algorithm.
 In a quorum scheme in which the number of reads is set lower than the number of replicas, the read/write execution unit 22 may omit the write to the external storage unit 203 corresponding to node D3. If that write is omitted, however, version management is required so that stale data is not written during the migration processing.
 The reason the read/write execution unit 22 also writes, before the switching notification is received, to the external storage unit 201 corresponding to node D1, which is a storage destination only under the new placement, is consistency with the migration device 1. This allows the processing to proceed regardless of whether the migration device 1 has already processed the key in question. Because checking the migration device 1's history would place a load on the computer, simply performing the write is considered faster.
 In the case where the number of external storage units is reduced, as in FIG. 6, the read/write execution unit 22 performs the same kind of processing as when the number of external storage units increases. That is, before the switching notification is received, the read/write execution unit 22 writes to the external storage units 20Y of both the pre-reduction and post-reduction placements, and after the switching notification is received, it writes only to the external storage units 20Y of the post-reduction placement.
 As illustrated in FIG. 6, when the external storage units 201 and 202 corresponding to nodes D1 and D2 leave because of a failure, the external storage units 20Y storing the data before the reduction are the external storage units 200 to 202 corresponding to nodes D0 to D2, while the external storage units storing the data after the reduction are the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4.
 In this case, before the switching notification is received, the read/write execution unit 22 writes to the external storage units 20Y of both placements. That is, based on the information about both the pre-reduction and post-reduction external storage units 20Y, it writes to the external storage units 200 to 204 corresponding to nodes D0 to D4. However, because the external storage units 201 and 202 corresponding to nodes D1 and D2 have been removed and no longer exist, they are excluded from the write destinations. As a result, the read/write execution unit 22 writes to the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4.
 After the switching notification is received, the read/write execution unit 22 writes only to the post-reduction placement, that is, only to the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4.
 In the example shown in FIG. 6, therefore, the write destinations are the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4 both before and after the removal of the external storage units, and do not change.
 Although not illustrated, suppose instead that the external storage units accessed before the reduction are the external storage units 207, 201, and 202 corresponding to nodes D7, D1, and D2, that the external storage units 201 and 202 corresponding to nodes D1 and D2 are removed, and that the external storage units accessed after the reduction are the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4. In that case, before the switching notification is received, the read/write execution unit 22 writes to the external storage units 207, 200, 203, and 204 corresponding to nodes D7, D0, D3, and D4 of both placements, and after the switching notification is received it writes only to the external storage units 200, 203, and 204 corresponding to nodes D0, D3, and D4.
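 The write-side rule (write to the union of the old and the new placements, excluding units that have physically left, until the switch; write only to the new placement afterwards) can be sketched as follows, again using the assumed HashRing and storage-unit API from the earlier sketches.

```python
def write(key, value, old_ring, new_ring, removed_units, units,
          switch_received):
    if switch_received:
        # After the switching notification: only the new placement is used.
        targets = new_ring.replicas_for(key)
    else:
        # Before the switching notification: the union of the old and new
        # placements, preserving order, skipping units that have already left.
        merged = old_ring.replicas_for(key) + new_ring.replicas_for(key)
        targets = [u for u in dict.fromkeys(merged) if u not in removed_units]

    for unit_id in targets:
        units[unit_id].write(key, value)
    return targets
```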
 Examples of applications of the present invention include database systems and key-value store systems that use storage units external to the computer, and shared distributed data store systems in which a plurality of computers share common storage units. The present invention can also be used when operating a storage system on servers realized with a resource-disaggregated architecture. The applications of the present invention, however, are not limited to these.
 In the present invention, the following forms are possible.
[Form 1]
 The access device according to the first aspect above.
[Form 2]
 In the access device, when storage units that should hold data to be read are removed, the access unit may read from the storage units that should hold the data before the removal, excluding those to be removed, until the notification of completion is received, and may read from the storage units that should hold the data after the removal once the notification of completion is received.
[Form 3]
 In the access device, when storage units that should hold data to be read are added, the access unit may read from the storage units that should hold the data before the addition until the notification of completion is received, and may read from the storage units that should hold the data after the addition once the notification of completion is received.
[Form 4]
 In the access device, when storage units that should hold data to be written are added, the access unit may write to the storage units that should hold the data both before and after the addition until the notification of completion is received, and may write to the storage units that should hold the data after the addition once the notification of completion is received.
[Form 5]
 In the access device, when storage units that should hold data to be written are removed, the access unit may write to the storage units that should hold the data before and after the removal, excluding those to be removed, until the notification of completion is received, and may write to the storage units that should hold the data after the removal once the notification of completion is received.
[Form 6]
 In the access device, the storage units that should hold data may be determined based on a consistent hashing method.
[Form 7]
 The migration device according to the second aspect above.
[Form 8]
 The distributed storage system according to the third aspect above.
[Form 9]
 In the distributed storage system, when storage units that should hold data to be read are removed, the access device may read from the storage units that should hold the data before the removal, excluding those to be removed, until the notification of completion is received, and may read from the storage units that should hold the data after the removal once the notification of completion is received.
[Form 10]
 In the distributed storage system, when storage units that should hold data to be read are added, the access device may read from the storage units that should hold the data before the addition until the notification of completion is received, and may read from the storage units that should hold the data after the addition once the notification of completion is received.
[Form 11]
 In the distributed storage system, when storage units that should hold data to be written are added, the access device may write to the storage units that should hold the data both before and after the addition until the notification of completion is received, and may write to the storage units that should hold the data after the addition once the notification of completion is received.
[Form 12]
 In the distributed storage system, when storage units that should hold data to be written are removed, the access device may write to the storage units that should hold the data before and after the removal, excluding those to be removed, until the notification of completion is received, and may write to the storage units that should hold the data after the removal once the notification of completion is received.
[Form 13]
 In the distributed storage system, the storage units that should hold data may be determined based on a consistent hashing method.
[Form 14]
 The access method according to the fourth aspect above.
[Form 15]
 In the access method, when storage units that should hold data to be read are removed, the access device may read from the storage units that should hold the data before the removal, excluding those to be removed, until the notification of completion is received, and may read from the storage units that should hold the data after the removal once the notification of completion is received.
[Form 16]
 In the access method, when storage units that should hold data to be read are added, the access device may read from the storage units that should hold the data before the addition until the notification of completion is received, and may read from the storage units that should hold the data after the addition once the notification of completion is received.
[Form 17]
 In the access method, when storage units that should hold data to be written are added, the access device may write to the storage units that should hold the data both before and after the addition until the notification of completion is received, and may write to the storage units that should hold the data after the addition once the notification of completion is received.
[Form 18]
 In the access method, when storage units that should hold data to be written are removed, the access device may write to the storage units that should hold the data before and after the removal, excluding those to be removed, until the notification of completion is received, and may write to the storage units that should hold the data after the removal once the notification of completion is received.
[Form 19]
 In the access method, the storage units that should hold data may be determined based on a consistent hashing method.
[Form 20]
 A migration method is provided that includes a step in which a migration device rebalances data by executing a data addition process and then a data deletion process on the storage units included in a storage apparatus in response to a change in the number of storage units included in the storage apparatus, and a step of notifying an access device that accesses the storage units included in the storage apparatus that the addition process has been completed.
[Form 21]
 A program is provided that causes a computer to execute the steps of the access method according to any one of Forms 14 to 19, or the steps of the migration method according to Form 20.
[Form 19]
In the access method, the storage unit to hold data may be determined based on the consistent hash method.
[Mode 20]
The migration device executes data addition processing on the storage units included in the storage device in accordance with fluctuations in the number of storage units included in the storage device, and then executes data deletion processing to execute data deletion. The migration method includes a step of performing rebalancing, and a step of notifying the access device that accesses the storage unit included in the storage device that the addition process has been completed.
[Form 21]
A program for causing a computer to execute each step of the access method according to any one of Forms 14 to 19 or the migration method according to Form 20 is provided.
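Modes 2 to 5 (and their system- and method-level counterparts) define only the switching behaviour of the access side, not a programming interface. The following is a minimal Python sketch of that behaviour under assumed names: the AccessDevice class, the placement callables, the removed_nodes set and the get/put node API are illustrative assumptions and do not appear in the disclosure.

```python
# Minimal sketch (an illustration under assumed names, not the claimed
# implementation) of the read/write switching in Modes 2-5, 9-12 and 15-18.

class AccessDevice:
    def __init__(self, old_placement, new_placement, removed_nodes=frozenset()):
        self.old_placement = old_placement       # key -> nodes before the change
        self.new_placement = new_placement       # key -> nodes after the change
        self.removed_nodes = set(removed_nodes)  # storage units being removed, if any
        self.addition_complete = False           # set by the migration device's notice

    def notify_addition_complete(self):
        # Called when the migration device reports that the data-addition phase
        # of rebalancing has finished (the deletion phase may still be running).
        self.addition_complete = True

    def read(self, key):
        if self.addition_complete:
            nodes = self.new_placement(key)      # after the notice: new layout only
        else:
            # Before the notice: pre-change layout, skipping removal targets
            # (Mode 2); for a pure addition this is the pre-change layout (Mode 3).
            nodes = [n for n in self.old_placement(key) if n not in self.removed_nodes]
        for node in nodes:
            value = node.get(key)
            if value is not None:
                return value
        return None

    def write(self, key, value):
        if self.addition_complete:
            targets = set(self.new_placement(key))   # after the notice: new layout only
        else:
            # Before the notice: every node that should hold the data before or
            # after the change, except removal targets (Modes 4 and 5), so no
            # replica is missed while rebalancing is in flight.
            targets = {n for n in list(self.old_placement(key)) + list(self.new_placement(key))
                       if n not in self.removed_nodes}
        for node in targets:
            node.put(key, value)
```

In this sketch the same access device handles both addition and reduction: only the placement callables and the removed_nodes set differ, mirroring the way Modes 2 to 5 differ only in which pre- and post-change storage units are consulted.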
The entire disclosure of the patent literature cited above is incorporated herein by reference. Within the framework of the entire disclosure of the present invention (including the claims), and on the basis of its basic technical concept, the embodiments may be changed and adjusted. Various combinations and selections of the various disclosed elements (including the elements of each claim, of each embodiment, and of each drawing) are also possible within the framework of the entire disclosure. That is, the present invention naturally includes the various variations and modifications that a person skilled in the art could make in accordance with the entire disclosure, including the claims, and the technical concept. In particular, for any numerical range stated herein, any numerical value or sub-range falling within that range should be construed as being specifically stated even where not expressly described.
This application claims priority based on Japanese Patent Application No. 2014-246506, filed on December 5, 2014, the entire disclosure of which is incorporated herein by reference.
DESCRIPTION OF SYMBOLS
1  Migration device
2  Access device
3  Participating-host storage unit
4  Participating-device storage unit
11  Data movement unit
12  Device selection unit
13  Transition notification unit
14  Notification unit
21  Participation notification unit
22  Read/write execution unit
23  Algorithm switching unit
24  Device selection unit
25  Reception unit
26  Access unit
100 to 10M  CPU
200 to 20N  External storage unit
300  Interconnect network
D0 to D7  Node
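Reference signs 12 and 24 denote device selection units, and Modes 6, 13 and 19 allow the storage units that hold a piece of data to be chosen by consistent hashing. As one possible illustration only (the ring layout, hash function, virtual-node count and replica count below are assumptions, not part of the disclosure), such a device selection unit could be sketched in Python as follows.

```python
# Minimal sketch of key-to-storage-unit mapping with consistent hashing.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=64):
        self._ring = []                           # sorted (hash, node) points
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}:{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(text):
        return int(hashlib.md5(text.encode()).hexdigest(), 16)

    def nodes_for(self, key, replicas=2):
        # Walk clockwise from the key's position and collect distinct nodes.
        start = bisect.bisect(self._ring, (self._hash(key), ""))
        chosen = []
        for offset in range(len(self._ring)):
            node = self._ring[(start + offset) % len(self._ring)][1]
            if node not in chosen:
                chosen.append(node)
            if len(chosen) == replicas:
                break
        return chosen

# Example: adding storage unit "20N" changes owners only for the keys whose
# ring segments it takes over.
before = ConsistentHashRing(["200", "201", "202"])
after = ConsistentHashRing(["200", "201", "202", "20N"])
print(before.nodes_for("key-42"), "->", after.nodes_for("key-42"))
```

Because only the keys on the ring segments taken over by an added (or vacated by a removed) storage unit change owners, roughly 1/N of the data moves per change, which keeps the add-then-delete rebalancing of the migration device tractable.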

Claims (10)

1. An access device comprising:
    receiving means for receiving, from a migration device that rebalances data in response to a change in the number of storage means included in a storage apparatus by executing a data addition process on the storage means included in the storage apparatus and then executing a data deletion process, a notification that the addition process has been completed; and
    access means for accessing, when the storage means that should hold data to be accessed change and until the notification of completion is received, the storage means included in the storage apparatus on the basis of information indicating the storage means that should hold the data before and after the change, and for accessing, upon receiving the notification of completion, the storage means that should hold the data after the change.
2. The access device according to claim 1, wherein, when the number of storage means that should hold data to be read is reduced, the access means reads, until the notification of completion is received, from those of the storage means that should hold the data before the reduction which are not targets of the reduction, and, upon receiving the notification of completion, reads from the storage means that should hold the data after the reduction.
3. The access device according to claim 1 or 2, wherein, when storage means that should hold data to be read are added, the access means reads, until the notification of completion is received, from the storage means that should hold the data before the addition, and, upon receiving the notification of completion, reads from the storage means that should hold the data after the addition.
4. The access device according to any one of claims 1 to 3, wherein, when storage means that should hold data to be written are added, the access means writes, until the notification of completion is received, to the storage means that should hold the data before and after the addition, and, upon receiving the notification of completion, writes to the storage means that should hold the data after the addition.
5. The access device according to any one of claims 1 to 4, wherein, when the number of storage means that should hold data to be written is reduced, the access means writes, until the notification of completion is received, to those storage means that should hold the data before and after the reduction and are not targets of the reduction, and, upon receiving the notification of completion, writes to the storage means that should hold the data after the reduction.
6. The access device according to any one of claims 1 to 5, wherein the storage means that should hold a piece of data are determined by consistent hashing.
7. A migration device comprising:
    data movement means for rebalancing data, in response to a change in the number of storage means included in a storage apparatus, by executing a data addition process on the storage means included in the storage apparatus and then executing a data deletion process; and
    notification means for notifying an access device that accesses the storage means included in the storage apparatus that the addition process has been completed.
8. A distributed storage system comprising a migration device and an access device, wherein:
    the migration device rebalances data, in response to a change in the number of storage means included in a storage apparatus, by executing a data addition process on the storage means included in the storage apparatus and then executing a data deletion process, and notifies the access device that the addition process has been completed; and
    the access device, when the storage means that should hold data to be accessed change, accesses the storage means included in the storage apparatus on the basis of information indicating the storage means that should hold the data before and after the change until the notification of completion is received, and, upon receiving the notification of completion, accesses the storage means that should hold the data after the change.
9. An access method comprising:
    receiving, by an access device, from a migration device that rebalances data in response to a change in the number of storage means included in a storage apparatus by executing a data addition process on the storage means included in the storage apparatus and then executing a data deletion process, a notification that the addition process has been completed;
    accessing, by the access device, when the storage means that should hold data to be accessed change and until the notification of completion is received, the storage means included in the storage apparatus on the basis of information indicating the storage means that should hold the data before and after the change; and
    accessing, by the access device, upon receiving the notification of completion, the storage means that should hold the data after the change.
10. A computer-readable recording medium storing a program that causes a computer to execute:
    a process of receiving, from a migration device that rebalances data in response to a change in the number of storage means included in a storage apparatus by executing a data addition process on the storage means included in the storage apparatus and then executing a data deletion process, a notification that the addition process has been completed;
    a process of accessing, when the storage means that should hold data to be accessed change and until the notification of completion is received, the storage means included in the storage apparatus on the basis of information indicating the storage means that should hold the data before and after the change; and
    a process of accessing, upon receiving the notification of completion, the storage means that should hold the data after the change.
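Claim 7 (and Mode 20) specify only an ordering constraint: copy data to its new locations first, notify the access devices that the addition has completed, and delete stale copies afterwards. The sketch below illustrates that ordering in Python under assumed names; the MigrationDevice class, the placement callable and the node keys/get/put/delete API are illustrative and are not defined by the claims.

```python
# Minimal sketch of the add -> notify -> delete ordering of claim 7 / Mode 20.
# All identifiers are assumptions for illustration; the claims define the
# behaviour, not this API.

class MigrationDevice:
    def __init__(self, new_placement, access_devices):
        self.new_placement = new_placement    # key -> nodes after the change
        self.access_devices = access_devices  # access devices to be notified

    def rebalance(self, all_nodes):
        # Phase 1 (addition): copy every key to each storage unit that should
        # newly hold it under the post-change layout.
        for node in all_nodes:
            for key in list(node.keys()):
                for target in self.new_placement(key):
                    if target is not node and target.get(key) is None:
                        target.put(key, node.get(key))

        # Notification: the addition phase is complete, so access devices may
        # switch from the pre-change layout to the post-change layout.
        for device in self.access_devices:
            device.notify_addition_complete()

        # Phase 2 (deletion): remove copies from storage units that no longer
        # hold the key under the post-change layout.
        for node in all_nodes:
            for key in list(node.keys()):
                if node not in self.new_placement(key):
                    node.delete(key)
```

The notification is issued strictly between the two phases, so an access device never switches to the post-change layout before the data has been copied there, and data still present only in the old locations remains readable until the switch.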
PCT/JP2015/005987 2014-12-05 2015-12-02 Access device, migration device, distributed storage system, access method, and computer-readable recording medium WO2016088372A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2016562308A JPWO2016088372A1 (en) 2014-12-05 2015-12-02 Access device, migration device, distributed storage system, access method and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2014-246506 2014-12-05
JP2014246506 2014-12-05

Publications (1)

Publication Number Publication Date
WO2016088372A1 true WO2016088372A1 (en) 2016-06-09

Family

ID=56091334

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/005987 WO2016088372A1 (en) 2014-12-05 2015-12-02 Access device, migration device, distributed storage system, access method, and computer-readable recording medium

Country Status (2)

Country Link
JP (1) JPWO2016088372A1 (en)
WO (1) WO2016088372A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003296039A (en) * 2002-04-02 2003-10-17 Hitachi Ltd Cluster configuration storage system and method for controlling the same
JP2008203937A (en) * 2007-02-16 2008-09-04 Hitachi Ltd Computer system, storage management server, and data migration method
JP2008233968A (en) * 2007-03-16 2008-10-02 Nec Corp Database server capable of relocating data distributed among multiple processors, relocation method, and program
JP2013061739A (en) * 2011-09-12 2013-04-04 Fujitsu Ltd Data management device, data management system, data management method, and program
JP2014041550A (en) * 2012-08-23 2014-03-06 Nippon Telegr & Teleph Corp <Ntt> Data migration processing system and data migration processing method

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020525906A * 2017-06-27 2020-08-27 Salesforce.com, Inc. Database tenant migration system and method
JP7053682B2 (en) 2017-06-27 2022-04-12 セールスフォース ドット コム インコーポレイティッド Database tenant migration system and method
US11797498B2 (en) 2017-06-27 2023-10-24 Salesforce, Inc. Systems and methods of database tenant migration
KR20200099268A (en) * 2019-02-14 2020-08-24 (주)헤르메시스 Satellite data service system for sharing the data
KR102229214B1 (en) * 2019-02-14 2021-03-18 (주)헤르메시스 Satellite data service system for sharing the data

Also Published As

Publication number Publication date
JPWO2016088372A1 (en) 2017-09-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15865111; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2016562308; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15865111; Country of ref document: EP; Kind code of ref document: A1)