CN115269463A - Structure changing method of storage system and storage system - Google Patents

Info

Publication number
CN115269463A
CN115269463A (application CN202210210064.9A)
Authority
CN
China
Prior art keywords
node
controller
management information
storage system
redundancy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210210064.9A
Other languages
Chinese (zh)
Inventor
月冈纯
大岛丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Publication of CN115269463A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1668 Details of memory controller
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2089 Redundant storage control functionality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/40 Bus structure
    • G06F 13/4004 Coupling between buses
    • G06F 13/4009 Coupling between buses with data restructuring
    • G06F 13/4018 Coupling between buses with data restructuring with data-width conversion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 Improving or facilitating administration, e.g. storage management
    • G06F 3/0607 Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 Configuration or reconfiguration of storage systems
    • G06F 3/0635 Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Quality & Reliability (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Hardware Redundancy (AREA)

Abstract

The invention provides a storage system and a method of changing its configuration. The configuration of a storage system is changed while limiting both cost and the impact on the user's business. The storage system includes a first node on which two controllers are mounted, and the two controllers of the first node have redundancy settings that place them in different redundancy groups. When a configuration change is requested that adds to the storage system a second node having one controller, the second node is configured so that its controller belongs to the redundancy group to which one of the two controllers of the first node belongs; the first node changes its redundancy settings so that the redundancy-group setting information of one of its two controllers does not change; and the first node invalidates the controller whose redundancy-group setting information changed as a result of the change in redundancy settings.

Description

Structure changing method of storage system and storage system
Technical Field
The present invention relates to a method for changing the configuration of a storage system including one or more nodes.
Background
A storage system having a plurality of nodes as components is known (for example, see patent document 1). In a storage system, data is made redundant among a plurality of nodes in order to improve availability.
In a storage system including one node (hereinafter, referred to as a first type storage system), two controllers are mounted for data redundancy, and the respective controllers are set to constitute different redundancy groups. On the other hand, in a storage system including a plurality of nodes (hereinafter, referred to as a second type storage system), in order to make data redundant, a redundancy group is configured in units of nodes.
When changing from the first type storage system to the second type storage system, there is a problem as follows.
As described above, the method of managing redundancy differs between the first type storage system and the second type storage system. Therefore, simply adding a node to a first type storage system does not yield a second type storage system. To operate as a second type storage system, it is necessary to change the redundancy settings of the data and to perform operations such as relocating stored data. As a result, the cost of migrating from the first type storage system to the second type storage system is large, and the impact on the user's business is also large.
Patent document 1: international publication No. 2018/179073
Disclosure of Invention
An object of the present invention is to provide a storage system and a method of controlling a storage system that change the configuration while limiting both cost and the impact on the user's business.
Representative examples of the invention disclosed in the present application are as follows. That is, a configuration changing method is executed by a storage system composed of nodes on each of which at least one controller is mounted. The storage system includes a first node, on which two controllers are mounted, and a plurality of storage media, and the two controllers of the first node have redundancy settings that place them in different redundancy groups. The configuration changing method includes: a first step of, when a configuration change is requested that adds to the storage system a second node on which at least one controller is mounted, setting the second node so that the at least one controller of the second node belongs to the redundancy group to which one of the two controllers of the first node belongs; a second step in which the first node changes its redundancy settings so that the redundancy-group setting information of one of its two controllers does not change; and a third step in which the first node invalidates the controller whose redundancy-group setting information changed as a result of the change in redundancy settings.
According to the present invention, the configuration of the storage system can be changed while limiting both cost and the impact on the user's business. Problems, structures, and effects other than those described above will become apparent from the following description of the embodiments.
Drawings
Fig. 1 is a diagram showing a configuration example of a system of embodiment 1.
Fig. 2 is a diagram showing an example of a hardware configuration of the controller according to embodiment 1.
Fig. 3A is a diagram showing an example of management information in embodiment 1.
Fig. 3B is a diagram showing an example of management information in embodiment 1.
Fig. 4 is a diagram showing an example of a screen displayed on the management terminal in embodiment 1.
Fig. 5 is a flowchart illustrating an example of the configuration change process (scale-out) executed by the storage system according to embodiment 1.
Fig. 6 is a diagram showing an example of state transition of the storage system according to embodiment 1.
Fig. 7 is a diagram showing an example of the configuration after the configuration change process (scale-out) of the storage system according to embodiment 1.
Fig. 8 is a flowchart for explaining an example of data saving processing performed by the CTL according to embodiment 1.
Fig. 9 is a flowchart for explaining an example of data saving processing performed by the CTL according to embodiment 1.
Fig. 10 is a flowchart illustrating an example of the configuration change process (scale-in) executed by the storage system according to embodiment 1.
Description of reference numerals
100: storage system, 101: management terminal, 102: host, 105: network, 110: SVP, 120: node, 121: CTL, 130: drive box, 140: HDD, 141: SSD, 200: CPU, 201: memory, 202: CHB, 203: DKB, 210: processor core, 300, 310: management information, 400: screen, 401: configuration display field, 402: add button, 403: reduce button.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the drawings. However, the present invention is not limited to the description of the embodiments shown below. Those skilled in the art can easily understand that the specific structure can be changed without departing from the spirit and scope of the present invention.
In the following description, various information may be described by expressions such as "table", "list", and "queue", but various information may be expressed by data structures other than these expressions. To indicate independence from a data structure, "XX table", "XX list", and the like are sometimes referred to as "XX information". In the description of the content of each information, the expressions such as "identification information", "identifier", "name", "ID" and "number" are used, but they can be replaced with each other.
In the structure of the invention described below, the same or similar structure or function is denoted by the same reference numeral, and overlapping description is omitted.
The expressions "first", "second", "third", and the like in the present specification and the like are added for identifying the constituent elements, and the number or the order is not necessarily limited.
The positions, sizes, shapes, ranges, and the like of the respective structures shown in the drawings and the like do not necessarily indicate actual positions, sizes, shapes, ranges, and the like for easy understanding of the present invention. Therefore, the present invention is not limited to the positions, sizes, shapes, and ranges disclosed in the drawings and the like.
[ example 1]
Fig. 1 is a diagram showing a configuration example of a system of embodiment 1. Fig. 2 is a diagram showing an example of a hardware configuration of the controller according to embodiment 1.
The system of fig. 1 is composed of a storage system 100, a management terminal 101, and a host computer 102. The management terminal 101 and the host computer 102 are connected to the storage system 100 via a network 105. The network 105 is a WAN (Wide Area Network), a LAN (Local Area Network), a SAN (Storage Area Network), or the like, and may be wired or wireless. Further, the network between the management terminal 101 and the storage system 100 may be different from the network between the host 102 and the storage system 100.
The management terminal 101 is a computer for managing the storage system 100. The administrator of the storage system 100 uses the management terminal 101 to perform setting and control of the storage system 100.
The host 102 is a computer using the storage system 100. The host 102 writes user data to the storage system 100 and reads user data from the storage system 100.
The storage system 100 provides volumes to the hosts 102. The storage system 100 forms a RAID (Redundant Array of Inexpensive Disks) group from a plurality of storage media and generates a volume from the RAID group. The volume is, for example, an LDEV (logical device).
The storage system 100 includes an SVP110, a node 120, and a drive box 130. The SVP110 and the node 120 are connected via an internal network including a switch and the like, not shown.
The SVP110 monitors the entire storage system 100, receives a management command or the like transmitted from the management terminal 101, and controls the storage system 100. The SVP110 includes a CPU, a memory, a storage medium, and a network interface, which are not shown. The memory stores management information for managing the configuration and the like of the storage system 100, a program for realizing a control function of the storage system 100, and the like.
The drive box 130 accommodates a plurality of storage media. The storage medium is, for example, an HDD (Hard Disk Drive) 140, an SSD (Solid State Drive) 141, or the like.
The node 120 controls the transmission and reception of user data between the host 102 and the storage media of the drive box 130, and controls the reading and writing of user data to and from the storage media. A node 120 has one or more CTLs (storage controllers) 121.
As shown in fig. 2, the CTL121 includes a CPU200, a memory 201, a CHB (CHannel Board) 202, and a DKB (DisK Board) 203.
The CPU200 includes a plurality of processor cores (MPs) 210. The memory 201 stores a program for realizing control related to user data, management information 300 and 310 described later, and the like. Further, the memory 201 includes a cache memory that temporarily stores user data. The program stored in the memory 201 is executed by the CPU 200.
The CHB202 is an interface that connects the host 102 and the storage system 100. The CHB202 converts between the data transfer protocol used between the host 102 and the CTL121 and the data transfer protocol used inside the CTL121.
The DKB203 is an interface that connects the CTL121 and the drive box 130. The DKB203 converts between the data transfer protocol used inside the CTL121 and the data transfer protocol used between the CTL121 and the drive box 130.
In fig. 1, the node 120 and the SVP110 are illustrated as separate components, but the node 120 may include the SVP110.
In the following description, when individual nodes 120 and CTLs 121 need to be distinguished, they are written as node (i) 120 and CTL (i) 121, where i is an integer of 0 or more.
The storage system 100 of fig. 1 is composed of one node (0) 120, and thus is a first type storage system. In this case, the storage system 100 sets CTL (0) 121 and CTL (1) 121 to belong to different redundancy groups. Thus, even if a failure occurs in one CTL121, the storage system 100 can avoid data loss and continue to operate.
Fig. 3A is a diagram showing an example of management information 300 of embodiment 1. Fig. 3B is a diagram showing an example of the management information 310 according to embodiment 1. Two pieces of management information 300 and 310 are stored in the memory 201 of the CTL121.
The management information 300 is management information defining the redundancy setting of the first type storage system. The management information 300 stores an entry including a hard ID301, a redundancy group ID302, a soft ID303, and a mount flag 304. One entry corresponds to setting information of the redundant group of one CTL121.
The hard ID301 is a field storing identification information for identifying the CTL121 within the storage system 100.
The redundancy group ID302 is a field storing identification information of the redundancy group.
The soft ID303 is a field that stores identification information for identifying a cache memory within the storage system 100.
The mount flag 304 is a field storing a flag indicating whether the corresponding CTL121 is present in the storage system 100. The mount flag 304 stores either a circle (○) indicating that the CTL121 is mounted or a cross (×) indicating that it is not mounted.
The first type storage system is composed of one node 120 provided with two CTLs 121, so a cross (×) is stored in the mount flag 304 of every entry whose hard ID 301 is "2" through "11". In addition, different redundancy groups are set for the two CTLs 121. In this way, in the first type storage system, redundancy groups are formed in units of CTLs 121.
The management information 310 is management information defining the redundancy setting of the second type storage system. The management information 310 stores an entry including a hard ID311, a redundancy group ID312, a soft ID313, and a mount flag 314. One entry corresponds to setting information of the redundant group of one CTL121.
The hard ID311, the redundancy group ID312, the soft ID313, and the mount flag 314 are the same fields as the hard ID301, the redundancy group ID302, the soft ID303, and the mount flag 304.
However, the mount flag 314 has a column for each configuration: a 2-node 2-CTL configuration, in which the storage system 100 is composed of two nodes 120 each mounting one CTL121; a 2-node 4-CTL configuration, composed of two nodes 120 each mounting two CTLs 121; a 4-node 8-CTL configuration, composed of four nodes 120 each mounting two CTLs 121; and a 6-node 12-CTL configuration, composed of six nodes 120 each mounting two CTLs 121.
In the second type storage system, all CTLs 121 within a node 120 are set to belong to the same redundancy group. In this way, in the second type storage system, redundancy groups are formed in units of nodes 120.
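The two management tables above can be pictured as simple lookup tables. The sketch below is illustrative and not taken from the patent; all concrete ID values are assumptions chosen so that, as described later, the entry for hard ID 3 is identical in both tables.

```python
# Illustrative model of management information 300 (first type:
# redundancy per CTL) and management information 310 (second type,
# 2-node 2-CTL column: redundancy per node). All concrete values
# are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class RedundancyEntry:
    hard_id: int           # identifies a CTL within the storage system
    redundancy_group: int  # redundancy group ID
    soft_id: int           # identifies a cache memory
    mounted: bool          # mount flag: True = CTL present (circle)

# Management information 300: one node with two CTLs (hard IDs 0 and 1),
# each in its own redundancy group; entries 2-11 are unmounted (cross).
mgmt_300 = [
    RedundancyEntry(0, 0, 0, True),
    RedundancyEntry(1, 1, 1, True),
] + [RedundancyEntry(h, h % 2, h, False) for h in range(2, 12)]

# Management information 310 (2-node 2-CTL): the surviving CTL of the
# existing node (hard ID 0) and the new node's CTL (hard ID 3). The
# entry for hard ID 3 carries the same redundancy group ID and soft ID
# as in mgmt_300, so switching tables leaves its settings unchanged.
mgmt_310 = [
    RedundancyEntry(0, 0, 0, True),
    RedundancyEntry(3, 1, 3, True),
]

def entry(table, hard_id):
    """Look up the redundancy-group setting information for one CTL."""
    return next(e for e in table if e.hard_id == hard_id)
```

The key property this models is that the entry for hard ID 3 agrees between the two tables, which is what later allows the table switch without changing that CTL's settings.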
Next, a method of changing from the first type storage system to the second type storage system will be described.
Fig. 4 is a diagram showing an example of a screen displayed on the management terminal 101 according to embodiment 1.
The screen 400 includes a configuration display field 401, an add button 402, and a reduce button 403.
The configuration display field 401 is a field that displays the hardware configuration of the storage system 100. The nodes 120 provided in the storage system 100, the CTLs 121 mounted on the nodes 120, and the like are displayed in the configuration display field 401. In addition, the drive box 130 and the like may be displayed in the configuration display field 401.
The add button 402 is an operation button for a configuration change that adds at least one node 120 or CTL121. The reduce button 403 is an operation button for a configuration change that deletes at least one node 120 or CTL121.
The storage system 100 of the present embodiment changes its configuration in stages. That is, scale-out proceeds from the first type storage system to the 2-node 2-CTL second type storage system, from the 2-node 2-CTL configuration to the 2-node 4-CTL configuration, from the 2-node 4-CTL configuration to the 4-node 8-CTL configuration, and from the 4-node 8-CTL configuration to the 6-node 12-CTL configuration. Scale-in likewise proceeds in stages.
In the case of the configuration change processing accompanying the addition of the node 120, the node 120 or the CTL121 is physically connected to the storage system 100 by an administrator or the like before the processing is started.
The configuration change process accompanying the addition of a node 120 will be described with reference to figs. 5 to 9. Fig. 5 is a flowchart illustrating an example of the configuration change process (scale-out) executed by the storage system 100 according to embodiment 1. Fig. 6 is a diagram showing an example of state transitions of the storage system 100 according to embodiment 1. Fig. 7 is a diagram showing an example of the configuration after the configuration change process (scale-out) of the storage system 100 according to embodiment 1.
The SVP110 of the storage system 100 executes the configuration change process (scale-out) when it receives an operation of the add button 402 from the management terminal 101. In fig. 5, the configuration change process (scale-out) for changing from the first type storage system to the 2-node 2-CTL second type storage system will be described. Scale-out beyond the 2-node 2-CTL configuration can be performed using known techniques, so its description is omitted.
In the present embodiment, the first type storage system shown in state A of fig. 6 is used as an example. CTL (0) 121 of node (0) 120 belongs to redundancy group 600, and CTL (1) 121 belongs to redundancy group 601.
In the case of changing from the first type storage system to the 2-node 2-CTL second type storage system, the administrator adds a new node 120 to the storage system 100. At least one CTL121 is mounted on the new node 120. When the new node 120 mounts two CTLs 121, one CTL121 is invalidated.
In the present embodiment, as shown in state B of fig. 6, it is assumed that node (1) 120 on which CTL (3) 121 is mounted is added.
The SVP110 instructs the new node 120 to apply the redundancy settings of the second type storage system (step S101).
Upon receiving the instruction, the CTL121 of the new node 120 reads management information 310 from the memory 201 and configures itself so that it belongs to the redundancy group to which one of the two CTLs 121 of the existing node 120 belongs.
The entry whose hard ID 301 is "3" in management information 300 and the entry whose hard ID 311 is "3" in management information 310 have the same values for the redundancy group IDs 302 and 312 and the soft IDs 303 and 313. That is, even when the first type storage system is changed to the second type storage system, the setting information of that redundancy group does not change. Therefore, the administrator makes a setting that validates hard ID 311 "3", and the CTL121 of the new node 120 is set to belong to the redundancy group indicated by the redundancy group ID 312 of that entry.
In the present embodiment, as shown in state B of fig. 6, CTL (3) 121 is set to belong to the same redundancy group 601 as CTL (1) 121.
Next, the SVP110 instructs the existing node 120 to save the user data stored in the cache memory (step S102).
Upon receiving the instruction, the CTL121 of the existing node 120 determines the target CTL121 whose data is to be saved. Specifically, the CTL121 of the existing node 120 refers to management information 300 and 310 and searches for an entry whose redundancy-group setting information does not change between the first type storage system and the second type storage system. The CTL121 then determines the CTL (1) 121 belonging to a redundancy group different from that of the entry as the target CTL121 and instructs the target CTL121 to execute the data saving process. The target CTL121 executes the data saving process and, after it is completed, notifies the SVP110.
In the present embodiment, as shown in state C of fig. 6, CTL (1) 121 of node (0) 120, as the target CTL121, saves the data stored in its cache memory.
After the data saving process is completed, the SVP110 instructs the existing node 120 to switch from management information 300 to management information 310 (step S103), that is, to change its redundancy settings.
The CTL121 of the existing node 120 other than the target CTL121 switches from management information 300 to management information 310. By switching from management information 300 to management information 310, the redundancy settings are switched from CTL121 units to node 120 units. In the present embodiment, as shown in state D of fig. 6, node (0) 120 belongs to redundancy group 600, and node (1) 120 belongs to redundancy group 601.
Next, the SVP110 instructs the existing node 120 to invalidate the CTL121 (step S104), and the process is terminated.
The CTL121 of the existing node 120 other than the target CTL121 invalidates the target CTL121.
In the present embodiment, as shown in state E of fig. 6, CTL (0) 121 invalidates the target CTL (1) 121.
Further, the SVP110 may display a message requesting the removal of the invalidated CTL121 on the management terminal 101. The SVP110 may display a message notifying completion of addition of the node 120 on the management terminal 101.
As a result of the above processing, the storage system 100 shown in fig. 1 is changed to the configuration shown in fig. 7.
In the present embodiment, only the CTL121 whose redundancy-group setting information is the same in both the first type and second type storage systems is retained in the existing node 120. Thus, even when management information 300 is switched to management information 310, the redundancy-group setting information of the existing node's CTL121 does not change. Therefore, the change from the first type storage system to the 2-node 2-CTL second type storage system can be made without stopping the storage system 100.
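The four steps S101 to S104 can be sketched as a small simulation. This is a hypothetical illustration; the class, method names, and the in-memory cache model are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the scale-out sequence S101-S104: configure
# the new node's CTL, evacuate the target CTL's cache, switch
# management information, then invalidate the target CTL.
class Ctl:
    def __init__(self, ctl_id, group):
        self.ctl_id = ctl_id
        self.redundancy_group = group
        self.cache = []            # user data held in cache memory
        self.active_mgmt = "300"   # management information currently in use
        self.valid = True

def scale_out(existing_ctls, new_ctl):
    """First type -> 2-node 2-CTL second type, without stopping I/O."""
    survivor, target = existing_ctls   # target: the CTL whose settings would change
    # S101: the new node's CTL joins the redundancy group of CTL (1)
    new_ctl.redundancy_group = target.redundancy_group
    new_ctl.active_mgmt = "310"
    # S102: the target CTL evacuates its cached user data (fig. 8
    # variant: copy to the new CTL's cache, then discard locally)
    new_ctl.cache.extend(target.cache)
    target.cache.clear()
    # S103: the surviving CTL switches to management information 310
    survivor.active_mgmt = "310"
    # S104: the target CTL, whose settings changed, is invalidated
    target.valid = False
    return target
```

Running it on the embodiment's configuration (CTL (0) and CTL (1) on the existing node, CTL (3) on the new node) shows the cache contents moving to the new CTL and only the settings-stable CTL surviving on the existing node.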
Fig. 8 and 9 are flowcharts explaining an example of data saving processing performed by the CTL121 according to embodiment 1.
First, the data saving process of fig. 8 will be explained.
The target CTL121 of the existing node 120 copies the user data stored in its cache memory to the new node 120 (step S201). The CTL121 of the new node 120 writes the user data received from the existing node 120 into its cache memory.
After copying of all the user data stored in the cache memory is completed, the target CTL121 of the existing node 120 discards all the user data stored in its cache memory (step S202), and the process ends.
Further, after receiving the data saving instruction, the target CTL121 of the existing node 120 performs control so that user data is no longer written to its cache memory.
Next, the data saving process of fig. 9 is explained.
The target CTL121 of the existing node 120 copies the user data stored in its cache memory to a storage medium of the drive box 130 (step S301).
After copying of all the user data stored in the cache memory is completed, the target CTL121 of the existing node 120 discards all the user data stored in its cache memory (step S302), and the process ends.
Further, after receiving the data saving instruction, the target CTL121 of the existing node 120 performs control so that user data is no longer written to its cache memory.
In addition, the target CTL121 of the existing node 120 may combine the data saving processes of figs. 8 and 9. For example, the target CTL121 of the existing node 120 copies user data with high access frequency to the new CTL121 and copies user data with low access frequency to a storage medium.
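The combined policy just described can be sketched as follows. The function name, the dict-based cache model, and the access-count threshold are assumptions made for illustration only.

```python
# Illustrative sketch of the combined data saving policy: frequently
# accessed cache entries are copied to the new CTL's cache, infrequently
# accessed entries are destaged to a storage medium, and the local
# cache is discarded once copying completes.
def save_cache(cache, access_counts, new_ctl_cache, drive, hot_threshold=10):
    for key, data in cache.items():
        if access_counts.get(key, 0) >= hot_threshold:
            new_ctl_cache[key] = data   # high access frequency: keep cached (fig. 8)
        else:
            drive[key] = data           # low access frequency: write to drive box (fig. 9)
    cache.clear()                       # discard local copies (steps S202 / S302)
```

Destaging cold data avoids consuming the new CTL's cache with rarely used blocks, while hot data stays cached so host latency is preserved after the change.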
Next, the configuration change process accompanying the removal of a node 120 will be described with reference to Fig. 10. Fig. 10 is a flowchart illustrating an example of the configuration change process (scale-in) executed by the storage system 100 according to embodiment 1.
The SVP110 of the storage system 100 executes the configuration change process (scale-in) when it receives an operation of the reduction button 403 from the management terminal 101. Fig. 10 describes the configuration change process (scale-in) in the case of changing from the second-type storage system of the 2-node 2CTL architecture to the first-type storage system. Scale-in from 6-node 12CTL to 2-node 2CTL can be performed using a known technique, and a description thereof is therefore omitted.
The SVP110 instructs any one of the nodes 120 to validate a CTL121 (step S401).
In the present embodiment, the SVP110 instructs the node (0) 120 to validate a CTL121. The node (0) 120 validates the CTL (1) 121. At this point, the CTL (1) 121 is controlled so that user data from the host 102 is not written to it.
Next, the SVP110 instructs the deletion target node 120, in which no CTL121 was validated, to save the user data stored in its cache memory (step S402).
In the present embodiment, the node (1) 120 is the deletion target node 120. The node (1) 120 executes the data save processing and notifies the SVP110 when the data save processing is completed. The data save processing is the same as the processing described with reference to Figs. 8 and 9.
After the data save processing is completed, the SVP110 instructs the node 120 in which the CTL121 was validated to switch from the management information 310 to the management information 300 (step S403).
The existing CTL121 of the node 120 in which a CTL121 was validated switches from the management information 310 to the management information 300. In addition, the newly validated CTL121 reads the management information 300. By switching from the management information 310 to the management information 300, the redundancy setting is switched from a per-node-120 basis to a per-CTL121 basis. In the present embodiment, the redundancy setting is switched from state D to state B in Fig. 6.
Next, the SVP110 instructs the deletion target node 120 to invalidate its CTL121 (step S404), and the process ends. Upon receiving the instruction, the deletion target node 120 invalidates the CTL121 for which the redundancy group was set.
The SVP110 may display, on the management terminal 101, a message prompting removal of the deletion target node 120. The SVP110 may also display, on the management terminal 101, a message notifying completion of the removal of the node 120.
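The ordering of steps S401 through S404 can be sketched as an orchestration routine (the `Svp` class and method names below are hypothetical stand-ins for the SVP's instructions, used only to show the required sequence):

```python
class Svp:
    """Stub SVP that records which instruction was issued to which node."""

    def __init__(self):
        self.log = []

    def validate_ctl(self, node):            self.log.append(("S401", node))
    def save_data(self, node):               self.log.append(("S402", node))
    def switch_management_info(self, node):  self.log.append(("S403", node))
    def invalidate(self, node):              self.log.append(("S404", node))

def scale_in(svp, keep_node, delete_node):
    """Scale-in flow of Fig. 10: the order matters, because the deletion
    target's cached data must be saved before its CTL is invalidated."""
    svp.validate_ctl(keep_node)            # S401: validate a second CTL on the remaining node
    svp.save_data(delete_node)             # S402: save the deletion target's cached user data
    svp.switch_management_info(keep_node)  # S403: management info 310 -> 300 (node unit -> CTL unit)
    svp.invalidate(delete_node)            # S404: invalidate the deletion target's CTL

svp = Svp()
scale_in(svp, keep_node="node(0)", delete_node="node(1)")
assert [code for code, _ in svp.log] == ["S401", "S402", "S403", "S404"]
```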
In the present embodiment, the setting information of the redundancy group of the CTL121 that was operating before the removal does not change. Therefore, the system can be changed from the second-type storage system of the 2-node 2CTL architecture to the first-type storage system without stopping the storage system 100.
The present invention is not limited to the above-described embodiments, and various modifications are possible. For example, the above embodiments are described in detail in order to explain the present invention in an easily understandable manner, and the invention is not necessarily limited to configurations having all of the described elements. Part of the configuration of one embodiment may be added to, deleted from, or replaced with the configuration of another embodiment.
The structures, functions, processing units, and the like described above may be realized partly or wholly in hardware by, for example, designing them as integrated circuits. The present invention can also be realized by program code of software that implements the functions of the embodiments. In this case, a storage medium on which the program code is recorded is supplied to a computer, and a processor provided in the computer reads out the program code stored in the storage medium. The program code itself read out from the storage medium then realizes the functions of the embodiments, and the program code itself and the storage medium storing it constitute the present invention. Examples of storage media for supplying such program code include a flexible disk, a CD-ROM, a DVD-ROM, a hard disk, an SSD (Solid State Drive), an optical disk, a magneto-optical disk, a CD-R, a magnetic tape, a nonvolatile memory card, and a ROM.
The program code that realizes the functions described in the present embodiment can be implemented in a wide range of programming or scripting languages, such as assembly language, C/C++, Perl, Shell, PHP, Python, and Java (registered trademark).
Further, the program code of the software that realizes the functions of the embodiments may be distributed via a network and stored in a storage unit such as a hard disk or memory of a computer, or on a storage medium such as a CD-RW or CD-R, and a processor provided in the computer may read out and execute the program code stored in the storage unit or the storage medium.
In the embodiments, the control lines and information lines shown are those considered necessary for the description; not all control lines and information lines of a product are necessarily shown. In practice, almost all components may be considered to be interconnected.

Claims (14)

1. A method for changing a configuration of a storage system, which is executed by a storage system including nodes having at least one controller mounted thereon,
the storage system comprises a first node carrying two of the controllers and a plurality of storage media,
the two controllers of the first node are configured to perform redundancy setting so as to form different redundancy groups,
the structure changing method of the storage system comprises the following steps:
a first step of, when a configuration change accompanying addition of a second node having at least one controller mounted thereon to the storage system is requested, setting the second node so that the at least one controller of the second node belongs to the redundancy group to which any one of the two controllers of the first node belongs;
a second step of changing the redundancy setting by the first node so that the setting information of the redundancy group of one of the two controllers of the first node is not changed; and
a third step of invalidating, by the first node, the controller of the first node whose setting information of the redundancy group changes in accordance with a change in the setting for redundancy.
2. The structure change method of a storage system according to claim 1,
the controller holds first management information for setting redundancy using one of the nodes and second management information for setting redundancy using two or more of the nodes,
the first management information and the second management information include setting information of a redundancy group set for the controller mounted on the node,
the first step comprises the steps of: the at least one controller of the second node refers to the first management information and the second management information, and determines the redundancy group to which the controller belongs, based on the same setting information of the redundancy group in the first management information and the second management information.
3. The structure change method of a storage system according to claim 2,
the second step comprises the steps of: the controller of the first node having the same redundancy group setting information in the first management information and the second management information switches from the first management information to the second management information.
4. The structure change method of a storage system according to claim 2,
the second step comprises: a fourth step of, before switching from the first management information to the second management information, saving, by the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information, the data stored in a cache memory of the controller.
5. The structure change method of a storage system according to claim 4,
the fourth step includes the steps of: the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information writes the data stored in the cache memory of the controller to the at least one controller of the second node for which the redundancy group is set.
6. The structure change method of a storage system according to claim 4,
the fourth step includes the steps of: the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information writes the data stored in the cache memory of the controller to the storage medium.
7. The structure change method of a storage system according to claim 3,
the method comprises the following steps:
when a configuration change accompanying deletion of the second node from the storage system is requested, validating, by the controller of the first node that was validated before the deletion of the second node was requested, the invalidated controller;
saving, by the at least one controller of the second node for which the redundancy group is set, the data stored in a cache memory of the controller;
switching, by the controller of the first node that was validated before the deletion of the second node was requested, from the second management information to the first management information; and
invalidating, in the second node, the at least one controller for which the redundancy group is set.
8. A storage system comprising a node carrying at least one controller,
the storage system comprises a first node carrying two controllers and a plurality of storage media,
the two controllers of the first node are configured to perform redundancy setting so as to form different redundancy groups,
when a configuration change accompanying addition of a second node having at least one controller mounted thereon to the storage system is requested,
the second node executes a first process of setting the at least one controller of the second node so as to belong to the redundancy group to which any one of the two controllers of the first node belongs,
the first node performs the following processing:
a second process of changing the redundancy setting so that the setting information of the redundancy group of one of the two controllers of the first node does not change; and
a third process of invalidating the controller of the first node whose setting information of the redundancy group changes in accordance with a change in the setting of the redundancy.
9. The storage system of claim 8,
the controller holds first management information for setting redundancy using one of the nodes and second management information for setting redundancy using two or more of the nodes,
the first management information and the second management information include setting information of a redundancy group set for the controller mounted on the node,
in the first process, the at least one controller of the second node refers to the first management information and the second management information, and determines the redundancy group to which the controller belongs, based on the same setting information of the redundancy group in the first management information and the second management information.
10. The storage system of claim 9,
in the second process, the controller of the first node in which the setting information of the redundancy group is the same in the first management information and the second management information is switched from the first management information to the second management information.
11. The storage system of claim 9,
the second process includes a fourth process of, before switching from the first management information to the second management information, saving, by the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information, the data stored in a cache memory of the controller.
12. The storage system of claim 11,
in the fourth process, the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information writes the data stored in the cache memory of the controller to the at least one controller of the second node for which the redundancy group is set.
13. The storage system of claim 11,
in the fourth process, the controller of the first node whose setting information of the redundancy group differs between the first management information and the second management information writes the data stored in the cache memory of the controller to the storage medium.
14. The storage system of claim 10,
the storage system performs the following processing:
in the event that a configuration change is requested that accompanies the deletion of the second node from the storage system,
the controller of the first node that was validated before the deletion of the second node was requested validates the invalidated controller;
the at least one controller of the second node for which the redundancy group is set saves the data stored in a cache memory of the controller;
the controller of the first node that was validated before the deletion of the second node was requested switches from the second management information to the first management information; and
the second node invalidates the at least one controller for which the redundancy group is set.
CN202210210064.9A 2021-04-30 2022-03-04 Structure changing method of storage system and storage system Pending CN115269463A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021077095A JP7266060B2 (en) 2021-04-30 2021-04-30 Storage system configuration change method and storage system
JP2021-077095 2021-04-30

Publications (1)

Publication Number Publication Date
CN115269463A true CN115269463A (en) 2022-11-01

Family

ID=83758293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210210064.9A Pending CN115269463A (en) 2021-04-30 2022-03-04 Structure changing method of storage system and storage system

Country Status (3)

Country Link
US (1) US11868630B2 (en)
JP (1) JP7266060B2 (en)
CN (1) CN115269463A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110333770B (en) * 2019-07-10 2023-05-09 合肥兆芯电子有限公司 Memory management method, memory storage device and memory control circuit unit
TWI739676B (en) * 2020-11-25 2021-09-11 群聯電子股份有限公司 Memory control method, memory storage device and memory control circuit unit
JP7266060B2 (en) * 2021-04-30 2023-04-27 株式会社日立製作所 Storage system configuration change method and storage system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7756839B2 (en) * 2005-03-31 2010-07-13 Microsoft Corporation Version tolerant serialization
US8666960B2 (en) * 2008-06-26 2014-03-04 Microsoft Corporation Schema-based data transfer between a data-based application and a document application
US9430114B1 (en) * 2011-11-03 2016-08-30 Pervasive Software Data transformation system, graphical mapping tool, and method for creating a schema map
US9639589B1 (en) * 2013-12-20 2017-05-02 Amazon Technologies, Inc. Chained replication techniques for large-scale data streams
WO2016051512A1 (en) * 2014-09-30 2016-04-07 株式会社日立製作所 Distributed storage system
US10339179B2 (en) * 2016-04-11 2019-07-02 Oracle International Corporation Graph processing system that can define a graph view from multiple relational database tables
CN110383251B (en) * 2017-03-28 2023-04-07 株式会社日立制作所 Storage system, computer-readable recording medium, and method for controlling system
JP6791834B2 (en) * 2017-11-30 2020-11-25 株式会社日立製作所 Storage system and control software placement method
JP6814764B2 (en) * 2018-04-06 2021-01-20 株式会社日立製作所 Information processing system and path management method
JP7003976B2 (en) * 2018-08-10 2022-01-21 株式会社デンソー Vehicle master device, update data verification method and update data verification program
JP6947717B2 (en) * 2018-12-27 2021-10-13 株式会社日立製作所 Storage system
JP7266060B2 (en) * 2021-04-30 2023-04-27 株式会社日立製作所 Storage system configuration change method and storage system

Also Published As

Publication number Publication date
JP7266060B2 (en) 2023-04-27
JP2022170852A (en) 2022-11-11
US11868630B2 (en) 2024-01-09
US20220350510A1 (en) 2022-11-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination