WO2013183143A1 - 管理システム及び管理方法 - Google Patents
管理システム及び管理方法 Download PDFInfo
- Publication number
- WO2013183143A1 (application PCT/JP2012/064672)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- storage
- storage device
- segment
- disk
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/064—Management of blocks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0662—Virtualisation aspects
- G06F3/0665—Virtualisation aspects at area level, e.g. provisioning of virtual or logical volumes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
Definitions
- The present invention relates to a technique for managing the physical arrangement of data units in a database (hereinafter, DB).
- A DB is managed by a DB management subsystem (DBMS).
- One of the characteristics of a DB is that it handles a large amount of data. Therefore, in many computer systems in which a DBMS operates, a common configuration is to connect a storage apparatus having large-capacity disks to the computer on which the DBMS runs and to store the DB data in the storage apparatus.
- Patent Document 1 discloses a technology that dynamically creates a task each time data is read and multiplexes data reads by executing the tasks in parallel without regard to order. A DBMS using this technology can dramatically improve search performance compared with a conventional DBMS that executes tasks in the order in which they occur.
- A super-large-scale DB as described above often handles data to which order is added day by day, such as POS data and sensor data (for example, time-series data; hereinafter, order data). When this order data is searched, the more recent the data, the higher the possibility of access.
- For a DBMS that processes queries with a high degree of parallelism and issues I/O to storage with high multiplicity to improve performance, it is necessary, in order to maximize performance, to issue I/O to the disks with as high an I/O multiplicity as possible.
- Patent Document 2 relates to an efficient logical volume management method.
- It discloses a technique for resolving data imbalance by performing data migration at the logical volume address (logical address) level.
- In Patent Document 2, however, data is not moved according to the contents of the DB data. For example, even if there is a relationship between logical addresses and the order of the DB data, once that relationship is lost, data units with close order may become concentrated on a specific disk, or the access range on each disk may become wide. As a result, I/O imbalance may occur and the expected performance may not be obtained.
- The management system manages, in a storage apparatus, a plurality of data units constituting one or more schemas of a database.
- The storage apparatus has a plurality of first storage device sets, each having a plurality of storage areas.
- The one or more schemas include an ordering schema configured by a plurality of data units having an ordering in which the order of each data unit is defined.
- Based on management information that includes mapping information, which indicates which data unit constituting the ordering schema is stored in which of the plurality of storage areas of the first storage device sets, and order information, which indicates the order of the data units, the management system moves at least one data unit from the first storage device sets to free storage areas of a second storage device set, such that a plurality of empty storage areas are distributed over the first storage device sets and the second storage device set and such that two or more data units whose order is not consecutive among the plurality of stored data units are distributed.
- The management system may be a single computer, or may be a computer system composed of a plurality of computers.
- FIG. 1 is a configuration diagram of an example of a computer system according to the first embodiment.
- FIG. 2A is a configuration diagram of an example of schema information according to the first embodiment.
- FIG. 2B is a configuration diagram of an example of DB mapping information according to the first embodiment.
- FIG. 2C is a configuration diagram of an example of DB data additional information according to the first embodiment.
- FIG. 3A is a configuration diagram of an example of OS mapping information according to the first embodiment.
- FIG. 3B is a configuration diagram of an example of storage mapping information according to the first embodiment.
- FIG. 4A is a configuration diagram of an example of DB data area management information according to the first embodiment.
- FIG. 4B is a configuration diagram of an example of DB data area attribute information according to the first embodiment.
- FIG. 4C is a configuration diagram of an example of DB data arrangement information according to the first embodiment.
- FIG. 5A is a configuration diagram of an example of a data area addition instruction according to the first embodiment.
- FIG. 5B is a configuration diagram of an example of a data movement instruction according to the first embodiment.
- FIG. 5C is a diagram illustrating a physical arrangement example of the order-added DB data according to the first embodiment.
- FIG. 6 is a flowchart of management processing according to the first embodiment.
- FIG. 7 is a flowchart of data addition processing according to the first embodiment.
- FIG. 8 is a flowchart of additional segment distribution processing according to the first embodiment.
- FIG. 9A is a first diagram for explaining movement of DB data according to the first embodiment.
- FIG. 9B is a second diagram for explaining the movement of the DB data according to the first embodiment.
- FIG. 10A is a third diagram for explaining the movement of the DB data according to the first embodiment.
- FIG. 10B is a fourth diagram illustrating the movement of DB data according to the first embodiment.
- FIG. 11A is a fifth diagram for explaining the movement of the DB data according to the first embodiment.
- FIG. 11B is a sixth diagram illustrating the movement of DB data according to the first embodiment.
- FIG. 12A is a seventh diagram illustrating the movement of DB data according to the first embodiment.
- FIG. 12B is an eighth diagram for explaining the movement of the DB data according to the first embodiment.
- FIG. 13 is a flowchart of additional segment distribution processing according to the second embodiment.
- FIG. 14 is a flowchart of segment adjacency processing according to the second embodiment.
- FIG. 15A is a first diagram for explaining movement of DB data according to the second embodiment.
- FIG. 15B is a second diagram for explaining the movement of the DB data according to the second embodiment.
- FIG. 16A is a third diagram for explaining the movement of the DB data according to the second embodiment.
- FIG. 16B is a diagram for explaining an ideal arrangement of DB data.
- FIG. 17A is a first diagram illustrating movement of DB data according to a modification.
- FIG. 17B is a second diagram illustrating the movement of DB data according to the modification.
- FIG. 18A is a third diagram for explaining the movement of the DB data according to the modification.
- FIG. 18B is a fourth diagram illustrating the movement of DB data according to the modification.
- FIG. 19A is a fifth diagram illustrating the movement of DB data according to the modification.
- FIG. 19B is a sixth diagram illustrating the movement of DB data according to the modification.
- In the following description, processing may be described with a “program” as the subject. However, a program performs its defined processing by being executed by a processor (for example, a CPU (Central Processing Unit)) included in a computer, a storage apparatus, or the like, using storage resources (for example, memory) and/or a communication interface device (for example, a communication port) as appropriate, so the subject of the processing may also be the processor.
- Processing described with a program as the subject may be processing performed by a processor or by an apparatus (a computer, a storage apparatus, or the like) that has the processor.
- A controller may be the processor itself, or may include a hardware circuit that performs part or all of the processing performed by the controller.
- The program may be installed in each controller from a program source.
- The program source may be, for example, a program distribution computer or a storage medium.
- In place of an input/output device, a computer may use a serial interface or an Ethernet interface (Ethernet is a registered trademark) as the input/output device, and a display apparatus having a display, a keyboard, or a pointing device may be connected to that interface; by sending display information to the display apparatus and receiving input information from it, display and input may be performed by the display apparatus in place of the input/output device.
- The first embodiment will be described.
- FIG. 1 is a configuration diagram of an example of a computer system according to the first embodiment.
- In the computer system, a computer 100, which is an example of the management system, and a storage apparatus 150 are connected via communication networks 180 and 182.
- the computer 100 executes a DBMS 120 that manages DB data stored in the storage apparatus 150.
- The DBMS 120 is preferably a DBMS that improves performance by issuing I/O to the storage apparatus 150 with a high degree of multiplicity.
- The DBMS 120 receives a DB query, generates a query execution plan that includes information representing one or more database operations necessary for executing the received query and the execution procedure of those database operations, and executes the received query based on the generated query execution plan.
- In executing the query, the DBMS 120 may dynamically generate tasks for executing database operations and execute the dynamically generated tasks.
- Specifically, in executing the query, the DBMS 120 may (a) generate a task for executing a database operation, (b) execute the generated task to acquire the data necessary for the database operation corresponding to that task, (c) when executing the (N+1)th database operation based on the execution result of the Nth database operation corresponding to the task executed in (b), newly generate a task based on that execution result (N is an integer of 1 or more), and (d) perform (b) and (c) for the newly generated task. When there are two or more executable tasks in (b) and (d), at least two of those tasks may be executed in parallel.
- the DBMS 120 may be a DBMS conforming to the technique disclosed in Patent Document 1 described above.
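- As an illustration of steps (a) to (d) above, the following is a minimal Python sketch (an assumption for illustration, not the actual implementation of the DBMS 120): each database operation is modeled as a callable that returns the follow-on operations its result gives rise to (an empty list when a branch ends), and the dynamically generated tasks run in parallel on a thread pool.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def execute_query(root_operation, max_workers=8):
    """Run a query as dynamically generated, parallel tasks (steps (a)-(d))."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        pending = {pool.submit(root_operation)}          # (a) first task
        while pending:
            # (b) wait for any task to finish and collect its result
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for finished in done:
                # (c) the Nth result spawns tasks for the (N+1)th operations
                for next_operation in finished.result():
                    pending.add(pool.submit(next_operation))  # (d) run them too
```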
- The communication networks 180 and 182 may each be a network such as a local area network (LAN) or a wide area network (WAN), or may be a network configured by Fibre Channel or the like (a storage area network: SAN).
- In FIG. 1, only one computer 100 and one storage apparatus 150 are shown, but a plurality of each may be provided.
- the computer 100 can be realized by a general computer, for example.
- the computer 100 includes a CPU (control device) 104, an input / output device 106, a storage device 108, a memory 110, an I / F (1) 112, and an I / F (2) 114.
- the CPU 104, the input / output device 106, the storage device 108, the memory 110, the I / F (1) 112, and the I / F (2) 114 are connected via the internal bus 102.
- the I / F (1) 112 is an interface with the communication network 180.
- the I / F (2) 114 is an interface with the communication network 182.
- The input/output device 106 includes, for example, an input device such as a mouse and a keyboard, and an output device such as a liquid crystal display.
- A DBMS 120, a management program 140, and an operating system (hereinafter, OS) 130 are stored in at least one of the storage device 108 and the memory 110, which are examples of management storage devices.
- The DBMS 120, the management program 140, and the OS 130 are executed by the CPU 104.
- The DBMS 120 holds schema information 122 relating to schemas such as DB tables and indexes, and DB mapping information 124 that associates the DB data areas in which DB data is logically stored with devices on the OS 130.
- The OS 130 holds OS mapping information 132 that associates a device on the OS 130 with a logical storage area on the storage apparatus 150.
- The management program 140 acquires various types of information from the DBMS 120, the OS 130, and the control program 172 of the storage apparatus 150 as necessary, and creates and holds DB data area management information 142 relating to the attributes of the DB data areas, the actual DB data arrangement, and the like.
- The management program 140 may be provided on another computer (for example, a host computer) connected to the computer 100 via the network 180 or 182 and may operate on that computer.
- Alternatively, the management program 140 may be provided in the storage apparatus 150 and may operate on the storage apparatus 150.
- the storage apparatus 150 includes a controller 154 and a plurality of disks (HDD: an example of a storage device) 156, and the controller 154 and the plurality of disks 156 are connected by an internal bus 152.
- the controller 154 includes, for example, an I / F (1) 162, an I / F (2) 164, a CPU (control device) 166, a cache memory 168, and a memory 170.
- the I / F (1) 162, I / F (2) 164, CPU 166, cache memory 168, and memory 170 are connected via an internal bus 160.
- the I / F (1) 162 is an interface with the communication network 180.
- the I / F (2) 164 is an interface with the communication network 182.
- the memory 170 stores a control program 172 that controls the storage apparatus 150.
- the control program 172 is executed by the CPU 166.
- the control program 172 holds storage mapping information 174 that associates a logical storage area (LU) of the storage apparatus 150 with a physical storage area (segment) of the disk 156.
- the disk 156 is, for example, a hard disk drive (magnetic storage device).
- a plurality of disks 156 may be configured in a RAID (Redundant Array of Independent (or Inexpensive) Disks).
- Instead of the disks 156, a storage device having another type of storage medium (for example, a flash memory drive) may be provided.
- The storage apparatus 150 receives a data area addition instruction 500 and a data movement instruction 510, described later, via the I/F (1) 162 or the I/F (2) 164. When the data area addition instruction 500 is received, a storage area for storing data is added according to the contents of the instruction, and when the data movement instruction 510 is received, data (a data unit) is moved according to the contents of the instruction.
- FIG. 2A is a configuration diagram of an example of schema information according to the first embodiment.
- the schema information 122 is information related to schemas such as tables and indexes constituting the DB, and has an entry for each schema.
- Each entry includes a field 200 for registering an identifier for identifying the schema, a field 202 for registering the schema name, a field 204 for registering the schema type, a field 206 for registering the type of data handled by the schema, and a field for registering the storage destination of the schema.
- The data types stored in the field 206 include “order-added”, which indicates sequential data that is added in order, and “unordered”, which indicates data that is not sequential.
- A schema that stores order data is an ordering schema.
- the schema information 122 is created when a DB is constructed, and updated when DB data is added / deleted.
- FIG. 2B is a configuration diagram of an example of DB mapping information according to the first embodiment.
- the DB mapping information 124 is information relating to the association between a DB data area in which each DB schema is stored and a device on the OS, and has an entry for each DB data area.
- Each entry includes a field 220 for registering an identifier (DB data area ID) for identifying the DB data area, a field 222 for registering the file path of the device in which the DB data area is created, a field 224 for registering the size of the physical storage area allocated to the DB data area (in the figure, for example, the number of allocated area units called segments), and a field 226 for registering the number of segments used in the area allocated to the DB data area. In the present embodiment, for example, 4096 data pages of the DB are stored in one segment.
- the DB mapping information 124 is created when a DB is constructed, and is updated when a DB data area is added, deleted, or changed.
- FIG. 2C is a configuration diagram of an example of DB data additional information according to the first embodiment.
- the DB data addition information 126 is created by the DBMS 120 when DB data is added, and is transmitted to the management program 140.
- the DB data addition information 126 is information related to the added DB data, and has an entry for each added DB data.
- Here, “each DB data” means each data unit, a data unit being DB data of a predetermined data amount (for example, the data amount corresponding to one segment).
- In the following, a data unit may also be referred to as DB data.
- Each entry includes a field 240 for registering the identifier (schema ID) of the schema to which the DB data is added, a field 242 for registering the identifier (DB data area ID) of the DB data area to which the DB data is added, a field 244 for registering the logical address in the DB data area to which the DB data is added (in the figure, for example, a logical page ID), a field 246 for registering the size of the added DB data (in the figure, for example, the number of data pages), a field 248 for registering information (order information) that can specify the order of the added DB data, and a field 250 for registering information on the range of the added DB data (for example, a range of IDs from 0 to 49999). Note that the fields 248 and 250 are valid, and hold the corresponding information, only when the data type of the schema of the added DB data is order-added.
- FIG. 3A is a configuration diagram of an example of OS mapping information according to the first embodiment.
- The OS mapping information 132 is information that associates a device on the OS 130 with a logical storage area (LU) on the storage apparatus 150, and has an entry for each device. Each entry includes a field 300 for registering the file path on the OS 130 in which the device is created, a field 302 for registering an identifier (storage apparatus ID: ST-ID) for identifying the storage apparatus 150 having the storage area (LU) corresponding to the device, and a field 304 for registering a number (LUN) for identifying the storage area (LU) corresponding to the device.
- the OS mapping information 132 is created when the system is constructed, and is updated when the system configuration is changed.
- FIG. 3B is a configuration diagram of an example of storage mapping information according to the first embodiment.
- the storage mapping information 174 is information that associates a logical storage area (LU) of the storage apparatus 150 with a segment that is a physical area of the disk 156, and has an entry for each segment that constitutes the LU.
- Each entry includes a field 310 for registering an identifier (ST-ID) for identifying the storage apparatus 150, a field 312 for registering a number (LUN) for identifying the LU, a field 314 for registering the address of the logical storage area in the LU (for example, a logical page ID), a field 316 for registering a number (disk No.) for identifying the disk storing the segment, a field 318 for registering the address in the segment corresponding to the logical area of the field 314 (for example, a physical page ID), and a field 320 for registering a number (segment No.) for identifying the segment.
- The segment No. stored in the field 320 is a number that uniquely identifies the segment within the disk indicated by the disk No.
- the storage mapping information 174 is created when the system is constructed, and is updated when the system configuration is changed.
- The disk in the field 316 may be a single hard disk drive (HDD) or an RG (RAID group) in which a plurality of HDDs are configured as a RAID.
- One hard disk or one RG corresponds to a storage device set.
- FIG. 4A is a configuration diagram of an example of DB data area management information according to the first embodiment.
- the DB data area management information 142 is created and held by the management program 140.
- the DB data area management information 142 includes DB data area attribute information 400 and DB data arrangement information 402.
- FIG. 4B is a configuration diagram of an example of DB data area attribute information according to the first embodiment.
- the DB data area attribute information 400 is information related to the DB data area, and has an entry for each DB data area.
- Each entry includes a field 410 for registering an identifier (DB data area ID) for identifying the DB data area, a field 412 for registering an identifier (schema ID) for identifying the schema stored in the DB data area, a field 414 for registering the type of data handled by the schema stored in the DB data area, a field 416 for registering an identifier for identifying the storage apparatus 150 in which the DB data area is created, a field 418 for registering a number (LUN) for identifying the storage area (LU) on the storage apparatus 150 in which the DB data area is created, a field 420 for registering the number of segments allocated to the DB data area, a field 422 for registering the number of segments used in the area allocated to the DB data area, and a field 424 for registering the data migration disk No.
- The DB data area attribute information 400 is created, when the management program 140 is started, based on the schema information 122 and the DB mapping information 124 acquired from the DBMS 120, the OS mapping information 132 acquired from the OS 130, and the storage mapping information 174 acquired from the control program 172 of the storage apparatus 150, and is thereafter updated as necessary when DB data is added.
- FIG. 4C is a configuration diagram of an example of DB data arrangement information according to the first embodiment.
- The DB data arrangement information 402 is information relating to the physical arrangement of data units, which are DB data of a predetermined data amount, and has an entry for each segment that can store one data unit.
- Each entry includes a field 430 for registering an identifier (ST-ID) for identifying the storage apparatus 150, a field 432 for registering a number (LUN) for identifying the LU, a field 434 for registering the address of the logical area in the LU (for example, a logical page ID), a field 436 for registering a number (disk No.) for identifying the disk storing the segment, a field 438 for registering a number (segment No.) for identifying the segment, a field 440 for registering the order information of the DB data (data unit) stored in the segment, and a field 442 for registering information on the range of the DB data (data unit) stored in the segment.
- the DB data arrangement information 402 is created when the management program 140 is started.
- When the DBMS 120 adds DB data and transmits the DB data addition information 126 to the management program 140, the DB data arrangement information 402 is changed, or has entries added, according to the contents of the DB data addition information 126.
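- For reference, the two tables that make up the DB data area management information 142 can be pictured as records like the following; this is a sketch keyed to the reference numerals of FIGS. 4B and 4C, and the field names and types are illustrative assumptions rather than a format defined in this description.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DBDataAreaAttribute:       # one entry of the DB data area attribute information 400
    db_area_id: str              # field 410: DB data area ID
    schema_id: str               # field 412
    data_type: str               # field 414: "order-added" or "unordered"
    st_id: str                   # field 416: storage apparatus ID
    lun: int                     # field 418
    allocated_segments: int      # field 420
    used_segments: int           # field 422
    migration_disk_no: int       # field 424: data migration disk No.

@dataclass
class DBDataPlacement:           # one entry of the DB data arrangement information 402
    st_id: str                   # field 430
    lun: int                     # field 432
    logical_page_id: int         # field 434
    disk_no: int                 # field 436
    segment_no: int              # field 438
    order: Optional[int]         # field 440: None for an empty segment
    data_range: Optional[str]    # field 442
```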
- FIG. 5A is a configuration diagram of an example of a data area addition instruction according to the first embodiment.
- the data area addition instruction 500 is issued to the storage apparatus 150 by the management program 140.
- The data area addition instruction 500 includes a field 502 for registering an identifier for identifying the storage apparatus 150 to which a data storage area (data area) is to be added, a field 504 for registering the number (LUN) of the storage area (LU) to which the data area is to be added, and a field 506 for registering the number of segments of the data area to be added.
- The storage apparatus 150 that has received the data area addition instruction 500 adds the number of segments specified in the field 506, taken from its internal unused data area, to the storage area specified in the field 504, updates the storage mapping information 174, and returns the result to the management program 140.
- FIG. 5B is a configuration diagram of an example of a data movement instruction according to the first embodiment.
- the data movement instruction 510 is issued to the storage apparatus 150 by the management program 140.
- The data movement instruction 510 includes a field 512 for registering an identifier for identifying the storage apparatus in which data is to be moved, a field 514 for registering the number (LUN) of the storage area (LU) in which data is to be moved, a field 516 for registering the number of the data migration source disk, a field 518 for registering a number for identifying the migration source segment, a field 520 for registering the number of the data migration destination disk, and a field 522 for registering a number for identifying the migration destination segment.
- The storage apparatus 150 that has received the data movement instruction 510 moves the DB data (data unit) in the segment specified by the field 518 on the disk specified by the field 516 to the segment specified by the field 522 on the disk specified by the field 520, updates the storage mapping information 174, and returns the result to the management program 140.
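- The two instructions can likewise be pictured as simple records mirroring the reference numerals above; the names below are assumptions for illustration, and the actual wire format between the management program 140 and the storage apparatus 150 is not specified here. These records are reused in the later sketches.

```python
from dataclasses import dataclass

@dataclass
class DataAreaAdditionInstruction:   # data area addition instruction 500
    st_id: str             # field 502: target storage apparatus
    lun: int               # field 504: LU to which the data area is added
    add_segments: int      # field 506: number of segments to add

@dataclass
class DataMovementInstruction:       # data movement instruction 510
    st_id: str             # field 512
    lun: int               # field 514
    src_disk_no: int       # field 516: migration source disk
    src_segment_no: int    # field 518: migration source segment
    dst_disk_no: int       # field 520: migration destination disk
    dst_segment_no: int    # field 522: migration destination segment
```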
- FIG. 5C is a diagram illustrating a physical arrangement example of the order-added DB data according to the first embodiment.
- LU 530 is composed of five disks 550 numbered 0-4. In each disk 550, five segments 542 with segment numbers 540 from 0 to 4 are allocated.
- The segment 542 of segment No. 0 on disk No. 0 stores the DB data (data unit) of order 1, the segment 542 of segment No. 0 on disk No. 1 stores the DB data of order 2, the segment 542 of segment No. 0 on disk No. 2 stores the DB data of order 3, the segment 542 of segment No. 0 on disk No. 3 stores the DB data of order 4, the segment 542 of segment No. 0 on disk No. 4 stores the DB data of order 5, and the segment 542 of segment No. 1 on disk No. 0 stores the DB data of order 6. In the same manner, each segment 542 of each disk 550 stores the order data of order 7 and later.
- In this way, the storage apparatus 150 stores the DB data in a distributed manner so that consecutive DB data are stored on different disks 550.
- The logical storage areas on the LU 530 correspond, in order from the first storage area, to the segment 542 of segment No. 0 on disk No. 0, the segment 542 of segment No. 0 on disk No. 1, the segment 542 of segment No. 0 on disk No. 2, the segment 542 of segment No. 0 on disk No. 3, the segment 542 of segment No. 0 on disk No. 4, the segment 542 of segment No. 1 on disk No. 0, the segment 542 of segment No. 1 on disk No. 1, and so on.
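- The layout of FIG. 5C is a simple round robin of consecutive orders over the disks. A minimal sketch of that mapping (an illustration of the figure, under the assumption of 0-based disk and segment numbers):

```python
def placement(order: int, num_disks: int) -> tuple[int, int]:
    """Return (disk No., segment No.) for the DB data of a given order,
    following the round-robin layout of FIG. 5C (orders start at 1)."""
    index = order - 1
    return index % num_disks, index // num_disks

# With 5 disks: order 1 -> (0, 0), order 2 -> (1, 0), order 6 -> (0, 1), ...
assert placement(2, 5) == (1, 0)
assert placement(6, 5) == (0, 1)
```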
- FIG. 6 is a flowchart of management processing according to the first embodiment.
- Management processing is realized by the CPU 104 of the computer 100 executing the management program 140.
- First, the management process is started (step 600).
- The management program 140 acquires the schema information 122 and the DB mapping information 124 from the DBMS 120 (step 602), acquires the OS mapping information 132 from the OS 130 (step 604), acquires the storage mapping information 174 from the control program 172 of the storage apparatus 150 (step 606), creates the DB data area attribute information 400 and the DB data arrangement information 402 based on this information (step 608), and then waits to receive the DB data addition information 126 from the DBMS 120.
- The management program 140 determines whether the DB data addition information 126 has been received from the DBMS 120 (step 610). If the DB data addition information 126 has been received (Yes in step 610), the data addition process (see FIG. 7) is executed (step 612). After executing the data addition process, the management program 140 instructs the DBMS 120 to write the additional data to the storage apparatus 150, and the additional data is stored in the storage apparatus 150 in accordance with the instruction from the DBMS 120. The storage apparatus 150 stores the additional data in the free areas so that consecutive data are distributed to different disks.
- After completing the data addition process, or when the DB data addition information 126 has not been received in step 610 (No in step 610), the management program 140 determines whether an instruction to end the management program 140 has been received from the system administrator (step 614). If the end instruction has been received (Yes in step 614), the management program 140 ends and the management process ends (step 616). If the instruction has not been received (No in step 614), the processing is repeated from step 610.
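- The overall flow of FIG. 6 can be sketched as the following loop. The adapter objects and their method names (get_schema_info, poll_db_data_addition_info, and so on) are assumptions standing in for the DBMS 120, the OS 130, and the control program 172, not a real API, and handle_addition stands in for the data addition process of FIG. 7.

```python
import time

def management_process(dbms, os_layer, storage, handle_addition, stop_event,
                       poll_interval=1.0):
    """Minimal sketch of the management process of FIG. 6 (steps 600-616)."""
    schema_info = dbms.get_schema_info()                  # step 602
    db_mapping = dbms.get_db_mapping_info()               # step 602
    os_mapping = os_layer.get_os_mapping_info()           # step 604
    storage_mapping = storage.get_storage_mapping_info()  # step 606
    # step 608: the DB data area attribute information 400 and the DB data
    # arrangement information 402 would be built here from the four tables.
    management_info = (schema_info, db_mapping, os_mapping, storage_mapping)

    while not stop_event.is_set():                        # step 614: end instruction
        addition_info = dbms.poll_db_data_addition_info() # step 610
        if addition_info is not None:
            handle_addition(addition_info, management_info, storage)  # step 612
        time.sleep(poll_interval)
    # step 616: the management process ends
```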
- FIG. 7 is a flowchart of data addition processing according to the first embodiment.
- the data addition process is realized by the CPU 104 of the computer 100 executing the management program 140.
- The management program 140 updates the entry of the corresponding DB data area in the DB data area attribute information 400 based on the DB data addition information 126 received in step 610 of FIG. 6 (step 702), and determines whether the ratio of unused segments to the number of segments allocated to that DB data area has fallen below a predetermined threshold (step 704).
- The predetermined threshold may be held in advance by the management program 140, or may be set by a system administrator or the like when the management program 140 is started. In step 704, whether the number of unused segments has become equal to or less than a predetermined amount is determined by whether the ratio of unused segments to the number of allocated segments of the DB data area entry has fallen below the predetermined threshold; instead, however, it may be determined, for example, whether there are not enough segments to store the additional DB data, or whether the absolute number of unused segments is equal to or less than a predetermined value.
- As a result of step 704, if the ratio has fallen below the predetermined threshold (Yes in step 704), the management program 140 performs the processing from step 706 onward; if it has not fallen below the threshold (No in step 704), the data addition process ends (step 718).
- In step 706, the management program 140 sets information in each field of the data area addition instruction 500 and transmits it to the storage apparatus 150 (step 706). Specifically, the management program 140 sets the ST-ID in the field 416 of the corresponding DB data area entry of the DB data area attribute information 400 in the field 502 of the data area addition instruction 500, sets the LUN in the field 418 in the field 504 of the data area addition instruction 500, and sets the requested number of additional segments in the field 506.
- The requested number of additional segments set in the field 506 of the data area addition instruction 500 may be a value given in advance by a system administrator or the like, or a value calculated from the current number of allocated segments (for example, half of the current number of allocated segments).
- The management program 140 receives the response to the data area addition instruction 500 transmitted in step 706 from the storage apparatus 150, then reacquires the storage mapping information 174 from the storage apparatus 150 (step 708), and, based on the acquired information, adds entries to the DB data arrangement information 402 for the segments newly added in the storage apparatus 150 (step 710).
- Next, the management program 140 searches the DB data area attribute information 400 for the entry corresponding to the DB data area to which the DB data was added, identifies the data type of the schema of the added DB data from the value of the field 414 (step 712), and determines whether the data type is order-added, that is, whether the schema is an ordering schema (step 714).
- If the result of the determination in step 714 is that the data type is order-added (Yes in step 714), the management program 140 executes the additional segment distribution process (see FIG. 8) (step 716) and then ends the data addition process (step 718). On the other hand, if the data type is not order-added (No in step 714), the management program 140 ends the data addition process as it is (step 718).
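- A sketch of the data addition process of FIG. 7, continuing the record sketches above. The threshold value, the exact direction of the free-space check, and the callable names (refresh_placement, distribute_segments) are assumptions; distribute_segments stands in for the additional segment distribution process of FIG. 8.

```python
def data_addition_process(addition_info, area, storage,
                          refresh_placement, distribute_segments,
                          threshold=0.2):
    """Sketch of FIG. 7 (steps 700-718) for one DB data area entry `area`
    (a DBDataAreaAttribute record from the earlier sketch)."""
    area.used_segments += addition_info.added_segments            # step 702
    free_ratio = (area.allocated_segments - area.used_segments) \
        / area.allocated_segments
    if free_ratio >= threshold:                                    # step 704: enough free segments
        return                                                     # step 718
    storage.add_data_area(DataAreaAdditionInstruction(             # step 706
        st_id=area.st_id, lun=area.lun,
        add_segments=max(1, area.allocated_segments // 2)))        # e.g. half of current allocation
    new_segments = refresh_placement(storage)                      # steps 708-710
    if area.data_type == "order-added":                            # steps 712-714
        distribute_segments(area, new_segments, storage)           # step 716
```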
- FIG. 8 is a flowchart of additional segment distribution processing according to the first embodiment.
- the additional segment distribution process is realized by the CPU 104 of the computer 100 executing the management program 140.
- First, the management program 140 sets “1” in a variable N (step 802).
- Next, the management program 140 searches the DB data arrangement information 402 and determines whether the disk containing the Nth segment among the added segments is a newly added disk (a second storage device set), that is, whether it is not used as one of the disks (one or more first storage device sets) constituting the existing DB data area (step 804). If it is not a newly added disk (second storage device set) (No in step 804), the processing proceeds to step 810.
- If it is determined in step 804 that the disk is a newly added disk (second storage device set) (Yes in step 804), the management program 140 sets information in the data movement instruction 510 so that, among the segments on the disk indicated by the data migration disk No. stored in the field 424 of the corresponding entry, the DB data (data unit) of the segment whose order information in the field 440 is the Nth oldest is moved to the Nth segment of the newly added disk, and transmits the instruction to the storage apparatus 150 (step 806).
- Specifically, the management program 140 stores the disk No. of the disk indicated by the data migration disk No. in the field 516 of the data movement instruction 510, the segment No. of the segment having the Nth oldest DB data on that disk in the field 518, the disk No. of the newly added disk in the field 520, and the segment No. of the Nth segment of the newly added disk in the field 522.
- The management program 140 receives the response to the data movement instruction 510 transmitted in step 806 from the storage apparatus 150 and then updates the DB data arrangement information 402 according to the contents of the DB data movement (step 808). Specifically, the order information and the data range information in the fields 440 and 442 of the entry corresponding to the segment of the migration source disk are registered in the fields 440 and 442 of the entry corresponding to the segment of the migration destination disk, and the contents of the fields 440 and 442 of the migration source entry are cleared.
- Next, the management program 140 increments the variable N by 1 (step 810), and, if data was moved in step 806, sets the data migration disk No. in the field 424 so as to indicate the next disk (the next first storage device set) (step 812). The management program 140 then determines whether the variable N has become equal to the number of added segments, that is, whether steps 804 to 812 have been repeated (number of added segments − 1) times (step 814). If the variable N is smaller than the number of added segments (No in step 814), the management program 140 executes the processing from step 804 again. In this way, the processing of steps 804 to 812 is repeated in accordance with the number of segments added to the DB data area in step 706.
- When the variable N has become equal to the number of added segments in step 814 (Yes in step 814), the management program 140 ends the additional segment distribution process (step 816).
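- A sketch of the additional segment distribution process of FIG. 8, continuing the sketches above. `placement` is a hypothetical helper over the DB data arrangement information 402 and its method names are assumptions; the last added segment is deliberately left untouched, matching the loop bound of step 814.

```python
def additional_segment_distribution(area, added_segments, storage, placement):
    """Sketch of FIG. 8 (steps 800-816). `added_segments` lists the placement
    entries of the newly added segments in the order they were added."""
    for n, new_seg in enumerate(added_segments[:-1], start=1):    # steps 802, 810, 814
        if not placement.is_newly_added_disk(new_seg.disk_no):    # step 804
            continue
        # step 806: move the Nth-oldest data unit (order info, field 440) from
        # the current data-migration disk (field 424) to the Nth added segment
        src = placement.nth_oldest_segment(area.migration_disk_no, n)
        storage.move_data(DataMovementInstruction(
            st_id=area.st_id, lun=area.lun,
            src_disk_no=src.disk_no, src_segment_no=src.segment_no,
            dst_disk_no=new_seg.disk_no, dst_segment_no=new_seg.segment_no))
        # step 808: carry the order / range information over and clear the source
        new_seg.order, new_seg.data_range = src.order, src.data_range
        src.order, src.data_range = None, None
        # step 812: point the data-migration disk at the next existing disk
        area.migration_disk_no = placement.next_disk(area.migration_disk_no)
```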
- FIG. 9A is a first diagram illustrating movement of DB data according to the first embodiment.
- FIG. 9B is a second diagram for explaining the movement of the DB data according to the first embodiment.
- FIG. 10A is a third diagram illustrating the movement of the DB data according to the first embodiment.
- FIG. 10B is a fourth diagram illustrating the movement of the DB data according to the first embodiment.
- FIG. 11A is a fifth diagram illustrating the movement of the DB data according to the first embodiment.
- FIG. 11B is a sixth diagram illustrating the movement of the DB data according to the first embodiment.
- FIG. 12A is a seventh diagram illustrating the movement of the DB data according to the first embodiment.
- FIG. 12B is an eighth diagram illustrating the movement of the DB data according to the first embodiment.
- In FIGS. 9A to 12B, each rectangle inside a disk corresponds to a segment, the number on the left side of a rectangle indicates the segment No. within the disk, and the number written inside a rectangle indicates the order of the DB data (data unit) stored in that segment.
- For example, the number “1” in a rectangle indicates that the data of order 1 is stored in the corresponding segment.
- The segments arranged in the vertical direction are arranged along the sequence of addresses; that is, segments being continuous (adjacent) means that their address ranges are continuous (adjacent).
- There are P disks (P is an integer of 2 or more), and there are P (horizontal) × Q (vertical) segments (Q is an integer of 1 or more); that is, each disk has Q segments.
- The value of Q differs depending on the capacity of the disk (or on which range of the disk is used as the DB data storage range).
- The segments in the same row have the same address.
- At the time the processing starts, the storage areas of the five disks 900 to 908 (five first storage device sets) are allocated as the data area for storing the data of a certain schema.
- Here, each “disk” may be a single storage device or a group of a plurality of storage devices (for example, a RAID (Redundant Array of Independent (or Inexpensive) Disks) group).
- Five segments are assigned to each of the disks 900 to 908.
- The data (data unit) of order 1 is stored in the segment No. 0 of the disk 900, the data of order 2 is stored in the segment No. 0 of the disk 902, the data of order 3 is stored in the segment No. 0 of the disk 904, and the subsequent data are similarly stored in the segments of the disks.
- Such a state is realized, for example, when the DBMS 120 (or the CPU 166 of the storage apparatus 150) switches the storage destination disk one after another while storing the order data sequentially from the beginning.
- The data movement in the additional segment distribution process is described below by way of example. It is assumed that the data migration disk No. stored in the field 424 of the DB data area attribute information 400 indicates the disk 900.
- The management program 140 may monitor whether a new disk 910 has been added to the disks 900 to 908 (the existing group of disks) and detect the addition when a new disk 910 is added. In this state, the segments No. 0 to No. 4 of the disk 910 are unused.
- In this way, rather than the data of the following orders (for example, the data of order 26 to order 30), the management program 140 can place on the disk 910 DB data moved from other disks, that is, DB data whose order relationship is relatively distant. In other words, the movement of DB data of consecutive orders onto the disk 910 can be appropriately reduced. Thereafter, the management program 140 sets the data migration disk No. in the field 424 of the DB data area attribute information 400 so as to indicate the next disk, that is, the disk 904.
- Next, when steps 804 to 808 of the additional segment distribution process are executed with the variable N set to 3, the management program 140 moves the data of order 13 as shown in FIG. 11A. That is, since the data migration disk No. in the field 424 indicates the disk 904 when the additional segment distribution process is executed, the management program 140 moves the data of the segment on the disk 904 having the Nth (here, third) oldest data (here, the data of order 13, in segment No. 2) to the Nth (here, third) segment of the added disk 910 (here, segment No. 2). Thereafter, the management program 140 sets the data migration disk No. in the field 424 of the DB data area attribute information 400 so as to indicate the next disk, that is, the disk 906.
- Next, when steps 804 to 808 are executed with the variable N set to 4 in the additional segment distribution process, the management program 140 moves the data of order 19 as shown in FIG. 11B. That is, since the data migration disk No. in the field 424 indicates the disk 906 when the additional segment distribution process is executed, the management program 140 moves the data of the segment on the disk 906 having the Nth (here, fourth) oldest data (here, the data of order 19, in segment No. 3) to the Nth (here, fourth) segment of the added disk 910 (here, segment No. 3). Thereafter, the management program 140 sets the data migration disk No. in the field 424 of the DB data area attribute information 400 so as to indicate the next disk, that is, the disk 908.
- When the additional segment distribution process ends, the state shown in FIG. 12A is obtained. That is, before new data is added, the unused segments are segment No. 0 of the disk 900, segment No. 1 of the disk 902, segment No. 2 of the disk 904, segment No. 3 of the disk 906, segment No. 4 of the disk 910, and so on; in other words, they are distributed over different disks.
- Here, the number of disks is P (P is an integer of 2 or more), and each disk has Q segments (Q is an integer of 1 or more).
- The data area as a whole therefore has (P × Q) segments, and every P data units whose order is consecutive are stored in P segments in the same row of different disks.
- In addition, when the free segment generated on the Xth disk is not adjacent to the segment storing the newest data unit on the Xth disk, the management program 140 may move the data from the segment storing the second-newest DB data on the Xth disk to the free segment on the Xth disk.
- In this way, the unused data areas are distributed over the disks, and new data is thereafter added to the unused segments.
- When the data of order 26 to order 30 is then added, the state shown in FIG. 12B is obtained. For example, assuming that the data for the most recent five segments (here, the data of order 26 to order 30) is accessed, the access is distributed to the disk 900, the disk 902, the disk 904, the disk 906, and the disk 910, so the degree of parallelism of the processing is increased and the performance can be improved.
- On the other hand, assuming that the data for the most recent ten segments (here, the data of order 21 to order 30) is accessed, with the technique of the first embodiment alone there may be a disk whose physical access range becomes wide. For example, on the disk 900, the range from segment No. 0 to segment No. 4 is accessed. The narrower the physical access range, the better the performance, so for a disk whose access range has widened, the rate of performance improvement becomes small.
- In the second embodiment, a technique that can also narrow the physical access range within a disk will be described.
- In the second embodiment, segment adjacency processing (see FIG. 14) is further performed in the additional segment distribution process of the computer system according to the first embodiment.
- the configuration of the computer system according to the second embodiment is the same as that of the computer system according to the first embodiment shown in FIG.
- the computer system according to the second embodiment will be described focusing on differences from the computer system according to the first embodiment.
- FIG. 13 is a flowchart of additional segment distribution processing according to the second embodiment.
- Steps that are the same as those in the additional segment distribution process according to the first embodiment are given the same reference numerals, and a step 1300 for executing the segment adjacency process is added between step 808 and step 810.
- FIG. 14 is a flowchart of segment adjacency processing according to the second embodiment.
- The management program 140 searches the DB data arrangement information 402 and determines whether the disk on which the Nth segment among the added segments exists (the variable N being inherited from the additional segment distribution process) is a newly added disk, that is, whether it is not used as one of the disks constituting the existing DB data area (step 1402). If it is not a newly added disk (No in step 1402), the processing proceeds to step 1410.
- If it is determined in step 1402 that the disk is a newly added disk (Yes in step 1402), the management program 140 acquires the segment No. of the segment that became the movement source in step 806 of the additional segment distribution process of FIG. 13 (an unused segment) and the segment No. of the segment having the newest order information in the field 440 among all the segments of the disk to which that segment belongs (step 1404), and determines whether the two segments are adjacent, that is, whether the difference between their segment Nos. is ±1 (step 1406).
- If the two segments are adjacent (Yes in step 1406), the management program 140 ends the segment adjacency process (step 1416).
- On the other hand, if the two segments are not adjacent (No in step 1406), the management program 140 sets information in the data movement instruction 510 so that the data of the segment adjacent to the segment storing the newest data among all the segments of the disk is moved to the segment that became the movement source in step 806, transmits the instruction to the storage apparatus 150, updates the DB data arrangement information 402 after receiving the response from the storage apparatus 150 (step 1408), and then ends the segment adjacency process (step 1416).
- If it is determined in step 1402 that the disk is not a newly added disk (No in step 1402), the management program 140 acquires the segment No. of the Nth segment among the added segments (a currently unused segment) and the segment No. of the segment having the newest order information in the field 440 among all the segments of the disk to which that segment belongs (step 1410), and determines whether the two segments are adjacent, that is, whether the difference between their segment Nos. is ±1 (step 1412).
- If the two segments are adjacent (Yes in step 1412), the management program 140 ends the segment adjacency process (step 1416).
- On the other hand, if the two segments are not adjacent (No in step 1412), the management program 140 sets information in the data movement instruction 510 so that the data of the segment adjacent to the segment storing the newest data among all the segments of the disk is moved to the Nth segment among the added segments, transmits the instruction to the storage apparatus 150, updates the DB data arrangement information 402 after receiving the response from the storage apparatus 150 (step 1414), and then ends the segment adjacency process (step 1416).
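- A sketch of the segment adjacency process of FIG. 14, continuing the sketches above. `freed_segment` is the segment emptied by step 806 for the current value of N (None when no move took place), and the choice of the segment whose data is moved (the neighbour of the newest-data segment, i.e. the segment holding the disk's second-newest data, assumed here to be the lower-numbered neighbour as in the FIG. 15A example) and all helper names are assumptions.

```python
def segment_adjacency_process(n, added_segments, freed_segment, area,
                              storage, placement):
    """Sketch of FIG. 14 (steps 1400-1416) for the Nth added segment."""
    nth = added_segments[n - 1]
    if placement.is_newly_added_disk(nth.disk_no):     # step 1402: Yes
        empty = freed_segment                          # step 1404: source segment of step 806
    else:                                              # step 1402: No
        empty = nth                                    # step 1410: still-unused added segment
    if empty is None:
        return
    newest = placement.newest_segment_on_disk(empty.disk_no)    # order info, field 440
    if abs(empty.segment_no - newest.segment_no) == 1:          # steps 1406 / 1412
        return                                                  # already adjacent (step 1416)
    # steps 1408 / 1414: move the data of the segment next to the newest one
    # into the empty segment, so the freed neighbour becomes the empty segment
    neighbour = placement.segment_at(empty.disk_no, newest.segment_no - 1)
    storage.move_data(DataMovementInstruction(
        st_id=area.st_id, lun=area.lun,
        src_disk_no=neighbour.disk_no, src_segment_no=neighbour.segment_no,
        dst_disk_no=empty.disk_no, dst_segment_no=empty.segment_no))
    empty.order, empty.data_range = neighbour.order, neighbour.data_range
    neighbour.order, neighbour.data_range = None, None
```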
- Note that the data of a further adjacent segment may also be moved in the same manner.
- In the above description, the segment with the newest order data and the unused segment are made adjacent on the same disk, but the present invention is not limited to this.
- For example, statistical information on search frequency may be held by the management program 140, and data may be moved so that the segments storing data with a high search frequency are adjacent to each other.
- Alternatively, the data of one of the segments within the relevant range may be moved so that that segment becomes a free area.
- FIG. 15A is a first diagram illustrating movement of DB data according to the second embodiment.
- FIG. 15B is a second diagram illustrating the movement of the DB data according to the second embodiment.
- FIG. 16A is a third diagram for explaining the movement of the DB data according to the second embodiment.
- FIG. 16B is a diagram for explaining an ideal data arrangement.
- FIG. 15A is a diagram that, for convenience, summarizes the states (FIG. 10B, FIG. 11A, FIG. 11B, FIG. 12A) before the segment adjacency process is executed in the additional segment distribution process shown in FIG. 13.
- In practice, the segment adjacency process for one added segment is performed following the processing of the additional segment distribution process for that segment.
- For example, after the data of segment No. 0 of the disk 900 (the data of order 1) has been moved to the segment No. 0 of the disk 910, the segment adjacency process (step 1300) moves, as shown in FIG. 15A, the data (here, the data of order 16) of the segment (here, segment No. 3) adjacent to the segment having the newest data on the disk 900 (here, the data of order 21, in segment No. 4) to the segment No. 0 of the disk 900, which has become empty. The same adjacency processing is performed in turn after each of the subsequent data movements (see FIG. 11A and the following figures).
- When the additional segment distribution process including the segment adjacency process ends, the state shown in FIG. 15B is obtained. That is, before new data is added, the unused segments are segment No. 3 of the disk 900, segment No. 3 of the disk 902, segment No. 3 of the disk 904, segment No. 3 of the disk 906, segment No. 4 of the disk 910, and so on; in other words, they are distributed over different disks. Furthermore, on each disk, the unused segment is adjacent to the segment having the newest data.
- As described above, in the second embodiment, the added data areas (segments) are distributed over the disks, and, on each disk, an unused segment is adjacent to the segment having the newest data. New data is then added to the unused segments.
- When the data of order 26 to order 30 is then added, the state shown in FIG. 16A is obtained. For example, assuming that the data for the most recent five segments (here, the data of order 26 to order 30) is accessed, the access is distributed to the disk 900, the disk 902, the disk 904, the disk 906, and the disk 910, as in the first embodiment, so the degree of parallelism of the processing is increased and the performance can be improved. Furthermore, assuming that the data for the most recent ten segments (here, the data of order 21 to order 30) is accessed, the physical access range on a disk becomes wide in the first embodiment.
- In the second embodiment, by contrast, the physical access range is limited, for example, to segment No. 3 and segment No. 4 on the disk 900, so the occurrence of a disk with a wide access range can be suppressed.
- As described above, the narrower the physical access range on a disk, the better the performance; therefore, according to the second embodiment, the performance can be improved further than in the first embodiment.
- FIG. 16B is a diagram for explaining an ideal data arrangement.
- The data arrangement shown in FIG. 16B can be realized, for example, by moving the DB data from the state shown in FIG. 9B so that the data is arranged in order according to the order information both across the disks and within each disk, and then adding the data of order 26 to order 30 to the unused segments.
- In this arrangement, assuming that the data for the most recent five segments (here, the data of order 26 to order 30) is accessed, the access is distributed to the disk 902, the disk 904, the disk 906, the disk 908, and the disk 910, so the degree of parallelism of the processing is increased and the performance can be improved. Furthermore, assuming that the data for the most recent ten segments (here, the data of order 21 to order 30) is accessed, the access range is minimal on every disk.
- This modification differs from the first embodiment in the method for determining the segment whose data is to be moved in the additional segment distribution process.
- Specifically, in step 806, the management program 140 sets information in the data movement instruction 510 so that, among the segments on the disk indicated by the data migration disk No. stored in the field 424 of the corresponding entry, the DB data of the segment whose order information in the field 440 is the (N+1)th oldest is moved to the Nth segment of the newly added disk, and transmits the instruction to the storage apparatus 150.
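- A sketch of how the move source changes in this modification, continuing the sketches above: instead of the Nth-oldest data unit used in step 806 of FIG. 8, the (N+1)th-oldest data unit on the data-migration disk is chosen. The helper and attribute names follow the earlier sketches and are assumptions.

```python
def pick_source_segment_modified(placement, migration_disk_no, n):
    """Return the segment whose data is moved for the Nth added segment
    in the modification: the (N+1)th-oldest data unit on the migration disk."""
    occupied = [s for s in placement.segments_on_disk(migration_disk_no)
                if s.order is not None]
    occupied.sort(key=lambda s: s.order)   # oldest order (field 440) first
    return occupied[n]                     # index n is the (N+1)th-oldest segment
```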
- FIG. 17A is a first diagram for explaining the movement of DB data according to a modification.
- FIG. 17B is a second diagram illustrating the movement of DB data according to the modification.
- FIG. 18A is a third diagram for explaining the movement of the DB data according to the modification.
- FIG. 18B is a fourth diagram illustrating the movement of DB data according to the modification.
- FIG. 19A is a fifth diagram illustrating the movement of DB data according to the modification.
- FIG. 19B is a sixth diagram illustrating the movement of DB data according to the modification.
- In steps S804 to S808, the data of order 18 is moved as shown in FIG. 18A. That is, when the additional segment distribution processing is executed, the data migration disk No. in the field 424 indicates the disk 904, so the data of the segment (here, segment No. 3) holding the (N+1)-th (here, fourth) oldest data on the disk 904 (here, the data of order 18) is moved to the N-th (here, third) segment (here, segment No. 2) of the added disk 910. Thereafter, the data migration disk No. in the field 424 of the DB data area attribute information 400 is set to indicate the next disk, that is, the disk 906.
- In steps S804 to S808, the data of order 24 is moved as shown in FIG. 18B. That is, when the additional segment distribution processing is executed, the data migration disk No. in the field 424 indicates the disk 906, so the data of the segment (here, segment No. 4) holding the (N+1)-th (here, fifth) oldest data on the disk 906 (here, the data of order 24) is moved to the N-th (here, fourth) segment (here, segment No. 3) of the added disk 910. Thereafter, the data migration disk No. in the field 424 of the DB data area attribute information 400 is set to indicate the next disk, that is, the disk 908.
- When the processing is continued in this way, the state shown in FIG. 19A is obtained. That is, before new data is added, the unused segments are segment No. 1 of the disk 900, segment No. 2 of the disk 902, segment No. 3 of the disk 904, segment No. 4 of the disk 906, and segment No. 4 of the disk 910; in other words, they are distributed over different disks.
- In this way, the unused data areas are distributed over the respective disks, and new data is then added to these unused segments.
- When the new data is added, the state shown in FIG. 19B is obtained.
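- As a rough illustration of how the added data ends up spread over the disks, the Python sketch below stores each new data unit in the unused segment of the next disk that still has one, so that consecutive order numbers land on different disks. The starting state and the round-robin placement order are assumptions for illustration; the embodiment determines the placement through its management information.

```python
def append_new_data(disks, new_orders):
    """Store each new data unit in an unused segment, cycling over the disks
    that still have one, so that consecutive data units go to different disks
    (illustrative sketch only)."""
    targets = [d for d in disks if None in d]
    for i, order in enumerate(new_orders):
        disk = targets[i % len(targets)]
        disk[disk.index(None)] = order
    return disks

# State assumed to correspond to FIG. 19A: the unused segments are distributed
# over the disks 900, 902, 904, 906 and the added disk 910.
disks = [[1, None, 11, 16, 21], [2, 7, None, 17, 22], [3, 8, 13, None, 23],
         [4, 9, 14, 19, None], [5, 10, 15, 20, 25], [6, 12, 18, 24, None]]
for d in append_new_data(disks, [26, 27, 28, 29, 30]):
    print(d)
# Orders 26..30 are stored on five different disks, so accessing the latest
# five segments is spread over five disks.
```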
- For example, assuming that the data of the latest five segments (here, the data of orders 26 to 30) is accessed, the accesses are distributed over different disks, so the degree of parallelism of the processing increases and performance can be improved.
- In the embodiments described above, the additional segment distribution processing is performed when the free space of the DB data area becomes equal to or smaller than a predetermined amount (for example, when the DB data area is insufficient, or based on the ratio of the number of used segments to the number of segments allocated to the DB data area).
- However, the present invention is not limited to this, and the processing may be performed, for example, in response to an instruction from a system administrator.
- As described above, according to the embodiments, when a new disk is added, the data of segments of the disks constituting the existing DB data area is moved to segments on the added disk.
- As a result, the degree of parallelism of access can be increased and performance can be improved.
- the management program 140 may be executed by another computer connected to the computer 100.
- the other computer may be a management system.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Claims (15)
- A management system for managing a plurality of data units constituting one or more schemas of a database in a storage apparatus, the management system comprising:
a storage resource; and
a control device coupled to the storage resource,
wherein the storage apparatus has a plurality of first storage device sets each having a plurality of storage areas,
the one or more schemas include an ordered schema constituted by a plurality of data units having an order property in which an order is defined for each data unit,
the storage resource stores management information including mapping information indicating which data unit constituting the ordered schema is stored in which storage area, and order information indicating the order of the data units,
each first storage device set is a set of one or more first storage devices, and
the control device is configured to:
(A) when a second storage device set, which is a set of one or more second storage devices whose storage areas are all free storage areas, is added to the first storage device sets, move, based on the management information, two or more data units whose orders are not consecutive, among the plurality of data units respectively stored in the plurality of storage areas of the first storage device sets, from at least one of the first storage device sets to free storage areas of the second storage device set so that a plurality of free storage areas are distributed over the first storage device sets and the second storage device set.
- The management system according to claim 1, wherein the number of first storage device sets is P (P is an integer of 2 or more), each first storage device has Q storage areas (Q is an integer of 1 or more), so that the P first storage device sets have (P × Q) storage areas,
for every P data units whose orders are consecutive, those P data units with consecutive orders are stored in P storage areas of the same row of different first storage device sets, and
in (A), the control device moves the X-th oldest data unit of the X-th first storage device set (X is an integer, and X = 0, 1, ..., (Q-1)) from a storage area of the X-th first storage device set to a free storage area of the second storage device set.
- The management system according to claim 2, wherein, after (A), when the free storage area created in the X-th first storage device set is not adjacent to the storage area storing the newest data unit in the X-th first storage device set, the control device moves the second newest data unit in the X-th first storage device set from the storage area storing that second newest data unit to the free storage area in the X-th first storage device set.
- The management system according to claim 2, wherein, in (A), the control device ends (A) when it has performed the movement of a data unit from the first storage device sets to the second storage device set Y times (Y is a natural number), and the value of Y is equal to or less than the number of storage areas of the second storage device set.
- The management system according to claim 1, wherein, when there are a plurality of the first storage device sets, the control device moves the same number of data units from each of the first storage device sets to the second storage device set.
- The management system according to claim 5, wherein, when there are a plurality of the first storage device sets, the control device sequentially switches the first storage device set from which the data unit to be moved to the second storage device set is selected, so that the same number of data units are moved from each of the first storage device sets to the second storage device set.
- The management system according to claim 6, wherein, when there are a plurality of the first storage device sets, the control device moves, from the plurality of first storage device sets, data units whose orders are relatively far apart from one another to the second storage device set.
- The management system according to claim 7, wherein the control device makes the rank, in the order within a first storage device set, of the data unit to be moved to the second storage device set differ between different first storage device sets.
- The management system according to claim 1, wherein the control device is further configured to:
(B) when determining that the free space of the storage areas capable of storing the data units of the ordered schema in the first storage device sets is equal to or less than a predetermined amount, cause a new storage area capable of storing data units of the ordered schema to be added, determine whether the new storage area to be added is a storage area of the second storage device set, and perform (A) when the new storage area is a storage area of the second storage device set.
- The management system according to claim 9, wherein the control device determines whether addition of a new data unit to the schema of the database has occurred, performs (B) when the addition of the data unit has occurred, and, after performing (A), causes the new data unit to be stored in a free area so that consecutive data units are stored in different storage device sets.
- The management system according to claim 1, wherein, after executing (A), the control device is further configured to:
(C) when a free area and the storage area storing the newest data unit in the first storage device set containing that free area do not lie within a predetermined range of each other, move a data unit located within the predetermined range to the free area.
- The management system according to claim 11, wherein, in (C), the predetermined range is a range adjacent to the free area.
- The management system according to claim 11, wherein, after performing (C), the control device stores a new data unit in the free area, so that a plurality of new data units in the first storage device set are stored in a relatively limited range of the first storage device set.
- A computer program for causing a computer that manages a plurality of data units constituting one or more schemas of a database in a storage apparatus to execute processing,
wherein the storage apparatus has a plurality of first storage device sets each having a plurality of storage areas, and
the one or more schemas include an ordered schema constituted by a plurality of data units having an order property in which an order is defined for each data unit,
the computer program causing the computer to:
determine whether a second storage device set, which is a set of one or more second storage devices whose storage areas are all free storage areas, is to be added to the first storage device sets; and
when the result of the determination is affirmative, move, based on management information including mapping information indicating which data unit constituting the ordered schema is stored in which storage area and order information indicating the order of the data units, two or more data units whose orders are not consecutive, among the plurality of data units respectively stored in the plurality of storage areas of the first storage device sets, from at least one of the first storage device sets to free storage areas of the second storage device set so that a plurality of free storage areas are distributed over the first storage device sets and the second storage device set.
- A management method of managing a plurality of data units constituting one or more schemas of a database in a storage apparatus,
wherein the storage apparatus has a plurality of first storage device sets each having a plurality of storage areas, and
the one or more schemas include an ordered schema constituted by a plurality of data units having an order property in which an order is defined for each data unit,
the method comprising:
determining whether a second storage device set, which is a set of one or more second storage devices whose storage areas are all free storage areas, is to be added to the first storage device sets; and
when the result of the determination is affirmative, moving, based on management information including mapping information indicating which data unit constituting the ordered schema is stored in which storage area and order information indicating the order of the data units, two or more data units whose orders are not consecutive, among the plurality of data units respectively stored in the plurality of storage areas of the first storage device sets, from at least one of the first storage device sets to free storage areas of the second storage device set so that a plurality of free storage areas are distributed over the first storage device sets and the second storage device set.
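- As a rough, non-authoritative illustration of how the claimed steps fit together, the following Python sketch wires up steps (B), (A), and (C) around the storage of a new data unit. The callables, names, and threshold handling are hypothetical; the claims above, not this sketch, define the method.

```python
def on_data_unit_added(device_sets, new_order, threshold,
                       add_device_set, distribute, adjust_adjacency):
    """Hypothetical orchestration of the claimed flow (claims 9-13):
    (B) when the free area falls to or below a threshold, add a second
        storage device set,
    (A) distribute non-consecutive data units into it so that free areas
        are spread over the device sets,
    (C) keep each free area adjacent to the newest data of its set,
    and finally store the new data unit in a free area."""
    free_areas = sum(row.count(None) for row in device_sets)
    if free_areas <= threshold:                  # (B) shortage of free area detected
        new_set = add_device_set()               # e.g. a newly added disk, all areas free
        distribute(device_sets, new_set)         # (A) spread free areas over the sets
        device_sets.append(new_set)
        for dev_set in device_sets:
            adjust_adjacency(dev_set)            # (C) free area next to the newest data
    for dev_set in device_sets:                  # store the new data unit in a free area
        if None in dev_set:
            dev_set[dev_set.index(None)] = new_order
            return device_sets
    raise RuntimeError("no free storage area available")

# Tiny demonstration with no-op stand-ins for (A) and (C):
sets = [[1, 3, 5], [2, 4, None]]
on_data_unit_added(sets, new_order=6, threshold=1,
                   add_device_set=lambda: [None, None, None],
                   distribute=lambda old, new: None,
                   adjust_adjacency=lambda s: None)
print(sets)  # -> [[1, 3, 5], [2, 4, 6], [None, None, None]]
```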
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014519763A JP5978297B2 (ja) | 2012-06-07 | 2012-06-07 | 管理システム及び管理方法 |
PCT/JP2012/064672 WO2013183143A1 (ja) | 2012-06-07 | 2012-06-07 | 管理システム及び管理方法 |
US14/404,963 US9870152B2 (en) | 2012-06-07 | 2012-06-07 | Management system and management method for managing data units constituting schemas of a database |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2012/064672 WO2013183143A1 (ja) | 2012-06-07 | 2012-06-07 | 管理システム及び管理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013183143A1 true WO2013183143A1 (ja) | 2013-12-12 |
Family
ID=49711560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/064672 WO2013183143A1 (ja) | 2012-06-07 | 2012-06-07 | 管理システム及び管理方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US9870152B2 (ja) |
JP (1) | JP5978297B2 (ja) |
WO (1) | WO2013183143A1 (ja) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10642633B1 (en) * | 2015-09-29 | 2020-05-05 | EMC IP Holding Company LLC | Intelligent backups with dynamic proxy in virtualized environment |
KR20190067540A (ko) * | 2017-12-07 | 2019-06-17 | 에스케이하이닉스 주식회사 | 스토리지 시스템 및 그것의 동작 방법 |
CN108228107A (zh) * | 2018-01-02 | 2018-06-29 | 联想(北京)有限公司 | 一种数据传输方法、数据传输装置及电子设备 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003077111A1 (fr) * | 2002-03-13 | 2003-09-18 | Fujitsu Limited | Controleur pour dispositif raid |
JP2007179146A (ja) * | 2005-12-27 | 2007-07-12 | Hitachi Ltd | データスキーマのマッピングプログラム及び計算機システム |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5502836A (en) * | 1991-11-21 | 1996-03-26 | Ast Research, Inc. | Method for disk restriping during system operation |
US6530035B1 (en) | 1998-10-23 | 2003-03-04 | Oracle Corporation | Method and system for managing storage systems containing redundancy data |
JP4611830B2 (ja) | 2005-07-22 | 2011-01-12 | 優 喜連川 | データベース管理システム及び方法 |
JP5141402B2 (ja) * | 2008-06-26 | 2013-02-13 | 富士通株式会社 | ストレージシステム,コピー制御方法およびコピー制御装置 |
US9176779B2 (en) * | 2008-07-10 | 2015-11-03 | Juniper Networks, Inc. | Data access in distributed systems |
JP2010033261A (ja) * | 2008-07-28 | 2010-02-12 | Hitachi Ltd | ストレージ装置及びその制御方法 |
- 2012
- 2012-06-07 US US14/404,963 patent/US9870152B2/en active Active
- 2012-06-07 WO PCT/JP2012/064672 patent/WO2013183143A1/ja active Application Filing
- 2012-06-07 JP JP2014519763A patent/JP5978297B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
JPWO2013183143A1 (ja) | 2016-01-21 |
US9870152B2 (en) | 2018-01-16 |
JP5978297B2 (ja) | 2016-08-24 |
US20150177984A1 (en) | 2015-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8386532B2 (en) | Mechanism for co-located data placement in a parallel elastic database management system | |
US8719529B2 (en) | Storage in tiered environment for colder data segments | |
US7689573B2 (en) | Prefetch appliance server | |
Ahn et al. | ForestDB: A fast key-value storage system for variable-length string keys | |
US8296286B2 (en) | Database processing method and database processing system | |
US11321302B2 (en) | Computer system and database management method | |
US9740722B2 (en) | Representing dynamic trees in a database | |
US20140181455A1 (en) | Category based space allocation for multiple storage devices | |
WO2012095771A1 (en) | Sparse index table organization | |
JP6707797B2 (ja) | データベース管理システム及びデータベース管理方法 | |
US10242053B2 (en) | Computer and data read method | |
JP5858307B2 (ja) | データベース管理システム、計算機、データベース管理方法 | |
US20170270149A1 (en) | Database systems with re-ordered replicas and methods of accessing and backing up databases | |
Goswami et al. | Graphmap: Scalable iterative graph processing using nosql | |
JP5978297B2 (ja) | 管理システム及び管理方法 | |
JP6974706B2 (ja) | 情報処理装置、ストレージシステムおよびプログラム | |
JP6108418B2 (ja) | データベース管理システム、計算機、データベース管理方法 | |
Bin et al. | An efficient distributed B-tree index method in cloud computing | |
Mazumdar et al. | An index scheme for fast data stream to distributed append-only store | |
JP7458610B2 (ja) | データベースシステム、及びクエリ実行方法 | |
US11868352B2 (en) | Systems and methods for spilling data for hash joins | |
Wang et al. | KT-store: a key-order and write-order hybrid key-value store with high write and range-query performance | |
Herodotou | Towards a distributed multi-tier file system for cluster computing | |
Chen et al. | CSMqGraph: Coarse-Grained and Multi-external-storage Multi-queue I/O Management for Graph Computing | |
Fukatani et al. | Lightweight Dynamic Redundancy Control with Adaptive Encoding for Server-based Storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12878549 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014519763 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14404963 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12878549 Country of ref document: EP Kind code of ref document: A1 |