US20220283875A1 - Storage system, resource control method, and recording medium - Google Patents
- Publication number
- US20220283875A1 (application No. US 17/460,670)
- Authority
- US
- United States
- Prior art keywords
- physical server
- load
- resources
- physical
- virtual machines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/06—Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
- G06F16/1824—Distributed file systems implemented using Network-attached Storage [NAS] architecture
- G06F16/183—Provision of network file services by network file servers, e.g. by using NFS, CIFS
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/188—Virtual file systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45579—I/O management, e.g. providing access to device drivers or storage
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/45595—Network integration; Enabling network access in virtual machine instances
Definitions
- The present disclosure relates to a technology for controlling the allocation of resources to virtual machines generated in a physical server, in a storage system including one or more physical servers.
- One known related technology is an HCI (Hyper-Converged Infrastructure) system, which includes one or more physical servers and virtualizes the infrastructure functions of storage and network devices.
- In such a system, virtual machines (VMs) such as a hypervisor, a block storage VM, and a file storage VM run on each physical server.
- U.S. Patent Application Publication No. 2018/0157522 discloses a technology for performing scale-up, scale-down, scale-in, and scale-out of a virtual file server based on a throughput or the like of the virtual file server.
- the present disclosure has been devised in view of the foregoing circumstances and an objective of the present disclosure is to provide a technology for appropriately allocating resources of physical servers.
- a storage system includes one or more physical servers.
- One or more first virtual machines, to which resources of the physical servers are allocated and which perform processing related to a protocol of a file storage over a network with a client, and one or more second virtual machines, which perform processing related to management of files in the file storage, are formed in the physical servers.
- the physical server acquires load information regarding loads of the first and second virtual machines and controls the allocation of the resources of the physical servers to the first and second virtual machines based on the load information.
- FIG. 1 is a diagram illustrating an overall configuration of a computer system according to an embodiment
- FIG. 2 is a diagram illustrating an overall configuration of a storage system according to an embodiment
- FIG. 3 is a diagram illustrating a configuration of a physical server according to an embodiment
- FIG. 4 is an explanatory diagram illustrating a configuration of a filesystem VM management table according to an embodiment
- FIG. 5 is an explanatory diagram illustrating a configuration of a protocol VM management table according to an embodiment
- FIG. 6 is an explanatory diagram illustrating a configuration of a physical server management table according to an embodiment
- FIG. 7 is an explanatory diagram illustrating a configuration of a threshold management table according to an embodiment
- FIG. 8 is a flowchart illustrating load registration processing according to an embodiment
- FIG. 9 is a flowchart illustrating load balancing processing according to an embodiment.
- FIG. 10 is a flowchart illustrating resource control processing according to an embodiment.
- In the following description, information is sometimes described with the expression "AAA table," but the information may be expressed with any data structure. That is, to indicate that the information does not depend on a data structure, an "AAA table" can also be referred to as "AAA information."
- When processing is described with a program as the subject, the operating entity for the processing may be considered to be a processor (or an apparatus or a system including the processor).
- the processor may include a hardware circuit that performs some or all of the processing.
- the program may be installed from a program source to an apparatus such as a computer.
- The program source may be, for example, a program distribution server or a recording medium (for example, a portable recording medium) readable by a computer.
- two or more programs may be realized as one program or one program may be realized as two or more programs.
- FIG. 1 is a diagram illustrating an overall configuration of a computer system according to an embodiment.
- a computer system 1 includes one or more clients 10 , a management computer 20 , and a storage system 2 .
- the storage system 2 includes one or more physical servers 100 .
- the client 10 performs various types of processing using data (for example, files) stored in the storage system 2 .
- the management computer 20 performs processing of managing the storage system 2 .
- the physical server 100 includes, as a VM (virtual machine), a hypervisor 110 , one or more protocol VMs (first virtual machine) 120 , a filesystem VM (second virtual machine) 130 , and a blockstorage VM 140 .
- the frontend network 30 is, for example, a communication network such as a wired LAN (Local Area Network), a wireless LAN, or a WAN (Wide Area Network).
- the management computer 20 and the blockstorage VM 140 of the physical server 100 are coupled via a management network 40 .
- the management network 40 is, for example, a communication network such as a wired LAN, a wireless LAN, or a WAN.
- the inter-node network 50 is, for example, a communication network such as a wired LAN, a wireless LAN, or a WAN.
- The frontend network 30, the management network 40, and the inter-node network 50 are assumed to be different networks, but some or all of them may be the same network.
- FIG. 2 is a diagram illustrating an overall configuration of a storage system according to an embodiment.
- the storage system 2 includes the plurality of physical servers 100 that configure a cluster of a distributed file system.
- the storage system 2 includes physical servers 100 A, 100 B, and 100 C.
- the physical server 100 A is a physical server operating as a master primary (main master) that generally controls the physical servers configuring the cluster of the distributed file system.
- one physical server 100 operating as a master primary is provided.
- the physical server 100 A includes the hypervisor 110 , one or more protocol VMs 120 , the filesystem VM 130 , and the blockstorage VM 140 .
- the hypervisor 110 generates or deletes a VM or controls allocation of resources to the VMs.
- the hypervisor 110 executes a load balancing program 111 and a resource control program 112 .
- the load balancing program 111 is a program for performing processing of balancing loads in accordance with a difference in the number of connected clients 10 between the protocol VMs 120 .
- the resource control program 112 is a program for performing processing of controlling allocation of resources to the protocol VMs 120 and the filesystem VM 130 .
- the protocol VM 120 performs some of the functions in the file storage, for example, functions in conformity with a protocol of a file system (for example, an NFS (Network File System) and/or a CIFS (Common Internet File System)) with the client 10 via the frontend network 30 .
- the protocol VM 120 performs a load registration program 121 .
- the load registration program 121 is a program that acquires information regarding a load on an own-VM (here, the protocol VM 120 ) and performs processing of registering the information in a database 141 to be described below.
- The filesystem VM 130 performs some functions in the file storage other than the functions of the protocol VM 120, for example, file management functions (conversion between file I/O and block I/O).
- the filesystem VM 130 performs the load registration program 121 .
- The load registration program 121 is a program for performing processing of acquiring information regarding a load on an own-VM (here, the filesystem VM 130) and registering the information in the database 141 to be described below.
- the protocol VM 120 and the filesystem VM 130 provide functions necessary for the file storage.
- the blockstorage VM 140 functions as a block storage that stores data in units of blocks in the storage device 154 to be described below and manages the data.
- the blockstorage VM 140 includes the database 141 .
- the database 141 stores various kinds of information.
- the database 141 can be read and written by each physical server 100 of the storage system 2 .
- the database 141 stores a filesystem VM management table 161 , a protocol VM management table 162 , a physical server management table 163 , and a threshold management table 164 to be described below.
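- The four tables held in the database 141 can be pictured as simple record types. The following is a minimal sketch for illustration only; the field names paraphrase the items described below for FIGS. 4 and 5 and are not taken from any actual implementation:

```python
from dataclasses import dataclass

@dataclass
class FilesystemVMEntry:
    """One entry of the filesystem VM management table 161."""
    filesystem_vm_id: str    # 161a: identifier of the filesystem VM 130
    physical_server_id: str  # 161b: server where the VM is generated
    allocated_cpus: int      # 161c: CPUs 152 allocated to the VM
    allocated_memory: int    # 161d: bytes of memory 155 allocated
    cpu_usage_ratio: float   # 161e: updated by the filesystem VM itself
    memory_usage: int        # 161f: updated by the filesystem VM itself

@dataclass
class ProtocolVMEntry:
    """One entry of the protocol VM management table 162."""
    protocol_vm_id: str      # 162a
    physical_server_id: str  # 162b
    cifs_connections: int    # 162c: clients connected via CIFS
    nfs_connections: int     # 162d: clients connected via NFS
    allocated_cpus: int      # 162e
    allocated_memory: int    # 162f
    cpu_usage_ratio: float   # 162g
    memory_usage: int        # 162h

# The database 141 then holds one list of entries per table.
database = {
    "filesystem_vm": [FilesystemVMEntry("fs1", "srvA", 2, 4 << 30, 0.1, 1 << 30)],
    "protocol_vm": [ProtocolVMEntry("p1", "srvA", 10, 5, 2, 2 << 30, 0.4, 1 << 30)],
}
```

In this sketch the hypervisor would update the allocation fields while each VM updates its own usage fields, mirroring the update responsibilities described below.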
- the blockstorage VM 140 executes the load registration program 121 .
- the load registration program 121 is a program for performing processing of acquiring information regarding a load of an own-VM (here, the blockstorage VM 140 ) and registering the information in the database 141 .
- the physical server 100 B is a physical server operating as a master secondary (sub-master) that can operate as a master primary when a failure occurs in the physical server 100 A operating as the master primary in the distributed file system.
- two physical servers operating as the master secondary can be provided.
- the physical server 100 B includes the hypervisor 110 , one or more protocol VMs 120 , the filesystem VM 130 , and the blockstorage VM 140 .
- The hypervisor 110 of the physical server 100B executes the resource control program 112 but does not execute the load balancing program 111; the load balancing program 111 is executed once the physical server 100B takes over operation as the master primary.
- the database 141 of the blockstorage VM 140 of the physical server 100 B is a replica of the database 141 of the blockstorage VM 140 of the physical server 100 A.
- data of the database 141 of the physical server 100 A is copied at a predetermined timing by the blockstorage VM 140 of the physical server 100 A.
- the physical server 100 C is a physical server other than the physical server operating as the master (the master primary and the master secondary) in the distributed file system.
- the physical server 100 C includes the hypervisor 110 , one or more protocol VMs 120 , the filesystem VM 130 , and the blockstorage VM 140 .
- the blockstorage VM 140 of the physical server 100 C may not include the database 141 .
- FIG. 3 is a diagram illustrating a configuration of a physical server according to an embodiment.
- the physical server 100 ( 100 A, 100 B, and 100 C) is configured by, for example, a PC (Personal Computer) or a general-purpose server.
- the physical server 100 includes resources such as a communication I/F 151 , one or more CPUs (Central Processing Units) 152 , an input device 153 , a storage device 154 , a memory 155 , and a display device 156 .
- the communication I/F 151 is, for example, an interface such as a wired LAN card or a wireless LAN card and communicates with other apparatuses (for example, the clients 10 , the management computer 20 , and the other physical servers 100 ) via networks ( 30 , 40 , and 50 ).
- the CPU 152 performs various types of processing in accordance with programs stored in the memory 155 and/or the storage device 154 .
- The CPUs 152 are allocated to the VMs; the allocation to each VM may be performed in units of whole CPUs 152.
- The memory 155 is, for example, a RAM (Random Access Memory) and stores necessary information and the programs executed by the CPUs 152.
- the memory 155 is allocated to each VM for usage.
- the storage device 154 is, for example, a hard disk, a flash memory, or the like and stores a program executed by the CPU 152 , data used in the CPU 152 , files of user data used by the client 10 , and the like.
- the storage device 154 stores a program for realizing the hypervisor 110 (including, for example, the load balancing program 111 and the resource control program 112 ), a program for causing a VM generated by the hypervisor 110 to function as the protocol VM 120 (including, for example, the load registration program 121 ), a program for causing a VM generated by the hypervisor 110 to function as the filesystem VM 130 (including, for example, the load registration program 121 ), a program for causing a VM generated by the hypervisor 110 to function as the blockstorage VM 140 , and the like.
- the storage device 154 stores data managed in the database 141 of the blockstorage VM 140 .
- the input device 153 is, for example, a mouse, a keyboard, or the like and receives information input by a user.
- the display device 156 is, for example, a display, and displays and outputs various kinds of information.
- FIG. 4 is an explanatory diagram illustrating a configuration of the filesystem VM management table according to an embodiment.
- the filesystem VM management table 161 is a table in which information regarding the filesystem VM 130 in the storage system 2 is managed and stores entries for each filesystem VM 130 .
- the entry of the filesystem VM management table 161 includes items of a filesystem VM identifier 161 a, a physical server identifier 161 b, the number of allocated CPUs 161 c, an allocated memory size 161 d, a CPU usage ratio 161 e, and a memory usage amount 161 f.
- In the filesystem VM identifier 161a, an identifier that uniquely identifies the filesystem VM 130 corresponding to the entry (a filesystem VM identifier) is stored.
- In the physical server identifier 161b, an identifier that uniquely identifies the physical server 100 in which the filesystem VM 130 corresponding to the entry is generated (a physical server identifier) is stored.
- In the number of allocated CPUs 161c, the number of CPUs 152 allocated to the filesystem VM 130 corresponding to the entry is stored.
- In the allocated memory size 161d, the size of the memory 155 allocated to the filesystem VM 130 corresponding to the entry is stored.
- In the CPU usage ratio 161e, the usage ratio of the CPUs 152 allocated to the filesystem VM 130 corresponding to the entry is stored.
- In the memory usage amount 161f, the amount of the memory 155 allocated to the filesystem VM 130 corresponding to the entry that is in use (a memory usage amount) is stored.
- values of the filesystem VM identifier 161 a, the physical server identifier 161 b, the number of allocated CPUs 161 c, and the allocated memory size 161 d are updated and referred to by the hypervisor 110 .
- Values of the CPU usage ratio 161 e and the memory usage amount 161 f are updated by the filesystem VM 130 and are referred to by the hypervisor 110 .
- FIG. 5 is an explanatory diagram illustrating a configuration of a protocol VM management table according to an embodiment.
- the protocol VM management table 162 is a table in which information regarding the protocol VM 120 in the storage system 2 is managed and stores entries for each protocol VM 120 .
- the entry of the protocol VM management table 162 includes items of a protocol VM identifier 162 a, a physical server identifier 162 b, the number of CIFS connections 162 c, the number of NFS connections 162 d, the number of allocated CPUs 162 e, an allocated memory size 162 f, a CPU usage ratio 162 g, and a memory usage amount 162 h.
- In the protocol VM identifier 162a, an identifier that uniquely identifies the protocol VM 120 corresponding to the entry (a protocol VM identifier) is stored.
- In the physical server identifier 162b, an identifier that uniquely identifies the physical server 100 in which the protocol VM 120 corresponding to the entry is generated (a physical server identifier) is stored.
- In the number of CIFS connections 162c, the number of clients connected by CIFS to the protocol VM 120 corresponding to the entry is stored.
- In the number of NFS connections 162d, the number of clients connected by NFS to the protocol VM 120 corresponding to the entry is stored.
- In the number of allocated CPUs 162e, the number of CPUs 152 allocated to the protocol VM 120 corresponding to the entry is stored.
- In the allocated memory size 162f, the size of the memory 155 allocated to the protocol VM 120 corresponding to the entry is stored.
- In the CPU usage ratio 162g, the usage ratio of the CPUs 152 allocated to the protocol VM 120 corresponding to the entry is stored.
- In the memory usage amount 162h, the amount of the allocated memory 155 in use by the protocol VM 120 corresponding to the entry is stored.
- values of the protocol VM identifier 162 a, the physical server identifier 162 b, the number of allocated CPUs 162 e, and the allocated memory size 162 f are updated and referred to by the hypervisor 110 .
- Values of the number of CIFS connections 162c, the number of NFS connections 162d, the CPU usage ratio 162g, and the memory usage amount 162h are updated by the protocol VM 120 and are referred to by the hypervisor 110.
- FIG. 6 is an explanatory diagram illustrating a configuration of a physical server management table according to an embodiment.
- the physical server management table 163 is a table in which information regarding the physical server 100 in the storage system 2 is managed and stores entries for each physical server 100 .
- the entry of the physical server management table 163 includes items of a physical server identifier 163 a, the number of allocated CPUs 163 b, and an allocated memory size 163 c.
- In the physical server identifier 163a, an identifier that uniquely identifies the physical server 100 corresponding to the entry (a physical server identifier) is stored.
- In the number of allocated CPUs 163b, the number of CPUs 152 that can be allocated in the physical server 100 corresponding to the entry is stored.
- In the allocated memory size 163c, the size of the allocable memory 155 in the physical server 100 corresponding to the entry is stored.
- values of the physical server identifier 163 a, the number of allocated CPUs 163 b, and the allocated memory size 163 c are updated and referred to by the hypervisor 110 .
- FIG. 7 is an explanatory diagram illustrating a configuration of a threshold management table according to an embodiment.
- the threshold management table 164 is a table in which thresholds used for processing are managed and includes items of a user connection upper limit 164 a, a user connection lower limit 164 b, a scale-out upper limit 164 c, a scale-in lower limit 164 d, a scale-up upper limit 164 e, and a scale-down lower limit 164 f.
- In the user connection upper limit 164a, an upper limit of the number of user connections used by the load balancing program 111 to determine that the load of a protocol VM 120 is high (a user connection upper limit) is stored.
- In the user connection lower limit 164b, a lower limit of the number of user connections used by the load balancing program 111 to determine that the load of a protocol VM 120 is low (a user connection lower limit) is stored.
- In the scale-out upper limit 164c, an upper limit used by the resource control program 112 to determine that the load of a protocol VM 120 is high (a scale-out upper limit) is stored; above this limit, scale-out, that is, addition of a new protocol VM 120, is performed.
- In the scale-in lower limit 164d, a lower limit used by the resource control program 112 to determine that the load of a protocol VM 120 is low (a scale-in lower limit) is stored; below this limit, scale-in, that is, deletion of a protocol VM 120, is performed.
- In the scale-up upper limit 164e, an upper limit used by the resource control program 112 to determine that the load of the filesystem VM 130 is high (a scale-up upper limit) is stored; above this limit, scale-up, that is, addition of resources to the filesystem VM 130, is performed.
- In the scale-down lower limit 164f, a lower limit used by the resource control program 112 to determine that the load of the filesystem VM 130 is low (a scale-down lower limit) is stored; below this limit, scale-down, that is, release of resources from the filesystem VM 130, is performed.
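- The threshold management table 164 can be modeled as a flat mapping, with a helper that maps a protocol VM's load to a scaling decision. This is a hedged sketch under assumed threshold values; the key names and the use of the CPU usage ratio as the load metric are illustrative, not the patent's definition:

```python
# Illustrative stand-ins for the items 164a-164f of the threshold table.
thresholds = {
    "user_conn_upper": 100,   # 164a: protocol VM considered overloaded
    "user_conn_lower": 10,    # 164b: protocol VM considered underloaded
    "scale_out_upper": 0.8,   # 164c: add a new protocol VM above this
    "scale_in_lower": 0.2,    # 164d: delete a protocol VM below this
    "scale_up_upper": 0.8,    # 164e: add resources to the filesystem VM
    "scale_down_lower": 0.2,  # 164f: release filesystem VM resources
}

def classify_protocol_vm_load(cpu_ratio: float) -> str:
    """Map a protocol VM's load to the scaling action the table implies."""
    if cpu_ratio >= thresholds["scale_out_upper"]:
        return "scale-out"    # generate an additional protocol VM 120
    if cpu_ratio <= thresholds["scale_in_lower"]:
        return "scale-in"     # delete a protocol VM 120
    return "keep"
```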
- FIG. 8 is a flowchart illustrating the load registration processing according to an embodiment.
- the load registration processing is performed by allowing the CPUs 152 allocated to the protocol VM 120 , the filesystem VM 130 , and the blockstorage VM 140 to execute the load registration program 121 .
- the load registration program 121 checks various loads in the VM executing the load registration program 121 (step S 11 ).
- the load registration program 121 executed by the protocol VM 120 checks the number of CIFS connections, the number of NFS connections, the CPU usage ratio, and the memory usage amount in the protocol VM 120 .
- the load registration program 121 executed by the filesystem VM 130 checks the CPU usage ratio and the memory usage amount in the filesystem VM 130 .
- the load registration program 121 executed by the blockstorage VM 140 checks the CPU usage ratio and the memory usage amount in the blockstorage VM 140 .
- The load registration program 121 updates the corresponding items of the corresponding table in the database 141 of the physical server 100 operating as the master primary, based on the checked loads (step S12). Subsequently, the load registration program 121 determines whether a certain amount of time has passed (step S13). When it is determined that the certain amount of time has passed (YES in step S13), the processing proceeds to step S11.
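- The check-register-wait loop of FIG. 8 can be sketched as follows. The metric values and function names are stand-ins, not the patent's actual interfaces; the point is that each VM kind reports its own set of metrics (step S11) and writes them into the master's database (step S12):

```python
def sample_loads(vm_kind: str) -> dict:
    """Stand-in for step S11: each VM kind checks a different set of loads."""
    metrics = {"cpu_usage_ratio": 0.35, "memory_usage": 1 << 30}
    if vm_kind == "protocol":
        # Only the protocol VM additionally reports connection counts.
        metrics.update(cifs_connections=12, nfs_connections=3)
    return metrics

def load_registration_cycle(vm_kind: str, database: dict) -> None:
    """One pass of steps S11-S12: check loads, then update the table."""
    database.setdefault(vm_kind, {}).update(sample_loads(vm_kind))

db = {}
load_registration_cycle("protocol", db)
load_registration_cycle("filesystem", db)
# A real VM would now wait for a fixed interval (step S13) and repeat from S11.
```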
- FIG. 9 is a flowchart illustrating the load balancing processing according to an embodiment.
- the load balancing processing is performed by allowing the CPUs 152 allocated to the hypervisor 110 of the physical server 100 operating as the master primary to execute the load balancing program 111 .
- the load balancing program 111 acquires information regarding the loads of the VMs in the storage system 2 from the database 141 (step S 21 ).
- The load balancing program 111 determines whether there is a physical server in which the number of connected users (the number of CIFS connections + the number of NFS connections) exceeds the upper limit (the user connection upper limit 164a of the threshold management table 164); such a server is referred to as the physical server (1) (step S22).
- In step S22, when there is a physical server in which the number of connections exceeds the user connection upper limit (YES in step S22), the load balancing program 111 causes the processing to proceed to step S23. Conversely, when there is no such physical server (NO in step S22), the processing proceeds to step S26.
- In step S23, the load balancing program 111 determines whether there is a physical server in which the number of connected users is smaller than the lower limit (the user connection lower limit 164b of the threshold management table 164); such a server is referred to as the physical server (2).
- In step S23, when there is a physical server in which the number of connections is smaller than the user connection lower limit (YES in step S23), the load balancing program 111 causes the processing to proceed to step S24. Conversely, when there is no such physical server (NO in step S23), the processing proceeds to step S26.
- In step S24, the load balancing program 111 balances the loads from the physical server (1) to the physical server (2). Specifically, the load balancing program 111 fails over some of the processing for the connected users from the protocol VM 120 of the physical server (1) to the protocol VM 120 of the physical server (2).
- the load balancing program 111 updates the values of the corresponding table in the database 141 based on the result of the load balancing (step S 25 ).
- In step S26, the load balancing program 111 instructs the hypervisor 110 of each physical server 100 to execute the resource control program 112.
- In step S27, the load balancing program 111 determines whether a certain amount of time has passed. When it is determined that the certain amount of time has passed, the processing proceeds to step S21.
- Through this processing, the number of connected users can be balanced between the physical servers 100, and thus the load of each physical server 100 can be balanced.
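- Steps S22 through S24 above amount to pairing an over-limit server with an under-limit server and moving connected users between their protocol VMs. A simplified sketch, assuming connection counts can simply be split evenly (the real processing fails over per-user sessions, which this abstraction hides):

```python
def balance(servers: dict, upper: int, lower: int) -> dict:
    """servers maps a server id to its connected-user count
    (CIFS connections + NFS connections)."""
    over = [s for s, n in servers.items() if n > upper]    # step S22: server (1)
    under = [s for s, n in servers.items() if n < lower]   # step S23: server (2)
    if over and under:                                     # step S24: fail over
        src, dst = over[0], under[0]
        moved = (servers[src] - servers[dst]) // 2         # even out the pair
        servers[src] -= moved
        servers[dst] += moved
    return servers

servers = {"srvA": 120, "srvB": 4, "srvC": 50}
balance(servers, upper=100, lower=10)   # srvA and srvB end up even
```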
- FIG. 10 is a flowchart illustrating the resource control processing according to an embodiment.
- the resource control processing is performed by allowing the CPUs 152 allocated to the hypervisor 110 of each physical server 100 to execute the resource control program 112 .
- the resource control program 112 acquires information regarding the load of each VM in the physical server 100 in which there is the hypervisor 110 executing the resource control program 112 (in the description of this processing, referred to as an own-physical server) from the database 141 (step S 31 ).
- The resource control program 112 determines whether there is a protocol VM 120 with a high load in the own-physical server, that is, a protocol VM 120 with a load equal to or larger than a predetermined load (step S32). For example, the resource control program 112 may determine that there is a protocol VM 120 with a high load when any protocol VM 120 satisfies at least one of the following: the memory usage amount exceeds a predetermined threshold, the CPU usage ratio exceeds a predetermined threshold, or the sum of the numbers of connections (the number of CIFS connections + the number of NFS connections) exceeds a predetermined threshold.
- In step S32, when it is determined that there is a protocol VM 120 with a high load (YES in step S32), the resource control program 112 causes the processing to proceed to step S33. Conversely, when it is determined that there is no protocol VM 120 with a high load (NO in step S32), the processing proceeds to step S42.
- In step S33, the resource control program 112 determines whether there are free resources in the own-physical server.
- Whether there are free resources can be determined, for example, by checking whether the number of allocated CPUs and the allocated memory size in the entry corresponding to the own-physical server in the physical server management table 163 differ from the totals of the CPUs and memory allocated to all the VMs on the own-physical server.
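The check above is a subtraction between the server's entry in the physical server management table 163 and the sum over its VMs. A sketch under the assumption that both are available as plain dictionaries (field names are illustrative):

```python
# Illustrative free-resource check used in steps S33 and S43.
def free_resources(server_entry, vm_entries):
    """Free CPUs/memory = the server's allocation minus the sum over its VMs."""
    used_cpus = sum(vm["cpus"] for vm in vm_entries)
    used_mem_gb = sum(vm["memory_gb"] for vm in vm_entries)
    return (server_entry["cpus"] - used_cpus,
            server_entry["memory_gb"] - used_mem_gb)

def has_free_resources(server_entry, vm_entries):
    """There are free resources when either total differs from (exceeds) the VM sum."""
    free_cpus, free_mem_gb = free_resources(server_entry, vm_entries)
    return free_cpus > 0 or free_mem_gb > 0
```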
- When it is determined that there are free resources in the own-physical server (YES in step S33), the resource control program 112 generates a new protocol VM 120 to which the free resources of the own-physical server are allocated (step S34) and scales out the distributed file system by inserting the generated protocol VM 120 into the cluster of the distributed file system in the storage system 2 (step S35). Thus, the efficiency of the processing performed by the protocol VMs 120 can be improved. Subsequently, the resource control program 112 causes the processing to proceed to step S36.
- In step S36, the resource control program 112 determines whether a certain amount of time has passed. When it is determined that the certain amount of time has passed (YES in step S36), the processing proceeds to step S31.
- Conversely, when it is determined that there are no free resources in the own-physical server (NO in step S33), the resource control program 112 determines whether the load of the filesystem VM 130 on the own-physical server is low (step S37). For example, when the memory usage amount of the filesystem VM 130 is equal to or smaller than a predetermined threshold and the CPU usage ratio is equal to or smaller than a predetermined threshold, the resource control program 112 may determine that the load of the filesystem VM 130 is low.
- When it is determined that the load of the filesystem VM 130 is low (YES in step S37), the resource control program 112 releases (scales down) some of the resources allocated to the filesystem VM 130 (step S38), generates a new protocol VM 120 to which the released resources are allocated (step S39), and scales out the distributed file system by inserting the generated protocol VM 120 into the cluster of the distributed file system in the storage system 2 (step S40).
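Steps S38 and S39 amount to shrinking the filesystem VM and standing up a new protocol VM from the released resources. A sketch; the one-CPU / 1-GB floor kept for the filesystem VM and the dictionary layout are assumptions of this sketch, not something the patent specifies:

```python
# Illustrative sketch of steps S38-S39: scale down the filesystem VM and
# describe a new protocol VM built from the released resources.
def borrow_for_new_protocol_vm(filesystem_vm, cpus, memory_gb):
    """Release resources from the filesystem VM (S38) and return the spec of a
    new protocol VM built from them (S39); None if the VM cannot spare them."""
    if (filesystem_vm["cpus"] - cpus < 1
            or filesystem_vm["memory_gb"] - memory_gb < 1):
        return None  # keep an assumed minimum for the filesystem VM
    filesystem_vm["cpus"] -= cpus
    filesystem_vm["memory_gb"] -= memory_gb
    # The caller would then insert this VM into the cluster (step S40).
    return {"type": "protocol", "cpus": cpus, "memory_gb": memory_gb}
```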
- Subsequently, the resource control program 112 causes the processing to proceed to step S36.
- In step S37, when it is determined that the load of the filesystem VM 130 on the own-physical server is not low (NO in step S37), the resource control program 112 issues an alert indicating that the performance of the own-physical server has reached its upper limit (for example, by notifying the management computer 20) (step S41). Then, the processing proceeds to step S36.
- In step S42, the resource control program 112 determines whether the load of the filesystem VM 130 on the own-physical server is high, that is, whether the load of the filesystem VM 130 is equal to or larger than a predetermined load. For example, when the memory usage amount of the filesystem VM 130 exceeds a predetermined threshold or when the CPU usage ratio exceeds a predetermined threshold, the resource control program 112 may determine that the load of the filesystem VM 130 is high.
- In step S42, when it is determined that the load of the filesystem VM 130 is high (YES in step S42), the resource control program 112 causes the processing to proceed to step S43. Conversely, when it is determined that the load of the filesystem VM 130 is not high (NO in step S42), the processing proceeds to step S36.
- In step S43, the resource control program 112 determines whether there are free resources in the own-physical server.
- As in step S33, whether there are free resources can be determined by checking whether the number of allocated CPUs and the allocated memory size in the entry corresponding to the own-physical server in the physical server management table 163 differ from the totals of the CPUs and memory allocated to all the VMs on the own-physical server.
- When it is determined that there are free resources (YES in step S43), the resource control program 112 scales up the distributed file system by allocating the free resources of the own-physical server to the filesystem VM 130 (step S44). Thus, the efficiency of the processing performed by the filesystem VM 130 can be improved. Subsequently, the resource control program 112 causes the processing to proceed to step S36.
- Conversely, when it is determined that there are no free resources (NO in step S43), the resource control program 112 determines whether the load of the protocol VMs 120 on the own-physical server is low (step S45). For example, when the memory usage amount of a protocol VM 120 is equal to or smaller than the predetermined threshold, the CPU usage ratio is equal to or smaller than the predetermined threshold, and the sum of the numbers of connected users (the number of CIFS connections + the number of NFS connections) is equal to or smaller than the predetermined threshold, the resource control program 112 may determine that the load of the protocol VM 120 is low.
- When it is determined that the load of the protocol VMs 120 is low (YES in step S45), the resource control program 112 adjusts the loads between the protocol VMs 120 in the own-physical server so that the load of at least one protocol VM 120 becomes 0 (step S46) and excludes (scales in) the protocol VM 120 whose load has become 0 from the cluster of the distributed file system (step S47). Subsequently, the resource control program 112 deletes the protocol VM 120 excluded from the cluster from the VMs managed by the hypervisor 110 (step S48). Thus, the resources allocated to that protocol VM 120 are released.
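The drain-and-remove sequence of steps S46 to S48 can be sketched as follows. The connection-by-connection redistribution policy and the dictionary layout are assumptions of this sketch (it also assumes at least two protocol VMs exist), not details taken from the patent:

```python
# Illustrative sketch of steps S46-S48: drive one protocol VM's load to 0,
# exclude it from the cluster, and release its resources.
def drain_and_remove(protocol_vms):
    """Shift the least-loaded protocol VM's connections onto the remaining VMs
    (S46), drop it from the cluster (S47), and return its freed resources (S48).
    Assumes len(protocol_vms) >= 2."""
    victim = min(protocol_vms, key=lambda vm: vm["connections"])
    survivors = [vm for vm in protocol_vms if vm is not victim]
    # Redistribute the victim's connections onto the least-loaded survivor,
    # one at a time, until the victim's load is 0.
    for _ in range(victim["connections"]):
        min(survivors, key=lambda vm: vm["connections"])["connections"] += 1
    victim["connections"] = 0
    released = {"cpus": victim["cpus"], "memory_gb": victim["memory_gb"]}
    return survivors, released
```

The total number of connections is preserved; only the VM count shrinks, which is what frees the victim's CPUs and memory for step S49.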
- Then, the resource control program 112 scales up the distributed file system by allocating the released resources to the filesystem VM 130 (step S49). Thus, the efficiency of the processing performed by the filesystem VM 130 can be improved. Subsequently, the resource control program 112 causes the processing to proceed to step S36.
- In step S45, when it is determined that the load of the protocol VMs 120 on the own-physical server is not low (NO in step S45), the resource control program 112 issues an alert indicating that the performance of the own-physical server has reached its upper limit (for example, by notifying the management computer 20) (step S50). The processing then proceeds to step S36.
- In the foregoing embodiment, control is not performed to adjust the number of filesystem VMs 130, but the present invention is not limited thereto. For example, the resource amount allocated to the filesystem VMs 130 in the physical server 100 may be adjusted by adjusting the number of filesystem VMs 130.
- In the foregoing embodiment, the resources are allocated using the number of CPUs 152 as units, but the present invention is not limited thereto. For example, the resources may be allocated in units of CPU cores of the CPUs 152, or in units of processing times of the CPUs 152 or the CPU cores.
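The three allocation granularities just mentioned (whole CPUs 152, CPU cores, or processing time) can be compared on a common scale. In this sketch, `cores_per_cpu` and the one-second scheduling period are illustrative assumptions:

```python
# Illustrative conversion between the three allocation granularities
# (whole CPUs, CPU cores, or CPU/core processing time).
def to_core_milliseconds(amount, unit, cores_per_cpu=8, period_ms=1000):
    """Normalize an allocation to core-milliseconds per scheduling period."""
    if unit == "cpus":
        return amount * cores_per_cpu * period_ms
    if unit == "cores":
        return amount * period_ms
    if unit == "core_time_ms":
        return amount
    raise ValueError(f"unknown unit: {unit}")
```

Under these assumptions, 2 whole CPUs, 16 cores, and 16,000 core-milliseconds per period all describe the same allocation.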
- Some or all of the processing performed by the CPUs may instead be performed by a hardware circuit. The program according to the foregoing embodiments may be installed from a program source, which may be a program distribution server or a storage medium (for example, a portable storage medium).
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-032991 | 2021-03-02 | ||
JP2021032991A JP2022133993A | 2021-03-02 | 2021-03-02 | Storage system, resource control method, and resource control program |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220283875A1 true US20220283875A1 (en) | 2022-09-08 |
Family
ID=83116165
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/460,670 Pending US20220283875A1 (en) | 2021-03-02 | 2021-08-30 | Storage system, resource control method, and recording medium |
Country Status (2)
Country | Link |
---|---|
US (1) | US20220283875A1 (ja) |
JP (1) | JP2022133993A (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
US20230004410A1 * | 2021-03-16 | 2023-01-05 | Nerdio, Inc. | Systems and methods of auto-scaling a virtual desktop environment
US11748125B2 * | 2021-03-16 | 2023-09-05 | Nerdio, Inc. | Systems and methods of auto-scaling a virtual desktop environment
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160239331A1 (en) * | 2015-02-17 | 2016-08-18 | Fujitsu Limited | Computer-readable recording medium storing execution information notification program, information processing apparatus, and information processing system |
US20170019345A1 (en) * | 2014-08-27 | 2017-01-19 | Hitachi ,Ltd. | Multi-tenant resource coordination method |
US20170235760A1 (en) * | 2016-02-12 | 2017-08-17 | Nutanix, Inc. | Virtualized file server |
US20220222097A1 (en) * | 2021-01-12 | 2022-07-14 | Citrix Systems, Inc. | Systems and methods to improve application performance |
Also Published As
Publication number | Publication date |
---|---|
JP2022133993A (ja) | 2022-09-14 |
Legal Events
Date | Code | Title | Description
---|---|---|---
2021-08-30 | AS | Assignment | Owner name: HITACHI, LTD., JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: UKAWA, HITOSHI; REEL/FRAME: 057326/0373. Effective date: 20210726
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
 | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER