CN103188159A - Management method for hardware performance and cloud computing system - Google Patents

Management method for hardware performance and cloud computing system

Info

Publication number
CN103188159A
CN103188159A
Authority
CN
China
Prior art keywords
node
bottleneck
pool
resource
switching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104464251A
Other languages
Chinese (zh)
Other versions
CN103188159B (en)
Inventor
卢盈志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Corp
Original Assignee
Inventec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Corp filed Critical Inventec Corp
Priority to CN201110446425.1A priority Critical patent/CN103188159B/en
Publication of CN103188159A publication Critical patent/CN103188159A/en
Application granted granted Critical
Publication of CN103188159B publication Critical patent/CN103188159B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Debugging And Monitoring (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to a management method for hardware performance and a cloud computing system. The cloud computing system comprises a plurality of node devices, arranged into a plurality of node resource pools, and a management node that carries out the management method. The management method for hardware performance comprises the following steps: detecting the loads of the node resource pools to identify a bottleneck and the bottleneck pool in which it occurs; evaluating and selecting at least one switching node from the node devices of the node resource pools other than the bottleneck pool; and changing the switching node so that it is reassigned from its original node resource pool to the bottleneck pool.

Description

Management method for hardware performance and cloud computing system
Technical field
The present invention relates to performance management techniques for cloud computing, and more particularly to a management method for hardware performance and a cloud computing system.
Background technology
Cloud computing technology combines a large number of servers (also called nodes) over the Internet into an integrated computer with high-speed computation and massive storage capacity. Its emphasis is that, when local resources are limited, the network is used to obtain remote computing resources, storage resources, or services. Through techniques such as virtualization and automation, cloud computing shares resources among these nodes or divides work between them, and the services are operated through web pages accessed over the network with tools such as browsers, so that various computations and tasks can be carried out.
These numerous nodes together form a server cluster (server group). Because the number of nodes is huge, how a server cluster can automatically resolve a bottleneck when one of its node resources develops one and degrades the overall performance of the system, and thereby provide higher performance, has become an important topic for many cloud computing systems today.
Summary of the invention
The invention provides a management method for hardware performance and a cloud computing system, which detect whether a bottleneck occurs in any of the node resource pools and automatically adjust and redistribute the roles of the servers, so that bottlenecks can be resolved effectively and automatically and the cloud computing system can deliver higher performance.
The present invention proposes a management method for hardware performance that is applicable to a cloud computing system. The cloud computing system comprises a plurality of node devices, and these node devices are arranged into a plurality of node resource pools. The management method comprises the following steps. The loads of the node resource pools are detected in order to determine a bottleneck and the bottleneck pool in which it occurs, the bottleneck pool being one of the node resource pools. At least one switching node is evaluated and selected from the node devices of the node resource pools other than the bottleneck pool. The switching node is then changed so that it is reassigned from its original node resource pool to the bottleneck pool.
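The three steps above can be pictured with a short, self-contained Python sketch; the pool names, workloads, threshold value, and the even-redistribution assumption below are illustrative only and are not taken from the patent.

```python
# Illustrative sketch of the claimed three-step method; all names, numbers,
# and the even-redistribution assumption are hypothetical.

BOTTLENECK_THRESHOLD = 0.80          # assumed load ceiling per pool

# pool name -> (total work in the pool, number of node devices)
pools = {"service": (1.0, 4), "compute": (1.5, 6), "storage": (2.7, 3)}

def pool_load(work, nodes):
    # average load per node, assuming work spreads evenly across the pool
    return work / nodes

# Step 1: detect the load of each pool and identify a bottleneck pool, if any.
bottleneck = next((p for p, (w, n) in pools.items()
                   if pool_load(w, n) >= BOTTLENECK_THRESHOLD), None)

if bottleneck is not None:
    # Step 2: evaluate and select a switching node from another pool,
    # here simply from the most lightly loaded donor pool.
    donor = min((p for p in pools if p != bottleneck),
                key=lambda p: pool_load(*pools[p]))

    # Step 3: reassign one node device from the donor pool to the bottleneck pool.
    dw, dn = pools[donor]
    bw, bn = pools[bottleneck]
    pools[donor] = (dw, dn - 1)
    pools[bottleneck] = (bw, bn + 1)
    print(f"switching node moved: {donor} -> {bottleneck}; "
          f"new loads: {pool_load(*pools[donor]):.2f}, "
          f"{pool_load(*pools[bottleneck]):.2f}")
```

Running the sketch moves one node from the service pool into the storage pool, whose average load drops from 0.90 to about 0.68.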
In one embodiment of the invention, the management method further comprises the following steps. A normal threshold and a bottleneck threshold are set for each node resource pool. When the load of one of the node resource pools is below its normal threshold, that node resource pool is regarded as being in a normal state. When the load of one of the node resource pools is above its bottleneck threshold, the bottleneck occurs in that node resource pool and it becomes the bottleneck pool.
In one embodiment of the invention, evaluating the switching node comprises the following step. It is estimated, according to the bottleneck thresholds, that after the switching node is reassigned from its original node resource pool to the bottleneck pool, the load of the original node resource pool and the load of the bottleneck pool will each remain below their corresponding bottleneck thresholds.
In one embodiment of the invention, changing the switching node comprises the following steps. A node association database is queried to obtain the node-related data of the switching node. The node-related data of the switching node is adjusted so that the switching node is reassigned from its original node resource pool to the bottleneck pool. The switching node is isolated from the cloud computing system. The switching node is reconfigured according to the bottleneck pool. Finally, the switching node is added back into the cloud computing system.
In one embodiment of the invention, isolating the switching node comprises the following steps. The virtual machines running on the switching node are migrated from the switching node to other node devices of its original node resource pool. The service programs executed by the switching node are then shut down.
In one embodiment of the invention, isolating the switching node further comprises setting the node association database so as to isolate the node-related data of the switching node.
In one embodiment of the invention, the load of each node resource pool comprises the computation load of that node resource pool, its space load, or a combination thereof.
In one embodiment of the invention, the node resource pools comprise a service resource pool, a computing resource pool, a storage resource pool, or a combination thereof.
From another point of view, the present invention proposes a cloud computing system that comprises a plurality of node devices and a management node. The node devices are coupled to one another through a network and are arranged into a plurality of node resource pools. The management node, coupled to the node devices through the network, detects the loads of the node resource pools to determine a bottleneck and the bottleneck pool in which it occurs, the bottleneck pool being one of the node resource pools. The management node evaluates and selects at least one switching node from the node devices of the node resource pools other than the bottleneck pool, and changes the switching node so that it is reassigned from its original node resource pool to the bottleneck pool.
For the remaining implementation details of the cloud computing system, please refer to the description above; they are not repeated here.
Based on the above, the cloud computing system of the embodiments of the invention sets a different load limit for each node resource pool and monitors the operating condition of each pool. When a particular node resource pool develops a bottleneck and no spare node is available to help, the cloud computing system selects some nodes from the node resource pools that are operating normally without a bottleneck and places them into that particular node resource pool (in other words, it redistributes the role assignments of some of the nodes), thereby reducing the occurrence of bottlenecks. Therefore, by automatically adjusting and redistributing the roles of these servers, the cloud computing system can resolve bottlenecks effectively and automatically, improve its hardware operating efficiency, and provide higher performance.
In order to make the above features and advantages of the present invention more apparent, embodiments are described in detail below with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram of a cloud computing system according to an embodiment of the invention.
Fig. 2 is a flowchart of a management method for hardware performance according to an embodiment of the invention.
Fig. 3 is another schematic diagram of the cloud computing system according to an embodiment of the invention.
Description of main reference numerals
100: cloud computing system
110: service resource pool
112_1~112_i: service nodes
120: computing resource pool
122_1~122_j: computing nodes
130: storage resource pool
132_1~132_k: storage nodes
140: switch
150: bottleneck monitoring module
160: node selection module
170: data access module
180: node isolation module
190: node deployment module
195: node addition module
300: dotted arrow
DB: node association database
S210~S290: steps
Embodiment
Exemplary embodiments of the present invention are now described in detail with reference to the accompanying drawings. Wherever possible, the same reference numerals are used in the drawings and the description to refer to the same or similar parts, elements, or components.
Fig. 1 is a schematic diagram of a cloud computing system 100 according to an embodiment of the invention. For example, the embodiment of the invention provides a container data center offering Infrastructure as a Service (IaaS) as the cloud computing system 100. As shown in Fig. 1, the cloud computing system 100 of this embodiment may comprise at least one container. Each container includes a plurality of racks, each rack has a plurality of slots, and each slot can hold one or more servers (also called node devices). Since the containers have similar compositions, a single container is taken as an example in this embodiment for convenience of description.
Referring to Fig. 1, the cloud computing system 100 comprises a plurality of node devices, and these node devices are assigned to multiple node resource pools when the cloud operating system of the cloud computing system 100 is deployed. In other words, the node devices can be classified into three node types, corresponding to a service resource pool 110, a computing resource pool 120, and a storage resource pool 130, and the service resource pool 110 can be further subdivided according to its service functions. Accordingly, the node resource pools comprise the service resource pool 110, the computing resource pool 120, the storage resource pool 130, or a combination thereof.
In this embodiment, the service resource pool 110 comprises i node devices 112_1~112_i, the computing resource pool 120 comprises j node devices 122_1~122_j, and the storage resource pool 130 comprises k node devices 132_1~132_k, where i, j, and k are all non-negative integers. The node devices 112_1~112_i, 122_1~122_j, and 132_1~132_k are also referred to in this embodiment as service nodes 112_1~112_i, computing nodes 122_1~122_j, and storage nodes 132_1~132_k, respectively. All of these node devices are coupled to a layer-2 switch 140, so that they are connected over a local area network and can communicate and exchange information with one another. Those applying this embodiment may also couple the node devices through other kinds of networks, such as the Internet or a wireless network, which are not elaborated here.
The service resource pool 110 and the service nodes within it can be further categorized by service function, for example physical installer services, physical manager services, log handling services, virtual manager services, application programming interface (API) services, virtual resource provisioning services, database services, storage manager services, load balance services, and security services. The computing resource pool 120 and the computing nodes within it provide computation services. The storage resource pool 130 and the storage nodes within it provide storage services.
In other words, the service nodes 112_1~112_i mainly provide users with virtual machine (VM) services. These virtual machines execute in the computing resource pool 120 formed by the computing nodes 122_1~122_j, while the storage space required by the virtual machines is provided by the storage resource pool 130 formed by the storage nodes 132_1~132_k. Each of the service nodes 112_1~112_i can offer users a different service depending on the software it runs. By contrast, the computing nodes 122_1~122_j in the computing resource pool 120 and the storage nodes 132_1~132_k in the storage resource pool 130 each run similar software programs, so that they can easily be combined to carry out large computations or to store data.
The cloud computing system also comprises a management node for monitoring and adjusting the load condition of each node device. The management node may be one of the node devices described above, or a separate supervising device independent of the node devices; in this embodiment, the node device 112_2 located in the service resource pool 110 serves as the management node. The management node 112_2 comprises a bottleneck monitoring module 150, a node selection module 160, a data access module 170, a node isolation module 180, a node deployment module 190, and a node addition module 195; these functional modules are described in detail below. In addition, when the cloud operating system deploys each node device, it obtains the node-related data corresponding to that node device, consolidates these node-related data, and stores them in a node association database DB for the management node 112_2 to consult. In this embodiment the node association database DB resides in the service node 112_1, but the invention is not limited to this; in other embodiments the node association database DB may be placed in any node device.
Although some node devices are applied to a specific role in the cloud computing system 100 (for example, dedicated to a specific node resource pool), other node devices can support multiple cloud resources and are not limited to playing a specific role in the cloud computing system 100. For example, some node devices have far better computation performance than others but far poorer ability to provide services or store data; such node devices are assigned exclusively to the computing resource pool 120 as computing nodes. Many node devices, however, combine good computation performance with good service and data storage capabilities, and can therefore serve as service nodes, storage nodes, or computing nodes. In other words, such a node device is not restricted by its hardware design to a single specific role in the cloud computing system 100. Although such a versatile node device is assigned to a specific node resource pool when the cloud operating system is configured, under specific circumstances the management node 112_2 can also change the role of these node devices.
So-called " bottleneck (Bottleneck) ", namely be that usefulness load or space load in high in the clouds arithmetic system 100 each node resource ponds is overweight, and when standby (spare) node that does not have other can supply to support, can be called high in the clouds arithmetic system 100 and be in bottleneck this moment.For example, when the average service rate of the central processing unit (CPU) of each computing node 122_1~122_j was too high in the whole computational resource pond 120, be called this moment was usefulness bottleneck (Performance Bottleneck).Again for example, when the remaining storage area of storage node 122_1~122_k was soon not enough in the storage resources pond 130, be called this moment was space bottleneck (Space Bottleneck).
Accordingly, when a bottleneck in the cloud computing system 100 is detected and the entire container has no spare standby nodes left, the cloud computing system 100 of the embodiment of the invention selects some node devices from the node resource pools in which no bottleneck has occurred and changes their roles, so that the selected node devices become members of the node resource pool in which the bottleneck occurred, thereby eliminating the bottleneck of the whole cloud computing system 100. Of course, the embodiment of the invention must consider whether the node devices so changed can sustain the load of the node resource pool after the switch.
The management method for hardware performance is now described with reference to the cloud computing system 100 to which it applies. Fig. 2 is a flowchart of the management method for hardware performance according to an embodiment of the invention. Referring to Fig. 1 and Fig. 2 together, in step S210 the bottleneck monitoring module 150 in the management node 112_2 detects the loads of the node resource pools 110~130, and in step S220 the bottleneck monitoring module 150 determines whether a bottleneck has occurred and identifies the node resource pool in which it occurred. The node resource pool in which the bottleneck occurs is referred to herein as the bottleneck pool.
In this embodiment, the management node 112_2 sets a normal threshold and a bottleneck threshold for each of the node resource pools 110~130, and uses them to judge the current operating condition of each node resource pool. Specifically, the bottleneck monitoring module 150 of this embodiment detects the load of each node device, which comprises the computation load and the space load (i.e., the storage space in use) of that node device, and, with reference to the node association database DB, calculates the overall average load of each of the node resource pools 110~130. Accordingly, the load of a node resource pool comprises the average computation load of the node devices in that pool, their space load, or a combination thereof.
In this embodiment, the normal threshold for a performance bottleneck is set to 70%, and the bottleneck threshold for a performance bottleneck is set to 80%. That is to say, a node resource pool whose average CPU utilization is below 70% is in the normal state, and after one of its node devices undergoes a role change, the average CPU utilization of that original node resource pool must still remain below 70%. On the other hand, a node resource pool whose average CPU utilization exceeds 80% is at a bottleneck, and after a node device undergoes a role change, the average CPU utilization of the bottleneck pool after the change should be below 80% for the change to pass the evaluation.
In this embodiment, the normal threshold for a space bottleneck is set to 80%, and the bottleneck threshold for a space bottleneck is set to 90%. That is to say, a node resource pool whose ratio of used space to total storage space is below 80% is in the normal state, and after one of its node devices undergoes a role change, that ratio in the original node resource pool must still remain below 80%. When the ratio of used space to total storage space in a node resource pool exceeds 80%, and a node device then undergoes a role change, the ratio in the bottleneck pool after the change should be below 80% for the change to pass the evaluation.
After the load of each node resource pool is calculated, when the load of a node resource pool is below its corresponding normal threshold, the bottleneck monitoring module 150 judges that the node resource pool is in the normal state and no bottleneck has occurred. When the load of a node resource pool is above its corresponding normal threshold but below its corresponding bottleneck threshold, the bottleneck monitoring module 150 judges that the node resource pool is in a high-load state but has not yet reached the bottleneck described above. However, when the load of a node resource pool has reached or exceeded its corresponding bottleneck threshold, meaning that the pool is close to full load, the bottleneck monitoring module 150 judges that a bottleneck has occurred in that node resource pool. A bottleneck is, for example, a situation in which the performance of the computing resource pool 120 is insufficient to carry the current computation load, or the remaining storage space of the storage resource pool 130 has fallen below a preset spare amount.
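The three-way classification described above can be sketched as follows, using the 70%/80% performance thresholds and 80%/90% space thresholds of this embodiment; the measured loads and function names are hypothetical.

```python
# Classify each pool as "normal", "high-load", or "bottleneck" using the
# normal/bottleneck thresholds of this embodiment.

THRESHOLDS = {                      # pool load kind -> (normal, bottleneck)
    "performance": (0.70, 0.80),    # average CPU utilization of the pool
    "space":       (0.80, 0.90),    # used space / total storage space
}

def classify(load, kind):
    normal, bottleneck = THRESHOLDS[kind]
    if load < normal:
        return "normal"
    if load < bottleneck:
        return "high-load"          # above normal but not yet a bottleneck
    return "bottleneck"

# Hypothetical measured loads for the three pools of Fig. 1.
measured = {
    "service pool 110": (0.45, "performance"),
    "compute pool 120": (0.75, "performance"),
    "storage pool 130": (0.93, "space"),
}

for pool, (load, kind) in measured.items():
    print(pool, "->", classify(load, kind))
# The storage pool exceeds its 90% space threshold and becomes the bottleneck pool.
```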
For convenience of description, this embodiment assumes that a space bottleneck has occurred in the storage resource pool 130. Accordingly, once the bottleneck monitoring module 150 has determined that a bottleneck has occurred and has identified the node resource pool in which it occurred (namely the storage resource pool 130), the flow proceeds from step S220 to step S230, in which the node selection module 160 evaluates and selects at least one switching node from the node devices of the node resource pools other than the bottleneck pool. In other words, the node selection module 160 selects, from the node resource pools in which no bottleneck has occurred (for example the service resource pool 110 or the computing resource pool 120), node devices that can serve as storage nodes, and evaluates whether, after the role change, those node devices will indeed keep the cloud computing system 100 free of bottlenecks.
To prevent node devices in node resource pools that are already in the high-load state from being converted to new roles one after another, the node selection module 160 of this embodiment selects the node devices whose roles are to be changed only from the node resource pools that are in the normal state (that is, node resource pools whose loads are below their normal thresholds), and does not select from node resource pools that are in the high-load state. In addition, the node selection module 160 must estimate, according to the bottleneck thresholds, whether the switching node, after its role change (that is, after being reassigned from its original node resource pool to the bottleneck pool), will leave the loads of both the original node resource pool and the bottleneck pool below their corresponding bottleneck thresholds, so that the cloud computing system 100 can be expected to have no bottleneck at all after the role conversion. The node selection module 160 of this embodiment may directly calculate the average CPU load of the node devices in a given node resource pool, or directly calculate whether its used storage space is excessive, in order to judge whether that pool has reached or exceeded its bottleneck threshold.
For example, suppose the node selection module 160 selects the computing node 122_2, in the computing resource pool 120 that is in the normal state, as the switching node: the computing node 122_2 can serve as a storage node, and the node selection module 160 has evaluated that using it achieves the effect described above. The node selection module 160 therefore treats this node device as the switching node, and the following steps proceed.
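A minimal sketch of this evaluation, under the simplifying assumption that each pool's total work is redistributed evenly over its node count after the switch; the pool figures and field names are illustrative and not taken from the patent.

```python
# Check that moving one candidate node keeps BOTH pools below their bottleneck
# thresholds after the switch (simplified even-redistribution model).

def projected_load(total_work, node_count):
    return total_work / node_count if node_count else float("inf")

def acceptable_switch(donor, bneck):
    """donor/bneck: dicts with total work, node count, and thresholds."""
    # Candidates may only come from pools currently in the normal state.
    if projected_load(donor["work"], donor["nodes"]) >= donor["normal_thr"]:
        return False
    donor_after = projected_load(donor["work"], donor["nodes"] - 1)
    bneck_after = projected_load(bneck["work"], bneck["nodes"] + 1)
    return donor_after < donor["bneck_thr"] and bneck_after < bneck["bneck_thr"]

compute_pool = {"work": 1.5, "nodes": 6, "normal_thr": 0.70, "bneck_thr": 0.80}
storage_pool = {"work": 2.7, "nodes": 3, "normal_thr": 0.80, "bneck_thr": 0.90}

print(acceptable_switch(compute_pool, storage_pool))
# True: the compute pool rises to 1.5/5 = 0.30 and the storage pool falls to
# 2.7/4 = 0.675, both below their respective bottleneck thresholds.
```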
In step S240, the management node 112_2 changes the node-related data of the switching node 122_2 in the node association database DB, so that the switching node 122_2 is reassigned from its original node resource pool (the computing resource pool 120) to the bottleneck pool (the storage resource pool 130). The operation flow of step S240 can be subdivided into steps S250 to S290, and these steps are described one by one below with reference to the node-related data table in the node association database DB.
When the cloud operating system of the cloud computing system 100 configures the node devices, it records the node-related data of each node device in a node-related data table. The node-related data can be obtained in several ways. For example, when the basic input/output system (BIOS) of a node device runs its power-on self-test (POST) routine, it can dynamically obtain node-related data such as data about the central processing unit, memory, hard disks, and network cards; further node-related data (for example the product data of the node device, the BIOS information, and the node type) can be obtained from the SMBIOS data structures (types 0, 1, 2 and the OEM type) and from the MAC address stored in the network card EEPROM; finally, these data are sent to the baseboard management controller (BMC) of each node device via IPMI OEM commands. The BMC can also dynamically obtain BMC network card information, such as the MAC address, IP address, and bandwidth of the BMC network card, to further enrich the node-related data. Table (1) below serves as an example of the node association database DB and the node-related data it contains. The five node-related data records of Table (1) are taken, in order, from the service nodes 112_1 and 112_i, the computing nodes 122_1 and 122_2, and the storage node 132_1 of Fig. 1.
Table (1)
[Table (1) is reproduced as an image in the original publication.]
Table (1) comprises ten fields, recording for each node device the MAC address, IP address, and bandwidth of the baseboard management controller (BMC) network card, the MAC address, IP address, and bandwidth of the system network card, the processor information (model/clock speed), the memory information, the hard disk information, the node position, the node type, and the server type. The IP address of the system network card is obtained through network boot.
Taking the node-related data of the node device 112_1 as an example, the MAC address of its BMC network card is "00:A0:D1:EC:F8:B1", the IP address assigned to the BMC network card is "10.1.0.1", and the bandwidth of the BMC network card is 100 Mbps (bps = bits per second). The MAC address of the system network card of the node device 112_1 is "00:A0:D1:EA:34:E1", its IP address is "10.1.0.11", and its bandwidth is 1000 Mbps. The CPU model of the node device 112_1 is "Intel(R) Xeon(R) CPU E5540" with a clock speed of 2530 MHz. The node device 112_1 also comprises four memory modules, DIMM1~DIMM4, each with a capacity of 8 GB. In addition, the hard disk of the node device 112_1 sits in carrier number 1, its type is SAS (Serial Attached SCSI, where SCSI = Small Computer System Interface), its capacity is 1 TB, its rotational speed is 7200 RPM (revolutions per minute), and its cache capacity is 16 MB.
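Since Table (1) is reproduced only as an image, the following hypothetical Python record restates the row for node device 112_1 from the values given above; the field names, the service-node role values, and the unstated node position are assumptions.

```python
# Hypothetical reconstruction of one row of Table (1) (node device 112_1),
# based on the values spelled out in the paragraph above.

node_112_1 = {
    "bmc_mac":       "00:A0:D1:EC:F8:B1",
    "bmc_ip":        "10.1.0.1",
    "bmc_bandwidth": "100 Mbps",
    "sys_mac":       "00:A0:D1:EA:34:E1",
    "sys_ip":        "10.1.0.11",            # assigned via network boot
    "sys_bandwidth": "1000 Mbps",
    "cpu":           "Intel(R) Xeon(R) CPU E5540 @ 2530 MHz",
    "memory":        ["DIMM1 8GB", "DIMM2 8GB", "DIMM3 8GB", "DIMM4 8GB"],
    "disk":          {"carrier": 1, "type": "SAS", "capacity": "1 TB",
                      "rpm": 7200, "cache": "16 MB"},
    "node_position": None,                    # not stated in the text
    "node_type":     "service node",          # 112_1 belongs to the service pool
    "server_type":   "service node",
}
```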
Returning to Fig. 2 and referring also to Fig. 1, in step S250 the data access module 170 queries the node association database DB and obtains the node-related data of the switching node 122_2. That is, the data access module 170 obtains from the node association database DB node-related data such as that shown in Table (2) below.
Table (2)
[Table (2) is reproduced as an image in the original publication.]
In step S260, the data access module 170 adjusts the node-related data of the switching node 122_2 shown in Table (2) and writes the adjusted data back to the node association database DB stored in the service node 112_1. In this embodiment, the fields "node type" and "server type" of Table (2) are changed from the original "computing node" to "storage node" (as shown in Table (3) below), so that the switching node 122_2 is changed from a computing node of its original node resource pool (the computing resource pool 120) to a storage node of the bottleneck pool (the storage resource pool 130). In other embodiments, the data access module 170 may also, at this point, set the node association database DB to isolate the node-related data of the switching node 122_2, so that other node devices cannot access the switching node 122_2.
Table (3)
[Table (3) is reproduced as an image in the original publication.]
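The patent does not name a particular database technology; as one possible illustration of step S260, the sketch below rewrites the "node type" and "server type" fields of the switching node's record in an in-memory sqlite3 table whose schema and column names are assumed.

```python
# Minimal sketch of step S260: rewrite the "node type" and "server type"
# fields of switching node 122_2 in the node association database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE nodes (
    name TEXT PRIMARY KEY, node_type TEXT, server_type TEXT, isolated INTEGER)""")
db.execute("INSERT INTO nodes VALUES ('122_2', 'computing node', 'computing node', 0)")

# Reassign the node from its compute-pool role to a storage-pool role, and
# optionally mark it isolated so other node devices stop accessing it.
db.execute("""UPDATE nodes
              SET node_type = 'storage node', server_type = 'storage node',
                  isolated = 1
              WHERE name = '122_2'""")
db.commit()

print(db.execute("SELECT * FROM nodes WHERE name = '122_2'").fetchone())
# ('122_2', 'storage node', 'storage node', 1)
```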
Next, in step S270, the node isolation module 180 isolates the switching node 122_2 from the cloud computing system 100. Specifically, the node isolation module 180 carries out several procedures to isolate the switching node 122_2 from the cloud computing system 100. For example, the node isolation module 180 migrates the virtual machines (VMs) running on the switching node 122_2 to the other node devices 122_1~122_j of the computing resource pool 120. After the virtual machines have been migrated, the node isolation module 180 shuts down all service programs still executing on the switching node 122_2.
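A sketch of the isolation flow of step S270; migrate_vm and stop_service are hypothetical stand-ins for the platform's real virtual machine migration and service control interfaces, and the VM, service, and node names are illustrative.

```python
# Sketch of step S270: drain the switching node before its role change.

def migrate_vm(vm, target):
    print(f"live-migrating {vm} to {target}")      # placeholder for the real call

def stop_service(node, service):
    print(f"stopping {service} on {node}")          # placeholder for the real call

def isolate_switch_node(switch_node, vms, services, peer_nodes):
    # 1) Move every virtual machine to another node device of the original
    #    pool, spreading them round-robin over the remaining compute nodes.
    for i, vm in enumerate(vms):
        migrate_vm(vm, peer_nodes[i % len(peer_nodes)])
    # 2) After all VMs are gone, shut down the services the node still runs.
    for service in services:
        stop_service(switch_node, service)

isolate_switch_node("122_2", ["vm-a", "vm-b", "vm-c"],
                    ["compute-agent", "monitor-agent"],
                    peer_nodes=["122_1", "122_3"])
```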
In step S280, the node deployment module 190 reconfigures the switching node 122_2 according to the bottleneck pool (the storage resource pool 130). The node deployment module 190 redeploys the switching node 122_2 according to the adjusted node type/server type; that is, it installs on the switching node 122_2 the operating system required by the bottleneck pool (the storage resource pool 130), and after the operating system has been installed, it installs the service packages that every storage node must have, so that the switching node 122_2 meets the requirements of the storage resource pool 130.
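A sketch of the redeployment in step S280; the pool profile table, image names, package names, and the install_os/install_packages helpers are illustrative assumptions rather than the patent's actual tooling.

```python
# Sketch of step S280: reprovision the switching node for its new pool role.

POOL_PROFILES = {                                   # assumed per-pool profiles
    "storage": {"os_image": "storage-node-os.img",
                "packages": ["storage-service", "replication-agent"]},
    "compute": {"os_image": "compute-node-os.img",
                "packages": ["hypervisor", "compute-agent"]},
}

def install_os(node, image):
    print(f"installing {image} on {node}")           # placeholder for provisioning

def install_packages(node, packages):
    for pkg in packages:
        print(f"installing service package {pkg} on {node}")

def redeploy(node, target_pool):
    profile = POOL_PROFILES[target_pool]
    install_os(node, profile["os_image"])            # OS required by the new pool
    install_packages(node, profile["packages"])      # then the pool's service packages

redeploy("122_2", "storage")
```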
Finally, referring to Fig. 3, which is another schematic diagram of the cloud computing system 100 according to an embodiment of the invention, together with Fig. 2: in step S290 the node addition module 195 adds the switching node 122_2 back into the cloud computing system 100, where it is converted from the computing node 122_2 of its original node resource pool (the computing resource pool 120) into the storage node 132_x of the bottleneck pool (the storage resource pool 130), as indicated by the dotted arrow 300. At this point the data access module 170 may also set the node association database DB to re-enable the node-related data of the former switching node 122_2 (now the storage node 132_x of Fig. 3), so that other node devices can access the storage node 132_x.
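A small sketch of step S290, mirroring the record layout of the step S260 sketch above; the field names and the pool member list are illustrative.

```python
# Sketch of step S290: re-enable access to the former switching node and add
# it to the bottleneck pool's member list as storage node 132_x.

record = {"name": "122_2", "node_type": "storage node",
          "server_type": "storage node", "isolated": True}

record["isolated"] = False                  # other node devices may reach it again

storage_pool_members = ["132_1", "132_2", "132_k"]
storage_pool_members.append(record["name"])  # now serves as storage node 132_x
print(record, storage_pool_members)
```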
In summary, the cloud computing system of the embodiments of the invention sets a different load limit for each node resource pool and monitors the operating condition of each pool. When a particular node resource pool develops a bottleneck and no spare node is available to help, the cloud computing system selects some nodes from the node resource pools that are operating normally without a bottleneck and places them into that particular node resource pool (in other words, it redistributes the role assignments of some of the nodes), thereby reducing the occurrence of bottlenecks. Therefore, by automatically adjusting and redistributing the roles of these servers, the cloud computing system can resolve bottlenecks effectively and automatically, improve its hardware operating efficiency, and provide higher performance.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Any person skilled in the art may make minor changes and refinements without departing from the spirit and scope of the invention, and the scope of protection of the invention is defined by the appended claims.

Claims (10)

1. A management method for hardware performance, applicable to a cloud computing system, the cloud computing system comprising a plurality of node devices, the node devices being arranged into a plurality of node resource pools, characterized in that the management method comprises:
detecting the loads of the node resource pools to determine a bottleneck and a bottleneck pool in which the bottleneck occurs, wherein the bottleneck pool is one of the node resource pools;
evaluating and selecting at least one switching node from the node devices of the node resource pools other than the bottleneck pool; and
changing the at least one switching node so that the at least one switching node is reassigned from its original node resource pool to the bottleneck pool.
2. The management method according to claim 1, further comprising:
setting a normal threshold and a bottleneck threshold for each of the node resource pools;
wherein when the load of one of the node resource pools is below the corresponding normal threshold, that node resource pool is in a normal state; and
when the load of one of the node resource pools is above the corresponding bottleneck threshold, the bottleneck occurs in that node resource pool and it becomes the bottleneck pool.
3. The management method according to claim 2, wherein evaluating the at least one switching node comprises:
estimating, according to the bottleneck thresholds, that after the at least one switching node is reassigned from its original node resource pool to the bottleneck pool, the load of the original node resource pool and the load of the bottleneck pool are each less than their corresponding bottleneck thresholds.
4. The management method according to claim 1, wherein changing the at least one switching node comprises:
querying a node association database to obtain node-related data of the at least one switching node;
adjusting the node-related data of the at least one switching node so that the at least one switching node is reassigned from its original node resource pool to the bottleneck pool;
isolating the at least one switching node from the cloud computing system;
reconfiguring the at least one switching node according to the bottleneck pool; and
adding the at least one switching node back into the cloud computing system.
5. The management method according to claim 4, wherein isolating the at least one switching node comprises:
migrating a plurality of virtual machines on the at least one switching node from the at least one switching node to other node devices of its original node resource pool; and
shutting down a plurality of service programs executed by the at least one switching node.
6. The management method according to claim 5, wherein isolating the at least one switching node further comprises:
setting the node association database to isolate the node-related data of the at least one switching node.
7. The management method according to claim 1, wherein the load of each of the node resource pools comprises a computation load of that node resource pool, a space load thereof, or a combination thereof.
8. The management method according to claim 1, wherein the node resource pools comprise a service resource pool, a computing resource pool, a storage resource pool, or a combination thereof.
9. A cloud computing system, characterized by comprising:
a plurality of node devices, coupled to one another through a network and arranged into a plurality of node resource pools; and
a management node, coupled to the node devices through the network, detecting the loads of the node resource pools to determine a bottleneck and a bottleneck pool in which the bottleneck occurs, wherein the bottleneck pool is one of the node resource pools, the management node evaluating and selecting at least one switching node from the node devices of the node resource pools other than the bottleneck pool, and changing the at least one switching node so that the at least one switching node is reassigned from its original node resource pool to the bottleneck pool.
10. The cloud computing system according to claim 9, wherein the management node comprises:
a bottleneck monitoring module, which sets a normal threshold and a bottleneck threshold for each of the node resource pools, judges that one of the node resource pools is in a normal state when its load is below the corresponding normal threshold, and judges that the bottleneck occurs in one of the node resource pools and that it becomes the bottleneck pool when its load is above the corresponding bottleneck threshold;
a node selection module, which evaluates the at least one switching node according to the bottleneck thresholds, wherein after the at least one switching node is reassigned from its original node resource pool to the bottleneck pool, the load of the original node resource pool and the load of the bottleneck pool are each less than their corresponding bottleneck thresholds;
a data access module, which queries a node association database to obtain node-related data of the at least one switching node and adjusts the node-related data of the at least one switching node, so that the at least one switching node is reassigned from its original node resource pool to the bottleneck pool;
a node isolation module, which isolates the at least one switching node from the cloud computing system;
a node deployment module, which reconfigures the at least one switching node according to the bottleneck pool; and
a node addition module, which adds the at least one switching node back into the cloud computing system.
CN201110446425.1A 2011-12-28 2011-12-28 Management method for hardware performance and cloud computing system Active CN103188159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110446425.1A CN103188159B (en) 2011-12-28 2011-12-28 Management method for hardware performance and cloud computing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110446425.1A CN103188159B (en) 2011-12-28 2011-12-28 Management method for hardware performance and cloud computing system

Publications (2)

Publication Number Publication Date
CN103188159A true CN103188159A (en) 2013-07-03
CN103188159B CN103188159B (en) 2016-08-10

Family

ID=48679130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110446425.1A Active CN103188159B (en) 2011-12-28 2011-12-28 Management method for hardware performance and cloud computing system

Country Status (1)

Country Link
CN (1) CN103188159B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100250746A1 (en) * 2009-03-30 2010-09-30 Hitachi, Ltd. Information technology source migration
CN101582850A (en) * 2009-06-19 2009-11-18 优万科技(北京)有限公司 Method and system for realizing load balance
US20110191477A1 (en) * 2010-02-03 2011-08-04 Vmware, Inc. System and Method for Automatically Optimizing Capacity Between Server Clusters
CN102244685A (en) * 2011-08-11 2011-11-16 中国科学院软件研究所 Distributed type dynamic cache expanding method and system supporting load balancing

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618413A (en) * 2013-11-05 2015-05-13 英业达科技有限公司 Configuration method for cloud device
CN104618413B (en) * 2013-11-05 2018-09-11 英业达科技有限公司 High in the clouds device configuration method
CN107316190A (en) * 2016-04-26 2017-11-03 阿里巴巴集团控股有限公司 A kind of processing method and processing device of Internet resources transfer service
CN107277193A (en) * 2017-08-09 2017-10-20 郑州云海信息技术有限公司 A kind of baseboard management controller address management method, device and system
CN107277193B (en) * 2017-08-09 2020-05-15 苏州浪潮智能科技有限公司 Method, device and system for managing address of baseboard management controller

Also Published As

Publication number Publication date
CN103188159B (en) 2016-08-10


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant