CN103259809A - Load balancer, load balancing method and stratified data center system - Google Patents


Info

Publication number
CN103259809A
Authority
CN
China
Prior art keywords
data center
load
current data
lower layer
resource
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100342029A
Other languages
Chinese (zh)
Inventor
石颖
吴娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to CN2012100342029A priority Critical patent/CN103259809A/en
Priority to JP2013010061A priority patent/JP2013168139A/en
Publication of CN103259809A publication Critical patent/CN103259809A/en
Pending legal-status Critical Current

Abstract

The invention relates to a load balancer, a load balancing method and a hierarchical data center system. The load balancer is used in a hierarchical data center system formed by connecting a plurality of data centers in a layered manner, and distributes the load traffic sent to the data center that corresponds to the load balancer. The load balancer distributes the load traffic sent to the current data center according to the resource usage state of the current data center and of the lower-layer data centers. With the load balancer, the load balancing method and the hierarchical data center system, the structural diversity and flexibility of the system can be improved; according to the resource usage state of the data centers that make up the system, a data center that is close to the user and has spare resources is selected whenever possible to process the user's request, so that the load of the system is balanced, the response speed to user requests is improved, and the service efficiency of the hierarchical data center system is increased.

Description

Load balancer, load balancing method and hierarchical data center system
Technical field
The present invention relates to load balancing techniques for data centers, and in particular to a load balancer and a load balancing method for balancing load among the subnets of a hierarchical data center system, and to a hierarchical data center system using such a load balancer and load balancing method.
Background technology
At present, hierarchical data center systems are widely used in network applications, particularly in the rapidly growing field of Internet of Things applications. To improve the service efficiency of such a hierarchical system, the load must be balanced among its subnets.
Patent document 1 describes a technique in which aggregator appliances are used to provide aggregation and load balancing for branch appliances in a hierarchical form and in a way that reduces the configuration burden of the branch (i.e., site or data center, as referred to below) appliances. Through interaction between the aggregator appliances, the appliance information of the different branches is identified, so that resource requests can span all branches and branch appliances, thereby pursuing global load balance.
Patent document 2 describes a technique in which the site that provides a service is selected according to the user's IP address.
Prior art documents:
Patent document 1: US 2008/0034111 A1; Applicant: Citrix Systems, Inc.; Title: "Systems and methods for hierarchical global load balancing"; Filing date: August 3, 2006.
Patent document 2: US 7,523,170 B1; Applicant: Cisco Technology, Inc.; Title: "Service locator technique implemented in a data network"; Filing date: June 24, 2002.
However, the above prior art has the following technical problems. First, when the branches are connected by aggregator appliances and the appliance information of each branch is collected, the structural diversity and flexibility of the network hierarchy are limited. In addition, as a site selection method, service types are not distinguished; newly arriving connection requests are handled only according to the state of existing connections and sessions. As a result, services of different types receive the same, undifferentiated load balancing treatment. For example, traffic that generates a large amount of network throughput may be assigned to a site far away from the user, occupying network resources, increasing network delay, and reducing the efficiency with which the whole hierarchy provides service.
Summary of the invention
In view of the above technical problems, an object of the present invention is to provide a load balancer, a load balancing method and a hierarchical data center system that can improve the structural diversity and flexibility of the hierarchical data center system and improve its service efficiency.
Another object of the present invention is to provide a load balancer, a load balancing method and a hierarchical data center system that distinguish the service types of service requests and thereby improve the service efficiency of the hierarchical data center system.
To solve the above technical problems, the present invention relates to a load balancer used in a hierarchical data center system formed by connecting a plurality of data centers in a layered manner, for distributing the load traffic sent to the current data center, i.e., the data center among the plurality of data centers that corresponds to this load balancer. The load balancer distributes the load traffic sent to the current data center according to the resource usage state of the current data center and of the lower-layer data centers, a lower-layer data center being a data center that is connected to the current data center and located in the layer below it.
The present invention also relates to a load balancing method used in a hierarchical data center system formed by connecting a plurality of data centers in a layered manner, for distributing the load traffic sent to the current data center among the plurality of data centers. The method comprises: a determining step of judging the resource usage state of the current data center and of the lower-layer data centers, a lower-layer data center being a data center that is connected to the current data center and located in the layer below it; and an allocating step of distributing the load traffic sent to the current data center according to the result of the determining step.
The present invention also relates to a hierarchical data center system formed by connecting a plurality of data centers in a layered manner and comprising a plurality of load balancers, one corresponding to each data center. Each load balancer distributes the load traffic sent to the current data center corresponding to that load balancer according to the resource usage state of the current data center and of the lower-layer data centers, a lower-layer data center being a data center that is connected to the current data center and located in the layer below it.
With the load balancer, load balancing method and hierarchical data center system of the present invention, the structural diversity and flexibility of the hierarchical data center system can be improved. According to the resource usage state of each data center making up the system, user requests are processed as far as possible by a data center that is close to the user and has idle resources. The load in the hierarchical data center system is therefore better balanced, the response speed to user requests is improved, and the service efficiency of the system is increased.
The above load balancer may further distribute the load traffic sent to the current data center according to the link states between the current data center and the lower-layer and upper-layer data centers, the upper-layer data center being the data center that is connected to the current data center and located in the layer above it.
In the above load balancing method, the determining step may further judge the link states between the current data center and the lower-layer and upper-layer data centers, the upper-layer data center being the data center that is connected to the current data center and located in the layer above it.
In this way, data can be transmitted over idle links, relieving the pressure that forwarded traffic places on the network links and further improving the service efficiency of the hierarchical data center system.
The above load balancer may further distribute the load traffic sent to the current data center according to the protocol type of the packets making up the load traffic.
In the above load balancing method, the determining step may further judge the protocol type of the packets making up the load traffic.
In this way, the quality of service can be better guaranteed, the pressure that forwarded traffic places on the network links is reduced, and network congestion that would cause longer response delays for forwarded traffic is avoided, further improving the service efficiency of the hierarchical data center system.
The above load balancer may judge the protocol type of the packets making up the load traffic using the network-layer packet header information.
In the above load balancing method, the determining step may judge the protocol type of the packets making up the load traffic using the network-layer packet header information.
By using the network-layer packet header information, the processing speed of the load balancer can be increased.
In the above load balancer, when the resource usage state of the current data center is at or below a prescribed first threshold, the load balancer may assign the load traffic sent to the current data center to the current data center; when the resource usage state of the current data center is above the first threshold and at or below a prescribed second threshold, the load balancer may distribute the load traffic sent to the current data center according to the link states between the current data center and the lower-layer and upper-layer data centers and/or the protocol type of the packets making up the load traffic, where the second threshold is greater than the first threshold and the upper-layer data center is the data center that is connected to the current data center and located in the layer above it; and when the resource usage state of the current data center is above the second threshold, the load balancer may assign the load traffic sent to the current data center to a lower-layer data center or to the upper-layer data center according to the resource usage state of the lower-layer data centers.
In the above load balancing method, when the determining step judges that the resource usage state of the current data center is at or below a prescribed first threshold, the allocating step may assign the load traffic sent to the current data center to the current data center; when the determining step judges that the resource usage state of the current data center is above the first threshold and at or below a prescribed second threshold, the allocating step may distribute the load traffic sent to the current data center according to the link states between the current data center and the lower-layer and upper-layer data centers and/or the protocol type of the packets making up the load traffic, where the second threshold is greater than the first threshold and the upper-layer data center is the data center that is connected to the current data center and located in the layer above it; and when the determining step judges that the resource usage state of the current data center is above the second threshold, the allocating step may assign the load traffic sent to the current data center to a lower-layer data center or to the upper-layer data center according to the resource usage state of the lower-layer data centers.
By quantifying the performance of a data center with several thresholds, the distribution of load traffic can be tuned to better match the data center's performance, so that load traffic is distributed more flexibly and appropriately, further improving the service efficiency of the hierarchical data center system.
The above load balancer may comprise: a communication interface that receives and sends packets, the packets including at least load traffic and status messages; a forwarding unit that rewrites the packet header information of the packets, the header information including at least a source address and a destination address; a memory that stores the resource usage state of the current data center and the resource usage state of the lower-layer data centers obtained through the status messages; and a processing unit that, according to the resource usage state stored in the memory and the packet header information of the packets received by the communication interface, controls the forwarding unit to distribute the packets making up the load traffic.
In this way, the load balancer of the present invention can be realized with a simple structure.
With the load balancer, load balancing method and hierarchical data center system of the present invention, newly arriving service requests are distinguished and, according to the current site state and network state, different distribution mechanisms are applied to services of different protocol types: sites close to the user respond to service requests that need more network resources, while sites far from the user respond to service requests that occupy fewer network resources, reducing the demand that inter-site communication in the hierarchical system places on network resources. In addition, because the information used to distinguish newly arriving services comes mainly from the network-layer packet header, the processing speed of the load balancer can approach router line rate. Therefore, when a regional burst of massive load occurs (for example a flood of sensor reports caused by a disaster, a flood of reports caused by a faulty sensor sleep mechanism, a surge of user clicks on popular content, or even a malicious attack from the network), the present invention allows part of the load to be quickly distributed to data centers in other regions, relieving the overload of the affected regional system. Finally, the hierarchical structure of the system allows terminal sites to be placed very close to the users; under normal conditions most traffic is handled by the local terminal site, which improves the response speed of local services and also reduces the performance requirements on the load balancers of upper-layer data centers.
Description of drawings
Fig. 1 is a structural diagram of the hierarchical data center system according to the first embodiment.
Fig. 2 is a block diagram of a specific embodiment of the load balancer according to the first embodiment.
Fig. 3 is an example of the message format of a status message.
Fig. 4 is an example of a resource usage state.
Fig. 5 is an example of load allocation mapping information.
Fig. 6 is a flowchart of the load balancing method according to the first embodiment.
Fig. 7 is a flowchart of a specific embodiment of the load balancing method according to the first embodiment.
Fig. 8 is an example of a link state.
Fig. 9 is an example of threshold information.
Fig. 10 is a flowchart of a specific embodiment of the load balancing method according to the fourth embodiment.
Fig. 11 is a sequence diagram of an example of the data allocation process.
Fig. 12 is a sequence diagram of an example of the data allocation process.
Fig. 13 is a sequence diagram of an example of the data allocation process.
Fig. 14 is a sequence diagram of an example of the data allocation process.
Fig. 15 is a sequence diagram of an example of the data allocation process.
Fig. 16 is a sequence diagram of an example of the data allocation process.
Embodiments
1. First embodiment
The first embodiment of the present invention is described in detail below with reference to the drawings.
1.1 Overall structure of the hierarchical data center system
The overall structure of the hierarchical data center system according to the present embodiment is described below with reference to Fig. 1. Fig. 1 is a structural diagram of the hierarchical data center system according to the first embodiment.
In the present embodiment, the hierarchical data center system is composed of a plurality of data centers (i.e., sites) connected in a layered manner. Any connection that allows communication may be used, including wired and wireless connections between the data centers. For example, in Fig. 1 the hierarchical data center system is composed of seven data centers: data center 11 101, data center 21 102, data center 22 103, data center 31 104, data center 32 105, data center 33 106 and data center 34 107. They are connected in a tree topology, in which a site located toward the root (upward in Fig. 1) is called a parent site (upper-layer data center) and a site located toward the leaves (downward in Fig. 1) is called a child site (lower-layer data center); each data center communicates only with its own parent and child sites. For example, data center 11 101 has data center 21 102 and data center 22 103 as child sites, data center 21 102 has data center 31 104 and data center 32 105 as child sites, and data center 22 103 has data center 33 106 and data center 34 107 as child sites.
In the hierarchical data center system of the present embodiment, the data centers may be located at different physical locations and are each connected to their own local users. For example, in Fig. 1, data center 31 104 is connected to sensor group 1 109, made up of sensor nodes 108, and to user group 1 111, made up of user terminals 110. Data center 31 104 may also be connected to other local users, and the other six sites in the figure may likewise be connected to various users or user groups (not shown).
1.2 Data centers
The data centers according to the present embodiment are described below, continuing to refer to Fig. 1.
In the present embodiment, a data center can be connected to local users and exchange load traffic and other data with them. A data center can also be connected to other data centers and exchange load traffic, status messages and other data with them. A data center has a load balancer (balancer), a data center intranet and a server group (local server group). The load balancer distributes the load traffic sent to the local data center; specifically, it may, for example, assign load traffic to be processed by the local data center, or forward load traffic to the upper-layer or lower-layer data centers connected to the local data center. The data center intranet delivers the load traffic assigned by the load balancer to the server group of the local data center. The server group processes the load traffic delivered through the data center intranet.
For example, in Fig. 1, data center 31 104 provides sensor group 1 109 with storage services for the data collected by the sensors, and provides user group 1 111 with services such as Internet access, video on demand and mail. These services converge into load traffic 112, which is sent upstream from sensor group 1 109 and user group 1 111 to data center 31 104, or downstream from data center 31 104 to sensor group 1 109 and user group 1 111. Data center 31 104 distributes load traffic 112 through balancer 31 113: part or all of the load traffic is assigned through data center intranet 31 114 to server group 31 115 belonging to the local data center (site), and the load traffic to be processed by other data centers (sites) may be forwarded to the connected upper-layer data center (parent site, i.e., data center 21 102 in Fig. 1) or to a lower-layer data center (child site). The remaining data centers have structures similar to that of data center 31 104; for clarity, the figure shows only the load balancers of some of the data centers, namely balancer 11 116, balancer 21 117, balancer 22 118 and balancer 32 119. In addition, each data center belonging to the same hierarchical data center system may exchange status messages 120 with its parent and child sites to report its own site state to the other side. The status messages 120 are described in detail below.
1.3 Load balancer
The load balancer according to the present embodiment is described in detail below.
1.3.1 Structure of the load balancer
In the present embodiment, the load balancer is used in a hierarchical data center system formed by connecting a plurality of data centers in a layered manner, and distributes the load traffic sent to the current data center, i.e., the data center among the plurality of data centers that corresponds to this load balancer.
The load balancer distributes the load traffic sent to the current data center according to the resource usage state of the current data center and of the lower-layer data centers, a lower-layer data center being a data center that is connected to the current data center and located in the layer below it.
Specifically, for example, when the resource usage state of the current data center is idle, the load balancer assigns the load traffic to the current data center; when the resource usage state of the current data center is not idle and that of a lower-layer data center is idle, the load balancer assigns the load traffic to the lower-layer data center; and when the resource usage states of the current data center and of the lower-layer data centers are all not idle, the load balancer assigns the load traffic to the upper-layer data center, i.e., the data center that is connected to the current data center and located in the layer above it.
1.3.2 A specific embodiment of the load balancer
A specific embodiment of the load balancer according to the present embodiment is described below with reference to Figs. 2 to 5.
Fig. 2 is a block diagram of a specific embodiment of the load balancer according to the first embodiment. As shown in Fig. 2, the load balancer 21 117 of this specific embodiment comprises a communication interface 201, a forwarding unit 202, a processing unit 203 and a memory 204. The other load balancers may adopt a similar structure.
1.3.2.1 Communication interface
The communication interface 201 receives and sends packets, which include at least load traffic and status messages 120. The communication interface 201 may be, for example, a wired or wireless data communication interface.
Fig. 3 is an example of the message format of a status message; specifically, Fig. 3 is an example of the message format of a status message 120 sent through the communication interface by the current data center to report its resource usage state to the upper-layer data center. As shown in Fig. 3, the status message 120 contains a Site ID identifying the current data center (for example a sequence number of the current data center), a host virtual IP giving the unique public IP address used externally by the current data center, a resource status indicating the resource usage state of the current data center, and a service list indicating the service types that the current data center can provide.
The resource status in the status message 120 may further include a CPU idle percentage indicating the average CPU idle rate of the local servers of the current data center, a memory idle percentage indicating the average memory idle rate of the local servers of the current data center, and a request processing speed indicating the average request processing speed of the local servers of the current data center.
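By way of illustration only, the status message described above could be represented in memory as in the following Python sketch; the field names and example values are assumptions, not part of the disclosed message format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ResourceStatus:
    cpu_idle_pct: float     # average CPU idle percentage of the local servers
    memory_idle_pct: float  # average memory idle percentage of the local servers
    request_rate: float     # average request processing speed (requests/s)

@dataclass
class StatusMessage:
    site_id: int            # identifies the reporting (current) data center
    host_virtual_ip: str    # public virtual IP used externally by the data center
    resource_status: ResourceStatus
    service_list: List[str] = field(default_factory=list)  # service types offered

# Example: a child site reporting its state to its parent (illustrative values)
msg = StatusMessage(31, "203.0.113.31",
                    ResourceStatus(cpu_idle_pct=40.0, memory_idle_pct=55.0,
                                   request_rate=1200.0),
                    ["storage", "web"])
```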
1.3.2.2 Forwarding unit
The forwarding unit 202 rewrites the packet header information of a packet, the header information including at least a source address and a destination address. For example, the forwarding unit 202 rewrites the packet header information according to the load allocation mapping information stored in the memory 204 (described in detail below) and forwards the packet. The forwarding unit 202 is, for example, a dedicated or general-purpose processor or an integrated circuit.
1.3.2.3 Memory
The memory 204 stores the resource usage state of the current data center and the resource usage state of the lower-layer data centers obtained through the status messages 120. The memory 204 is, for example, a readable and writable storage device such as a RAM or HDD.
Fig. 4 is an example of a resource usage state; specifically, Fig. 4 is an example of a site capacity table, stored in the memory 204, that indicates the resource usage state of the current data center and of the lower-layer data centers. As shown in Fig. 4, the site capacity table contains an index; a Site ID identifying the current or lower-layer data center (for example a sequence number of that data center); a host virtual IP giving the public IP address used externally by the current or lower-layer data center; a CPU idle percentage indicating the average CPU idle rate of the local servers of the current or lower-layer data center; a memory idle percentage indicating the average memory idle rate of those local servers; a request processing speed indicating their average request processing speed; an integrated load indicating the overall site load of the current or lower-layer data center; and a service list indicating the service types that the current or lower-layer data center can provide.
The integrated load of a data center may be, for example, the arithmetic mean of the CPU usage and the memory usage of its local servers, that is:
integrated load = AVG(CPU usage, memory usage) = 1 - AVG(CPU idle percentage, memory idle percentage)
where AVG(x, y) denotes the arithmetic mean of x and y, CPU usage = 1 - CPU idle percentage, and memory usage = 1 - memory idle percentage.
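As a worked example of this formula, the following Python sketch (helper name assumed) computes the integrated load from the two idle percentages stored in the site capacity table.

```python
def integrated_load(cpu_idle_pct: float, memory_idle_pct: float) -> float:
    """Integrated load = 1 - AVG(CPU idle fraction, memory idle fraction)."""
    cpu_usage = 1.0 - cpu_idle_pct / 100.0
    memory_usage = 1.0 - memory_idle_pct / 100.0
    return (cpu_usage + memory_usage) / 2.0  # AVG(CPU usage, memory usage)

# e.g. 40 % CPU idle and 60 % memory idle give an integrated load of 0.5
assert abs(integrated_load(40.0, 60.0) - 0.5) < 1e-9
```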
Fig. 5 is an example of load allocation mapping information; specifically, Fig. 5 is a load allocation mapping table, stored in the memory 204, for the services accepted by the current data center. The load allocation mapping table contains an index; a client IP identifying the IP address of a user terminal in a user group, of a sensor node in a sensor group, or of their proxy; a server-side IP identifying either the intranet IP address of a server in the local server group of the current data center or the host virtual IP address of the data center to which traffic is forwarded; a next-hop IP identifying the IP address of the forwarding destination; a next-hop physical address identifying the data-link-layer address of the next hop on the path to the forwarding destination; an ingress port identifying the transport-layer port number of the packet when it arrives at the current data center; an egress port identifying the transport-layer port number used when the packet is forwarded; and a protocol type identifying the transport-layer protocol type of the packet. The transport-layer port numbers may be TCP or UDP port numbers, and the protocol type may be TCP or UDP.
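For illustration, one possible in-memory form of a row of the load allocation mapping table is sketched below in Python; the field names and example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LoadMapEntry:
    index: int
    client_ip: str      # user terminal / sensor node (or its proxy)
    server_ip: str      # intranet IP of a local server, or virtual IP of the forwarding target
    next_hop_ip: str    # IP address of the next hop on the forwarding path
    next_hop_mac: str   # data-link-layer address of the next hop
    ingress_port: int   # transport-layer port on arrival at the current data center
    egress_port: int    # transport-layer port used when forwarding
    protocol: str       # "TCP" or "UDP"

# Illustrative entry for a session handled by a local server
entry = LoadMapEntry(0, "198.51.100.7", "10.31.0.15", "10.31.0.15",
                     "cc:cc:cc:cc:cc:15", 51324, 51324, "TCP")
```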
1.3.2.4 Processing unit
The processing unit 203 controls the forwarding unit 202 to distribute the packets making up the load traffic according to the resource usage state stored in the memory 204 and the packet header information of the packets received by the communication interface. The processing unit 203 is, for example, a general-purpose processor such as a CPU or MPU, or an application-specific integrated circuit (ASIC).
Specifically, for example, the processing unit 203 can read the integrated loads stored in the memory 204 (see Fig. 4). When the integrated load of the current data center is at or below a prescribed threshold A, it controls the forwarding unit 202 to forward the packets making up the load traffic to the local servers of the current data center. When the integrated load of the current data center is above the prescribed threshold A and the integrated load of a lower-layer data center is at or below a prescribed threshold B, it controls the forwarding unit 202 to forward the packets to that lower-layer data center. When the integrated load of the current data center is above the prescribed threshold A and the integrated loads of the lower-layer data centers are above the prescribed threshold B, it controls the forwarding unit 202 to forward the packets to the upper-layer data center. Threshold A and threshold B may be the same value or different values.
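A minimal Python sketch of this dispatch decision, assuming integrated loads expressed as fractions and purely illustrative thresholds A and B, is given below.

```python
def choose_target(current_load: float, lower_loads: dict,
                  threshold_a: float, threshold_b: float) -> str:
    """Return where to forward: 'local', a lower-layer site id, or 'upper'."""
    if current_load <= threshold_a:
        return "local"                      # handle in the current data center
    idle_lower = {site: load for site, load in lower_loads.items()
                  if load <= threshold_b}
    if idle_lower:
        # pick the lower-layer data center with the most spare capacity
        return min(idle_lower, key=idle_lower.get)
    return "upper"                          # forward to the parent data center

# Example: the current site is busy (0.8 > A = 0.5) and lower-layer site "31" is idle
print(choose_target(0.8, {"31": 0.3, "32": 0.9}, threshold_a=0.5, threshold_b=0.6))
```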
1.4 Load balancing method
The load balancing method according to the present embodiment is described below with reference to Fig. 6.
1.4.1 Flow of the load balancing method
Fig. 6 is a flowchart of the load balancing method according to the first embodiment. As shown in Fig. 6, the load balancing method of the present embodiment is used in a hierarchical data center system formed by connecting a plurality of data centers in a layered manner, and distributes the load traffic sent to the current data center among the plurality of data centers. It comprises: a determining step S1 of judging the resource usage state of the current data center and of the lower-layer data centers, a lower-layer data center being a data center that is connected to the current data center and located in the layer below it; and an allocating step S2 of distributing the load traffic sent to the current data center according to the result of the determining step S1.
1.4.2 A specific embodiment of the load balancing method
A specific embodiment of the load balancing method according to the present embodiment is described below with reference to Fig. 7. Fig. 7 is a flowchart of a specific embodiment of the load balancing method according to the first embodiment. The load balancing method of this specific embodiment can be carried out by the specific embodiment of the load balancer described above; please refer to the corresponding description of that embodiment.
First, the communication interface 201 of the load balancer receives a packet making up the load traffic and passes it to the processing unit 203 (step S901).
Next, the processing unit 203 checks the load allocation mapping table stored in the memory 204 (step S902).
The processing unit 203 then judges whether there is a record whose client IP address or server-side IP address matches the source address of the packet (step S903).
If the result of step S903 is yes, the forwarding unit 202 and the communication interface 201 process the packet header and forward the packet according to the matched record (step S904).
Otherwise, if the result of step S903 is no, the direction of the packet is further checked to judge whether it is an upstream packet (step S905).
If the result of step S905 is no, the packet is a downstream packet from a server rather than a request from the user side, and it can simply be discarded (step S906).
Otherwise, if the result of step S905 is yes, the packet is a new user request, which must be assigned to a local server of the current data center or forwarded to another data center. The processing unit 203 therefore checks the current resource usage state of the current data center, for example the CPU idle percentage, the memory idle percentage and the request processing speed, and judges whether the current data center is idle by comparing one or more of these values with one or more predefined thresholds. As a concrete judgment method, the processing unit 203 may, for example, check the site capacity table stored in the memory 204 and compare the integrated load of the local servers of the current data center with a prescribed threshold, as described above (step S907).
If the result of step S907 is yes, the current data center is idle and can handle the request, so the processing unit 203 sends the packet to a local server through the forwarding unit 202. If there are several local servers, one can be selected by, for example, choosing the server with the most idle resources (step S908).
The processing unit 203 then updates the load allocation mapping table by writing the client and server-side IP addresses of the request, the address of the next hop, the port information and so on (step S909).
If the result of step S907 is no, the current data center is busy, and the processing unit 203 must select a data center to forward to from among the upper-layer and lower-layer data centers of the current data center. To determine the forwarding destination, the processing unit 203 checks the site capacity table stored in the memory 204, for example the CPU idle percentage, the memory idle percentage and the request processing speed, and judges whether there is an idle lower-layer data center by comparing one or more of these values with one or more predefined thresholds. As a concrete judgment method, the integrated load of the local servers of a lower-layer data center may, for example, be compared with a prescribed threshold as described above (step S910).
If the result of step S910 is no, the current data center has no idle lower-layer data center and the current data center itself is busy, so the processing unit 203 chooses to forward the request to the upper-layer data center (step S911).
If the result of step S910 is yes, the processing unit 203 sends the packet to a lower-layer data center through the forwarding unit 202. If there are several lower-layer data centers, one can be selected by, for example, choosing the lower-layer data center with the most idle resources (step S912).
Likewise, after steps S911 and S912, the processing unit 203 updates the load allocation mapping table (step S909).
In the concrete example of the load balancing method shown in Fig. 7, step S901 receives the load traffic; steps S902, S903 and S905 judge whether the received load traffic is a new request; steps S907 and S910 correspond to the determining step S1 of the load balancing method shown in Fig. 6; and steps S908, S911 and S912 correspond to the allocating step S2 of the load balancing method shown in Fig. 6.
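For readability, the flow of Fig. 7 can be condensed into the following Python sketch; the data structures and helper names are assumptions made for illustration, not part of the disclosed method.

```python
def dispatch(pkt: dict, load_map: dict, loads: dict, threshold: float) -> str:
    """Simplified decision flow of Fig. 7 (steps S901 to S912); returns the chosen target.

    pkt      - {'src_ip': ..., 'upstream': bool}
    load_map - known sessions keyed by client IP (load allocation mapping table)
    loads    - integrated loads, e.g. {'current': 0.8, 'lower': {'31': 0.3}, 'upper': '21'}
    """
    if pkt['src_ip'] in load_map:              # S902/S903: known session
        return load_map[pkt['src_ip']]         # S904: forward by the matched record
    if not pkt['upstream']:                    # S905
        return 'drop'                          # S906: downstream packet with no session
    if loads['current'] <= threshold:          # S907: current data center is idle
        target = 'local'                       # S908: pick a local server
    else:
        idle = {s: l for s, l in loads['lower'].items() if l <= threshold}  # S910
        target = min(idle, key=idle.get) if idle else loads['upper']        # S912 / S911
    load_map[pkt['src_ip']] = target           # S909: update the mapping table
    return target

# Example: a new upstream request arrives while the current site is busy
print(dispatch({'src_ip': '198.51.100.7', 'upstream': True},
               {}, {'current': 0.8, 'lower': {'31': 0.3}, 'upper': '21'}, 0.6))
```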
1.5 Effects of the first embodiment
With the load balancer, load balancing method and hierarchical data center system of the present embodiment, the structural diversity and flexibility of the hierarchical data center system can be improved. According to the resource usage state of each data center making up the system, user requests are processed as far as possible by a data center that is close to the user and has idle resources, so that the load in the hierarchical data center system is better balanced, the response speed to user requests is improved, and the service efficiency of the system is increased.
2. Second embodiment
The second embodiment of the present invention is described in detail below with reference to the drawings.
2.1 Features of the second embodiment
On the basis of the first embodiment, the present embodiment distributes load traffic not only according to the resource usage state of the data centers but also according to the link states between the data centers. Only the differences from the first embodiment are described below; for the points in common, please refer to the description of the first embodiment.
The structures of the hierarchical data center system and of the data centers according to the present embodiment are the same as in the first embodiment and are not described again here.
Compared with the load balancer of the first embodiment, the load balancer of the present embodiment also distributes the load traffic sent to the current data center according to the link states between the current data center and the lower-layer and upper-layer data centers.
Compared with the load balancing method of the first embodiment, in the determining step the load balancing method of the present embodiment also judges the link states between the current data center and the lower-layer and upper-layer data centers, and in the allocating step it also distributes the load traffic according to the judgment of those link states in the determining step.
2.2 A concrete example of the second embodiment
A specific embodiment of the load balancer and load balancing method according to the present embodiment is described below.
As a specific embodiment of the load balancer according to the present embodiment, on the basis of the load balancer of the specific embodiment of the first embodiment, the memory 204 also stores information indicating the link states between the current data center and the other data centers connected to it. Fig. 8 is an example of a link state; specifically, Fig. 8 is a network link status table recording the link states from current data center 21 102 to the upper-layer and lower-layer data centers. The network link status table contains: a link ID (for example a network link sequence number) identifying a link connecting the current data center to one of its upper-layer or lower-layer data centers; a remote station ID (for example a data center sequence number) identifying the upper-layer or lower-layer data center to which that link connects; and a link load indicating the bandwidth utilization of the link. The link load is, for example, the percentage of the total link bandwidth that is in use.
As a specific embodiment of the load balancing method according to the present embodiment, on the basis of the load balancing method of the specific embodiment of the first embodiment, when the current data center is judged in step S907 not to be idle, the processing unit 203 of the load balancer of this embodiment also compares, according to the network link status table stored in the memory 204, the link states between the current data center and the upper-layer and lower-layer data centers with a prescribed threshold, and thereby judges whether the links are idle. If a link is judged to be idle, step S910 is performed to judge the resource usage state of the lower-layer data centers. If no link is judged to be idle, step S908 is performed and the load traffic is sent to the local servers of the current data center through the forwarding unit 202.
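A minimal Python sketch of the link state record of Fig. 8 and of the additional idle-link check, with assumed names and an illustrative threshold, is given below.

```python
from dataclasses import dataclass

@dataclass
class LinkState:
    link_id: int         # identifies the link between the current site and a neighbor
    remote_site_id: int  # the upper- or lower-layer data center at the other end
    link_load: float     # used bandwidth as a fraction of the link's total bandwidth

def has_idle_link(links: list, threshold: float) -> bool:
    """Extra check of the second embodiment: forward only if some link is idle."""
    return any(l.link_load <= threshold for l in links)

links = [LinkState(1, 11, 0.85), LinkState(2, 31, 0.20), LinkState(3, 32, 0.75)]
# If no link were idle, the request would stay at the current data center (step S908).
print(has_idle_link(links, threshold=0.5))  # True: the link to site 31 is lightly loaded
```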
2.3 Effects of the second embodiment
According to the present embodiment, the load balancer in the hierarchical data center system distributes the load traffic sent to the current data center not only according to the resource usage state but also according to the link states. Therefore, in addition to the effects of the first embodiment, the load balancer, load balancing method and hierarchical data center system of the present embodiment can transmit data over idle links, relieving the pressure that forwarded traffic places on the network links and further improving the service efficiency of the hierarchical data center system.
3. Third embodiment
The third embodiment of the present invention is described in detail below.
3.1 Features of the third embodiment
On the basis of the first embodiment, the present embodiment distributes load traffic not only according to the resource usage state of the data centers but also according to the protocol type of the packets making up the load traffic. Only the differences from the first embodiment are described below; for the points in common, please refer to the description of the first embodiment.
The structures of the hierarchical data center system and of the data centers according to the present embodiment are the same as in the first embodiment and are not described again here.
Compared with the load balancer of the first embodiment, the load balancer of the present embodiment also distributes the load traffic sent to the current data center according to the protocol type of the packets making up the load traffic.
Compared with the load balancing method of the first embodiment, in the determining step the load balancing method of the present embodiment also judges the protocol type of the packets making up the load traffic, and in the allocating step it also distributes the load traffic according to the judgment of the packet protocol type in the determining step.
3.2 A concrete example of the third embodiment
A specific embodiment of the load balancer and load balancing method according to the present embodiment is described below.
Examples of protocol types of the packets making up the load traffic are TCP and UDP. The TCP protocol is generally used to carry services that are not delay-sensitive and involve little data, such as web browsing, file transfer and mail, whereas the UDP protocol is generally used to carry services that are delay-sensitive and involve larger volumes of data, such as instant chat, video on demand and Internet telephony. Keeping UDP-type services at the current data center therefore better guarantees their quality of service, reduces the pressure that forwarded traffic places on the network links, and avoids network congestion that would cause longer response delays for forwarded traffic.
Following this rule, on the basis of the load balancing method of the specific embodiment of the first embodiment, when the current data center is judged in step S907 not to be idle, the processing unit 203 also checks the protocol type indicated by the protocol field in the IP header of the packet received in step S901. If the protocol type of the packet is TCP, step S910 is performed to judge the resource usage state of the lower-layer data centers. If the protocol type of the packet is UDP, step S908 is performed and the load traffic is sent to the local servers of the current data center through the forwarding unit 202.
Because judging the protocol type of a packet as described above uses only the network-layer packet header information, the processing speed of the load balancer can be increased and may reach router line rate. In particular, when the judgment uses only the network-layer header information, the processing speed of the load balancer is much higher than when application-layer information is used.
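The following Python sketch illustrates how the protocol type can be read directly from the IPv4 header (protocol field at byte offset 9, where 6 denotes TCP and 17 denotes UDP); the example packet is assumed.

```python
def l4_protocol(ip_packet: bytes) -> str:
    """Read the protocol field (byte 9) of an IPv4 header: 6 = TCP, 17 = UDP."""
    proto = ip_packet[9]
    return {6: "TCP", 17: "UDP"}.get(proto, "OTHER")

# Minimal 20-byte IPv4 header with protocol = 17 (UDP); other fields zeroed for brevity
header = bytearray(20)
header[0] = 0x45        # version 4, header length 5 words
header[9] = 17          # protocol field
print(l4_protocol(bytes(header)))  # -> "UDP"
```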
3.3 Effects of the third embodiment
According to the present embodiment, the load balancer in the hierarchical data center system distributes the load traffic sent to the current data center not only according to the resource usage state but also according to the protocol type of the packets making up the load traffic, and the protocol type is judged using the network-layer packet header information. Therefore, in addition to the effects of the first embodiment, the load balancer, load balancing method and hierarchical data center system of the present embodiment can better guarantee the quality of service, reduce the pressure that forwarded traffic places on the network links, avoid network congestion that would cause longer response delays for forwarded traffic, and increase the processing speed of the load balancer, thereby further improving the service efficiency of the hierarchical data center system.
4. Fourth embodiment
The fourth embodiment of the present invention is described in detail below with reference to the drawings.
4.1 Features of the fourth embodiment
The present embodiment combines and refines the first, second and third embodiments: load traffic is distributed according to the resource usage state of the data centers, the link states, and the protocol type of the packets making up the load traffic. Only the differences from the above embodiments are described below; for the points in common, please refer to the descriptions of those embodiments.
The structures of the hierarchical data center system and of the data centers according to the present embodiment are the same as in the first embodiment and are not described again here.
Compared with the load balancer of the first embodiment, the load balancer of the present embodiment also distributes the load traffic sent to the current data center according to the link states between the current data center and the lower-layer and upper-layer data centers and according to the protocol type of the packets making up the load traffic.
Compared with the load balancing method of the first embodiment, in the determining step the load balancing method of the present embodiment also judges the link states between the current data center and the lower-layer and upper-layer data centers and the protocol type of the packets making up the load traffic, and in the allocating step it also distributes the load traffic according to the judgment of those link states and of the packet protocol type in the determining step.
4.2 A concrete example of the fourth embodiment
A specific embodiment of the load balancer and load balancing method according to the present embodiment is described below.
As a specific embodiment of the load balancer according to the present embodiment, on the basis of the load balancer of the specific embodiment of the first embodiment, the memory 204 also stores threshold information. Fig. 9 is an example of threshold information; specifically, Fig. 9 is a predefined load threshold table, stored in the memory 204, for the services accepted by the current data center. This predefined load threshold table contains thresholds 11 and 12 for CPU usage, thresholds 21 and 22 for memory usage, and thresholds 1 and 2 for the integrated load. Thresholds 1 and 2 of the integrated load are the arithmetic means of the corresponding CPU usage and memory usage thresholds, that is:
threshold m = AVG(threshold 1m, threshold 2m), m = {1, 2}.
By choosing the values of thresholds nm (n, m = {1, 2}) appropriately, for example setting threshold 11 and threshold 21 to the CPU usage and memory usage at which the average server response time has increased 2-fold compared with light load, and threshold 12 and threshold 22 to the CPU usage and memory usage at which the average server response time has increased 100-fold compared with light load, the integrated load thresholds 1 and 2 can be used to quantify the points at which the average server performance begins to deteriorate and becomes severely degraded.
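Under the same assumptions, the integrated load thresholds can be derived from the per-resource thresholds as in the following Python sketch; the numeric values are purely illustrative.

```python
def integrated_thresholds(cpu_thresholds, mem_thresholds):
    """Threshold m = AVG(threshold 1m, threshold 2m) for m in {1, 2}."""
    return [(c + m) / 2.0 for c, m in zip(cpu_thresholds, mem_thresholds)]

# Illustrative CPU-usage thresholds (11, 12) and memory-usage thresholds (21, 22)
print(integrated_thresholds([0.55, 0.90], [0.65, 0.95]))  # integrated thresholds 1 and 2
```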
A specific embodiment of the load balancing method according to the present embodiment is described below with reference to Fig. 10. Fig. 10 is a flowchart of a specific embodiment of the load balancing method according to the fourth embodiment. The load balancing method of this specific embodiment can be carried out by the specific embodiment of the load balancer described above; please refer to the corresponding description of that embodiment.
First, the communication interface 201 of the load balancer receives a packet making up the load traffic and passes it to the processing unit 203 (step S1001).
Next, the processing unit 203 checks the load allocation mapping table stored in the memory 204 (step S1002).
The processing unit 203 then judges whether there is a record whose client IP address or server-side IP address matches the source address of the packet (step S1003).
If the result of step S1003 is yes, the forwarding unit 202 and the communication interface 201 process the packet header and forward the packet according to the matched record (step S1004).
Otherwise, if the result of step S1003 is no, the direction of the packet is further checked to judge whether it is an upstream packet (step S1005).
If the result of step S1005 is no, the packet is a downstream packet from a server rather than a request from the user side, and it can simply be discarded (step S1006).
Otherwise, if the result of step S1005 is yes, the packet is a new user request, which must be assigned to a local server of the current data center or forwarded to another data center. The processing unit 203 therefore checks the resource usage state of the current data center, for example the CPU idle percentage, the memory idle percentage and the request processing speed, and judges whether the current data center is idle by comparing one or more of these values with a group of predefined thresholds. As a concrete judgment method, the processing unit 203 may, for example, check the site capacity table and the predefined load threshold table stored in the memory 204 and compare the integrated load of the local servers of the current data center with integrated load thresholds 1 and 2, as described above (step S1007).
If the result of step S1007 is that the integrated load is at or below threshold 1, the average performance of the local servers of the current data center is good and the request can be handled, so the processing unit 203 sends the packet to a local server through the forwarding unit 202. If there are several local servers, one can be selected by, for example, choosing the server with the most idle resources (step S1008).
The processing unit 203 then updates the load allocation mapping table by writing the client and server-side IP addresses of the request, the address and port information of the next hop, the protocol type and so on (step S1009).
If the judged result of step S1007 is that integrated load is greater than integrated load threshold value 2, the home server average behavior that the current data center is described is in abominable state, then needs to select from the upper layer data center at current data center and data center of lower floor a data center to transmit.For the destination data center that determines to transmit, processing unit 203 needs to check the site capacity table of memory 204 storages, information such as the idle percentage of CPU, the idle percentage of memory, request processing speed for example, and by one in the above-mentioned information or multinomial and one or more predefine threshold being judged whether the data center of lower floor of free time.As concrete determination methods, for example can adopt as mentioned above the integrated load of the home server of data center of lower floor and the method (step S1010) that defined threshold compares.
If the judged result of step S1010 illustrate that the current data center does not have idle data center of lower floor, and the current data center is busy for not, so processing unit 203 is selected request is forwarded to upper layer data center (step S1011).
If the judged result of step S1010 is for being that then processing unit 203 sends to data center of lower floor by retransmission unit 202 with this packet.Under the situation that has data center of a plurality of lower floor, can select a data center of lower floor (step S1012) by for example selecting the maximum data center of lower floor of idling-resource.
If the judgment result of step S1007 is that the integrated load is greater than threshold 1 and at or below threshold 2, the average performance of the local servers of the current data center is in an intermediate state between good and poor, so the load of the current data center should be relieved appropriately by forwarding part of the newly arriving load to the upper-layer data center or a lower-layer data center of the current data center. In this case, processing unit 203 checks the network link status table stored in memory 204 and judges, from the protocol type indicated by the protocol field in the IP header of the packet, whether the request represented by the packet should be forwarded; for the concrete judgment method, refer to the description of the second and third embodiments (step S1013).
If the judgment result of step S1013 is that an idle link exists and the protocol type of the packet is TCP, the process proceeds to step S1010 to examine whether there is an idle lower-layer data center. Otherwise, if there is no idle link or the protocol type of the packet is UDP, the process proceeds to step S1008, and processing unit 203 sends the packet to a local server through forwarding unit 202.
Likewise, after steps S1008, S1011 and S1012, processing unit 203 updates the load allocation mapping table (step S1009).
In the concrete example of the load balancing method shown in Figure 10, step S1001 receives the load traffic; steps S1002, S1003 and S1005 judge whether the received load traffic is a new request; steps S1007, S1013 and S1010 correspond to the judgment step S1 of the load balancing method shown in Figure 6; and steps S1008, S1011 and S1012 correspond to the allocation step S2 of the load balancing method shown in Figure 6.
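For illustration only, the branching of steps S1007, S1010 and S1013 can be summarized in the following Python sketch. The threshold values and the names THRESHOLD_1, THRESHOLD_2, pick_idle_lower_dc, link_idle and is_tcp are assumptions made for the example and are not part of the embodiment.

```python
THRESHOLD_1 = 0.6   # hypothetical value: upper bound of the "good" state
THRESHOLD_2 = 0.85  # hypothetical value: lower bound of the "poor" state

def pick_idle_lower_dc(capacity_table):
    """Return the idle lower-layer data center with the most spare capacity, or None."""
    idle = [dc for dc, load in capacity_table.items() if load <= THRESHOLD_1]
    return min(idle, key=capacity_table.get) if idle else None

def dispatch(local_load, capacity_table, link_idle, is_tcp):
    """Choose a destination for a new request (sketch of steps S1007-S1013)."""
    if local_load <= THRESHOLD_1:                        # S1007: good state
        return "local_server"                            # S1008
    if local_load > THRESHOLD_2:                         # S1007: poor state
        lower = pick_idle_lower_dc(capacity_table)       # S1010
        return lower if lower else "upper_layer_dc"      # S1012 / S1011
    # Intermediate state: consult link state and protocol type (S1013).
    if link_idle and is_tcp:
        lower = pick_idle_lower_dc(capacity_table)       # S1010
        return lower if lower else "upper_layer_dc"
    return "local_server"                                # S1008: no idle link or UDP

# Example: a busy data center (0.9) with a lower-layer site reporting 0.3
# would off-load the request to that lower-layer site.
print(dispatch(0.9, {"DC32": 0.3, "DC33": 0.7}, link_idle=True, is_tcp=True))  # DC32
```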
4.3 Effect of the fourth embodiment
According to the present embodiment, the load equalizer in the layered data center system distributes the load traffic sent to the current data center not only according to the resource usage state, but also according to the link state and the protocol type of the packets that make up the load traffic. The present embodiment therefore combines the effects of the first, second and third embodiments described above. Moreover, by using a plurality of thresholds to quantify the performance of a data center, the distribution of the load traffic can be optimized to better match the performance of the data centers, so that the load traffic is distributed more flexibly and appropriately and the service efficiency of the layered data center system is further improved.
5. Load allocation process of the present invention
The allocation of load traffic by the load equalizer in the layered data center system according to the present invention is described below for several situations, with reference to the sequence charts of Figures 11 to 16. Figures 11 and 12 show the cases in which load equalizer 31 (113) distributes load traffic 112 of the TCP and UDP types, respectively, to a local server of server group 31 (115) of current data center 31 (104), so that load traffic 112 is handled by current data center 31 (104). Figures 13 and 14 show the cases in which load traffic 112 of the TCP and UDP types is forwarded to the parent site (upper-layer data center), namely data center 21 (102), and handled by the parent site. Figures 15 and 16 show the cases in which load traffic 112 of the TCP and UDP types is forwarded to the parent site, data center 21 (102), which further forwards it to another of its sub-sites (a lower-layer data center), namely data center 32 (105), so that load traffic 112 is finally handled by that other sub-site.
The above load allocation can be realized by the load equalizers and load balancing methods of the first to fourth embodiments described above, and in particular by the load equalizer and load balancing method of the fourth embodiment.
As shown in Figure 11, user group 1 (111) (which may be one or more users, or their agents) first initiates a connection request (301) and sends a SYN message 302 to current data center 31 (104). After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives SYN message 302, processing unit 203 analyzes SYN message 302 based on the information stored in memory 204 and the header information of the received SYN message 302, decides that the connection request is to be handled by the local site (the current data center) (303), and selects a destination server from local server group 31 (115), for example the server with the most idle resources. Processing unit 203 then updates the load allocation mapping table by writing an entry that includes the client and server IP addresses of the request and the address and port of the next hop. Next, forwarding unit 202 processes SYN message 302 according to the updated load allocation mapping table: the destination IP address is set to the next-hop IP address, i.e. the IP address of the destination server selected by processing unit 203, which is normally a private IP address of the data center; the destination physical address is set to the next-hop physical address, i.e. the physical address of the adjacent switch on the path to the destination server; and the source IP address and source physical address are set to the IP address and physical address of equalizer 31 (113). The processed SYN message 302 is forwarded through communication interface 201 to data center intranet 31 (114), which is connected to the destination server, and further through data center intranet 31 (114) to the destination server. The response message sent by the destination server undergoes processing in the opposite direction: the destination IP address is set to the next-hop IP address, i.e. the client IP address; the destination physical address is set to the next-hop physical address, i.e. the physical address of the adjacent switch on the path to user group 1 (111); and the source IP address and source physical address are set to the IP address and physical address of equalizer 31 (113). The processed response message SYN-ACK 304 is sent through communication interface 201 to user group 1 (111). After receiving the response message, user group 1 (111) establishes the connection on the client side (305) and sends a further response message ACK 306. When response message ACK 306 arrives, equalizer 31 (113) forwards it to the destination server according to the existing record in the load allocation mapping table, so that the connection is established on the server side (307) and the server begins to provide service to the user (308), allowing the user to obtain the service (309). The uplink and downlink load traffic 310 is likewise forwarded according to the existing records in the load allocation mapping table when it passes through equalizer 31 (113).
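A rough Python sketch of the address rewriting and mapping-table bookkeeping described for Figure 11 is given below. The field names (client_ip, next_hop_mac, and so on) and the dictionary representation of a packet are assumptions made for the example; the actual table layout of the embodiment is the one shown in the figures.

```python
from dataclasses import dataclass

@dataclass
class MapEntry:
    # One record of the load allocation mapping table (field names are illustrative).
    client_ip: str
    client_port: int
    server_ip: str      # next-hop IP: the selected destination server (private address)
    server_port: int
    next_hop_mac: str   # physical address of the adjacent switch toward the server
    protocol: str       # "TCP" or "UDP"

def rewrite_for_server(packet, entry, balancer_ip, balancer_mac):
    """Rewrite an uplink packet before forwarding it into the data center intranet."""
    packet["dst_ip"] = entry.server_ip
    packet["dst_mac"] = entry.next_hop_mac
    packet["src_ip"] = balancer_ip        # replies come back through the equalizer
    packet["src_mac"] = balancer_mac
    return packet

def rewrite_for_client(packet, entry, balancer_ip, balancer_mac, client_next_hop_mac):
    """Rewrite a downlink packet (e.g. SYN-ACK 304) before sending it toward the user group."""
    packet["dst_ip"] = entry.client_ip
    packet["dst_mac"] = client_next_hop_mac
    packet["src_ip"] = balancer_ip
    packet["src_mac"] = balancer_mac
    return packet
```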
Figure 12 is similar to Figure 11, the difference being that the protocol type of the load traffic is UDP. As shown in Figure 12, user group 1 (111) (which may be one or more users, or their agents) first begins to send packets (401), sending uplink traffic 402 to data center 31 (104). The sender of the packets may also be sensor group 1 (109) (which may be one or more sensor nodes, or their agents). Uplink traffic 402 may be a video-on-demand request initiated by a user terminal, sensing data reported by a sensor node, or other data service traffic using the UDP protocol. After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives uplink traffic 402, processing unit 203 analyzes uplink traffic 402 based on the information stored in memory 204 and the header information of the received uplink traffic 402, recognizes that it is a new data flow, decides that the request is to be handled by the local site (403), and selects a destination server from local server group 31 (115), for example the server with the most idle resources. Processing unit 203 then updates the load allocation mapping table by writing an entry that includes the client and server IP addresses of the request and the address and port of the next hop. The subsequent forwarding process is similar to that described for Figure 11: uplink traffic 402 processed by forwarding unit 202 is sent to the destination server. The destination server begins to provide service to the user (404), and the downlink traffic 407 sent to the user undergoes processing in the opposite direction, allowing the user to obtain the service (406). The uplink and downlink load traffic 405 and 407 is likewise forwarded according to the existing records in the load allocation mapping table when it passes through equalizer 31 (113).
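For the UDP case of Figure 12, the recognition of a new data flow can be illustrated by a simple 5-tuple lookup. The sketch below is only one assumed way of doing this; the embodiment requires only that packets of an existing flow be matched against the load allocation mapping table.

```python
# Sketch of recognizing a new UDP flow by its 5-tuple, as in step 403.
flow_table = {}  # 5-tuple -> selected destination server

def lookup_or_assign(src_ip, src_port, dst_ip, dst_port, proto, pick_server):
    """Return the destination recorded for this flow, assigning one if the flow is new."""
    key = (src_ip, src_port, dst_ip, dst_port, proto)
    if key not in flow_table:            # unseen 5-tuple: a new data flow
        flow_table[key] = pick_server()  # e.g. the local server with the most idle resources
    return flow_table[key]
```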
Figure 13 is similar to Figure 11, the difference being that the request is accepted by data center 21 (102), the parent site of data center 31 (104). As shown in Figure 13, user group 1 (111) (which may be one or more users, or their agents) first initiates a connection request (501) and sends a SYN message 502 to data center 31 (104). After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives SYN message 502, processing unit 203 analyzes SYN message 502 based on the information stored in memory 204 and the header information of the received SYN message 502, finds that the request exceeds the local processing capacity, and decides that the connection request is to be handled by the parent site, i.e. data center 21 (102) (503). Processing unit 203 then updates the load allocation mapping table (see the table shown in Figure 13), forwarding unit 202 processes SYN message 502 according to the updated load allocation mapping table, and communication interface 201 forwards the processed SYN message 502 to data center 21 (102). In data center 21 (102), equalizer 21 (117) handles SYN message 502 in a manner similar to Figure 11, selecting a server of its local site to handle the connection request (504). Data center 21 (102) then sends a response message SYN-ACK 505 to the user, which is further sent to user group 1 (111) through equalizer 31 (113); when it passes through equalizer 31 (113), equalizer 31 (113) modifies its header address information according to the existing record in the load allocation mapping table. After receiving response message SYN-ACK 505, user group 1 (111) establishes the connection on the client side (506) and sends a further response message ACK 507. When response message ACK 507 passes through equalizer 31 (113), equalizer 31 (113) modifies its header address information according to the existing record in the load allocation mapping table. Finally the connection is established on the server side (508) and the server begins to provide service to the user (509), allowing the user to obtain the service (510). The uplink and downlink load traffic 514 is likewise forwarded according to the existing records in the load allocation mapping table when it passes through equalizer 31 (113).
Figure 14 is similar to Figure 12, the difference being that the request is accepted by data center 21 (102), the parent site of data center 31 (104). As shown in Figure 14, user group 1 (111) (which may be one or more users, or their agents) first begins to send packets (601), sending uplink traffic 602 to data center 31 (104). After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives uplink traffic 602, processing unit 203 analyzes uplink traffic 602 based on the information stored in memory 204 and the header information of the received uplink traffic 602, finds that the request exceeds the local processing capacity, and decides that the request is to be handled by the parent site, i.e. data center 21 (102) (603). Processing unit 203 then updates the load allocation mapping table, forwarding unit 202 processes uplink traffic 602 according to the updated load allocation mapping table, and communication interface 201 forwards the processed uplink traffic 602 to data center 21 (102). In data center 21 (102), equalizer 21 (117) handles uplink traffic 602 in a manner similar to Figure 12: it recognizes that this is a new data flow, selects a server of its local site to handle the request (604), and updates its load allocation mapping table. Data center 21 (102) then begins to provide service to the user (605), allowing the user to obtain the service (606). The uplink and downlink load traffic 608 and 607 is likewise forwarded according to the existing records in the load allocation mapping table when it passes through equalizer 31 (113).
Figure 15 is similar to Figures 11 and 13, the difference being that the request is accepted by data center 32 (105), a sibling site of data center 31 (104). As shown in Figure 15, user group 1 (111) (which may be one or more users, or their agents) first initiates a connection request (701) and sends a SYN message 702 to data center 31 (104). After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives SYN message 702, processing unit 203 analyzes SYN message 702 based on the information stored in memory 204 and the header information of the received SYN message 702, finds that the request exceeds the local processing capacity, and decides that the connection request is to be handled by the parent site, i.e. data center 21 (102) (703). Processing unit 203 then updates the load allocation mapping table, forwarding unit 202 processes SYN message 702 according to the updated load allocation mapping table, and communication interface 201 forwards the processed SYN message 702 to data center 21 (102). Similarly, after equalizer 21 (117) of data center 21 (102) receives SYN message 702, it finds that the request exceeds its processing capacity and decides that the connection request is to be handled by another of its sub-sites, i.e. data center 32 (105) (704). It then updates its load allocation mapping table, processes SYN message 702 according to the updated load allocation mapping table, and forwards the processed SYN message 702 to data center 32 (105). In data center 32 (105), equalizer 32 (119) handles SYN message 702 in a manner similar to Figure 11, selecting a server of its local site to handle the connection request (705). Data center 32 (105) then sends a response message SYN-ACK 706 to the user, which is further sent to user group 1 (111) through equalizer 21 (117) and equalizer 31 (113); when it passes through equalizer 21 (117) and equalizer 31 (113), each of them modifies its header address information according to the existing record in its own load allocation mapping table. After receiving response message SYN-ACK 706, user group 1 (111) establishes the connection on the client side (707) and sends a further response message ACK 708. When response message ACK 708 passes through equalizer 31 (113) and equalizer 21 (117), each of them modifies its header address information according to the existing record in its own load allocation mapping table. Finally the connection is established on the server side (709) and the server begins to provide service to the user (710), allowing the user to obtain the service (711). The uplink and downlink load traffic 712 is likewise forwarded according to the existing records in the load allocation mapping tables when it passes through equalizer 31 (113) and equalizer 21 (117).
Figure 16 is similar to Figures 12 and 14, the difference being that the request is accepted by data center 32 (105), a sibling site of data center 31 (104). As shown in Figure 16, user group 1 (111) (which may be one or more users, or their agents) first begins to send packets (801), sending uplink traffic 802 to data center 31 (104). After communication interface 201 of equalizer 31 (113) of data center 31 (104) receives uplink traffic 802, processing unit 203 analyzes uplink traffic 802 based on the information stored in memory 204 and the header information of the received uplink traffic 802, finds that the request exceeds the local processing capacity, and decides that the request is to be handled by the parent site, i.e. data center 21 (102) (803). Processing unit 203 then updates the load allocation mapping table, forwarding unit 202 processes uplink traffic 802 according to the updated load allocation mapping table, and communication interface 201 forwards the processed uplink traffic 802 to data center 21 (102). Similarly, after equalizer 21 (117) of data center 21 (102) receives uplink traffic 802, it finds that the request exceeds its processing capacity and decides that the request is to be handled by another of its sub-sites, i.e. data center 32 (105) (804). It then updates its load allocation mapping table, processes uplink traffic 802 according to the updated load allocation mapping table, and forwards the processed uplink traffic 802 to data center 32 (105). In data center 32 (105), equalizer 32 (119) handles uplink traffic 802 in a manner similar to Figure 12: it recognizes that this is a new data flow, selects a server of its local site to handle the request (805), and updates its load allocation mapping table. Data center 32 (105) then begins to provide service to the user (806), allowing the user to obtain the service (807). The uplink and downlink load traffic 808 and 809 is likewise forwarded according to the existing records in the load allocation mapping tables when it passes through equalizer 31 (113) and equalizer 21 (117).
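The forwarding chains of Figures 13 to 16 share one pattern: each equalizer on the path makes its own decision and keeps its own load allocation mapping table. The following Python sketch illustrates that pattern under assumed names (Balancer, route, local_capacity_ok); it is not the implementation of the embodiment.

```python
class Balancer:
    def __init__(self, name, local_capacity_ok, parent=None, sub_sites=None):
        self.name = name
        self.local_capacity_ok = local_capacity_ok  # callable: can the local site handle it?
        self.parent = parent                        # balancer of the upper-layer data center
        self.sub_sites = sub_sites or []            # balancers of the lower-layer data centers
        self.map_table = {}                         # per-balancer load allocation mapping table

    def route(self, flow_key):
        """Return the list of sites traversed until one accepts the request."""
        if self.local_capacity_ok():
            self.map_table[flow_key] = ("local", self.name)
            return [self.name]                      # handled by the local server group
        idle_sub = next((s for s in self.sub_sites if s.local_capacity_ok()), None)
        nxt = idle_sub or self.parent               # prefer an idle sub-site, else the parent
        self.map_table[flow_key] = ("forward", nxt.name)
        return [self.name] + nxt.route(flow_key)    # the next hop decides again

# Example corresponding to Figures 15/16: data center 31 forwards to its parent 21,
# which forwards to its other sub-site 32.
dc32 = Balancer("DC32", lambda: True)
dc31 = Balancer("DC31", lambda: False)
dc21 = Balancer("DC21", lambda: False, sub_sites=[dc31, dc32])
dc31.parent = dc21
print(dc31.route(("user1", 12345, "service", 80, "TCP")))  # ['DC31', 'DC21', 'DC32']
```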
The load allocation processes described above can thus be realized by the load equalizer, the load balancing method and the layered data center system according to the present invention, which improves the service efficiency of the layered data center system.
(Other)
In each of the above embodiments, the load equalizer of the current data center can obtain the resource usage states and link states of other data centers by the following method. For example, the load equalizer of each data center collects in real time the resource data of its own data center and the port bandwidth data of the egress router of its own data center, stores the collected information in its memory (for example in the site capacity table and the network link status table stored by the memory), and at any time or periodically (for example every 10 seconds) sends a status message to its upper-layer data center to report this information, while at the same time updating its own memory with the information received from its lower-layer data centers.
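A minimal sketch of such periodic status reporting is shown below. The message layout and the names StatusReporter, on_child_report and send_to_parent are assumptions made for illustration; the embodiment only specifies that resource data and egress-router port bandwidth data are collected and reported, for example every 10 seconds.

```python
import time

class StatusReporter:
    def __init__(self, dc_name, collect_resources, collect_link, send_to_parent, period=10):
        self.dc_name = dc_name
        self.collect_resources = collect_resources  # -> dict for the site capacity table
        self.collect_link = collect_link            # -> dict for the network link status table
        self.send_to_parent = send_to_parent        # callable(status_message)
        self.period = period                        # reporting interval in seconds
        self.site_capacity = {}                     # own entry plus lower-layer entries
        self.link_status = {}

    def _tick(self):
        # Collect local data and report it upward as a status message.
        self.site_capacity[self.dc_name] = self.collect_resources()
        self.link_status[self.dc_name] = self.collect_link()
        self.send_to_parent({"dc": self.dc_name,
                             "capacity": self.site_capacity[self.dc_name],
                             "link": self.link_status[self.dc_name]})

    def on_child_report(self, msg):
        # Update the local tables with a status message from a lower-layer data center.
        self.site_capacity[msg["dc"]] = msg["capacity"]
        self.link_status[msg["dc"]] = msg["link"]

    def run(self):
        while True:
            self._tick()
            time.sleep(self.period)
```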
In each of the above embodiments, the number of data centers (i.e. sites) included in the layered data center system is seven. In the present invention, however, the number of data centers included in the layered data center system is not limited to seven; any other number may be used as long as the data centers can be connected in a layered manner to form the layered data center system.
In each of the above embodiments, the data centers included in the layered data center system are connected in a tree topology. In the present invention, however, the connection of the data centers is not limited to a tree topology; other connection modes, such as a hybrid connection mode, may be used as long as the data centers are connected in a layered manner.
In each of the above embodiments, the load equalizer is arranged inside the corresponding data center (the first data center). In the present invention, however, the load equalizer may also be arranged outside that data center, as long as it can distribute the load traffic sent to the corresponding data center (the first data center).
In each of the above embodiments, the message formats and table formats shown in the figures are adopted. In the present invention, however, the message formats and table formats are not limited to these; other message formats and table formats may be adopted as long as they can represent the corresponding information.
In each of the above embodiments, the resource usage state of a data center is calculated as the arithmetic mean of the CPU usage and the memory utilization of the local servers of that data center. In the present invention, however, the method of calculating the resource usage state of a data center is not limited to this; for example, it may be calculated from one or more of the CPU usage, memory utilization and request processing speed of the local servers, or with other parameters appended to the calculation.
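For example, the arithmetic-mean calculation described above can be written as follows. This is only a sketch; the 0-to-1 value range and the helper name integrated_load are assumptions, and extending it with request processing speed or other parameters would simply add terms.

```python
def integrated_load(servers):
    """servers: list of (cpu_usage, mem_usage) pairs, each value in the range 0..1."""
    per_server = [(cpu + mem) / 2.0 for cpu, mem in servers]  # mean of CPU and memory per server
    return sum(per_server) / len(per_server)                  # averaged over the local servers

# Example: two local servers at 40%/60% and 70%/50% give an integrated load of 0.55.
print(integrated_load([(0.4, 0.6), (0.7, 0.5)]))  # 0.55
```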

Claims (12)

1. A load equalizer for use in a layered data center system formed by connecting a plurality of data centers in a layered manner, the load equalizer distributing the load traffic sent to a current data center that corresponds to the load equalizer among the plurality of data centers, characterized in that
the load equalizer distributes the load traffic sent to the current data center according to the resource usage states of the current data center and of a lower-layer data center, the lower-layer data center being a data center that is connected to the current data center and located in the layer below the current data center.
2. The load equalizer according to claim 1, characterized in that the load equalizer further distributes the load traffic sent to the current data center according to the link states between the current data center and the lower-layer data center and the upper-layer data center, the upper-layer data center being a data center that is connected to the current data center and located in the layer above the current data center.
3. The load equalizer according to claim 1 or 2, characterized in that the load equalizer further distributes the load traffic sent to the current data center according to the protocol type of the packets that make up the load traffic.
4. The load equalizer according to claim 3, characterized in that the load equalizer uses network-layer header information to judge the protocol type of the packets that make up the load traffic.
5. The load equalizer according to claim 1, characterized in that
when the resource usage state of the current data center is at or below a prescribed first threshold, the load equalizer allocates the load traffic sent to the current data center to the current data center;
when the resource usage state of the current data center is greater than the first threshold and at or below a prescribed second threshold, the load equalizer distributes the load traffic sent to the current data center according to the link states between the current data center and the lower-layer data center and the upper-layer data center and/or the protocol type of the packets that make up the load traffic, the second threshold being greater than the first threshold and the upper-layer data center being a data center that is connected to the current data center and located in the layer above the current data center;
when the resource usage state of the current data center is greater than the second threshold, the load equalizer allocates the load traffic sent to the current data center to the lower-layer data center or the upper-layer data center according to the resource usage state of the lower-layer data center.
6. The load equalizer according to claim 1, characterized in that the load equalizer comprises:
a communication interface, which receives and sends packets, the packets including at least load traffic and status messages;
a forwarding unit, which changes the header information of the packets, the header information including at least a source address and a destination address;
a memory, which stores the resource usage state of the current data center and the resource usage state of the lower-layer data center obtained through the status messages; and
a processing unit, which controls the forwarding unit to distribute the packets that make up the load traffic, according to the resource usage states stored in the memory and the header information of the packets received by the communication interface.
7. A load balancing method for use in a layered data center system formed by connecting a plurality of data centers in a layered manner, the method distributing the load traffic sent to a current data center among the plurality of data centers, characterized by comprising:
a judgment step of judging the resource usage states of the current data center and of a lower-layer data center, the lower-layer data center being a data center that is connected to the current data center and located in the layer below the current data center; and
an allocation step of distributing the load traffic sent to the current data center according to the judgment result of the judgment step.
8. The load balancing method according to claim 7, characterized in that in the judgment step, the link states between the current data center and the lower-layer data center and the upper-layer data center are also judged, the upper-layer data center being a data center that is connected to the current data center and located in the layer above the current data center.
9. The load balancing method according to claim 7 or 8, characterized in that in the judgment step, the protocol type of the packets that make up the load traffic is also judged.
10. The load balancing method according to claim 9, characterized in that in the judgment step, network-layer header information is used to judge the protocol type of the packets that make up the load traffic.
11. The load balancing method according to claim 7, characterized in that
when the judgment step judges that the resource usage state of the current data center is at or below a prescribed first threshold, the allocation step allocates the load traffic sent to the current data center to the current data center;
when the judgment step judges that the resource usage state of the current data center is greater than the first threshold and at or below a prescribed second threshold, the allocation step distributes the load traffic sent to the current data center according to the link states between the current data center and the lower-layer data center and the upper-layer data center, and/or the protocol type of the packets that make up the load traffic, the second threshold being greater than the first threshold and the upper-layer data center being a data center that is connected to the current data center and located in the layer above the current data center;
when the judgment step judges that the resource usage state of the current data center is greater than the second threshold, the allocation step allocates the load traffic sent to the current data center to the lower-layer data center or the upper-layer data center according to the resource usage state of the lower-layer data center.
12. A layered data center system formed by connecting a plurality of data centers in a layered manner and comprising a plurality of load equalizers respectively corresponding to the data centers, characterized in that
each load equalizer distributes the load traffic sent to the current data center that corresponds to that load equalizer according to the resource usage states of the current data center and of a lower-layer data center, the lower-layer data center being a data center that is connected to the current data center and located in the layer below the current data center.