CN100435530C - Method for realizing two-way load equalizing mechanism in multiple machine servicer system - Google Patents

Info

Publication number
CN100435530C
CN100435530C CNB2006100427623A CN200610042762A
Authority
CN
China
Prior art keywords
node
load
end server
load equalizer
server node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2006100427623A
Other languages
Chinese (zh)
Other versions
CN1859313A (en)
Inventor
伍卫国
董小社
付重钦
钱德沛
王恩东
胡雷钧
王守昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Beijing Electronic Information Industry Co Ltd
Xian Jiaotong University
Original Assignee
Inspur Beijing Electronic Information Industry Co Ltd
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Beijing Electronic Information Industry Co Ltd, Xian Jiaotong University filed Critical Inspur Beijing Electronic Information Industry Co Ltd
Priority to CNB2006100427623A priority Critical patent/CN100435530C/en
Publication of CN1859313A publication Critical patent/CN1859313A/en
Application granted granted Critical
Publication of CN100435530C publication Critical patent/CN100435530C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The present invention relates to a method for realizing a two-way load balancing mechanism in a multi-machine server system. A load balancing system composed of one or more load balancer nodes is connected to the external network; both request and return packets pass through it, so the server nodes inside the system are shielded from the outside and the server system has high security. The load balancer nodes balance the request load coming from clients, distributing request packets by modifying their destination MAC addresses, which improves performance. Return packets are evenly distributed over the load balancer nodes as they pass back through the load balancing system, so the whole server system performs bidirectional load balancing. When a load balancer node fails, its request and return packets can be transferred to the other load balancer nodes, achieving high availability.

Description

Method for implementing a two-way load balancing mechanism in a multi-machine server system
Technical field
The present invention relates to the field of computer technology, and specifically provides a two-way load balancing mechanism for a multi-machine server system.
Technical background
With the rapid development of the Internet, ever-growing applications have greatly increased the traffic handled by web servers. This has led to multi-machine server systems (such as clusters of workstations, also called cluster systems) that satisfy these growing demands. Load balancing is a key technique in multi-machine server systems: its main role is to spread the load evenly so that the whole system reaches its best performance. Load balancing systems built from several load balancer nodes are currently a widely used solution, but load balancing in the usual sense refers only to client request (uplink) packets. For response (downlink) packets returned from the server nodes, some systems send them directly from the server node to the client without any balancing; in this case the back-end server system (generally a set of server nodes interconnected by a network) is exposed to the external network, and security is poor. Other systems adopt Network Address Translation (NAT), so both request and return packets pass through the load balancing system and security is preserved, but with two drawbacks: first, address translation adds considerable overhead to the load balancing system and hurts its performance; second, the return packets are still not load balanced.
Summary of the invention
The objective of the invention is to overcome the above shortcomings of the prior art and provide a method for implementing a two-way load balancing mechanism in a multi-machine server system that is highly available and improves both the performance and the security of the server system.
The technical scheme of the present invention is carried out as follows:
1) In a multi-machine server system made up of many computers, one or more load balancer nodes form the load balancing system. Each balancer node has two Ethernet ports: one connects to the external network and receives client request packets; the other connects to the internal network and communicates with the back-end server system.
2) When a client request packet arrives, a load balancer node in the load balancing system selects the destination of the request according to the load and liveness of the back-end server nodes, rewrites the packet's destination MAC address to the MAC address of the selected back-end server node, and delivers the packet to that node.
3) Each load balancer node and the back-end server nodes must have at least one network card on the same network segment, so that rewriting the destination MAC address alone is enough to deliver the packet to the target back-end server node.
4) The internal IP addresses of all load balancer nodes and the IP addresses of all back-end server nodes are written into a configuration file kept on the console of the load balancing system.
5) On the console, the administrator numbers the load balancer nodes and the back-end server nodes with consecutive integers starting from 0, and takes these numbers modulo the total number of load balancer nodes: if back-end server node j gives remainder i, the internal IP address of balancer node i becomes the default gateway address of that server node. This spreads the balancer nodes' internal IP addresses evenly over the back-end servers' default gateways.
6) After a back-end server node finishes processing a client request, it sends the return data to its own default gateway, i.e. its corresponding load balancer node.
7) When a load balancer node is implemented on a host running the Linux operating system, its Linux kernel must be modified to accept packets arriving from outside whose source IP equals its own IP, and the kernel's forwarding function must be enabled so that return packets sent by the back-end server nodes are forwarded directly to the external network.
8) When a load balancer node is added or removed, a daemon on the console automatically updates the saved table of balancer internal IP addresses, recomputes the modulo division, repartitions the back-end server nodes among the balancer nodes, and reassigns the internal IPs of the working balancer nodes as the servers' default gateway addresses, guaranteeing the high availability and load balance of the balancer system.
9) When a back-end server node is added or removed, the administrator renumbers the back-end server nodes on the console with consecutive integers starting from 0, takes the new numbers modulo the total number of balancer nodes, and reconfigures the servers' default gateways.
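The modulo assignment of steps 4)-5) can be sketched in a few lines. This is an illustrative Python sketch, not part of the patent; the IP addresses and node counts are made up.

```python
def assign_gateways(balancer_ips, server_count):
    """Map each back-end server number j to the internal IP of
    balancer node j mod n, which becomes its default gateway."""
    n = len(balancer_ips)
    return {j: balancer_ips[j % n] for j in range(server_count)}

# Three balancer nodes numbered 0..2 by their position in the list,
# and seven back-end server nodes numbered 0..6.
balancers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
gateways = assign_gateways(balancers, 7)
# server 0 -> 10.0.0.1, server 1 -> 10.0.0.2, server 2 -> 10.0.0.3,
# server 3 -> 10.0.0.1, and so on.
```

Each server then simply configures the returned address as its default gateway, so return packets flow through the assigned balancer.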
By adopting the above method, the present invention achieves the following technical effects:
1. Two-way load balancing
The system balances request packets, and also balances return packets as they pass back through the load balancing system, improving the overall load balancing effect of the server system.
2. High performance
Forward load balancing delivers packets to the back-end server nodes by rewriting their destination MAC addresses; reverse load balancing works by setting the server nodes' gateway addresses, which in essence is also a destination-MAC rewrite. Compared with NAT, no address translation is performed on the packets, so the overhead of the load balancing system is small and the performance of the server system improves.
3. High availability
When a load balancer node fails, its tasks are migrated by dynamically changing the gateway addresses of the affected back-end server nodes to the internal IP addresses of working balancer nodes, achieving high availability.
4. Security
Both request and return packets pass through the load balancing system, and the internal server nodes of the whole multi-machine server system are shielded from the external network. Compared with the Direct Routing (DR) mechanism, this protects the security of the whole system more effectively.
5. Scalability
Load balancer nodes can be added or removed dynamically as needed, so the whole server system can reach the best cost-performance ratio.
Description of drawings
Fig. 1 is a schematic diagram of the operating principle of the present invention during forward load balancing.
Fig. 2 is a schematic diagram of the operating principle of the present invention during reverse load balancing.
Fig. 3 shows the transitions of the addresses and port numbers in a packet while the present invention performs load balancing.
Fig. 4 is a topology diagram of gateway division over all server nodes.
Fig. 5 is a topology diagram of gateway division performed separately per service pool.
The drawings illustrate one concrete use case of the present invention.
The content of the present invention is described in further detail below with reference to the drawings.
Embodiment
Referring to Fig. 1, the load balancing system is composed of several balancer nodes. Each balancer node has two Ethernet ports: one connects to the external network and receives client request packets; the other connects to the intranet and communicates with the back-end server system. The dotted ellipse above the dashed line in the figure marks where forward load balancing takes place. As Fig. 1 shows, during forward load balancing a balancer node i (i ∈ 0..n) distributes packets by rewriting their destination MAC addresses. When a client request packet passes through the load balancing system, balancer node i decides, according to a preconfigured balancing algorithm, which server node j (j ∈ 0..m) should handle the request, rewrites the packet's destination MAC address to the MAC address of server node j, and forwards it to the selected server.
Referring to Fig. 2, the dotted ellipse below the dashed line marks where reverse load balancing takes place. Reverse load balancing is realized by setting the gateway address of server node j (j ∈ 0..m) to the internal IP address of balancer node i (i ∈ 0..n). A return packet leaving server node j is forwarded to the balancer node i corresponding to j's gateway address, and balancer node i then forwards it to the client.
Referring to Fig. 3: Cip is the client IP; Vip is the single IP, commonly called the virtual IP, that the load balancing system presents to the external network; Cport is the client (network) port number and Vport the destination port number; Vmac is the virtual MAC address the load balancing system presents externally; Rmac is the MAC address of the selected back-end server node; Gmac is the MAC address of the back-end server node's gateway, i.e. the MAC address of the internal network card of the corresponding balancer node. As Fig. 3 shows, when a balancer node receives a client request packet and performs forward load balancing, the packet's destination MAC address is rewritten to the MAC address of the selected server node; when a return packet from a server node undergoes reverse load balancing, its destination MAC address is rewritten to the server's gateway address, which is exactly the MAC address of the corresponding balancer node.
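The Fig. 3 rewrites can be modeled with a toy packet structure. This sketch is illustrative only; the symbolic field values (Cip, Vip, Vmac, etc.) stand in for real addresses, and the key point it shows is that only the destination MAC changes while IPs and ports are untouched (no NAT).

```python
def forward_rewrite(pkt, real_server_mac):
    """Balancer, forward direction: aim the frame at the chosen server
    by swapping Vmac for the server's Rmac. Nothing else changes."""
    out = dict(pkt)
    out["dst_mac"] = real_server_mac
    return out

def reverse_rewrite(pkt, gateway_mac):
    """Server, reverse direction: the return frame is addressed to the
    server's default gateway, i.e. the Gmac of its assigned balancer."""
    out = dict(pkt)
    out["dst_mac"] = gateway_mac
    return out

request = {"src_ip": "Cip", "dst_ip": "Vip", "src_port": "Cport",
           "dst_port": "Vport", "dst_mac": "Vmac"}
to_server = forward_rewrite(request, "Rmac")
# IPs and ports survive both rewrites unchanged.
assert to_server["dst_ip"] == "Vip" and to_server["dst_mac"] == "Rmac"
```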
Referring to Fig. 4, all server nodes are pooled together and their numbers are taken modulo the number of balancer nodes, evenly partitioning the servers' gateways. Suppose there are n balancer nodes numbered 0 to n-1 and (k+1)n server nodes. The modulo division gives the result shown in the figure: server nodes numbered 0, n, ..., kn are assigned to balancer node 0; those numbered 1, n+1, ..., kn+1 to balancer node 1; and those numbered n-1, 2n-1, ..., kn+n-1 to balancer node n-1. The purpose of the division is to give every balancer a roughly equal share of work.
Referring to Fig. 5, the back-end server system consists of many service pools offering different services. In this case each service pool must be divided separately, that is, each pool is divided according to the method of Fig. 4 on its own. This two-level partition strategy, within and across service pools, avoids an unbalanced balancer load when a service pool contains only a few server nodes.
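The two-level strategy of Fig. 5 can be sketched as follows, again purely for illustration; the pool and balancer names are invented. Because the in-pool number k restarts from 0 in each pool, even a two-server pool spreads over different balancers instead of piling onto balancer 0.

```python
def assign_gateways_by_pool(pools, balancer_ips):
    """Apply the modulo division separately inside each service pool
    (pools: name -> ordered list of server ids), so that small pools
    still spread across the balancer nodes."""
    n = len(balancer_ips)
    mapping = {}
    for pool_name, servers in pools.items():
        for k, server in enumerate(servers):  # k = in-pool number
            mapping[server] = balancer_ips[k % n]
    return mapping

pools = {"web": ["w0", "w1"], "ftp": ["f0", "f1", "f2"]}
mapping = assign_gateways_by_pool(pools, ["b0", "b1"])
# w0->b0, w1->b1, f0->b0, f1->b1, f2->b0
```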
The two-way load balancing mechanism provided by the invention comprises forward and reverse load balancing. Forward load balancing balances uplink request packets (client-to-server requests): the load balancing system distributes requests evenly over the server nodes according to a predefined policy. Reverse load balancing distributes the packets returned after processing by the server nodes evenly over the load balancer nodes, which then return them to the client. The method applies when both request and return packets must pass through the load balancing system.
When a load balancer node fails, the whole multi-machine system is affected. The method provided by the invention solves this problem well: by dynamically modifying the gateway addresses of the back-end server nodes, traffic is redirected to any other working load balancer node. Besides guaranteeing high availability, the failed node's load is spread evenly over the other balancer nodes, avoiding the traditional situation in which a backup node takes over the failed node's entire task and absorbs all of its load.
The packet forwarding scheme provided by the present invention is: during forward load balancing, the balancer node rewrites the packet's destination MAC address; during reverse load balancing, the server nodes' gateways are set accordingly. Rewriting a destination MAC address has very little overhead, and a balancer node acting as a server's gateway merely makes a simple check and forwards each passing packet, which is equally cheap. The whole multi-machine server system therefore achieves high access performance.
The two-way load balancing mechanism of the present invention supports scalability. Scalability means satisfying ever-growing performance and functional requirements by adding resources, or cutting cost by removing them. The total service capacity of the system should grow in proportion to the resources, ideally linearly, while cost should grow less than linearly in N (the number of replicated resources) or as N log N. In the load balancing mechanism of the present invention, the number of balancer nodes can be increased or decreased dynamically with demand, and system performance is approximately linear in the number of balancer nodes.
The present invention is realized as follows. For a client request packet, forward load balancing forwards it directly to a back-end server node by rewriting its destination MAC address, with no network address translation. So that packets returned to the client are forwarded through the balancer nodes, the console of the multi-machine system keeps a table of balancer nodes and back-end server nodes; following the load balancing algorithm configured by the administrator, a partition daemon on the console divides the server nodes' gateways so that the servers correspond to the balancer nodes as evenly as possible, then sets each back-end server's default gateway according to the computed correspondence. Packets returning from a back-end server node are thus forwarded to its default gateway, i.e. the corresponding balancer node. Meanwhile a console monitor watches the state of every balancer node and back-end server node and dynamically revises the server-to-balancer correspondence, guaranteeing high availability.
Because forward load balancing works by directly rewriting the packet's destination MAC address, and reverse load balancing uses the balancer node as the back-end server's default gateway, each balancer node and the server nodes must have at least one port on the same network segment. Only then can packets entering the server system reach the server nodes through the balancer nodes, and return packets be forwarded back out through them.
During forward load balancing, many balancing algorithms can be used to distribute incoming packets evenly over the back-end server nodes, such as round robin or least connections.
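Two of the named policies in minimal form. This is an illustrative Python sketch, not the patent's implementation; the node numbers and connection counts are invented.

```python
import itertools

def round_robin(servers):
    """Yield server numbers in cyclic order (polling method)."""
    return itertools.cycle(servers)

def least_connections(active):
    """Pick the server currently holding the fewest active connections
    (active: server number -> connection count)."""
    return min(active, key=active.get)

rr = round_robin([0, 1, 2])
picks = [next(rr) for _ in range(5)]             # [0, 1, 2, 0, 1]
chosen = least_connections({0: 12, 1: 3, 2: 7})  # server 1
```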
During reverse load balancing, to distribute return packets evenly over the balancer nodes, the correspondence between back-end server nodes and balancer nodes must be divided. The concrete division method is as follows:
Suppose there are n balancer nodes and m back-end server nodes (m > n), the balancer nodes numbered 0, 1, ..., n-1 and the server nodes numbered 0, 1, ..., m-1. The dispatch algorithm is then: take each back-end server node's number modulo n; if the result is i (0 ≤ i ≤ n-1), set that server's default gateway address to the internal IP address of balancer node i.
When a balancer node fails, the gateways of the back-end server nodes must be reset to guarantee high availability, and the load must remain balanced after the reset. The repartition method is as follows:
When the balancer node numbered i fails, it is removed from the system, leaving n-1 balancer nodes. The remaining balancer nodes are renumbered consecutively from 0, the back-end server numbers are taken modulo n-1 again, and the servers' default gateway addresses are reconfigured accordingly. Alternatively, for higher efficiency, only the server nodes that corresponded to the failed balancer node i are remapped to the other balancer nodes, while all other server nodes keep their existing assignment.
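The cheaper failover variant, in which only the failed balancer's servers move, can be sketched like this. An illustrative Python sketch, with invented names; the text leaves the exact spreading policy open, so round robin over the survivors is an assumption here.

```python
def remap_after_failure(gateways, balancer_ips, failed_ip):
    """Reassign only the servers whose gateway was the failed balancer,
    spreading them round-robin over the surviving balancers; every
    other server keeps its current gateway."""
    survivors = [ip for ip in balancer_ips if ip != failed_ip]
    out, k = dict(gateways), 0
    for server, gw in gateways.items():
        if gw == failed_ip:
            out[server] = survivors[k % len(survivors)]
            k += 1
    return out

before = {0: "b0", 1: "b1", 2: "b0", 3: "b1"}
after = remap_after_failure(before, ["b0", "b1"], "b1")
# servers 1 and 3 move to b0; servers 0 and 2 are untouched
```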
To realize scalability, a similar repartition is needed when a balancer node is added. The algorithm is as follows:
When there are n balancer nodes and a new one is added, it is numbered n, making n+1 balancers in total; the back-end server numbers are taken modulo n+1 again and the servers' default gateways are reconfigured accordingly. Alternatively, for higher efficiency, some server nodes corresponding to the original n balancer nodes are remapped to the new balancer while the rest keep their assignment: when balancer node n is added, floor(m/(n+1)) server nodes in total are reassigned to it, drawn evenly from the original balancers (about floor(m/(n·(n+1))) from each), and their default gateway addresses are changed accordingly.
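The scale-out variant can be sketched similarly. This is an illustrative Python sketch with invented names; for simplicity it moves the lowest-numbered servers rather than drawing exactly evenly from each original balancer, which is an assumption, not the patent's prescription.

```python
def remap_after_addition(gateways, new_ip):
    """Move floor(m / (n+1)) servers to the newly added balancer,
    where m is the server count and n the old balancer count;
    all remaining servers keep their current gateway."""
    n_plus_1 = len(set(gateways.values())) + 1
    move = len(gateways) // n_plus_1
    out = dict(gateways)
    for server in sorted(gateways)[:move]:
        out[server] = new_ip
    return out

before = {0: "b0", 1: "b1", 2: "b0", 3: "b1", 4: "b0", 5: "b1"}
after = remap_after_addition(before, "b2")
# m=6, old n=2, so floor(6/3)=2 servers (0 and 1) move to b2
```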
If a load balancer node is implemented on a host running the Linux operating system, then to handle return packets it must enable the data forwarding function in the Linux kernel, so that packets whose destination IP is not its own are forwarded.

Claims (1)

1. A method for implementing a two-way load balancing mechanism in a multi-machine server system, characterized in that it is carried out as follows:
1) In a multi-machine server system made up of many computers, one or more load balancer nodes form the load balancing system. Each balancer node has two Ethernet ports: one connects to the external network and receives client request packets; the other connects to the internal network and communicates with the back-end server system.
2) When a client request packet arrives, a load balancer node in the load balancing system selects the destination of the request according to the load and liveness of the back-end server nodes, rewrites the packet's destination MAC address to the MAC address of the selected back-end server node, and delivers the packet to that node.
3) Each load balancer node and the back-end server nodes must have at least one network card on the same network segment, so that rewriting the destination MAC address alone is enough to deliver the packet to the target back-end server node.
4) The internal IP addresses of all load balancer nodes and the IP addresses of all back-end server nodes are written into a configuration file kept on the console of the load balancing system.
5) On the console, the administrator numbers the load balancer nodes and the back-end server nodes with consecutive integers starting from 0, and takes these numbers modulo the total number of load balancer nodes: if back-end server node j gives remainder i, the internal IP address of balancer node i becomes the default gateway address of that server node. This spreads the balancer nodes' internal IP addresses evenly over the back-end servers' default gateways.
6) After a back-end server node finishes processing a client request, it sends the return data to its own default gateway, i.e. its corresponding load balancer node.
7) When a load balancer node is implemented on a host running the Linux operating system, its Linux kernel must be modified to accept packets arriving from outside whose source IP equals its own IP, and the kernel's forwarding function must be enabled so that return packets sent by the back-end server nodes are forwarded directly to the external network.
8) When a load balancer node is added or removed, a daemon on the console automatically updates the saved table of balancer internal IP addresses, recomputes the modulo division, repartitions the back-end server nodes among the balancer nodes, and reassigns the internal IPs of the working balancer nodes as the servers' default gateway addresses, guaranteeing the high availability and load balance of the balancer system.
9) When a back-end server node is added or removed, the administrator renumbers the back-end server nodes on the console with consecutive integers starting from 0, takes the new numbers modulo the total number of balancer nodes, and reconfigures the servers' default gateways.
CNB2006100427623A 2006-04-30 2006-04-30 Method for realizing two-way load equalizing mechanism in multiple machine servicer system Expired - Fee Related CN100435530C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006100427623A CN100435530C (en) 2006-04-30 2006-04-30 Method for realizing two-way load equalizing mechanism in multiple machine servicer system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006100427623A CN100435530C (en) 2006-04-30 2006-04-30 Method for realizing two-way load equalizing mechanism in multiple machine servicer system

Publications (2)

Publication Number Publication Date
CN1859313A CN1859313A (en) 2006-11-08
CN100435530C true CN100435530C (en) 2008-11-19

Family

ID=37298177

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006100427623A Expired - Fee Related CN100435530C (en) 2006-04-30 2006-04-30 Method for realizing two-way load equalizing mechanism in multiple machine servicer system

Country Status (1)

Country Link
CN (1) CN100435530C (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101217420B (en) * 2007-12-27 2011-04-20 华为技术有限公司 A linkage processing method and device
CN101500005B (en) * 2008-02-03 2012-07-18 北京艾德斯科技有限公司 Method for access to equipment on server based on iSCSI protocol
CN101557388B (en) * 2008-04-11 2012-05-23 中国科学院声学研究所 NAT traversing method based on combination of UPnP and STUN technologies
CN101276289B (en) * 2008-05-09 2010-06-16 中兴通讯股份有限公司 Method for user and multi-inner core to perform communication in Linux system
CN101404619B (en) * 2008-11-17 2011-06-08 杭州华三通信技术有限公司 Method for implementing server load balancing and a three-layer switchboard
US20130204995A1 (en) * 2010-06-18 2013-08-08 Nokia Siemens Networks Oy Server cluster
CN102497652B (en) * 2011-12-12 2014-07-30 武汉虹信通信技术有限责任公司 Load balancing method and device for large-flow data of code division multiple access (CDMA) R-P interface
CN102523302B (en) * 2011-12-26 2015-08-19 华为数字技术(成都)有限公司 The load-balancing method of cluster virtual machine, server and system
CN104580391A (en) * 2014-12-18 2015-04-29 国云科技股份有限公司 Server bandwidth improving method suitable for cloud computing
CN105554176B (en) * 2015-12-29 2019-01-18 华为技术有限公司 Send the method, apparatus and communication system of message
CN110198337B (en) * 2019-03-04 2021-10-08 腾讯科技(深圳)有限公司 Network load balancing method and device, computer readable medium and electronic equipment
CN111010342B (en) * 2019-11-21 2023-04-07 天津卓朗科技发展有限公司 Distributed load balancing implementation method and device
CN111338454B (en) * 2020-02-29 2021-08-03 苏州浪潮智能科技有限公司 System and method for balancing server power supply load
CN111556177B (en) * 2020-04-22 2021-04-06 腾讯科技(深圳)有限公司 Network switching method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020035225A (en) * 2000-11-04 2002-05-11 남민우 Method and apparatus of server load balancing using MAC address translation
CN1403934A (en) * 2001-09-06 2003-03-19 华为技术有限公司 Load balancing method and equipment for convective medium server
US6567377B1 (en) * 1999-03-18 2003-05-20 3Com Corporation High performance load balancing of outbound internet protocol traffic over multiple network interface cards
CN1426211A (en) * 2001-12-06 2003-06-25 富士通株式会社 Server load sharing system
JP2004118622A (en) * 2002-09-27 2004-04-15 Jmnet Inc Load distributor, and method and program for the same

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6567377B1 (en) * 1999-03-18 2003-05-20 3Com Corporation High performance load balancing of outbound internet protocol traffic over multiple network interface cards
KR20020035225A (en) * 2000-11-04 2002-05-11 남민우 Method and apparatus of server load balancing using MAC address translation
CN1403934A (en) * 2001-09-06 2003-03-19 华为技术有限公司 Load balancing method and equipment for convective medium server
CN1426211A (en) * 2001-12-06 2003-06-25 富士通株式会社 Server load sharing system
JP2004118622A (en) * 2002-09-27 2004-04-15 Jmnet Inc Load distributor, and method and program for the same

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Web服务器的负载均衡" (Load balancing of Web servers). 杨厚群, 康耀红, 魏应彬. 计算机工程 (Computer Engineering), supplement, vol. 26, 2000. *
"基于机群的网络服务器系统构架研究" (Research on cluster-based network server system architecture). 范新媛, 徐国治, 陈研, 王东民. 上海大学学报(自然科学版) (Journal of Shanghai University, Natural Science), supplement, vol. 8, 2002. *

Also Published As

Publication number Publication date
CN1859313A (en) 2006-11-08

Similar Documents

Publication Publication Date Title
CN100435530C (en) Method for realizing two-way load equalizing mechanism in multiple machine servicer system
US10917351B2 (en) Reliable load-balancer using segment routing and real-time application monitoring
US10547544B2 (en) Network fabric overlay
US9397946B1 (en) Forwarding to clusters of service nodes
EP2880828B1 (en) System and method for virtual ethernet interface binding
US7697536B2 (en) Network communications for operating system partitions
US11233737B2 (en) Stateless distributed load-balancing
US20140029412A1 (en) Systems and methods for providing anycast mac addressing in an information handling system
EP2791802A1 (en) System and method for non-disruptive management of servers in a network environment
CN102904825B (en) A kind of message transmitting method based on Hash and equipment
US10097481B2 (en) Methods and apparatus for providing services in distributed switch
CN104301246A (en) Large-flow load balanced forwarding method and device based on SDN
EP4141666A1 (en) Dual user space-kernel space datapaths for packet processing operations
US20220166715A1 (en) Communication system and communication method
WO2017084228A1 (en) Method for managing traffic item in software-defined networking
WO2022216440A1 (en) Scaling host policy via distribution
US11516125B2 (en) Handling packets travelling towards logical service routers (SRs) for active-active stateful service insertion
CN101699821B (en) Method for realizing address resolution protocol in distribution type multi-kernel network system
CN101030890A (en) Flexibly grouping method and its related route apparatus
CN112073503A (en) High-performance load balancing method based on flow control mechanism
CN116195239A (en) Providing modular network services through a distributed elastic middlebox
Sun et al. Data center network architecture
Karandikar Assessment of DCNET: A New Data Center Network Architecture
WO2022216432A1 (en) Architectures for disaggregating sdn from the host
CN117879997A (en) Construction method and device for traditional bare metal access storage high-speed channel

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081119

Termination date: 20110430