CN102857438A - Synchronizing state among load balancer components - Google Patents


Info

Publication number
CN102857438A
CN102857438A · CN2011104443221A · CN201110444322A
Authority
CN
China
Prior art keywords
load balancer
data flow
destination host
multiplexer
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011104443221A
Other languages
Chinese (zh)
Other versions
CN102857438B (en)
Inventor
P·帕特尔
V·伊万诺夫
M·齐科斯
V·彼得
V·库兹涅佐夫
D·A·戴恩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp filed Critical Microsoft Corp
Publication of CN102857438A publication Critical patent/CN102857438A/en
Application granted granted Critical
Publication of CN102857438B publication Critical patent/CN102857438B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876 Network utilisation, e.g. volume of load or congestion level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/11 Identifying congestion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Multi Processors (AREA)

Abstract

The present invention extends to methods, systems, and computer program products for synchronizing state among load balancer components. Embodiments of the invention include load balancers that use a consistent hashing algorithm to decide how new connections should be load balanced. Use of a consistent hashing algorithm permits the load balancers to work in a stateless manner in steady state. The load balancers start keeping flow state information (the destination address for a given flow) about incoming packets when it is needed, such as, for example, when a change in destination host configuration is detected. State information is shared across the load balancers in a deterministic way, which makes it possible to know which load balancer is authoritative for (e.g., is the owner of) a given flow. Each load balancer can reach the authoritative load balancer to learn about a flow that cannot be determined locally.

Description

Synchronizing state among load balancer components
Technical field
The present invention relates to load balancing, and in particular to synchronizing state among load balancer components.
Background
1. Background and Relevant Art
Computer systems and related technology affect many aspects of society. Indeed, the computer system's ability to process information has transformed the way we live and work. Computer systems now commonly perform a host of tasks (e.g., word processing, scheduling, accounting, etc.) that prior to the advent of the computer system were performed manually. More recently, computer systems have been coupled to one another and to other electronic devices to form both wired and wireless computer networks over which the computer systems and other electronic devices can transfer electronic data. Accordingly, the performance of many computing tasks is distributed across a number of different computer systems and/or a number of different computing environments.
In distributed computing systems, distributed load balancers are often used to share processing load across a number of computer systems. For example, a plurality of load balancers can receive external communication directed to a plurality of processing endpoints. Each load balancer has some mechanism to ensure that all external communication from the same origin is directed to the same processing endpoint.
To enable the load balancers to make accurate decisions on where to direct external communication (e.g., to which processing endpoint), the load balancers share state with one another. For example, a decision made at one load balancer for communication from a specified origin can be synchronized across the other load balancers. Based on the synchronized state, any load balancer can then make an accurate decision to send subsequent communication from the specified origin to the same processing endpoint.
Unfortunately, maintaining synchronized state between a plurality of load balancers typically requires sizable quantities of data to be exchanged between the load balancers. As such, synchronizing state between a plurality of load balancers becomes a bottleneck and limits the scalability of the load balancers.
Summary of the invention
The present invention extends to methods, systems, and computer program products for synchronizing state among load balancer components. In some embodiments, a load balancer receives a packet from a router. The packet contains source electronic address information identifying a source on a wide area network and destination electronic address information including a virtual electronic address. The load balancer uses an algorithm to generate a data flow identifier for an existing data flow from the source electronic address information and the destination electronic address information. The load balancer determines that the packet is for the existing data flow.
The load balancer determines that it lacks sufficient information to identify, from among a plurality of destination hosts, the destination host corresponding to the existing data flow. This includes the load balancer not having cached state that maps the existing data flow to one of the plurality of destination hosts.
In response to the determination, the load balancer identifies an owner load balancer that is designated as the owner of the existing data flow. Also in response to the determination, the load balancer sends a request for data flow state information to the owner load balancer. The load balancer receives state information from the owner load balancer. The state information identifies the destination host corresponding to the existing data flow. The load balancer caches the received state information.
On subsequent packets in the data flow, the load balancer sends a message back to the owner load balancer to indicate continuation of the data flow. This continuation message need only be sent once per idle timeout interval. The idle timeout interval determines how long a data flow can keep its mapping to the same destination host even in the absence of any packets.
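The once-per-idle-timeout throttling of continuation messages can be sketched as follows. This is a minimal illustration; the class and function names, and the timeout value, are assumptions for the sketch and are not specified by the patent:

```python
import time

IDLE_TIMEOUT = 60.0  # seconds; illustrative value only


class ContinuationThrottle:
    """Tracks, per data flow, when a continuation message was last sent to
    the flow's owner load balancer, so that each flow sends at most one
    continuation message per idle timeout interval."""

    def __init__(self, idle_timeout=IDLE_TIMEOUT, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock
        self.last_sent = {}  # flow_id -> timestamp of last continuation

    def should_send(self, flow_id):
        """Return True (and record the send) if a continuation message is
        due for this flow; False if one was already sent this interval."""
        now = self.clock()
        last = self.last_sent.get(flow_id)
        if last is None or now - last >= self.idle_timeout:
            self.last_sent[flow_id] = now
            return True
        return False
```

A load balancer would consult `should_send` on each subsequent packet of a flow and emit the continuation message only when it returns `True`, keeping per-flow synchronization traffic bounded regardless of packet rate.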
In other embodiments, a load balancer determines that a received packet is for an existing data flow. The load balancer determines that it is not the owner of the existing data flow. The load balancer determines that it has cached state for the existing data flow. The cached state maps the existing data flow to one of the plurality of destination hosts. The load balancer sends the received packet to the destination host mapped to the existing data flow. The load balancer determines whether it needs to send a data flow continuation message to the owner load balancer. The load balancer sends the cached state to the owner load balancer.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
Brief description of the drawings
To describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Fig. 1 illustrates an example computer architecture that facilitates synchronizing state among load balancer components.
Fig. 2 illustrates a flow chart of an example method for sharing state between load balancers.
Fig. 3 illustrates a flow chart of an example method for sharing state between load balancers.
Figs. 4A and 4B illustrate an example computer architecture for sharing state between multiplexers.
Figs. 5A and 5B illustrate an example computer architecture for sharing state between multiplexers.
Figs. 6A, 6B, 6C, and 6D illustrate an example computer architecture for maintaining mappings of data flows to destination hosts.
Figs. 7A and 7B illustrate an example computer architecture for maintaining mappings of data flows to owner multiplexers.
Detailed description
The present invention extends to methods, systems, and computer program products for synchronizing state among load balancer components. In some embodiments, a load balancer receives a packet from a router. The packet contains source electronic address information identifying a source on a wide area network and destination electronic address information including a virtual electronic address. The load balancer uses an algorithm to generate a data flow identifier for an existing data flow from the source electronic address information and the destination electronic address information. The load balancer determines that the packet is for the existing data flow.
The load balancer determines that it lacks sufficient information to identify, from among a plurality of destination hosts, the destination host corresponding to the existing data flow. This includes the load balancer not having cached state that maps the existing data flow to one of the plurality of destination hosts.
In response to the determination, the load balancer identifies an owner load balancer that is designated as the owner of the existing data flow. Also in response to the determination, the load balancer sends a request for data flow state information to the owner load balancer. The load balancer receives state information from the owner load balancer. The state information identifies the destination host corresponding to the existing data flow. The load balancer caches the received state information.
On subsequent packets in the data flow, the load balancer sends a message back to the owner load balancer to indicate continuation of the data flow. This continuation message need only be sent once per idle timeout interval. The idle timeout interval determines how long a data flow can keep its mapping to the same destination host even in the absence of any packets.
In other embodiments, a load balancer determines that a received packet is for an existing data flow. The load balancer determines that it is not the owner of the existing data flow. The load balancer determines that it has cached state for the existing data flow. The cached state maps the existing data flow to one of the plurality of destination hosts. The load balancer sends the received packet to the destination host mapped to the existing data flow. The load balancer determines whether it needs to send a data flow continuation message to the owner load balancer. The load balancer sends the cached state to the owner load balancer.
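The packet-handling paths just described (cached state available versus state fetched from the owner) can be combined into one dispatch sketch. The function and parameter names here are illustrative assumptions, not the patent's implementation:

```python
def handle_packet(flow_id, cached_state, query_owner, forward):
    """Route a packet belonging to an existing data flow.

    cached_state: dict mapping flow_id -> destination host (local cache)
    query_owner:  callable(flow_id) -> destination host; asks the owner
                  load balancer for the flow's state
    forward:      callable(destination) that sends the packet onward
    Returns the destination host chosen.
    """
    dest = cached_state.get(flow_id)
    if dest is None:
        # No local state: learn the mapping from the owner load balancer
        # and cache it for subsequent packets in this flow.
        dest = query_owner(flow_id)
        cached_state[flow_id] = dest
    forward(dest)
    return dest
```

In either branch the packet ends up at the same destination host for the life of the flow; the only difference is whether a round trip to the owner load balancer was needed first.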
Embodiments of the present invention may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present invention also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) include RAM, ROM, EEPROM, CD-ROM, DVD or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired and wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to less volatile computer storage media (devices) at a computer system. Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Fig. 1 illustrates an example computer architecture 100 that facilitates synchronizing state among load balancer components. Referring to Fig. 1, computer architecture 100 includes router 102, load balancing manager 103, multiplexers 106, and destination hosts 107. Each of the depicted computer systems is connected to one another over (or is part of) a network, such as, for example, a Local Area Network ("LAN") and/or a Wide Area Network ("WAN"). Router 102 is further connected to network 101. Network 101 can be a further WAN, such as, for example, the Internet. Accordingly, each of the depicted components, as well as any other connected computer systems and their components, can create message related data and exchange message related data over the networks (e.g., Internet Protocol ("IP") datagrams and other higher layer protocols that utilize IP datagrams, such as Transmission Control Protocol ("TCP"), Hypertext Transfer Protocol ("HTTP"), Simple Mail Transfer Protocol ("SMTP"), etc.).
Generally, router 102 interfaces between network 101 and the other components of computer architecture 100 to appropriately route packets between network 101 and those other components. Router 102 can be configured to receive messages from network 101 and forward those messages to appropriate components of computer architecture 100. For example, router 102 can be configured to forward IP traffic for a Virtual IP address ("VIP") to the IP addresses of physical interfaces at multiplexers 106. Router 102 can support Equal Cost Multi-Path ("ECMP") routing to essentially any number (e.g., 4, 8, 16, etc.) of IP addresses. As such, a plurality of the multiplexers 106 can be configured as active multiplexers. In other embodiments (e.g., when ECMP is not supported), one multiplexer can be configured as the active multiplexer and zero or more other multiplexers can be configured as standby multiplexers.
In other embodiments, a Domain Name Services ("DNS") round robin approach is used. One or more VIPs are assigned to, and shared among, a plurality of the multiplexers 106. A Domain Name Services ("DNS") name is registered to resolve to the one or more VIPs. If a multiplexer 106 fails, the VIPs it owns are failed over to other multiplexers 106.
In further embodiments, a VIP is configured in the network interface card of each multiplexer. One of the multiplexers (e.g., a master node) is set up to respond to Address Resolution Protocol ("ARP") requests for the VIP. As such, router 102 can send any packets for the VIP to the master node. In turn, the master node can perform layer-2 forwarding based on current state and/or load balancer rules. Using a "master node" can alleviate flooding, and forwarding at layer 2 is much easier than at layer 3.
As depicted, multiplexers 106 include a plurality of multiplexers, including multiplexers 106A, 106B, and 106C. Destination hosts 107 include a plurality of destination hosts, including destination hosts 107A, 107B, and 107C. Generally, each multiplexer 106 is configured to receive a packet, identify the appropriate destination host for the packet, and forward the packet to that appropriate destination host. In some embodiments, the appropriate destination host 107 for a packet is identified from one or more of: the contents of the packet, whether the packet is for an existing data flow or a new data flow, the configuration of destination hosts 107, state cached at the receiving multiplexer, and state cached at other multiplexers.
Each multiplexer includes an ID generator, an owner detector, and a state manager. For example, multiplexers 106A, 106B, and 106C include ID generators 141A, 141B, and 141C, owner detectors 142A, 142B, and 142C, and state managers 143A, 143B, and 143C, respectively. Each ID generator is configured to generate a data flow ID for a packet based on the contents of the packet. In some embodiments, a 5-tuple of (source IP:port, VIP:port, IP protocol) is used to represent and/or generate a data flow ID. In other embodiments, a subset of the 5-tuple is used. A new data flow ID can be mapped to a destination host 107 identified by, for example, (destination host IP:port). A corresponding state manager can cache state mapping the data flow ID to the (destination host IP:port). As such, when further packets with the same flow ID are received, the multiplexer can refer to the cached state to identify the appropriate destination host 107 for each of the further packets.
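Flow-ID generation from the 5-tuple might look like the following sketch. The particular hash (SHA-256) and field encoding are assumptions for illustration; the patent does not prescribe a specific hash, only that the derivation be deterministic:

```python
import hashlib


def flow_id(src_ip, src_port, vip, vip_port, protocol):
    """Derive a stable data flow ID from the 5-tuple
    (source IP:port, VIP:port, IP protocol)."""
    key = f"{src_ip}:{src_port}|{vip}:{vip_port}|{protocol}".encode()
    # Any deterministic hash works, as long as every multiplexer uses the
    # same one, so that all of them derive the same ID for the same flow.
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big")
```

Because the ID depends only on packet fields, any multiplexer that receives any packet of a flow computes the same ID without coordination, which is what lets the cached state be keyed consistently across the whole load balancer tier.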
In some embodiments, different portions of the data flow ID space are "owned" by different multiplexers. The owner detector at each multiplexer is configured to determine the owner multiplexer for a data flow ID. In turn, an owner detector can receive a data flow ID as input and return the IP address of the owner multiplexer as output. As such, each multiplexer can send state to, and/or request state from, the owner multiplexer for any data flow ID. For example, when a multiplexer has identified the appropriate destination host for a data flow ID, the multiplexer can (in addition to caching it) forward the appropriate destination host to the owner multiplexer for that data flow ID. On the other hand, when a multiplexer lacks sufficient information to identify the appropriate destination host for a data flow ID, the multiplexer can query the owner multiplexer for the appropriate destination host.
In some embodiments, when a multiplexer lacks sufficient information to identify the appropriate destination host for a packet corresponding to a data flow ID, the multiplexer sends the packet to the owner multiplexer for the data flow ID. In response to receiving the packet, the owner multiplexer determines the appropriate destination host for the data flow ID. The owner multiplexer also sends state (either generated and cached at the owner multiplexer or received from other multiplexers) to the multiplexer, the state mapping the data flow ID to the appropriate destination host.
In other embodiments, when a multiplexer lacks sufficient information to identify the appropriate destination host for a data flow ID, the multiplexer sends an express request for cached state to the owner multiplexer for the data flow ID. In response to receiving the express request, the owner multiplexer sends state (either generated and cached at the owner multiplexer or received from other multiplexers) to the multiplexer, the state mapping the data flow ID to the appropriate destination host. In turn, the multiplexer sends the packet to the appropriate destination host.
Generally, load balancing manager 103 is configured to monitor for transitions in the arrangement of destination hosts 107 (e.g., when a new destination host is added). From time to time, destination array generator 104 can (e.g., using a hash function) formulate an array that maps data flow IDs to destination hosts. Load balancing manager 103 can maintain two versions of the array: the current version (e.g., new array 109) and the previous version (e.g., old array 108). Positions in an array can correspond to data flow IDs. For example, array position 1 can correspond to data flow ID 1, etc. As such, as depicted in arrays 108 and 109, destination host 107B is the appropriate destination host for data flow ID 1.
Load balancing manager 103 can send both versions of the array to multiplexers 106. When the arrangement of destination hosts 107 is in a steady state, the mappings in the current and previous versions of the array match. As such, in steady state, multiplexers 106 can refer to the mappings to determine where to send a packet with a given data flow ID (even when the multiplexer lacks cached state).
On the other hand, when the arrangement of destination hosts 107 is in transition (e.g., when a new destination host 107 is being added), the mappings in the current and previous versions of the array differ. For example, when a new destination host 107 is added, the data flow ID space can be spread out across more destination hosts 107 to reduce the load on each destination host 107. For example, difference 111 indicates that a portion of the data flow ID space (e.g., data flow ID 3) that previously corresponded to destination host 107D now corresponds to destination host 107C. To increase the likelihood that the packets of an existing data flow continue to the same destination host, multiplexers 106 can refer to cached state (either locally or by querying the owner multiplexer) when the arrangement of destination hosts 107 is in transition.
Fig. 2 illustrates a flow chart of an example method 200 for sharing state between load balancers. Method 200 will be described with respect to the components and data of computer architecture 100.
Method 200 includes an act of the load balancer receiving a packet from a router, the packet containing source electronic address information identifying a source on the wide area network and destination electronic address information including a virtual electronic address (act 201). For example, multiplexer 106A can receive packet 121 from router 102. Packet 121 contains source (e.g., IP) address 122 identifying a source on network 101. Packet 121 also contains destination address 123. Destination address 123 can be a virtual IP address used to contact destination hosts 107.
Method 200 includes an act of the load balancer determining that the packet is for an existing data flow (act 202). For example, multiplexer 106A can determine that packet 121 is for an existing data flow. The first packet in a data flow (e.g., a Transmission Control Protocol (TCP) SYN packet) can include a first-packet indicator. Other packets in the data flow (e.g., TCP non-SYN packets) do not include the first-packet indicator. As such, when a packet does not include the first-packet indicator, a multiplexer can infer that the packet is for an existing data flow. Multiplexer 106A can determine that packet 121 does not include a first-packet indicator. As such, multiplexer 106A infers that packet 121 is for an existing data flow.
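For TCP, the first-packet indicator described above is the SYN flag in the TCP header. A minimal sketch of the inference (the flag constants follow the TCP header layout; the function name is an assumption):

```python
# TCP header flag bits (per the TCP header layout)
SYN = 0x02
ACK = 0x10


def is_existing_flow(tcp_flags):
    """Per the scheme above: a SYN packet marks a new flow; a non-SYN
    packet is inferred to belong to an existing flow."""
    return not (tcp_flags & SYN)
```

This check is purely local to the packet, so the multiplexer can classify new-versus-existing without any lookup before deciding whether per-flow state is even relevant.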
Method 200 includes an act of the load balancer using an algorithm to generate a data flow identifier for the existing data flow from the source electronic address information and the destination electronic address information (act 203). For example, an ID generator can use a hash function to hash source address 122 and destination address 123 into flow ID 144. Flow ID 144 can represent, for example, an index position in new array 109. For example, flow ID 144 can represent the fourth position in new array 109. In some embodiments, a hashing algorithm is used to hash a source IP address and VIP into a data flow identifier.
Method 200 includes an act of the load balancer determining that the load balancer lacks sufficient information to identify, from among a plurality of destination hosts, the destination host corresponding to the existing data flow (act 204). For example, multiplexer 106A can determine that it lacks sufficient information to identify, from among destination hosts 107, the appropriate destination host corresponding to flow ID 144.
Act 204 can include an act of the load balancer determining that the load balancer does not have any cached state mapping the existing data flow to one of the plurality of destination hosts (act 205). For example, state manager 143A can determine that multiplexer 106A does not have any cached state mapping flow ID 144 to one of the destination hosts 107. State manager 143A can refer to state 146A (cached state) to check for a destination host mapping for flow ID 144.
Act 204 can also include an act of the load balancer detecting that the arrangement of the plurality of destination hosts is in transition. For example, multiplexer 106A can detect a transition in the arrangement of destination hosts 107. During the lifetime of one or more existing data flows (e.g., flow ID 144), destination host 107C can be added to destination hosts 107. Destination array generator 104 can detect the change. In response, destination array generator 104 can generate new array 109. Multiplexer 106A can refer to old array 108 and new array 109. From at least difference 111, multiplexer 106A detects the transition. That is, a portion of the data flow ID space is now assigned to destination host 107C.
Method 200 includes, in response to the determination that the load balancer lacks sufficient information to identify the destination host corresponding to the existing data flow, an act of the load balancer identifying an owner load balancer designated as the owner of the existing data flow, the owner load balancer being selected from among the one or more other load balancers (act 206). For example, multiplexer 106A can identify multiplexer 106B as the owner of flow ID 144. Owner detector 142A can receive flow ID 144 as input and output the IP address of multiplexer 106B as the owner of flow ID 144.
In some embodiments, multiplexer 106A uses a second hashing algorithm to hash source address 122 and destination address 123 into a second hash value. The second hash value represents an index position into an owner array (for example, as depicted in Figs. 7A and 7B). The owner array maps data flows to corresponding owner multiplexers, which maintain state for the mapped data flows when a transition is detected. As such, multiplexer 106A can refer to the index position for flow ID 144 in the owner array to identify multiplexer 106B as the owner of flow ID 144.
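The second hash and owner-array lookup can be sketched as follows (the array contents, the salt, and the function names are hypothetical; in the architecture described here the owner array is maintained centrally, not hard-coded):

```python
import hashlib

# Hypothetical owner array mapping index positions to owner multiplexers.
OWNER_ARRAY = ["106B", "106C", "106A", "106B"]

def owner_of(source_ip: str, vip: str) -> str:
    """Second hash: map the address pair to an index position in the owner array."""
    digest = hashlib.sha256(f"owner|{source_ip}|{vip}".encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(OWNER_ARRAY)
    return OWNER_ARRAY[index]

# Any multiplexer resolving the same flow reaches the same owner multiplexer.
owner = owner_of("203.0.113.7", "198.51.100.1")
```

Using a second, independent hash keeps the owner assignment decoupled from the destination-host assignment, so the two arrays can be rebalanced separately.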
Load balancing manager 103 can monitor multiplexers 106 and adjust the primary owner array and the backup owner array for data flows as multiplexers are added to or deleted from multiplexers 106. Load balancing manager 103 can distribute data flow ownership to balance (to the extent possible) primary and backup ownership across multiplexers 106.
Method 200 also includes, in response to the determination that the load balancer lacks sufficient information to identify the destination host corresponding to the existing data flow, an act of the load balancer sending a request for data flow state information to the owner load balancer (act 207). For example, multiplexer 106A can send packet 121 to multiplexer 106B. Alternatively, multiplexer 106A can hold packet 121 and send an express request for data flow state information for flow ID 144 to multiplexer 106B.
Multiplexer 106B can receive packet 121 from multiplexer 106A. Upon receiving packet 121, ID generator 141B can generate flow ID 144 from source address 122 and destination address 123. Owner detector 142B can then determine that multiplexer 106B is the owner of flow ID 144. State manager 143B can refer to state 146B to access state 126. State 126 can map flow ID 144 to destination host 107B. If no state is found, multiplexer 106B can generate new state 126 using the current destination array. Multiplexer 106B can send packet 121 to destination host 107B. Multiplexer 106B can return state 126 to multiplexer 106A. Multiplexer 106B can also send state 126 to the backup owner for the flow.
Alternatively, multiplexer 106B can receive from multiplexer 106A an express request for data flow state information for flow ID 144. Owner detector 142B can determine that multiplexer 106B is the owner of flow ID 144. State manager 143B can refer to state 146B to access state 126. Multiplexer 106B can return state 126 to multiplexer 106A.
Method 200 includes an act of the load balancer receiving state information from the owner load balancer, the state information identifying the destination host corresponding to the existing data flow (act 208). For example, multiplexer 106A can receive state 126 from multiplexer 106B. Method 200 includes an act of the load balancer caching the received state information (act 209). For example, multiplexer 106A can cache state 126 in state 146A. When multiplexer 106A receives state 126 in response to an express request, multiplexer 106A can then send packet 121 to destination host 107B.
Further, upon receiving subsequent packets for flow ID 144 (even though multiplexer 106B sent packet 121 to destination host 107B), multiplexer 106A can identify destination host 107B as the appropriate destination host for the subsequent packets. For example, multiplexer 106A can receive packet 132. Packet 132 contains source address 122 and destination address 123. ID generator 141A can determine that packet 132 corresponds to flow ID 144. State manager 143A can refer to state 146A to identify destination host 107B as the appropriate destination host for flow ID 144. Multiplexer 106A can then send packet 132 to destination host 107B.
Other multiplexers can also receive packets for flow ID 144. If those multiplexers have cached state for flow ID 144 (whether generated locally or queried from another multiplexer), they can send the packets on to destination host 107B. For example, multiplexer 106C can receive packet 131. Packet 131 contains source address 122 and destination address 123. ID generator 141C can determine that packet 131 corresponds to flow ID 144. State manager 143C can refer to state 146C to identify destination host 107B as the appropriate destination host for flow ID 144. Multiplexer 106C can then send packet 131 to destination host 107B.
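The owner query-and-cache exchange described above (acts 207-209) can be sketched as a simplified, single-process model; the class, the variable names, and the in-memory message passing are hypothetical simplifications of the multiplexer-to-multiplexer protocol:

```python
class Mux:
    """Hypothetical multiplexer with a local cache of flow state."""
    def __init__(self, name):
        self.name = name
        self.state = {}  # flow ID -> destination host (the cached state)

    def handle(self, fid, owner, dest_array):
        if fid in self.state:              # cached state: route locally
            return self.state[fid]
        if owner is self:                  # this mux owns the flow:
            dest = dest_array[fid % len(dest_array)]  # consult destination array
            self.state[fid] = dest
            return dest
        dest = owner.handle(fid, owner, dest_array)   # request state from owner
        self.state[fid] = dest             # receive and cache the reply
        return dest

mux_a, mux_b = Mux("106A"), Mux("106B")
dests = ["107A", "107B", "107C"]
first = mux_a.handle(144, mux_b, dests)  # owner round trip for the first packet
again = mux_a.handle(144, mux_b, dests)  # subsequent packet served from cache
```

Only the first packet of a flow triggers a round trip to the owner; once the reply is cached, subsequent packets are forwarded without contacting the owner again.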
Additionally, when the arrangement of destination hosts is in transition, a multiplexer having state for an existing data flow can send that state to the owner multiplexer for the data flow when the existing data flow maps to different destination hosts in the old and new destination arrays. For example, the addition of destination host 107C may cause a transition in destination hosts 107. Upon detecting the transition, multiplexer 106C may have state for one or more existing data flows that are owned by other multiplexers, for example, multiplexer 106A and/or multiplexer 106B. In response to detecting the transition, multiplexers 106 can send state for existing data flows that have different destination hosts in the old and new destination arrays to the appropriate owner multiplexers. For example, multiplexer 106C can send state for flow ID 144 to multiplexer 106A (not shown). During a transition, appropriate owner multiplexers can receive state from other multiplexers. For example, multiplexer 106A can receive state for flow ID 144 from multiplexer 106C (not shown).
Fig. 3 illustrates a flow chart of an example method 300 for sharing state among load balancers. Method 300 will be described with respect to the components and data of computer architecture 100.
Method 300 includes an act of a load balancer, one of one or more other load balancers, receiving a packet from another load balancer, the packet containing source electronic address information identifying a source on the wide area network and destination electronic address information including the virtual electronic address (act 301). For example, multiplexer 106B can receive packet 121 from multiplexer 106A. Method 300 includes an act of the load balancer determining that the received packet is for an existing data flow (act 302). For example, ID generator 141B can determine that packet 121 corresponds to flow ID 144. Method 300 includes an act of the load balancer determining that the load balancer is the owner of the existing data flow (act 303). For example, owner detector 142B can determine that multiplexer 106B is the owner of flow ID 144.
Method 300 includes an act of the load balancer determining that the load balancer has cached state for the existing data flow, the cached state mapping the existing data flow to a destination host in the plurality of destination hosts (act 304). For example, state manager 143B can refer to state 146B to access state 126. State 126 can indicate that flow ID 144 corresponds to destination host 107B. Alternatively, state manager 143B can generate state 126.
Method 300 includes an act of the load balancer sending the received packet to the destination host mapped to the existing data flow (act 305). For example, multiplexer 106B can send packet 121 to destination host 107B. Method 300 includes an act of the load balancer sending the cached state to the other load balancer (act 306). For example, multiplexer 106B can return state 126 to multiplexer 106A.
Alternatively, multiplexer 106B can receive from multiplexer 106A an express request for state mapping flow ID 144 to the appropriate destination host 107. In response, state manager 143B can refer to state 146B to access state 126. State 126 can indicate that flow ID 144 corresponds to destination host 107B. Multiplexer 106B can return state 126 to multiplexer 106A. Multiplexer 106A can then send packet 121 to destination host 107B based on the mapping in state 126.
Figs. 4A and 4B illustrate an example computer architecture 400 for sharing state among multiplexers. As depicted, computer architecture 400 includes multiplexers 401A and 401B and destination hosts 402A, 402B and 402C. In Fig. 4A, multiplexer 401B receives packet 421. Multiplexer 401B determines that it lacks sufficient information to identify the appropriate destination host. In response, multiplexer 401B sends packet 421 to multiplexer 401A (the owner multiplexer). Multiplexer 401A can receive packet 421 from multiplexer 401B. Multiplexer 401A identifies state 426 and returns state 426 to multiplexer 401B. State 426 maps the data flow of packet 421 to destination host 402B. Multiplexer 401A can also forward packet 421 to destination host 402B. Subsequently, multiplexer 401B receives packets 422 and 423, which belong to the same data flow as packet 421. Based on state 426, multiplexer 401B sends packets 422 and 423 to destination host 402B.
In Fig. 4B, multiplexer 401B receives packet 431. Multiplexer 401B determines that it lacks sufficient information to identify the appropriate destination host. In response, multiplexer 401B sends packet 431 to multiplexer 401A (the owner multiplexer). Multiplexer 401A can receive packet 431 from multiplexer 401B. Multiplexer 401A identifies state 436 and returns state 436 to multiplexer 401B. State 436 maps the data flow of packet 431 to destination host 402B. Multiplexer 401A sends packet 431 to destination host 402B.
However, before receiving state 436, multiplexer 401B receives packet 432, which belongs to the same data flow as packet 431. Since multiplexer 401B has not yet received state 436, multiplexer 401B determines that it lacks sufficient information to identify the appropriate destination host. In response, multiplexer 401B also sends packet 432 to multiplexer 401A. Multiplexer 401A can receive packet 432 from multiplexer 401B. Multiplexer 401A determines that it has already sent state 436 to multiplexer 401B. Multiplexer 401A sends packet 432 to destination host 402B. Subsequently, multiplexer 401B receives packet 433, which belongs to the same data flow as packet 431. Based on state 436, multiplexer 401B sends packet 433 to destination host 402B. Accordingly, embodiments of the invention can compensate for delays in state exchanges between multiplexers.
Figs. 5A and 5B illustrate an example computer architecture 500 for sharing state among multiplexers. As depicted, computer architecture 500 includes multiplexers 501A, 501B and 501C and destination hosts 502A, 502B and 502C.
In Fig. 5A, multiplexer 501A is the primary owner of an existing data flow that includes packets 521 and 522 (packets 521 and 522 are non-SYN packets). Multiplexer 501C is the backup owner of the data flow that includes packets 521 and 522.
Multiplexer 501A receives packet 521. Multiplexer 501A determines that it is the owner of the existing data flow and that it lacks sufficient information to identify the appropriate destination host (i.e., multiplexer 501A lacks cached state for the existing data flow). In response, multiplexer 501A refers to the current destination array (for example, new array 109) to identify destination host 502A as the appropriate destination host. Multiplexer 501A also begins tracking state 526 for the existing data flow. Upon a subsequent transition and a determination that state 526 differs from the new array, multiplexer 501A sends state 526 to multiplexer 501C. Multiplexer 501C receives state 526 from multiplexer 501A and caches state 526. State 526 maps the existing data flow to destination host 502A. Thus, if multiplexer 501A fails, multiplexer 501C can take over providing state 526 to other multiplexers.
In Fig. 5B, multiplexer 501A is the primary owner of an existing data flow that includes packets 531 and 532 (packets 531 and 532 are non-SYN packets). Multiplexer 501C is the backup owner of the data flow that includes packets 531 and 532.
Multiplexer 501B receives packets 531 and 532. Multiplexer 501B has sufficient information to determine that destination host 502A is the appropriate destination host for the existing data flow (i.e., either the existing data flow is a new flow, or the multiplexer has cached information about the flow). Multiplexer 501B also determines that multiplexer 501A is the primary owner of the existing data flow. Upon a change in the destination array, multiplexer 501B detects the transition and sends state 536 to multiplexer 501A. Multiplexer 501A receives state 536 from multiplexer 501B. State 536 maps the existing data flow to destination host 502A.
If more packets belonging to the same flow arrive at multiplexer 501B, multiplexer 501B keeps sending batch updates 538 (containing state 536 and other states for which multiplexer 501A is the owner) to owner multiplexer 501A from time to time, so that the owner multiplexer always has current information about all the flows it owns.
From time to time, multiplexer 501A can send batch state updates to other backup owners. For example, multiplexer 501A can send state 537 to multiplexer 501C. Multiplexer 501C can receive state 537 from multiplexer 501A. State 537 can be a batch state update (including state 536) for the active flows tracked by multiplexer 501A.
Figs. 6A, 6B, 6C and 6D illustrate an example computer architecture 600 for maintaining mappings of data flows to destination hosts. Fig. 6A depicts an arrangement of destination host A 601, destination host B 602 and destination host C 603 in a steady state. Accordingly, old array 608 and new array 609 match one another. In steady state, a multiplexer can refer to either array to determine the appropriate destination host for a data flow.
Fig. 6B depicts an arrangement of destination host A 601, destination host B 602 and destination host C 603 in which destination host C 603 has been removed. The removal of a destination host can be essentially instantaneous. As such, the removal of a destination host does not necessarily indicate a transition in the arrangement of destination hosts. Thus, after a destination host has been removed, a multiplexer can still refer to an array to determine the appropriate destination host for a data flow.
Fig. 6C depicts an arrangement of destination host A 601, destination host B 602, destination host C 603 and destination host D 604 in which destination host C 603 has been replaced by destination host D 604. The replacement of a destination host can also be essentially instantaneous. As such, the replacement of a destination host does not necessarily indicate a transition in the arrangement of destination hosts. Thus, after a destination host has been replaced, a multiplexer can still refer to an array to determine the appropriate destination host for a data flow.
Fig. 6D depicts an arrangement of destination host A 601, destination host B 602, destination host C 603 and destination host D 604 in which destination host D 604 has been added. The addition of a destination host can involve a transition period, and thus a transition in the arrangement of destination hosts. During the transition period, the mappings in old array 608 and new array 609 can differ (since some data flows are reallocated to destination host D 604 to balance the workload). When differing mappings are detected, multiplexers can track and exchange state for the affected data flows. When all owner multiplexers have sufficient information to make decisions about the flows they own, the arrangement of destination hosts returns to steady state, and old array 608 and new array 609 again match.
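The old-array/new-array comparison that distinguishes a steady state from a transition can be sketched as follows (the array contents are hypothetical; only the comparison logic is illustrated):

```python
# Hypothetical destination arrays indexed by flow ID. Host D has been
# added, so some positions are reallocated to D to balance the workload.
old_array = ["A", "B", "C", "A", "B", "C", "A", "B"]
new_array = ["A", "B", "D", "A", "B", "D", "A", "B"]

# Flow IDs whose destination differs between the arrays are the only
# flows whose state must be tracked and exchanged during the transition.
in_transition = [fid for fid, (old, new) in enumerate(zip(old_array, new_array))
                 if old != new]

steady_state = old_array == new_array  # True again once the transition ends
```

Note that most positions are unchanged, so only a small fraction of flows needs any state exchange at all.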
Figs. 7A and 7B illustrate an example computer architecture 700 for maintaining mappings of data flows to owner multiplexers. Fig. 7A depicts multiplexer A 701, multiplexer B 702, multiplexer C 703 and multiplexer D 704. Primary owner array 708 maps data flows to primary owner multiplexers. Backup owner array 709 maps data flows to backup owner multiplexers. Positions in the arrays can correspond to data flow IDs. For example, the primary owner of data flow ID 6 is multiplexer B 702. Similarly, the backup owner of data flow ID 6 is multiplexer C 703. In some embodiments, an owner detector (e.g., 142A, 142B, 142C, etc.) uses a data flow ID as an index position into an array and sends state updates to the multiplexer identified at that index position.
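The primary/backup ownership lookup, including failover to the backup owner when the primary has failed, can be sketched as follows (the array contents and names are hypothetical; the point is that every index position has a backup owner distinct from its primary owner):

```python
# Hypothetical ownership arrays indexed by data flow ID; at each index
# position the backup owner differs from the primary owner.
PRIMARY_OWNERS = ["A", "A", "B", "B", "C", "C", "D", "D"]
BACKUP_OWNERS  = ["B", "C", "D", "A", "B", "D", "A", "C"]

def owner_for(fid: int, alive: set) -> str:
    """Return the primary owner, or the backup owner if the primary failed."""
    primary = PRIMARY_OWNERS[fid % len(PRIMARY_OWNERS)]
    return primary if primary in alive else BACKUP_OWNERS[fid % len(BACKUP_OWNERS)]

all_up = owner_for(6, {"A", "B", "C", "D"})   # primary owner answers
c_down = owner_for(4, {"A", "B", "D"})        # backup takes over for failed C
```

Because the backup already holds the state replicated by the primary, failover changes only which multiplexer answers queries, not the flow-to-host mappings themselves.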
When a multiplexer fails, the primary and backup ownership responsibilities of that multiplexer can be redistributed. Fig. 7B depicts a failure of multiplexer C 703. In response to the failure, primary ownership of index positions (data flow IDs) 9-12 and backup ownership of index positions (data flow IDs) 5-8 are redistributed to the remaining multiplexers.
Accordingly, embodiments of the invention include load balancers that use a consistent hashing algorithm to decide how new connections are load balanced. Use of a consistent hashing algorithm allows the load balancers to minimize the amount of state that needs to be exchanged. In particular, only flow states that cannot be determined using the hash and the destination array need to be synchronized. A load balancer keeps state information about incoming packets (the destination address for a given flow). When needed, for example upon detecting a change in the destination host configuration, selected state information is shared across the load balancers in a deterministic manner, allowing an authoritative (e.g., owner) load balancer to select the correct destination host for a given flow. Each load balancer can contact the authoritative load balancer to learn about flows it cannot determine locally.
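The minimal-remapping property of consistent hashing, which is what bounds the amount of state that must be exchanged, can be sketched with a generic hash-ring implementation (this is an illustrative textbook construction with virtual nodes, not the patent's specific algorithm):

```python
import hashlib

def _h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

def build_ring(hosts, vnodes=50):
    """Place several virtual points per host on a hash ring."""
    return sorted((_h(f"{host}#{i}"), host) for host in hosts for i in range(vnodes))

def lookup(ring, key):
    """A key belongs to the first host point at or after its hash value."""
    k = _h(key)
    for point, host in ring:
        if point >= k:
            return host
    return ring[0][1]  # wrap around to the start of the ring

before = build_ring(["A", "B", "C"])
after = build_ring(["A", "B", "C", "D"])  # one destination host added

keys = [f"flow-{i}" for i in range(200)]
moved = [k for k in keys if lookup(before, k) != lookup(after, k)]
# Only the moved flows need any state synchronization, and each moved
# flow lands on the newly added host.
```

Adding a host reassigns only the keys that now fall to the new host's points; all other flows keep their destinations and therefore require no state exchange at all.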
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (10)

1. At a computer system including a router and a load balancing system, the load balancing system including a load balancer, one or more other load balancers, and a plurality of destination hosts, the router being connected to a network and being the ingress point into the load balancing system, components on the network using a virtual electronic address to communicate with the load balancing system, a method for sharing state among load balancers, the method comprising:
an act of the load balancer receiving a packet from the router, the packet containing source electronic address information identifying a source on a wide area network and destination electronic address information including the virtual electronic address;
an act of the load balancer determining that the packet is for an existing data flow;
an act of the load balancer using an algorithm to generate a data flow identifier for the existing data flow from packet contents including the packet header;
an act of the load balancer determining that the load balancer lacks sufficient information to identify, from among the plurality of destination hosts, the destination host corresponding to the existing data flow;
an act of the load balancer determining that the load balancer has no cached state mapping the existing data flow to a destination host in the plurality of destination hosts;
in response to determining that the load balancer lacks sufficient information to identify the destination host corresponding to the existing data flow:
an act of the load balancer identifying an owner load balancer designated as the owner of the existing data flow, the owner load balancer being selected from among the one or more other load balancers; and
an act of the load balancer sending a request for data flow state information to the owner load balancer;
an act of the load balancer receiving state information from the owner load balancer, the state information identifying the destination host corresponding to the existing data flow; and
an act of the load balancer caching the received state information.
2. The method of claim 1, further comprising, in response to determining that the arrangement of the plurality of destination hosts is in transition:
an act of the load balancer identifying cached state for one or more other existing data flows, the cached state mapping the one or more other existing data flows to corresponding destination hosts in the plurality of destination hosts;
an act of the load balancer using a current destination host array to identify data flows whose destination hosts differ from the current mapping;
for each of the one or more other existing data flows:
an act of the load balancer identifying an owner load balancer designated as the owner of the existing data flow, the owner load balancer being selected from among the one or more other load balancers; and
an act of sending the cached state for the existing data flow to the owner load balancer for the existing data flow.
3. the method for claim 1, it is characterized in that, described load balancer comprises that from the action that described router receives grouping described load balancer receives grouping according to following one: Equal-Cost Multipath (ECMP) algorithm or domain name system (DNS) round-robin algorithm.
4. the method for claim 1 is characterized in that, also comprises:
Described load balancer receives the action of the second grouping of described existing data flow; And
Described load balancer with reference to the state information of high-speed cache described destination host is designated the action corresponding to described existing data flow; And
Described load balancer sends to described the second grouping the action of described destination host.
5. the method for claim 1, it is characterized in that, the action that described load balancer generates the data flow identifiers of described existing data flow with an algorithm is included as the action that described load balance system hashes to data flow identifiers with hashing algorithm with source Internet Protocol address and the virtual internet protocol address in source, and described data flow identifiers is illustrated in data flow is mapped to index in the current mapping array of corresponding destination host.
6. The method of claim 5, wherein the act of the load balancer identifying an owner load balancer designated as the owner of the existing data flow comprises:
an act of using a second hashing algorithm to hash the source Internet Protocol address and the virtual Internet Protocol address into a second hash value, the second hash value representing a position in a primary owner partition array, the primary owner partition array mapping data flows to corresponding load balancers that maintain cached state for the data flows; and
an act of referring to the position in the primary owner partition array to identify the load balancer mapped as the owner of the existing data flow.
7. At a computer system including a router and a load balancing system, the load balancing system including a load balancer, one or more other load balancers, and a plurality of destination hosts, the router being connected to a network and being the ingress point from a wide area network into the load balancing system, components on the network using a virtual electronic address to communicate with the load balancing system, a method for sharing state among load balancers, the method comprising:
an act of the load balancer receiving an encapsulated packet from another load balancer, the packet containing source electronic address information identifying a source on the network and destination electronic address information including the virtual electronic address;
an act of the load balancer determining that the received packet is for an existing data flow;
an act of the load balancer determining that the load balancer is the owner of the existing data flow;
an act of the load balancer determining that the load balancer has cached state for the existing data flow, the cached state mapping the existing data flow to a destination host in the plurality of destination hosts;
an act of the load balancer sending the received packet to the destination host mapped to the existing data flow; and
an act of the load balancer sending the cached state to the other load balancer for use in appropriately forwarding subsequent packets for the existing data flow to one of the destination hosts.
8. The method of claim 7, further comprising: an act of the load balancer receiving cached state for one or more other existing data flows from the one or more other load balancers, the one or more other load balancers having determined that the load balancer is the primary owner load balancer for the one or more other existing data flows.
9. The method of claim 7, further comprising: an act of the load balancer sending state for another existing data flow to another load balancer, the load balancer having determined that the state for the data flow differs from the current destination array; and
an act of the load balancer determining that the other load balancer is the primary owner of the other existing data flow.
10. A load balancing system, comprising:
one or more processors;
system memory;
a router;
one or more computer storage devices having stored thereon computer-executable instructions representing a load balancing manager, a plurality of multiplexers, and a plurality of destination hosts, wherein the load balancing manager is configured to:
monitor the plurality of destination hosts for changes in the arrangement of the plurality of destination hosts;
maintain a destination host array that maps flow IDs to destination hosts;
update the destination host array at periodic intervals;
prior to each array update, copy the destination host array to an old version of the destination host array;
provide the destination host array and the old version of the destination host array to the plurality of multiplexers;
monitor the plurality of multiplexers for changes;
maintain a primary ownership array that maps flow IDs to primary owner multiplexers;
maintain a backup ownership array that maps flow IDs to backup owner multiplexers; and
provide the primary ownership array and the backup ownership array to the plurality of multiplexers; and
wherein each of the plurality of multiplexers is configured to:
receive packets from the router;
formulate a flow ID for each received packet based on information contained in the packet;
identify an appropriate destination host for the packet from among the plurality of destination hosts, including:
determining whether the data flow is a new data flow;
when the data flow is determined to be an existing data flow, using the data flow ID as an index into the destination host array to identify the appropriate destination host for the packet;
when the contents of the destination host array do not match the old version of the destination host array, referring to cached state to identify the appropriate destination host for the packet, including:
referring to cached state at the multiplexer to determine whether the multiplexer has previously cached an indication of the appropriate destination host for the flow ID, including:
when the cached state at the multiplexer includes an indication of the appropriate destination host for the flow ID, accessing the cached state at the multiplexer; and
when the cached state at the multiplexer does not include an indication of the appropriate destination host for the flow ID:
referring to an ownership array to identify an owner multiplexer for the flow ID;
querying the identified owner multiplexer for cached state; and
receiving cached state from the identified owner multiplexer, the cached state indicating the appropriate destination host for the flow ID; and
sending the packet to the identified appropriate destination host.
CN201110444322.1A 2010-12-17 2011-12-16 The state of synchronized loading balancer inter-module Active CN102857438B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/972,340 US8755283B2 (en) 2010-12-17 2010-12-17 Synchronizing state among load balancer components
US12/972,340 2010-12-17

Publications (2)

Publication Number Publication Date
CN102857438A true CN102857438A (en) 2013-01-02
CN102857438B CN102857438B (en) 2015-12-02


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114221907A (en) * 2021-12-06 2022-03-22 北京百度网讯科技有限公司 Network hash configuration method and device, electronic equipment and storage medium
CN115297191A (en) * 2022-09-30 2022-11-04 成都云智北斗科技有限公司 Multi-data-stream server

Families Citing this family (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7675854B2 (en) 2006-02-21 2010-03-09 A10 Networks, Inc. System and method for an adaptive TCP SYN cookie with time validation
US9960967B2 (en) 2009-10-21 2018-05-01 A10 Networks, Inc. Determining an application delivery server based on geo-location information
US9215275B2 (en) 2010-09-30 2015-12-15 A10 Networks, Inc. System and method to balance servers based on server load status
US9609052B2 (en) 2010-12-02 2017-03-28 A10 Networks, Inc. Distributing application traffic to servers based on dynamic service response time
US8612550B2 (en) 2011-02-07 2013-12-17 Microsoft Corporation Proxy-based cache content distribution and affinity
JP5724687B2 (en) * 2011-07-04 2015-05-27 富士通株式会社 Information processing apparatus, server selection method, and program
US8897154B2 (en) 2011-10-24 2014-11-25 A10 Networks, Inc. Combining stateless and stateful server load balancing
EP2748714B1 (en) 2011-11-15 2021-01-13 Nicira, Inc. Connection identifier assignment and source network address translation
US9094364B2 (en) 2011-12-23 2015-07-28 A10 Networks, Inc. Methods to manage services over a service gateway
US10044582B2 (en) 2012-01-28 2018-08-07 A10 Networks, Inc. Generating secure name records
US8850002B1 (en) * 2012-07-02 2014-09-30 Amazon Technologies, Inc. One-to-many stateless load balancing
US8805990B2 (en) 2012-07-12 2014-08-12 Microsoft Corporation Load balancing for single-address tenants
US9843484B2 (en) 2012-09-25 2017-12-12 A10 Networks, Inc. Graceful scaling in software driven networks
WO2014052099A2 (en) 2012-09-25 2014-04-03 A10 Networks, Inc. Load distribution in data networks
US10021174B2 (en) 2012-09-25 2018-07-10 A10 Networks, Inc. Distributing service sessions
US10002141B2 (en) 2012-09-25 2018-06-19 A10 Networks, Inc. Distributed database in software driven networks
US9246998B2 (en) 2012-10-16 2016-01-26 Microsoft Technology Licensing, Llc Load balancer bypass
US9531846B2 (en) 2013-01-23 2016-12-27 A10 Networks, Inc. Reducing buffer usage for TCP proxy session based on delayed acknowledgement
US9900252B2 (en) 2013-03-08 2018-02-20 A10 Networks, Inc. Application delivery controller and global server load balancer
US9992107B2 (en) 2013-03-15 2018-06-05 A10 Networks, Inc. Processing data packets using a policy based network path
US9483286B2 (en) 2013-03-15 2016-11-01 Avi Networks Distributed network services
US10135914B2 (en) 2013-04-16 2018-11-20 Amazon Technologies, Inc. Connection publishing in a distributed load balancer
US9559961B1 (en) 2013-04-16 2017-01-31 Amazon Technologies, Inc. Message bus for testing distributed load balancers
US10069903B2 (en) 2013-04-16 2018-09-04 Amazon Technologies, Inc. Distributed load balancer
US10038626B2 (en) 2013-04-16 2018-07-31 Amazon Technologies, Inc. Multipath routing in a distributed load balancer
US9553809B2 (en) 2013-04-16 2017-01-24 Amazon Technologies, Inc. Asymmetric packet flow in a distributed load balancer
US9871712B1 (en) 2013-04-16 2018-01-16 Amazon Technologies, Inc. Health checking in a distributed load balancer
WO2014179753A2 (en) 2013-05-03 2014-11-06 A10 Networks, Inc. Facilitating secure network traffic by an application delivery controller
US9225638B2 (en) 2013-05-09 2015-12-29 Vmware, Inc. Method and system for service switching using service tags
US10110684B1 (en) 2013-08-15 2018-10-23 Avi Networks Transparent network service migration across service devices
US9843520B1 (en) * 2013-08-15 2017-12-12 Avi Networks Transparent network-services elastic scale-out
CN104426936A (en) * 2013-08-22 2015-03-18 中兴通讯股份有限公司 Load balancing method and system
US9577845B2 (en) 2013-09-04 2017-02-21 Nicira, Inc. Multiple active L3 gateways for logical networks
US9998530B2 (en) 2013-10-15 2018-06-12 Nicira, Inc. Distributed global load-balancing system for software-defined data centers
US9407692B2 (en) * 2013-11-27 2016-08-02 Avi Networks Method and system for distributed load balancing
US9590901B2 (en) 2014-03-14 2017-03-07 Nicira, Inc. Route advertisement by managed gateways
US9942152B2 (en) 2014-03-25 2018-04-10 A10 Networks, Inc. Forwarding data packets using a service-based forwarding policy
US9667711B2 (en) 2014-03-26 2017-05-30 International Business Machines Corporation Load balancing of distributed services
US9942162B2 (en) 2014-03-31 2018-04-10 A10 Networks, Inc. Active application response delay time
US9560124B2 (en) * 2014-05-13 2017-01-31 Google Inc. Method and system for load balancing anycast data traffic
US9906422B2 (en) 2014-05-16 2018-02-27 A10 Networks, Inc. Distributed system to determine a server's health
US9917727B2 (en) 2014-06-03 2018-03-13 Nicira, Inc. Consistent hashing for network traffic dispatching
US10129122B2 (en) 2014-06-03 2018-11-13 A10 Networks, Inc. User defined objects for network devices
US9992229B2 (en) 2014-06-03 2018-06-05 A10 Networks, Inc. Programming a data network device using user defined scripts with licenses
US9986061B2 (en) 2014-06-03 2018-05-29 A10 Networks, Inc. Programming a data network device using user defined scripts
US9674302B1 (en) * 2014-06-13 2017-06-06 Amazon Technologies, Inc. Computing resource transition notification and pending state
US9774537B2 (en) 2014-09-30 2017-09-26 Nicira, Inc. Dynamically adjusting load balancing
US10225137B2 (en) 2014-09-30 2019-03-05 Nicira, Inc. Service node selection by an inline service switch
US9825810B2 (en) 2014-09-30 2017-11-21 Nicira, Inc. Method and apparatus for distributing load among a plurality of service nodes
EP3235198A1 (en) * 2014-12-18 2017-10-25 Nokia Solutions and Networks Oy Network load balancer
US9800653B2 (en) 2015-03-06 2017-10-24 Microsoft Technology Licensing, Llc Measuring responsiveness of a load balancing system
US11283697B1 (en) 2015-03-24 2022-03-22 Vmware, Inc. Scalable real time metrics management
US10594743B2 (en) 2015-04-03 2020-03-17 Nicira, Inc. Method, apparatus, and system for implementing a content switch
US10038628B2 (en) 2015-04-04 2018-07-31 Nicira, Inc. Route server mode for dynamic routing between logical and physical networks
US10581976B2 (en) 2015-08-12 2020-03-03 A10 Networks, Inc. Transmission control of protocol state exchange for dynamic stateful service insertion
US10243791B2 (en) 2015-08-13 2019-03-26 A10 Networks, Inc. Automated adjustment of subscriber policies
US10333849B2 (en) 2016-04-28 2019-06-25 Nicira, Inc. Automatic configuration of logical routers on edge nodes
US10091161B2 (en) 2016-04-30 2018-10-02 Nicira, Inc. Assignment of router ID for logical routers
US10560320B2 (en) 2016-06-29 2020-02-11 Nicira, Inc. Ranking of gateways in cluster
US10938668B1 (en) * 2016-09-30 2021-03-02 Amazon Technologies, Inc. Safe deployment using versioned hash rings
US10700960B2 (en) * 2016-11-17 2020-06-30 Nicira, Inc. Enablement of multi-path routing in virtual edge systems
US10237123B2 (en) 2016-12-21 2019-03-19 Nicira, Inc. Dynamic recovery from a split-brain failure in edge nodes
US10616045B2 (en) 2016-12-22 2020-04-07 Nicira, Inc. Migration of centralized routing components of logical router
US10541909B2 (en) 2017-06-23 2020-01-21 International Business Machines Corporation Distributed affinity tracking for network connections
US10797966B2 (en) 2017-10-29 2020-10-06 Nicira, Inc. Service operation chaining
US11012420B2 (en) 2017-11-15 2021-05-18 Nicira, Inc. Third-party service chaining using packet encapsulation in a flow-based forwarding element
CN107979646A (en) * 2017-12-07 2018-05-01 郑州云海信息技术有限公司 PaaS platform load-balancing method based on a consistent hashing strategy
US10616321B2 (en) 2017-12-22 2020-04-07 At&T Intellectual Property I, L.P. Distributed stateful load balancer
US10797910B2 (en) 2018-01-26 2020-10-06 Nicira, Inc. Specifying and utilizing paths through a network
US10659252B2 (en) 2018-01-26 2020-05-19 Nicira, Inc. Specifying and utilizing paths through a network
US10728174B2 (en) 2018-03-27 2020-07-28 Nicira, Inc. Incorporating layer 2 service between two interfaces of gateway device
US10805192B2 (en) 2018-03-27 2020-10-13 Nicira, Inc. Detecting failure of layer 2 service using broadcast messages
US10673764B2 (en) 2018-05-22 2020-06-02 International Business Machines Corporation Distributed affinity tracking for network connections
US11258760B1 (en) 2018-06-22 2022-02-22 Vmware, Inc. Stateful distributed web application firewall
US11595250B2 (en) 2018-09-02 2023-02-28 Vmware, Inc. Service insertion at logical network gateway
US10944673B2 (en) 2018-09-02 2021-03-09 Vmware, Inc. Redirection of data messages at logical network gateway
US10771318B1 (en) 2018-10-24 2020-09-08 Vmware, Inc. High availability on a distributed networking platform
CN111833189A (en) 2018-10-26 2020-10-27 创新先进技术有限公司 Data processing method and device
US11467861B2 (en) 2019-02-22 2022-10-11 Vmware, Inc. Configuring distributed forwarding for performing service chain operations
US10812576B1 (en) 2019-05-31 2020-10-20 Microsoft Technology Licensing, Llc Hardware load balancer gateway on commodity switch hardware
US11283717B2 (en) 2019-10-30 2022-03-22 Vmware, Inc. Distributed fault tolerant service chain
US11140218B2 (en) 2019-10-30 2021-10-05 Vmware, Inc. Distributed service chain across multiple clouds
US11223494B2 (en) 2020-01-13 2022-01-11 Vmware, Inc. Service insertion for multicast traffic at boundary
US11153406B2 (en) 2020-01-20 2021-10-19 Vmware, Inc. Method of network performance visualization of service function chains
US11659061B2 (en) 2020-01-20 2023-05-23 Vmware, Inc. Method of adjusting service function chains to improve network performance
US11438257B2 (en) 2020-04-06 2022-09-06 Vmware, Inc. Generating forward and reverse direction connection-tracking records for service paths at a network edge
US11429452B2 (en) 2020-04-16 2022-08-30 Paypal, Inc. Method for distributing keys using two auxiliary hashing functions
US11734043B2 (en) 2020-12-15 2023-08-22 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers
US11611625B2 (en) 2020-12-15 2023-03-21 Vmware, Inc. Providing stateful services in a scalable manner for machines executing on host computers
US11799761B2 (en) 2022-01-07 2023-10-24 Vmware, Inc. Scaling edge services with minimal disruption
US11888747B2 (en) 2022-01-12 2024-01-30 VMware LLC Probabilistic filters for use in network forwarding and services
CN114928615B (en) * 2022-05-19 2023-10-24 网宿科技股份有限公司 Load balancing method, device, equipment and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020040402A1 (en) * 2000-09-28 2002-04-04 International Business Machines Corporation System and method for implementing a clustered load balancer
US20030005080A1 (en) * 2001-06-28 2003-01-02 Watkins James S. Systems and methods for accessing data
CN1578320A (en) * 2003-06-30 2005-02-09 Microsoft Corp. Network load balancing with host status information
US20100302940A1 (en) * 2009-05-28 2010-12-02 Microsoft Corporation Load balancing across layer-2 domains

Family Cites Families (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5371852A (en) * 1992-10-14 1994-12-06 International Business Machines Corporation Method and apparatus for making a cluster of computers appear as a single host on a network
US5793763A (en) 1995-11-03 1998-08-11 Cisco Technology, Inc. Security system for network address translation systems
US5774660A (en) 1996-08-05 1998-06-30 Resonate, Inc. World-wide-web server with delayed resource-binding for resource-based load balancing on a distributed resource multi-node network
US6351775B1 (en) 1997-05-30 2002-02-26 International Business Machines Corporation Loading balancing across servers in a computer network
US6128279A (en) 1997-10-06 2000-10-03 Web Balance, Inc. System for balancing loads among network servers
US6070191A (en) 1997-10-17 2000-05-30 Lucent Technologies Inc. Data distribution techniques for load-balanced fault-tolerant web access
US6993027B1 (en) * 1999-03-17 2006-01-31 Broadcom Corporation Method for sending a switch indicator to avoid out-of-ordering of frames in a network switch
US7299294B1 (en) 1999-11-10 2007-11-20 Emc Corporation Distributed traffic controller for network data
AU4839300A (en) * 1999-05-11 2000-11-21 Webvan Group, Inc. Electronic commerce enabled delivery system and method
US6704278B1 (en) * 1999-07-02 2004-03-09 Cisco Technology, Inc. Stateful failover of service managers
US6970913B1 (en) 1999-07-02 2005-11-29 Cisco Technology, Inc. Load balancing using distributed forwarding agents with application based feedback for different virtual machines
US20010034752A1 (en) * 2000-01-26 2001-10-25 Prompt2U Inc. Method and system for symmetrically distributed adaptive matching of partners of mutual interest in a computer network
US20020032755A1 (en) 2000-09-13 2002-03-14 Marc Abrahams Registration system and method using a back end server
US6970939B2 (en) 2000-10-26 2005-11-29 Intel Corporation Method and apparatus for large payload distribution in a network
US8112545B1 (en) 2000-12-19 2012-02-07 Rockstar Bidco, LP Distributed network address translation control
US6549997B2 (en) 2001-03-16 2003-04-15 Fujitsu Limited Dynamic variable page size translation of addresses
US20020159456A1 (en) 2001-04-27 2002-10-31 Foster Michael S. Method and system for multicasting in a routing device
US7245632B2 (en) * 2001-08-10 2007-07-17 Sun Microsystems, Inc. External storage for modular computer systems
EP1315349B1 (en) 2001-11-21 2008-03-19 Sun Microsystems, Inc. A method for integrating with load balancers in a client and server system
JP2003163689A (en) * 2001-11-28 2003-06-06 Hitachi Ltd Network linkage information processing system and method for moving access between load distributors
US7289525B2 (en) * 2002-02-21 2007-10-30 Intel Corporation Inverse multiplexing of managed traffic flows over a multi-star network
US6856991B1 (en) 2002-03-19 2005-02-15 Cisco Technology, Inc. Method and apparatus for routing data to a load balanced server using MPLS packet labels
US7512702B1 (en) 2002-03-19 2009-03-31 Cisco Technology, Inc. Method and apparatus providing highly scalable server load balancing
US20030225859A1 (en) 2002-05-31 2003-12-04 Sun Microsystems, Inc. Request mapping for load balancing
US7020706B2 (en) 2002-06-17 2006-03-28 Bmc Software, Inc. Method and system for automatically updating multiple servers
US7280557B1 (en) 2002-06-28 2007-10-09 Cisco Technology, Inc. Mechanisms for providing stateful NAT support in redundant and asymmetric routing environments
US7561587B2 (en) 2002-09-26 2009-07-14 Yhc Corporation Method and system for providing layer-4 switching technologies
US7616638B2 (en) * 2003-07-29 2009-11-10 Orbital Data Corporation Wavefront detection and disambiguation of acknowledgments
US20080008202A1 (en) * 2002-10-31 2008-01-10 Terrell William C Router with routing processors and methods for virtualization
US7243351B2 (en) * 2002-12-17 2007-07-10 International Business Machines Corporation System and method for task scheduling based upon the classification value and probability
US7890633B2 (en) 2003-02-13 2011-02-15 Oracle America, Inc. System and method of extending virtual address resolution for mapping networks
US7912954B1 (en) * 2003-06-27 2011-03-22 Oesterreicher Richard T System and method for digital media server load balancing
US7606929B2 (en) 2003-06-30 2009-10-20 Microsoft Corporation Network load balancing with connection manipulation
US7567504B2 (en) 2003-06-30 2009-07-28 Microsoft Corporation Network load balancing with traffic routing
US7613822B2 (en) 2003-06-30 2009-11-03 Microsoft Corporation Network load balancing with session information
US7590736B2 (en) * 2003-06-30 2009-09-15 Microsoft Corporation Flexible network load balancing
US9584360B2 (en) 2003-09-29 2017-02-28 Foundry Networks, Llc Global server load balancing support for private VIP addresses
US20050097185A1 (en) 2003-10-07 2005-05-05 Simon Gibson Localization link system
US8572249B2 (en) 2003-12-10 2013-10-29 Aventail Llc Network appliance for balancing load and platform services
US20050188055A1 (en) 2003-12-31 2005-08-25 Saletore Vikram A. Distributed and dynamic content replication for server cluster acceleration
US8689319B2 (en) 2004-04-19 2014-04-01 Solutionary, Inc. Network security system
US20060064478A1 (en) * 2004-05-03 2006-03-23 Level 3 Communications, Inc. Geo-locating load balancing
US7813263B2 (en) * 2004-06-30 2010-10-12 Conexant Systems, Inc. Method and apparatus providing rapid end-to-end failover in a packet switched communications network
US20060294584A1 (en) 2005-06-22 2006-12-28 Netdevices, Inc. Auto-Configuration of Network Services Required to Support Operation of Dependent Network Services
EP1669864B1 (en) 2004-12-03 2010-06-02 STMicroelectronics Srl A process for managing virtual machines in a physical processing machine, corresponding processor system and computer program product therefor
US7334076B2 (en) 2005-03-08 2008-02-19 Microsoft Corporation Method and system for a guest physical address virtualization in a virtual machine environment
US7693050B2 (en) * 2005-04-14 2010-04-06 Microsoft Corporation Stateless, affinity-preserving load balancing
US20070055789A1 (en) * 2005-09-08 2007-03-08 Benoit Claise Method and apparatus for managing routing of data elements
US8554758B1 (en) 2005-12-29 2013-10-08 Amazon Technologies, Inc. Method and apparatus for monitoring and maintaining health in a searchable data service
US7694011B2 (en) 2006-01-17 2010-04-06 Cisco Technology, Inc. Techniques for load balancing over a cluster of subscriber-aware application servers
US8274989B1 (en) 2006-03-31 2012-09-25 Rockstar Bidco, LP Point-to-multipoint (P2MP) resilience for GMPLS control of ethernet
WO2008100536A1 (en) * 2007-02-12 2008-08-21 Mushroom Networks Inc. Access line bonding and splitting methods and apparatus
US20080201540A1 (en) 2007-02-16 2008-08-21 Ravi Sahita Preservation of integrity of data across a storage hierarchy
US7768907B2 (en) 2007-04-23 2010-08-03 International Business Machines Corporation System and method for improved Ethernet load balancing
US8561061B2 (en) 2007-05-14 2013-10-15 Vmware, Inc. Adaptive dynamic selection and application of multiple virtualization techniques
US8128279B2 (en) 2008-07-16 2012-03-06 GM Global Technology Operations LLC Cloud point monitoring systems for determining a cloud point temperature of diesel fuel
US8180896B2 (en) 2008-08-06 2012-05-15 Edgecast Networks, Inc. Global load balancing on a content delivery network
US20100036903A1 (en) 2008-08-11 2010-02-11 Microsoft Corporation Distributed load balancer
JP2010061283A (en) 2008-09-02 2010-03-18 Fujitsu Ltd Load balancer setting program, load balancer setting method and load balancer setting apparatus
US8433749B2 (en) 2009-04-15 2013-04-30 Accenture Global Services Limited Method and system for client-side scaling of web server farm architectures in a cloud data center
US8533317B2 (en) * 2009-06-22 2013-09-10 Citrix Systems, Inc. Systems and methods for monitor distribution in a multi-core system
US8737407B2 (en) * 2009-06-22 2014-05-27 Citrix Systems, Inc. Systems and methods for distributed hash table in multi-core system
JP5338555B2 (en) * 2009-08-11 2013-11-13 富士通株式会社 Load distribution apparatus, load distribution method, and load distribution program
US8645508B1 (en) 2010-03-03 2014-02-04 Amazon Technologies, Inc. Managing external communications for provided computer networks
US8266204B2 (en) 2010-03-15 2012-09-11 Microsoft Corporation Direct addressability and direct server return
EP2553901B1 (en) * 2010-03-26 2016-04-27 Citrix Systems, Inc. System and method for link load balancing on a multi-core device
US8619584B2 (en) * 2010-04-30 2013-12-31 Cisco Technology, Inc. Load balancing over DCE multipath ECMP links for HPC and FCoE
US8533337B2 (en) 2010-05-06 2013-09-10 Citrix Systems, Inc. Continuous upgrading of computers in a load balanced environment
US8547835B2 (en) 2010-10-21 2013-10-01 Telefonaktiebolaget L M Ericsson (Publ) Controlling IP flows to bypass a packet data network gateway using multi-path transmission control protocol connections
US9191327B2 (en) 2011-02-10 2015-11-17 Varmour Networks, Inc. Distributed service processing of network gateways using virtual machines
US8676980B2 (en) 2011-03-22 2014-03-18 Cisco Technology, Inc. Distributed load balancer in a virtual machine environment
US20120303809A1 (en) 2011-05-25 2012-11-29 Microsoft Corporation Offloading load balancing packet modification
US8958298B2 (en) 2011-08-17 2015-02-17 Nicira, Inc. Centralized logical L3 routing
US20130159487A1 (en) 2011-12-14 2013-06-20 Microsoft Corporation Migration of Virtual IP Addresses in a Failover Cluster
US9083709B2 (en) 2012-05-11 2015-07-14 Cisco Technology, Inc. Virtual internet protocol migration and load balancing
US20140006681A1 (en) 2012-06-29 2014-01-02 Broadcom Corporation Memory management in a virtualization environment
US8805990B2 (en) 2012-07-12 2014-08-12 Microsoft Corporation Load balancing for single-address tenants


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114221907A (en) * 2021-12-06 2022-03-22 北京百度网讯科技有限公司 Network hash configuration method and device, electronic equipment and storage medium
CN114221907B (en) * 2021-12-06 2023-09-01 北京百度网讯科技有限公司 Network hash configuration method, device, electronic equipment and storage medium
CN115297191A (en) * 2022-09-30 2022-11-04 成都云智北斗科技有限公司 Multi-data-stream server
CN115297191B (en) * 2022-09-30 2022-12-16 成都云智北斗科技有限公司 Multi-data-stream server

Also Published As

Publication number Publication date
EP2652924A2 (en) 2013-10-23
EP2652924B1 (en) 2020-04-01
JP2014504484A (en) 2014-02-20
US9438520B2 (en) 2016-09-06
CN102857438B (en) 2015-12-02
WO2012083264A2 (en) 2012-06-21
EP2652924A4 (en) 2017-10-18
US8755283B2 (en) 2014-06-17
US20150063115A1 (en) 2015-03-05
US20120155266A1 (en) 2012-06-21
US20140185446A1 (en) 2014-07-03
JP5889914B2 (en) 2016-03-22
WO2012083264A3 (en) 2012-10-26

Similar Documents

Publication Publication Date Title
CN102857438B (en) Synchronizing state among load balancer components
US10728175B2 (en) Adaptive service chain management
ES2328426T5 (en) Optimized location of network resources
US20200142788A1 (en) Fault tolerant distributed system to monitor, recover and scale load balancers
EP1530859B1 (en) Heuristics-based routing of a query message in peer to peer networks
EP3371954A1 (en) Selective encryption configuration
CN109302498A (en) Network resource access method and device
JP2015534769A (en) Load balancing in data networks
US10430304B2 (en) Communication continuation during content node failover
KR101343310B1 (en) Localization of peer to peer traffic
CN107329827A (en) LVS scheduling method, device, and storage medium supporting a hash scheduling policy
CN101803289B (en) Fitness based routing
US10033805B1 (en) Spanning tree approach for global load balancing
US10977141B2 (en) Systems and methods for handling server failovers
CN114726776B (en) CDN scheduling method, device, equipment and medium for content delivery network
CN106664217A (en) Identification of candidate problem network entities
US11245752B2 (en) Load balancing in a high-availability cluster
EP4236246A1 (en) Determining a best destination over a best path using multifactor path selection
CN117008951A (en) Node debugging method, device and storage medium
CN116389350A (en) Route detection method and device for data center network
CN116346698A (en) System and method for replicating traffic statistics on packet forwarding engine system
CN109618014A (en) Message forwarding method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: MICROSOFT TECHNOLOGY LICENSING LLC

Free format text: FORMER OWNER: MICROSOFT CORP.

Effective date: 20150728

C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20150728

Address after: Washington State

Applicant after: Microsoft Technology Licensing, LLC

Address before: Washington State

Applicant before: Microsoft Corp.

C14 Grant of patent or utility model
GR01 Patent grant