WO2019205892A1 - Packet processing method in a distributed device, and distributed device - Google Patents

Packet processing method in a distributed device, and distributed device

Info

Publication number
WO2019205892A1
WO2019205892A1 (PCT/CN2019/080706)
Authority
WO
WIPO (PCT)
Prior art keywords
port number
address
packet
processing unit
destination
Prior art date
Application number
PCT/CN2019/080706
Other languages
English (en)
French (fr)
Inventor
熊鹰
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP19792098.6A (granted as EP3780552B1)
Publication of WO2019205892A1

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/02 - Topology update or discovery
    • H04L 45/04 - Interdomain routing, e.g. hierarchical routing
    • H04L 45/74 - Address processing for routing
    • H04L 61/00 - Network arrangements, protocols or services for addressing or naming
    • H04L 61/09 - Mapping addresses
    • H04L 61/25 - Mapping addresses of the same type
    • H04L 61/2503 - Translation of Internet protocol [IP] addresses
    • H04L 61/2514 - Translation of Internet protocol [IP] addresses between local and global IP addresses
    • H04L 61/2517 - Translation of Internet protocol [IP] addresses using port numbers
    • H04L 61/50 - Address allocation
    • H04L 61/5007 - Internet protocol [IP] addresses
    • H04L 45/50 - Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switching [MPLS]
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 - Server selection for load balancing
    • H04L 67/1023 - Server selection for load balancing based on a hash applied to IP addresses or costs

Definitions

  • the present application relates to the field of computer and communication technologies, and in particular, to a message processing method in a distributed device and a distributed device.
  • the distributed processing architecture scales performance by scale-out, which increases performance by increasing the number of parallel processing units with similar capabilities in the device.
  • Network devices that employ a distributed processing architecture include, but are not limited to, routers, switches, firewalls, and server load balancers.
  • the application scenarios of the distributed processing architecture have certain limitations. This limitation is reflected in the fact that the performance improvement of the device has a strong correlation with the nature of the data on which the network device operates.
  • the data it depends on is mainly the routing table (English: Routing Information Base, RIB) and the forwarding table (English: Forwarding Information Base, FIB).
  • the main information stored in the routing table and forwarding table is the network topology information.
  • the data stored in the routing table and the forwarding table changes slowly (generally on the order of seconds) and is not large (the number of routing table entries ranges from tens of thousands to hundreds of thousands, and the forwarding table has even fewer entries).
  • a router or switch can synchronously copy a routing table, forwarding table, or MAC forwarding table on each processing unit contained therein to ensure that the device works properly.
  • the amount of data in the session table far exceeds that of the routing table and forwarding table above. Because session-table data is large and changes rapidly, the network device cannot synchronously replicate a complete copy on each processing unit it contains, and can only store parts of the session table on different processing units. In this case, if an upstream data stream and the corresponding downstream data stream are offloaded to different processing units by the offloading units in the network device, the processing unit that handles the downstream data stream may not hold the session entry and therefore cannot process the downstream data stream.
  • the present invention provides a packet processing method in a distributed device, to solve the prior-art problem that it is difficult to ensure a data stream and its corresponding reverse data stream are offloaded to the same processing unit of the distributed device, which causes packet-processing failures.
  • the first aspect provides a packet processing method in a distributed device, where the distributed device is deployed between a first network and a second network and includes a first offloading unit, a second offloading unit, and at least two processing units, each of the at least two processing units being coupled to the first offloading unit and the second offloading unit, the method comprising:
  • the first offloading unit receives the first packet from the first network, and selects the first processing unit from the at least two processing units according to the data stream identifier of the first packet and the first offload algorithm.
  • the data flow identifier of the first packet includes at least a source Internet Protocol IP address, a destination IP address, a source port number, and a destination port number of the first packet.
  • the first offloading unit sends the first packet to the first processing unit
  • the first processing unit allocates an IP address and a port number for the first packet, where the allocated port number satisfies the following condition: the second processing unit, selected from the at least two processing units according to the data stream identifier of the reverse packet of the address-translated first packet and a second offload algorithm, is the same processing unit as the first processing unit;
  • the source port number of the address-translated first packet is the allocated port number, and one of the source IP address and the destination IP address of the address-translated first packet is the allocated IP address;
  • the second offload algorithm is the algorithm used by the second offloading unit to offload the reverse packet of the address-translated first packet;
  • the first processing unit performs address translation on the first packet to obtain the address-translated first packet, where the address translation includes replacing the source port number of the first packet with the allocated port number;
  • the second offloading unit sends the first packet after the address translation to the second network.
  • when a processing unit in the distributed device provided by this embodiment of the present application allocates a source port number for a packet, the allocated source port number ensures that the reverse packet of the address-translated packet is also offloaded to this same processing unit.
  • this guarantees that bidirectional packets belonging to the same session are processed by the same processing unit in the distributed device, avoiding the packet-processing failures caused when bidirectional packets of one session are processed by different processing units.
  • the first network is a private network
  • the second network is a public network
  • the second offload algorithm takes as input the source IP address, the destination IP address, and the source port number of the input packet.
  • the first processing unit allocates an IP address and a port number to the first packet, including:
  • the first processing unit allocates a pre-configured public network IP address to the first packet as an allocated IP address
  • the first processing unit selects a port number from the port segment corresponding to the first processing unit as a second port number; the IP address allocated by the first processing unit is the same as that allocated by the other processing units of the at least two processing units, and the port segments corresponding to the respective processing units do not overlap one another;
  • the first processing unit inputs the source IP address, destination IP address, destination port number, and second port number of the address-translated first packet into a predetermined hash algorithm in the sequence indicated by a second manner, obtaining a hash result; the source IP address of the address-translated first packet is the allocated IP address; in the sequence indicated by the second manner, the source IP address occupies the position the destination IP address occupies in the sequence indicated by the first manner, the destination IP address occupies the position of the source IP address, and the destination port number occupies the position of the source port number;
  • the first processing unit takes the hash result modulo N;
  • the first processing unit computes the difference between the value in 0 to N-1 corresponding to the first processing unit and the modulo result;
  • the first processing unit takes the sum of the second port number and the difference as the allocated port number.
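Taken together, the allocation steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the hash function, the constants, and all names are assumptions, and it further assumes the second port number is the first port of a port group of size N, so that the reverse packet's first port number equals it.

```python
N = 4           # assumed number of processing units
UNIT_VALUE = 2  # value in 0..N-1 corresponding to the first processing unit

def predetermined_hash(fields):
    # Placeholder for the predetermined hash algorithm; per the text any
    # reasonably discrete mapping works (here: XOR of the integer fields).
    h = 0
    for f in fields:
        h ^= f
    return h

def allocate_port(reverse_fields, second_port):
    # reverse_fields: the reverse packet's addresses/ports arranged in the
    # sequence indicated by the "second manner" (integers, for illustration).
    h = predetermined_hash(reverse_fields + [second_port])
    remainder = h % N                    # hash result modulo N
    diff = (UNIT_VALUE - remainder) % N  # difference to this unit's value
    return second_port + diff            # allocated port = second port + diff
```

With the allocated port, the second offloading unit's computation for the reverse packet yields `(h + diff) mod N = UNIT_VALUE`, so the reverse packet is offloaded back to the first processing unit.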
  • the embodiment of the present application introduces the concept of an adjustment value when designing the second offload algorithm.
  • the purpose of the adjustment value is that, when the first processing unit allocates a port number, it uses the method corresponding to the offload algorithm to work backwards from the known (desired) offloading result to an adjustment value that meets the condition, and then determines the qualifying port number from that adjustment value.
  • introducing the adjustment value into the offload algorithm improves the efficiency of computing a port number that satisfies the condition.
  • each processing unit in the distributed device can share the same public network IP address, thereby saving address resources.
  • the selecting a port number from the corresponding port segment as the second port number includes:
  • the first processing unit divides the port number segment into at least one port number group according to a predetermined number of port numbers
  • a port number group is randomly selected from the at least one port number group, and the port number of the predetermined location in the selected port number group is determined to be the second port number.
  • the address translation further includes replacing the source IP address of the first packet with the assigned IP address.
  • the first offload algorithm is the same as the second offload algorithm.
  • the two offloading units in the distributed device use the same algorithm, so the first packet and the reverse packet of the address-translated first packet are offloaded by the same algorithm. This avoids the storage cost of different offloading units each storing multiple offload algorithms and simplifies the offloading process.
  • the input packet of the first offload algorithm is the first packet
  • selecting the first processing unit from the at least two processing units according to the data stream identifier of the first packet and the first offload algorithm includes:
  • the first offloading unit determines the first port number and the adjustment value;
  • determining the first port number and the adjustment value includes:
  • the consecutive port numbers from the reserved port number to the destination port number of the first packet are divided into groups of a predetermined number of port numbers, forming at least one port number group;
  • the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the first packet.
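As a concrete illustration of the grouping just described (the reserved port value, group size, and names are assumptions, not from the patent):

```python
RESERVED_PORT = 1024   # assumed start of the offloadable port range
GROUP_SIZE = 4         # predetermined number of port numbers per group

def first_port_and_adjustment(dst_port):
    # Group consecutive port numbers from the reserved port upward; the
    # first port number is the first port of the group containing dst_port,
    # and the adjustment value is dst_port minus that first port number.
    group_index = (dst_port - RESERVED_PORT) // GROUP_SIZE
    first_port = RESERVED_PORT + group_index * GROUP_SIZE
    return first_port, dst_port - first_port
```

For example, with groups of 4 starting at port 1024, destination port 1027 falls in the group 1024 to 1027, giving first port number 1024 and adjustment value 3.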
  • the first packet is a packet from the public network.
  • the second offload algorithm inputs the source IP address, destination IP address, source port number, protocol number, and first port number of the input packet into the predetermined hash algorithm in the order indicated by the first manner; after the hash result is summed with the adjustment value, the sum is taken modulo N, and the processing unit corresponding to the modulo result is the selected processing unit; the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the input packet.
  • the first processing unit allocates an IP address and a port number to the first packet, including:
  • the first processing unit selects a first address port pair from the saved at least one address port pair according to a load balancing policy, where the first address port pair includes a private network IP address and a port number;
  • the first processing unit allocates, as the allocated IP address, a private network IP address in the first address port pair to the first packet;
  • the first processing unit inputs the source IP address, destination IP address, destination port number, and second port number of the address-translated first packet into the predetermined hash algorithm in the sequence indicated by the second manner, obtaining a hash result; the destination IP address of the address-translated first packet is the allocated IP address, and its destination port number is the port number in the first address port pair;
  • in the sequence indicated by the second manner, the source IP address occupies the position the destination IP address occupies in the sequence indicated by the first manner, the destination IP address occupies the position of the source IP address, and the destination port number occupies the position of the source port number;
  • the first processing unit obtains a result of the remainder operation of the hash operation result pair N;
  • the first processing unit calculates a difference between a value of 0 to N-1 corresponding to the first processing unit and a result of the remainder operation;
  • the first processing unit takes the sum of the second port number and the difference as the allocated port number.
  • the input packet of the first offload algorithm is the first packet
  • selecting the first processing unit from the at least two processing units according to the data stream identifier of the first packet and the first offload algorithm includes: the first offloading unit determines the first port number and the adjustment value.
  • the address translation further includes: replacing the destination IP address of the first packet with the private network IP address in the first address port pair, and replacing the destination port number of the first packet with the port number in the first address port pair.
  • the third processing unit is the same processing unit as the first processing unit, where the third processing unit is the processing unit selected from the at least two processing units according to the data stream identifier of the reverse packet of the first packet and the second offload algorithm.
  • the second offload algorithm is a symmetric algorithm, so the first processing unit does not need to determine in which scenario it is allocating a port number for the first packet, that is, whether the first packet is sent from the private network to the public network or from the public network to the private network.
  • the second offload algorithm inputs the source IP address, destination IP address, source port number, protocol number, and first port number of the input packet into the predetermined hash algorithm in the order indicated by the first manner; after the hash result is summed with the adjustment value, the sum is taken modulo N, and the processing unit corresponding to the modulo result is the selected processing unit.
  • the sum of the first port number and the adjustment value is the destination port number of the input packet, where N is the number of processing units included in the distributed device, and each of the at least two processing units corresponds to one value from 0 to N-1.
  • the hash algorithm performs a logical exclusive OR on the input source IP address, destination IP address, source port number, protocol number, and first port number and outputs the exclusive OR result; or the hash algorithm sums the input source IP address, destination IP address, source port number, protocol number, and first port number byte by byte and outputs the byte-wise sum.
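The two hash variants described above can be sketched as follows, treating each input field as an integer (an illustrative sketch, not the patent's exact implementation):

```python
def hash_xor(src_ip, dst_ip, src_port, proto, first_port):
    # Logical exclusive OR of all input fields.
    return src_ip ^ dst_ip ^ src_port ^ proto ^ first_port

def hash_byte_sum(src_ip, dst_ip, src_port, proto, first_port):
    # Sum the input fields byte by byte.
    total = 0
    for f in (src_ip, dst_ip, src_port, proto, first_port):
        while f:
            total += f & 0xFF
            f >>= 8
    return total
```

Because both exclusive OR and byte-wise summation are commutative, reordering the inputs, as the second manner does, leaves the result unchanged; this is consistent with the symmetric property of the second offload algorithm described above.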
  • the method further includes: the first processing unit establishes a mapping entry, where the mapping entry includes the data stream identifier of the first packet before address translation and the data stream identifier of the first packet after address translation, and the mapping entry is used by the first processing unit to perform address translation on a subsequently received second packet.
  • the first processing unit performs address translation on the subsequent packets of the data stream to which the first packet belongs based on the established mapping entry, and does not need to perform a process of assigning a port number to each packet, thereby improving processing efficiency.
  • a distributed apparatus is provided for performing the method of the first aspect or any one of its possible implementations.
  • the distributed device comprises means for performing the method of the first aspect or any one of the possible implementations of the first aspect.
  • These units may be implemented by a program module, or may be implemented by hardware or firmware.
  • a distributed device is further provided, where the distributed device is deployed between a first network and a second network, where the distributed device includes: a first splitting unit, a second splitting unit, and at least two Processing units;
  • the first offloading unit includes a network interface and a processor, which cooperate to implement the steps performed by the first offloading unit in the first aspect or any one of its possible implementations; details are not repeated here;
  • the first processing unit of the at least two processing units includes a network interface and a processor, which cooperate to implement the steps performed by the first processing unit in the first aspect or any one of its possible implementations;
  • the second offloading unit includes a network interface and a processor, which cooperate to implement the steps performed by the second offloading unit in the first aspect or any one of its possible implementations; details are not repeated here.
  • a computer storage medium is provided for storing computer software instructions for the above distributed device, including a program for performing the method of the first aspect or any one of its possible implementations.
  • a computer program product comprising instructions is provided which, when run on a computer, causes the computer to perform the method of the first aspect or any one of its possible implementations.
  • FIG. 1 is a schematic diagram of a deployment and a structure of a typical NAT device according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of an application scenario of a packet processing method in a distributed device according to an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of an offloading unit in a distributed device according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of a processing unit in a distributed device according to an embodiment of the present disclosure
  • FIG. 5 is a flowchart of a packet processing method according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a scenario in which an uplink packet and the corresponding downlink packet are processed by a NAT device when a local area network user initiates access to a service provided on the Internet, according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a method for processing a message by a NAT device in the scenario shown in FIG. 6 in the embodiment of the present application;
  • FIG. 8 is a schematic diagram of a scenario in which an uplink packet and the corresponding downlink packet are processed in the process of accessing a service provided by a server in a local area network, according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a method for processing a packet by an SLB device in the scenario shown in FIG. 8 in the embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of a distributed device according to an embodiment of the present disclosure.
  • the reason the distributed processing architecture is not well suited to network devices such as firewalls and SLBs is that the data such devices depend on changes very fast, or is very large, making it difficult for the device to synchronously replicate complete data on each processing unit it contains. Moreover, the traditional offload algorithm applied in such network devices can hardly ensure that different offloading units assign an uplink data stream and the corresponding downlink data stream to the same processing unit, so the selected processing unit may not hold the corresponding session entry and processing fails.
  • the traditional offload algorithm is relatively simple. For example, each element in the data stream identifier (for example, a quintuple) of the offloaded packet is hashed, and the hash result is taken modulo the number of processing units.
  • the traditional offload algorithm can hardly ensure that different offloading units assign an upstream data stream and the corresponding downstream data stream to the same processing unit for processing.
  • when the processing unit performs address translation on a data stream, the quintuple of the data stream changes. Therefore, the offloading result computed by one offloading unit for the data stream before address translation is likely to differ from the result computed by another offloading unit for the data stream after address translation.
  • FIG. 1 is a schematic diagram of the deployment and structure of a typical network address translation (NAT) device.
  • the NAT device includes two offloading units, namely LB 1 and LB 2, and several processing units, respectively SPU 1, SPU 2, ..., SPU n.
  • LB 1 is connected to the private network.
  • the private network can be a local area network (LAN).
  • LB 2 is connected to the public network.
  • the public network can be the Internet.
  • the LB 1 offloads the access request message to the SPU 1 for processing according to the offload algorithm.
  • the SPU 1 performs address translation on the access request packet and, after the address mapping entry and the session entry are established, sends the address-translated request packet to LB 2, and LB 2 forwards the address-translated request to the public network.
  • the address mapping table stores a correspondence between data stream identifiers of the access request messages before and after the address translation.
  • the downlink packet is offloaded to the SPU 2 for processing according to the offload algorithm.
  • for brevity, packets sent from the private network to the public network are referred to as uplink packets, and packets sent from the public network to the private network are referred to as downlink packets.
  • the SPU 2 cannot perform address translation on the downlink packet, so the address-translated downlink packet cannot be sent to the host in the private network that initiated the access.
  • the embodiment of the present application provides a packet processing method in a distributed device.
  • the packet processing method improves the packet processing flow and the offload algorithm applied in it, so that an uplink packet and the corresponding downlink packet are offloaded to the same processing unit, avoiding processing failures caused by a processing unit lacking the address mapping entry or session entry.
  • when the processing unit that receives a session's packet assigns a port number to serve as the post-translation source port number, the assigned port number must satisfy the following condition: when the other offloading unit offloads the reverse packet using the same offloading scheme, the offloading result is still this processing unit.
  • the network scenario applied in this embodiment of the present application is as shown in FIG. 2 .
  • the distributed device in FIG. 2 may be any of various devices adopting a distributed processing architecture, including but not limited to a NAT device, a firewall device, an SLB, a carrier-grade NAT (CGN) device, or IPv6 transition address-translation equipment.
  • the distributed device includes two offloading units, LB 1 and LB 2, and several processing units, respectively SPU 1, SPU 2, ... SPU n.
  • the LB 1 is connected to the private network, and the private network can be a local area network (LAN).
  • LB 2 is connected to the public network, and the public network can be the Internet.
  • the offloading units and processing units in the distributed device may be different boards inserted into the same chassis, interconnected through the chassis's switch fabric, such as a backplane switching network (crossbar switch), in which the switch board acts as a switch to implement packet exchange among the boards in the same chassis.
  • a standard Ethernet connection may also be adopted, where standard Ethernet means a physical layer and data link layer implemented in compliance with Ethernet as specified in the Institute of Electrical and Electronics Engineers (IEEE) 802.3 series of standards.
  • the offload algorithm applied in the offloading unit (such as LB 1 or LB 2 in FIG. 1 or FIG. 2) in the conventional scheme may be a hash algorithm, denoted HASH1, specifically:
  • Target processing unit ID = HASH1(X1, X2, X3, X4, X5) MOD N.
  • X1, X2, X3, X4, and X5 are the elements of the identifier of the data stream. For example, suppose the identifier of the data stream is the quintuple <source IP address, destination IP address, source port, destination port, protocol type>, where X1 is the source IP address in the quintuple, X2 is the destination IP address, and so on.
  • the algorithm denoted HASH1 above may be any general hash algorithm capable of mapping different inputs into a finite value space, as long as basic discreteness requirements are met. Since different inputs may map to the same hash value, a unique input cannot be determined from a hash value.
  • the hash algorithm may perform an exclusive OR operation on each of the elements X1 to X5, and then take the result of the exclusive OR operation modulo N.
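A minimal sketch of this conventional scheme (the XOR-then-modulo variant just described; names are illustrative):

```python
N = 4  # number of processing units

def hash1(x1, x2, x3, x4, x5):
    # XOR the five elements of the data stream identifier.
    return x1 ^ x2 ^ x3 ^ x4 ^ x5

def target_unit(src_ip, dst_ip, src_port, dst_port, proto):
    # Target processing unit ID = HASH1(X1..X5) MOD N
    return hash1(src_ip, dst_ip, src_port, dst_port, proto) % N
```

Because NAT rewrites addresses and ports, the quintuple of the translated downlink packet generally differs from that of the uplink packet, so the two directions can hash to different processing units; this is the failure mode the improved algorithm addresses.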
  • the packet processing method provided by this embodiment of the present application includes at least two improvements. First, the offload algorithm is applied in both the offloading units and the processing units, rather than only in the offloading units as in the conventional scheme. Second, the offload algorithm itself is improved; the improved algorithm is denoted HASH2. When a processing unit allocates a source port number for a packet, it simulates the offloading that an offloading unit will perform on the reverse packet of the address-translated packet, ensuring that the assigned source port number causes the reverse packet to be offloaded back to this processing unit; the qualifying source port number is thereby computed.
• the improved offloading algorithm inputs the source IP address, the destination IP address, the source port number, the first port number, and the protocol number of the input packet into a predetermined hash algorithm in the order indicated by the first mode, sums the hash result with an adjustment value, and then performs a modulo-N remainder operation on the sum. The processing unit corresponding to the result of the remainder operation is taken as the selected processing unit.
  • the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the input packet, and N is the number of processing units included in the distributed device.
  • the output of the improved shunt algorithm is one of 0 to N-1, that is, the output space of the improved shunt algorithm has a value of 0 to N-1.
  • Each processing unit in the distributed network device corresponds to one of 0 to N-1, respectively.
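A minimal sketch of the improved algorithm, assuming the XOR-based hash used as an example above and folding the final MOD(N) into the function (all names are hypothetical):

```python
def hash2(src_ip, dst_ip, src_port, dst_port, proto, first_port, n):
    # Split the destination port into first_port plus an adjustment value
    # (their sum equals dst_port), hash with first_port in the
    # destination-port position, add the adjustment, then take the
    # remainder over the n processing units.
    adjustment = dst_port - first_port
    h = src_ip ^ dst_ip ^ src_port ^ first_port ^ proto  # assumed HASH1
    return (h + adjustment) % n  # value in 0 .. n-1
```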
• the motivation for the above improvement is computational convenience when deriving a port number that satisfies the condition:
• the improved traffic distribution algorithm splits the destination port number into the first port number plus the adjustment value; when the reverse packet of the input packet is offloaded by the same algorithm, the destination port number of the reverse packet is split in the same way.
• in this way, the process of assigning the port number is converted into inversely deriving the adjustment value from the reverse packet of the input packet, on the premise that the offloading result is known.
• the port number to be allocated is then obtained from the calculated adjustment value, so that it can be quickly computed according to the formula:
• X4 = NewPort1 + NewPort2, where NewPort1 is the first port number and NewPort2 is the adjustment value. When NewPort1 is calculated from X4, it is recorded as NewPort1(X4).
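For instance, if port numbers are grouped in runs of 8 (a hypothetical group size), the split X4 = NewPort1 + NewPort2 can be computed as:

```python
GROUP = 8  # assumed group size; any predetermined grouping works

def split_port(x4):
    # NewPort1(X4): the base of the group containing X4;
    # NewPort2: the offset of X4 within that group.
    new_port1 = (x4 // GROUP) * GROUP
    new_port2 = x4 - new_port1
    return new_port1, new_port2
```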
  • the above-mentioned shunt algorithm may also be adaptively extended.
• for example, the identifier of the data stream is a six-tuple <source IP address, destination IP address, source port, destination port, protocol type, extension element>, where the extension element may be the identifier of a local area network or the identifier of a virtual private network (English: virtual private network, VPN); the identifier of the local area network may be represented as a VLAN-ID, and the identifier of the virtual private network is recorded as a VPN-ID.
  • X1 ⁇ X5 have the same meaning as above
  • X6 is VLAN-ID or VPN-ID.
  • the shunt algorithm can perform similar extensions if the data stream's identity also contains a greater number of optional elements.
  • the structure of the shunt unit LB 1 or LB 2 in Fig. 2 is as shown in Fig. 3.
• the shunting unit includes a processor 31, a memory 32, and a network interface 33, and the processor 31, the memory 32, and the network interface 33 are connected to each other through a data bus 34.
  • the processor 31 is one or more processors, and may also be a Network Processor (NP).
• the memory 32 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), or a portable read-only memory (CD-ROM).
• the memory 32 is configured to save the traffic distribution table, which records the correspondence between data flow identifiers and processing unit identifiers.
• after receiving a packet from another network device, the offloading unit first extracts the data flow identifier of the packet from the packet header and queries whether the identifier exists in the traffic distribution table. If it does, the processing unit identifier corresponding to the data flow identifier is obtained from the traffic distribution table, and the packet is sent to the processing unit indicated by that identifier. If the data flow identifier does not exist in the traffic distribution table, the output result of the offloading algorithm is obtained according to the data flow identifier and the offloading algorithm; the output result indicates one processing unit in the distributed device. The packet is sent to the processing unit indicated by the output result, and the correspondence between the data flow identifier of the packet and the output result is stored in the traffic distribution table, so that after the network device receives subsequent packets of the same data stream from the other network device, offloading is performed according to the traffic distribution table.
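The table-then-algorithm lookup described above can be sketched as follows (a simplified model; in the device the table lives in the memory 32 and the logic runs on the processor 31, and all names here are illustrative):

```python
def dispatch(flow_id, flow_table, num_units, offload_algorithm):
    # First consult the traffic distribution table; on a miss, run the
    # offloading algorithm, cache the result so that subsequent packets
    # of the same data stream follow the table, and return the unit.
    if flow_id in flow_table:
        return flow_table[flow_id]
    unit = offload_algorithm(flow_id) % num_units
    flow_table[flow_id] = unit
    return unit
```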
• the processor 31 is configured to perform the offloading process when the data flow identifier of the packet is not saved in the traffic distribution table, and to control sending the packet to the processing unit.
• for the detailed process of the processor 31 performing offloading based on the offloading algorithm, refer to the description in the following embodiments.
  • the network interface 33 in FIG. 3 may be, for example, a standard Ethernet interface, or may be an inter-board cell switching interface in a box device, such as HiGig.
• the structure of one of the processing units SPU 1 to SPU n in FIG. 2 is as shown in FIG. 4.
• the processing unit includes a processor 41, a memory 42, and a network interface 43, and the processor 41, the memory 42, and the network interface 43 are connected to one another.
  • the processor 41 may be one or more processors, and the processor may be a single core processor or a multi-core processor, such as an Intel Corporation X86 processor.
  • Memory 42 includes random access memory, flash memory, content addressable memory (CAM), or any combination thereof.
  • the memory 42 is used to store a public network address pool (or load balancing table) and an address mapping table.
  • the public address pool is used to store the public IP address pre-configured for the processing unit and the port number configured on the public network address.
  • the load balancing table is used to save the private network IP address and port number of the server in the private network for providing the specified service in the SLB scenario.
  • the address mapping table is used to store the correspondence between the data stream identifier of a packet before the address translation and the data stream identifier after the address translation.
  • the memory table 42 further stores a session table, where the session table is used to save a correspondence between the data flow identifier and the data flow state.
• the processor 41 is configured to: when receiving a packet sent by the offloading unit, query the address mapping table for the corresponding data flow identifier according to the data flow identifier of the packet. If the corresponding data flow identifier is found, the address of the packet is translated using the found data flow identifier, and the address-translated packet is sent to the other offloading unit. If the corresponding data flow identifier is not found in the address mapping table, a data flow identifier is allocated to the packet according to the public network address pool or the load balancing table, the packet is address-translated according to the allocated data flow identifier, and the address-translated packet is sent to the other offloading unit.
• an entry of the address mapping table is used to record the correspondence between the data flow identifier before the address translation and the data flow identifier after the address translation.
  • the processor 41 is further configured to update the session table, for example, create a session table entry or update a session state, a hit time, and the like in the saved session table entry.
  • FIG. 5 is a flowchart of a packet processing method provided by an embodiment of the present application.
• Step 51: The first offloading unit of the distributed device receives the first packet from the first network, and selects the first processing unit from at least two processing units of the distributed device according to the data flow identifier of the first packet and a first offloading algorithm.
  • the output of the first shunt algorithm is used to indicate one of the at least two processing units.
  • the processing unit indicated by the output result of the first offload algorithm is referred to as a target processing unit. It is assumed in the present embodiment that the target processing unit indicated by the output result of the first shunt algorithm is the first processing unit.
  • the first network may be a private network or a public network.
• if the first network is a private network using a local area network as an example, the second network that appears later is a public network using the Internet as an example. If the first network is a public network using the Internet as an example, the second network that appears later is a private network using a local area network as an example.
  • the structure of the distributed device in this embodiment is as shown in FIG. 2, and the first shunting unit may be LB 1 or LB 2 in FIG. 2. If the first shunt unit is LB 1 in Fig. 2, the second shunt unit that appears later is LB 2 in Fig. 2. If the first shunt unit is LB 2 in Fig. 2, the second shunt unit is LB 1 in Fig. 2.
• the first offloading algorithm may be the existing offloading algorithm exemplified above, or may be the improved offloading algorithm described above.
  • the "first message” is named in order to distinguish it from other messages that appear later.
  • Step 52 The first offloading unit sends the first packet to the first processing unit.
• Step 53: The first processing unit allocates an IP address and a port number to the first packet. The allocated port number satisfies the following condition: according to the data flow identifier of the reverse packet of the address-translated first packet and a second offloading algorithm, the second processing unit selected from the at least two processing units is the same processing unit as the first processing unit.
  • the second offloading algorithm is an algorithm used by the second offloading unit to split the reverse packet of the first packet after the address translation.
• the source port number of the address-translated first packet is the allocated port number, and one of the source IP address and the destination IP address of the address-translated first packet is the allocated IP address. In some cases, the source IP address of the address-translated first packet is the allocated IP address; in other cases, the destination IP address of the address-translated first packet is the allocated IP address. Details are described later in connection with different examples.
  • the first offloading algorithm used by the first offloading unit to split the first packet and the second offloading algorithm used by the second offloading unit to split the reverse packet of the first packet may be different shunting algorithms.
  • the first offload algorithm may be the same as the second offload algorithm. It is assumed below that both the first shunting unit and the second shunting unit use the improved shunting algorithm mentioned above, and the shunting process of the first shunting unit and the second shunting unit is briefly described here.
  • the quintuple ⁇ source IP address, destination IP address, source port, destination port, protocol type> is used to indicate the data flow identifier.
  • step 51 the shunting process of the first shunting unit includes:
  • the first shunting unit inputs each element in the quintuple of the first message into a predetermined shunting algorithm in a first manner.
  • the first mode indicates the order in which the elements are input to the offload algorithm.
  • the first mode includes, but is not limited to, inputting HASH2 in the order of “source IP address, destination IP address, source port number, destination port number, protocol type”.
• the first mode is equivalent to inputting HASH1 in the order of "source IP address, destination IP address, source port number, first port number, protocol type, adjustment value", where X1 is the source IP address in the quintuple, X2 is the destination IP address, X3 is the source port, X4 is the destination port, X5 is the protocol type, and NewPort1 is the first port number.
• the calculation process of the target processing unit can be expressed by the following formula:
  • Target processing unit ID = HASH2(source IP address of the first packet, destination IP address of the first packet, source port number of the first packet, destination port number of the first packet, protocol type of the first packet) MOD(N) = (HASH1(source IP address of the first packet, destination IP address of the first packet, source port number of the first packet, first port number, protocol type of the first packet) + adjustment value) MOD(N).
• similar to step 51, the process in which the first processing unit assigns the port number in step 53 is as follows.
  • the design goal is: if each element in the data flow identifier of the reverse packet of the address-translated first packet were input into the second offloading algorithm in the first manner, the obtained offloading result should be the value corresponding to the first processing unit. This is equivalent to: if the first processing unit inputs each element in the data flow identifier of the address-translated first packet into the predetermined offloading algorithm in a second manner, the obtained offloading result should be the value corresponding to the first processing unit.
  • the second mode is a mode corresponding to the first mode.
• the data flow identifier of the reverse packet corresponds to the data flow identifier of the address-translated first packet as follows: the source IP address of the reverse packet is the destination IP address of the address-translated first packet; the source port number of the reverse packet is the destination port number of the address-translated first packet; the destination IP address of the reverse packet is the source IP address of the address-translated first packet; and the destination port number of the reverse packet is the source port number of the address-translated first packet.
  • the first processing unit may input the data stream identifier corresponding to the first packet after the address translation into the second offload algorithm in the order indicated by the second manner.
  • the location where the source IP address is located is the same as the location where the destination IP address is located in the order indicated by the first mode.
  • the location where the destination IP address is located is the same as the location where the source IP address is in the order indicated by the first mode.
  • the source port number is located at the same location as the destination port number in the order indicated by the first mode.
  • the location of the destination port number is the same as the location of the source port number in the order indicated by the first mode.
• for example, if the order indicated by the first mode is "source IP address, destination IP address, source port number, destination port number, protocol type", then the order indicated by the second mode is "destination IP address, source IP address, destination port number, source port number, protocol type".
  • the address translation does not change the protocol type of the first packet. Therefore, the protocol type of the first packet after the address translation is the same as the protocol type of the first packet.
• the source port number of the address-translated first packet is the allocated port number. Therefore, the order indicated by the second mode can also be expressed as: the destination IP address of the address-translated first packet, the source IP address of the address-translated first packet, the destination port number of the address-translated first packet, the allocated port number, and the protocol type.
• this step is equivalent to deriving the allocated port number when the offloading result is known, namely that the offloading result is the first processing unit:
  • Adjustment value = (target processing unit ID − HASH1(destination IP address of the address-translated first packet, source IP address of the address-translated first packet, destination port number of the address-translated first packet, second port number, protocol type of the first packet)) MOD(N).
  • the port number is specially designed to ensure that the reverse traffic can be offloaded to the processing unit.
• this special design calculates the unknown source port number to be assigned by dividing it into a setting part and a compensation part.
  • the setting part is the above second port number
  • the compensation part is the above adjustment value.
• the setting part refers to a port number that can be determined in a predetermined manner before substitution into the above formula.
  • the first processing unit can determine the adjustment value by inputting the known target processing unit identifier and the second port number into a known shunting algorithm. Then, the second port number and the adjustment value are summed, and the summation result is used as the source port number that satisfies the requirement.
  • the second port number is determined first.
  • the manner in which the first processing unit determines the second port number is to select a port number from the port number segment corresponding to the first processing unit as the second port number.
• the first processing unit uses the sum of the second port number and the adjustment value as the allocated port number, and the allocated port number is used as the source port number of the address-translated first packet.
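The derivation in step 53 can be sketched end to end (again assuming the XOR-based example hash; N = 8 and all names are illustrative, not the patented implementation):

```python
N = 8  # assumed number of processing units

def hash1(*elements):
    # Assumed HASH1: XOR of all input elements.
    h = 0
    for e in elements:
        h ^= e
    return h

def allocate_source_port(unit_id, remote_ip, local_ip, remote_port,
                         proto, second_port):
    # The reverse packet will carry the flow identifier
    # <remote_ip, local_ip, remote_port, allocated_port, proto>.
    # Split allocated_port = second_port + adjustment and solve for the
    # adjustment so the improved algorithm selects unit_id.
    adjustment = (unit_id - hash1(remote_ip, local_ip, remote_port,
                                  second_port, proto)) % N
    return second_port + adjustment

def hash2(src_ip, dst_ip, src_port, dst_port, proto, first_port):
    # The improved offloading algorithm, as applied by the offloading
    # unit to the reverse packet.
    adjustment = dst_port - first_port
    return (hash1(src_ip, dst_ip, src_port, first_port, proto)
            + adjustment) % N
```

Whatever value the hash takes, the adjustment cancels it modulo N, so the reverse packet lands on the intended processing unit.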
• Step 54: The first processing unit performs address translation on the first packet to obtain the translated first packet, where the address translation includes replacing the source port number of the first packet with the allocated port number.
  • Step 55 The first processing unit sends the first packet after the address conversion to the second offloading unit.
  • Step 56 The second offloading unit sends the first packet after the address translation to the second network.
  • the embodiment of the present application provides a packet processing method in a distributed device, where the distributed device includes two offloading units and several processing units.
• when a processing unit receives a packet from one offloading unit and assigns an IP address and a port number to be used as the source port after address translation, the assigned port number satisfies the condition that, when the other offloading unit uses the offloading algorithm to offload the reverse packet of the address-translated packet, the reverse packet is still offloaded to the same processing unit.
• the source port number to be allocated is split into two parts, and the port number that satisfies the condition is derived in reverse according to the improved offloading algorithm when the offloading result is known.
• therefore, the distributed device does not need to consume resources to synchronize and maintain all session table data among multiple processing units, and the application scenario of the distributed device is freed from constraints imposed by the characteristics of the data on which such synchronization depends.
• the packet processing method provided by the embodiments of the present application ensures that the bidirectional data streams of both sides of a session are offloaded to the same processing unit of the distributed device, so that service processing failures caused by a processing unit not storing the address mapping relationship no longer occur, thereby improving the operational reliability of distributed devices.
  • FIG. 6 is a schematic diagram of a scenario in which a NAT device applies a packet processing method shown in FIG. 5 to process an uplink packet and a corresponding downlink packet in a process in which a LAN user initiates an access to a service provided by the Internet.
  • the distributed device is specifically a NAT device.
  • the NAT device includes two offloading units, namely LB 1 and LB 2.
  • the NAT device further includes eight processing units, which are SPU 1 to SPU 8.
• the correspondence between SPU 1 to SPU 8 and the values of the output result of the offloading algorithm is shown in Table 1.
• Table 1

  Name of processing unit    Value of the output result of the offloading algorithm
  SPU 1                      0
  SPU 2                      1
  SPU 3                      2
  SPU 4                      3
  SPU 5                      4
  SPU 6                      5
  SPU 7                      6
  SPU 8                      7
  • FIG. 7 is a flow chart of processing a message by a NAT device in the scenario shown in FIG. 6.
  • the message processing method shown in FIG. 7 includes the following steps.
  • Step 71 The host in the local area network sends the first packet, where the first packet is an uplink packet.
  • the first packet is an access request packet triggered by a network access behavior.
• when a user accesses the website abc.com by using a browser running on a host in the local area network, the host generates an access request message and sends the access request message to the NAT device.
  • the source address of the access request packet is the private network address 10.10.10.10 of the LAN host, and the source port number is 10000.
  • the destination address of the access request packet is the public network address 30.30.30.30 of the web server and the destination port number is 30000.
  • the web server provides the web service of the website abc.com through the port 30000 on the public network address 30.30.30.30.
  • the uplink packet and the downlink packet may be distinguished according to the pre-configured port-to-network correspondence.
  • the first port segment of the NAT device is connected to the local area network, and the second port segment is connected to the Internet.
• a packet received by a port in the first port segment of the NAT device is an uplink packet.
  • the packet received by the port in the second port segment of the NAT device is the downlink packet.
  • the first packet is a packet received by the port in the first port segment of the NAT device, and thus the first packet is determined to be an uplink packet.
• Step 72: The offloading unit LB 1 in the NAT device receives the first packet sent by the host. If the data flow identifier of the first packet is not stored in the traffic distribution table of LB 1, the data flow identifier is input into the offloading algorithm, and the processing unit SPU 5 is selected from the eight processing units of the NAT device according to the output result of the offloading algorithm.
• in this embodiment, the offloading algorithm used by LB 1 is the same as the offloading algorithm used by LB 2, and both are referred to simply as "the offloading algorithm".
• alternatively, the offloading algorithm used by LB 1 may be different from the offloading algorithm used by LB 2, as long as the processing unit and LB 2 use the same offloading algorithm.
• a symmetric algorithm means that, when the elements of a packet's quintuple are input into the algorithm, if the positions of the source IP address and the destination IP address in the input sequence are exchanged, and the positions of the source port number and the destination port number in the input sequence are exchanged, the output of the algorithm after the exchange is the same as the output before the exchange.
• an asymmetric algorithm means that the output of the algorithm after such an exchange of positions is different from the output before the exchange.
• the advantage of a symmetric algorithm is that, although the algorithm specifies the order in which the elements of the quintuple of the packet to be offloaded are input into the algorithm, it is not necessary to distinguish whether the packet to be offloaded is an uplink packet or a downlink packet.
• specific examples of symmetric and asymmetric algorithms are given below.
  • the source IP address of the packet to be offloaded is 10.10.10.10
  • the destination IP address is 30.30.30.30
  • the source port number is 10000
  • the destination port number is 30000
  • the protocol is TCP.
  • the protocol number of TCP in the relevant standard is 6.
• an asymmetric hash algorithm may be:
  • HASH(source IP address, destination IP address, source port number, destination port number, protocol number) = (source IP address − destination IP address) XOR (source port number − destination port number) XOR protocol number, where XOR denotes a byte-by-byte exclusive OR on the hexadecimal representations.
  • that is, the algorithm first finds, byte by byte in hexadecimal, the difference between the source IP address and the destination IP address and the difference between the source port number and the destination port number, and then XORs the two differences with the protocol number into a single byte. Exchanging the source and destination fields negates the differences, so the result generally changes.
• a symmetric hash algorithm may be:
  • HASH(source IP address, destination IP address, source port number, destination port number, protocol number) = source IP address XOR destination IP address XOR source port number XOR destination port number XOR protocol number. Because XOR is commutative, exchanging the source and destination fields does not change the result.
• symmetric and asymmetric algorithms are not limited to the above. Another symmetric algorithm first finds, in hexadecimal, the sum of the source IP address and the destination IP address and the sum of the source port number and the destination port number, and then XORs the two sums with the protocol number into a single byte. Other symmetric and asymmetric algorithms are not enumerated here.
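The swap behaviour that distinguishes the two families can be checked directly; the sketch below uses integer-encoded fields and whole-value arithmetic instead of byte-wise operations for brevity (names are hypothetical):

```python
def sym_hash(src_ip, dst_ip, sport, dport, proto):
    # XOR is commutative, so exchanging source and destination fields
    # (as between a packet and its reverse) leaves the result unchanged.
    return src_ip ^ dst_ip ^ sport ^ dport ^ proto

def asym_hash(src_ip, dst_ip, sport, dport, proto):
    # Differences change sign when source and destination are swapped,
    # so the result generally changes.
    return (src_ip - dst_ip) ^ (sport - dport) ^ proto
```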
• the offloading process is described below using the first symmetric algorithm listed above as an example of the offloading algorithm.
  • the NAT device can distinguish between the uplink packet and the downlink packet, so an asymmetric algorithm can also be used in this embodiment.
  • the data flow identifier is a quintuple ⁇ source IP address, destination IP address, source port, destination port, protocol type>.
  • the LB 1 in FIG. 2 receives the first packet sent by the host, and if the data stream identifier ⁇ 10.10.10.10, 30.30.30.30, 10000, 30000, TCP> is not stored in the offload table of the LB 1, the data is The flow identifier is input to the first offload algorithm in a first manner.
  • the first mode is “source IP address, destination IP address, source port number, first port number, protocol type, and adjustment value” as an example.
• Value corresponding to the target SPU = HASH2(10.10.10.10, 30.30.30.30, 10000, 30000, TCP) MOD(N).
• the first port number may be determined in either of the following manners.
  • Method 1: The consecutive port numbers starting after the reserved port numbers are divided into at least one port number group, where the first port number of each group is a port number whose last three binary digits are 0; the port number at the predetermined location before the destination port number, in the port number group in which the destination port number of the first packet is located, is taken as the first port number. For example, grouping starts from port number 1000.
• Method 2: The contiguous port numbers from the reserved port numbers up to the destination port number of the first packet are divided into groups according to a predetermined number of port numbers per group, forming at least one port number group; the port number at the specified location before the destination port number, in the port number group in which the destination port number of the first packet is located, is taken as the first port number. For example, the reserved port numbers are 1 to 999, and the consecutive port numbers starting from 1000 are grouped, with every 8 consecutive port numbers forming a group.
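Method 2 can be sketched as follows (reserved ports 1 to 999 and groups of 8 are taken from the example above; the function name is hypothetical):

```python
BASE = 1000      # first non-reserved port number, per the example
GROUP_SIZE = 8   # predetermined number of port numbers per group

def first_port_of_group(dst_port):
    # Return the first port number of the group containing dst_port,
    # used as the "first port number" in the improved algorithm.
    offset = dst_port - BASE
    return BASE + (offset // GROUP_SIZE) * GROUP_SIZE
```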
  • Step 73 The offloading unit LB1 in the NAT device sends the first packet to the processing unit SPU 5.
  • step 74 the processing unit SPU 5 allocates an IP address and a port number to the first packet.
  • the assigned new IP address is used as the source IP address after the address translation, and the assigned port number is used as the source port number after the address translation.
  • the allocated port number satisfies the condition: the SPU 5 inputs the data stream identifier of the first packet after the address translation into the predetermined shunt algorithm according to the second manner corresponding to the first manner, according to the output result of the shunt algorithm, The processing unit selected among the at least two processing units is still the SPU 5.
  • the public network addresses configured by the processing units are both 20.20.20.20, that is, each processing unit shares the same public network address, but each processing unit is configured with a different port number range on 20.20.20.20. This can save limited public address resources.
  • the port number configured on the SPU 5 ranges from 2000 to 9999, for a total of 8000 port numbers.
• the data flow identifier of the address-translated first packet is <20.20.20.20, 30.30.30.30, new source port number, 30000, TCP>.
  • the SPU 5 inputs the data stream identifier of the converted first packet into the offload algorithm according to the second manner.
  • the second mode is the destination IP address, the source IP address, the destination port number, the second port number, the protocol type, and the adjustment value.
• since the output value of the offloading algorithm corresponding to the target SPU 5 is known to be 4 (see Table 1), the process of calculating the new source port number is actually converted into the process of calculating NewPort1 and NewPort2.
  • the SPU 5 selects a port number from the port number segment corresponding to the SPU 5 as NewPort1.
• one manner of selecting is that the SPU 5 divides the port number segment 2000-9999 into at least one port number group according to a predetermined number of port numbers, randomly selects one port number group from the at least one port number group, and determines the port number at a specified location in the selected group as the second port number.
• for example, the SPU 5 divides the port number segment 2000-9999 into 1000 groups of 8 port numbers each, and randomly selects one group from the 1000 groups, for example the second port group, which includes the port numbers 2008 to 2015. The first port number of the second port group, 2008, is selected as NewPort1.
  • alternatively, the SPU 5 expresses the port numbers in the segment 2000-9999 in hexadecimal and takes the port numbers whose last three hexadecimal digits are 0 as the first port number of each group, thereby grouping the segment 2000-9999. The SPU 5 then randomly selects a port number group from the at least one port number group and determines that the port number at a predetermined location in the selected group is the second port number.
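The two grouping schemes above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; the function names and the use of `random` are assumptions:

```python
import random

def pick_newport1_fixed_groups(lo=2000, hi=9999, group=8):
    """Scheme 1: divide [lo, hi] into consecutive groups of `group`
    ports (here 1000 groups of 8) and return the first port of a
    randomly chosen group as NewPort1."""
    num_groups = (hi - lo + 1) // group
    return lo + random.randrange(num_groups) * group

def pick_newport1_hex_groups(lo=2000, hi=9999):
    """Scheme 2: ports whose last three hexadecimal digits are 0 mark
    the first port of each group (0x1000 = 4096 and 0x2000 = 8192 fall
    inside this segment)."""
    bases = [p for p in range(lo, hi + 1) if p % 0x1000 == 0]
    return random.choice(bases)
```

Either function yields a candidate NewPort1 that is the first port of its group, which is what lets the offloading unit later recover NewPort1 from the allocated port.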
  • after determining NewPort1, the SPU 5 further calculates NewPort2.
  • the first processing unit further sets a usage status identifier for each port number in the corresponding port segment; the identifier records whether the port number has been used. For example, a value of 1 indicates that the port number has been used, and a value of 0 indicates that it has not.
  • when the usage status identifier shows that the port number assigned by the first processing unit to the first packet has already been used, the second port number is re-determined by the above method, the adjustment value is recalculated according to the re-determined second port number, and the two are summed to assign a port number again.
  • the usage status identifier corresponding to the allocated port number is then set to the value corresponding to "in use".
  • when the first processing unit determines, according to the session state recorded in the session table, that the session established using an allocated port number has ended, it resets the usage status identifier of that port number to the value corresponding to "unused", so that the port number can be reused. For example, when the first processing unit assigns port number 2008 to the first packet, it sets the state of port number 2008 to 1; after the first processing unit determines from the session state in the session table that the session to which the first packet belongs has ended, it sets the state of port number 2008 to 0.
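A minimal Python sketch of the usage-status bookkeeping described above (the class and method names are illustrative, not from the patent):

```python
class PortSegment:
    """Toy model of a processing unit's port segment with a per-port
    usage flag (1 = in use, 0 = free)."""
    def __init__(self, lo, hi):
        self.lo = lo
        self.used = [0] * (hi - lo + 1)

    def allocate(self, port):
        if self.used[port - self.lo]:
            return False       # already in use: pick another port
        self.used[port - self.lo] = 1
        return True

    def release(self, port):
        # called when the session using this port ends
        self.used[port - self.lo] = 0

segment = PortSegment(2000, 9999)   # SPU 5's segment from the example
```

Releasing a port when its session ends is what allows the limited segment to be reused across sessions.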
  • step 75 the SPU 5 performs address translation on the first packet.
  • the address translation includes: replacing the source IP address 10.10.10.10 of the first packet with the new IP address 20.20.20.20, and replacing the source port number 10000 with the assigned port number 2008, thereby obtaining the converted first packet.
  • the data stream identifier of the first packet after the address translation is <20.20.20.20, 30.30.30.30, 2008, 30000, TCP>.
  • step 76 the SPU 5 sends the first packet after the address conversion to the offloading unit LB 2.
  • step 77 the LB 2 sends the first packet after the address conversion to the server in the Internet.
  • the method further includes step 78: the SPU 5 creates a new entry in the address mapping table, where the entry includes the data flow identifier of the first packet before address translation, <10.10.10.10, 30.30.30.30, 10000, 30000, TCP>, and the data flow identifier of the first packet after address translation, <20.20.20.20, 30.30.30.30, 2008, 30000, TCP>.
  • the NAT device can ensure that after the LB 2 receives the second packet from the Web server, the LB 2 can offload the second packet to the SPU 5 for processing.
  • the second packet is a response packet corresponding to the access request packet, and is also a reverse packet of the first packet.
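The address-mapping entry of step 78 and its use for the reverse packet can be illustrated with a small Python sketch; the dictionary layout and function names are assumptions for illustration only:

```python
# a flow is keyed by its 5-tuple (src_ip, dst_ip, src_port, dst_port, proto)
pre  = ("10.10.10.10", "30.30.30.30", 10000, 30000, "TCP")  # before NAT
post = ("20.20.20.20", "30.30.30.30", 2008, 30000, "TCP")   # after NAT

# the reverse packet of `post` is matched by swapping its source and
# destination fields, so index the entry under that swapped key
mapping = {(post[1], post[0], post[3], post[2], post[4]): pre}

def translate_reverse(pkt_flow):
    """Rewrite a downlink packet's flow back to the private-side flow
    (the reverse of the pre-NAT flow)."""
    orig = mapping[pkt_flow]
    return (orig[1], orig[0], orig[3], orig[2], orig[4])

second = ("30.30.30.30", "20.20.20.20", 30000, 2008, "TCP")
```

With this entry in place, `translate_reverse(second)` restores the destination 10.10.10.10:10000, matching steps 712-714 below.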
  • the processing method of FIG. 7 further includes steps 79-714.
  • step 79 the LB 2 on the NAT device receives the second packet, and the second packet is the response packet of the first packet sent by the web server.
  • after receiving the first packet, the web server obtains the URL path /index.html of the resource requested by the host from the request line "GET /index.html HTTP/1.1" of the first packet. If a resource with the path /index.html exists, the server carries the status line HTTP/1.1 200 OK and other information in the second packet to describe the server and the policy to apply when the resource is accessed again in the future.
  • Step 710: on the condition that the data flow identifier of the second packet is not stored in the traffic distribution table of the traffic distribution unit LB 2 of the NAT device, the LB 2 inputs the second packet into the traffic distribution algorithm and, according to the output result of the algorithm, selects the processing unit SPU 5 from the plurality of processing units of the device.
  • the data stream identifier of the second message is <30.30.30.30, 20.20.20.20, 30000, 2008, TCP>.
  • the LB 2 inputs the data stream identifier into the offload algorithm in the first manner.
  • the value corresponding to the target SPU is HASH2(30.30.30.30, 20.20.20.20, 30000, 2008, TCP) MOD(N), and the output result is 4.
  • the processing unit corresponding to the output result 4 is the SPU 5.
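Step 74's port allocation and the shunting of the reverse packet just described can be sketched together in Python. The hash below is a toy stand-in for the unspecified predetermined hash; N = 8, the group size of 8, and the port segment base 2000 follow the example above, and all function names are illustrative:

```python
N = 8         # number of SPUs; SPU 5 corresponds to the value 4
TARGET = 4

def hash1(a, b, c, d, proto):
    # toy stand-in for the predetermined hash: XOR over all field bytes
    h = 0
    for byte in f"{a}|{b}|{c}|{d}|{proto}".encode():
        h ^= byte
    return h

def lb_shunt(src, dst, sport, dport, proto, base=2000, group=8):
    """LB side: split dport into NewPort1 (its group's first port) and
    the adjustment value, then apply the modified algorithm."""
    port1 = base + ((dport - base) // group) * group
    return (hash1(src, dst, sport, port1, proto) + (dport - port1)) % N

def spu_allocate(dst, new_ip, dport, proto, newport1):
    """SPU side: back-compute NewPort2 so that the reverse packet of
    the translated flow is shunted to this SPU again (the second manner
    swaps source and destination relative to the first manner)."""
    newport2 = (TARGET - hash1(dst, new_ip, dport, newport1, proto)) % N
    return newport1 + newport2

# SPU 5 allocates the new source port for the translated uplink flow
new_port = spu_allocate("30.30.30.30", "20.20.20.20", 30000, "TCP",
                        newport1=2008)
# LB 2 shunts the reverse (downlink) packet and lands on value 4 again
result = lb_shunt("30.30.30.30", "20.20.20.20", 30000, new_port, "TCP")
```

Because NewPort2 is at most N-1 = 7, the allocated port stays inside NewPort1's group of 8, so the LB recovers the same NewPort1 and adjustment value and the modulo arithmetic cancels back to the target value 4.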
  • step 711 the offloading unit LB 2 on the NAT device sends the second packet to the processing unit SPU 5.
  • Step 712 The processing unit SPU 5 searches the address mapping table, and performs address translation on the second packet according to the found entry.
  • because the data stream identifier of the second packet exists in an entry stored on the processing unit SPU 5, the SPU 5 can perform address translation on the second packet according to the corresponding data stream identifier in that entry, converting the destination address of the second packet to 10.10.10.10 and the destination port to 10000, thereby obtaining the address-translated second packet.
  • processing unit SPU 5 may further modify the session state recorded in the corresponding session entry according to other information carried in the second packet, for example, the content of the status line.
  • step 713 the processing unit SPU 5 sends the second packet after the address conversion to the splitting unit LB1.
  • step 714 the offloading unit LB1 sends the second packet after the address translation to the host in the local area network, where the host is the host that sent the first packet.
  • in this way, after receiving the second packet, the processing unit SPU 5 can process it according to the address mapping entry and the session entry, thereby avoiding processing failures.
  • each processing unit can use the same public network IP address when performing address translation, thereby avoiding waste of address resources and saving limited address resources.
  • FIG. 8 is a schematic diagram of a scenario in which a SLB device processes a downlink packet and a corresponding uplink packet by using a packet processing method shown in FIG. 5 in a process in which a user accesses a service provided by a server in a local area network.
  • the distributed device is specifically an SLB device, and the SLB device includes two offloading units, namely LB 1 and LB 2, and the SLB device further includes eight processing units, which are respectively SPU 1 to SPU 8.
  • the values of 0-7 corresponding to SPU 1 to SPU 8 refer to Table 1.
  • FIG. 9 is a flow chart showing the processing of the message by the SLB device in the scenario shown in Figure 8.
  • the message processing method shown in FIG. 9 includes the following steps.
  • Step 91 The host in the Internet sends the first packet, where the first packet is a downlink packet.
  • the server cluster in the local area network provides the web service of the website cd.com.
  • the host on the Internet learns, through Domain Name System (DNS) resolution, that the virtual IP address providing the web service of the website cd.com is 22.22.22.22 and that the port number is 2222.
  • the first message sent by the host on the Internet is an access request message.
  • the destination address of the access request packet is the above virtual IP address 22.22.22.22, the destination port number is 2222, the source IP address is the IP address of the host in the Internet 33.33.33.33, and the source port number is 3333.
  • the offloading unit LB 2 in the SLB device receives the first packet.
  • on the condition that the data stream identifier of the first packet is not stored in the split table, the splitting unit LB 2 inputs the first packet into the predetermined shunting algorithm and, according to the output result of the algorithm, selects a processing unit from the eight processing units of the SLB device.
  • the shunting algorithm used by the shunting unit LB 2 is the same as that used by LB 1 and by each processing unit, and is therefore referred to simply as the shunting algorithm without further distinction.
  • the symmetric algorithm in the above embodiment is taken as an example to describe the shunting process. Optionally, an asymmetric algorithm may also be used in this embodiment, because the SLB device can also distinguish between uplink packets and downlink packets by using pre-configured port information.
  • LB 2 inputs the data stream identifier into the offload algorithm in the first manner.
  • the first manner in this embodiment is the same as that in the embodiment shown in FIG. 7.
  • LB 2 determines, in a method similar to step 72 of FIG. 7, that NewPort1 is 3328 and NewPort2 is 5; the specific process is not repeated here. LB 2 substitutes NewPort1 and NewPort2 into the above formula.
  • the output result of the shunt algorithm calculated by LB 2 is 2, and the indicated processing unit is SPU 3.
  • step 93 the offloading unit LB 2 in the SLB device sends the first packet to the processing unit SPU 3.
  • step 94 the processing unit SPU 3 allocates an IP address and a port number to the first packet.
  • the assigned new IP address is used as the destination IP address after the address translation, and the assigned port number is used as the source port number after the address translation.
  • the assigned port number satisfies the following condition: when the SPU 3 inputs the data stream identifier of the address-translated first packet into the predetermined shunt algorithm in the second manner corresponding to the first manner, the processing unit selected from the at least two processing units according to the output result of the shunt algorithm is still the SPU 3.
  • the SPU 3 selects, according to a predetermined load balancing algorithm, the real address and port number of the server that actually processes the access request. For example, in the local area network, four servers jointly provide the web service of the website cd.com; the addresses and port numbers of the four servers are shown in Table 2.
  • the load balancing algorithm may randomly select one server from the four servers, or may select the server with the lowest current resource occupancy rate according to the current resource occupancy rates of the four servers; the options are not listed one by one here.
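The two selection policies mentioned above can be sketched in Python. The pool below is hypothetical except for server 1's address and port, which come from this example; all other entries and the load figures are invented for illustration:

```python
import random

# hypothetical pool for cd.com; only server 1's address/port are from
# the example, the rest is invented
servers = [
    {"ip": "11.11.11.11", "port": 1111, "load": 0.40},  # server 1
    {"ip": "12.12.12.12", "port": 1212, "load": 0.10},
    {"ip": "13.13.13.13", "port": 1313, "load": 0.75},
    {"ip": "14.14.14.14", "port": 1414, "load": 0.30},
]

def pick_random(pool):
    """Policy 1: pick one of the servers at random."""
    return random.choice(pool)

def pick_least_loaded(pool):
    """Policy 2: pick the server with the lowest current occupancy."""
    return min(pool, key=lambda s: s["load"])
```

Either policy returns the real address-port pair that replaces the virtual address 22.22.22.22:2222 during address translation.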
  • assume the server selected by the SPU 3 is the server 1. Then the destination IP address of the first packet after address translation is 11.11.11.11 and the destination port number is 1111. After the SPU 3 performs address translation on the first packet, the data stream identifier of the first packet is <33.33.33.33, 11.11.11.11, new source port number, 1111, TCP>.
  • the SPU 3 inputs the data stream identifier of the converted first packet into the offload algorithm according to the second manner.
  • the second manner indicates the order: destination IP address, source IP address, destination port number, second port number, and protocol type, plus the adjustment value.
  • NewPort1 is 3328.
  • after determining NewPort1, the SPU 3 further calculates NewPort2 by a method similar to the calculation of NewPort2 in step 74 of FIG. 7.
  • NewPort2 = 2 - HASH1(11.11.11.11, 33.33.33.33, 1111, NewPort1, TCP) MOD(8)
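The back-calculation of NewPort2 for this SLB example can be sketched as follows, again with a toy stand-in for HASH1; the concrete value of `allocated` therefore depends on the toy hash and need not equal the 3330 of the text:

```python
N, TARGET = 8, 2    # SPU 3 corresponds to the value 2

def hash1(a, b, c, d, proto):
    # toy stand-in for HASH1: XOR over all field bytes
    h = 0
    for byte in f"{a}|{b}|{c}|{d}|{proto}".encode():
        h ^= byte
    return h

# second manner: destination IP, source IP, destination port, NewPort1
newport1 = 3328
newport2 = (TARGET - hash1("11.11.11.11", "33.33.33.33", 1111,
                           newport1, "TCP")) % N
allocated = newport1 + newport2   # new source port after translation

# LB 1 recovers NewPort1 and the adjustment value from the reverse
# packet's destination port (groups of 8 aligned on multiples of 8)
port1 = (allocated // 8) * 8
result = (hash1("11.11.11.11", "33.33.33.33", 1111, port1, "TCP")
          + (allocated - port1)) % N
```

The hash inputs on both sides are identical, so `result` necessarily lands back on the value 2, i.e. on SPU 3.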
  • step 95 the SPU 3 performs address translation on the first packet.
  • the address translation includes: replacing the destination IP address 22.22.22.22 of the first packet with the new IP address 11.11.11.11, replacing the destination port number of the first packet with the port number 1111, and replacing the source port number 3333 with the allocated port number 3330, thereby obtaining the translated first packet.
  • the data stream identifier of the converted first packet is <33.33.33.33, 11.11.11.11, 3330, 1111, TCP>.
  • step 96 the SPU 3 sends the address-converted first packet to the offloading unit LB1.
  • Step 97 The offloading unit LB1 sends the first packet after the address conversion to the server in the local area network.
  • the method further includes step 98: the SPU 3 creates a new entry in the address mapping table, where the entry includes the data flow identifier of the first packet before address translation, <33.33.33.33, 22.22.22.22, 3333, 2222, TCP>, and the data stream identifier of the first packet after address translation, <33.33.33.33, 11.11.11.11, 3330, 1111, TCP>.
  • the SLB device can ensure that after the LB 1 receives the second packet from the server 1 in the local area network, the LB 1 can offload the second packet to the SPU 3 for processing.
  • the second packet is a response packet corresponding to the access request, and is also a reverse packet of the first packet.
  • the processing method of FIG. 9 further includes steps 99-914.
  • step 99 the LB1 of the SLB device receives the second packet, and the second packet is the response packet of the first packet sent by the server 1.
  • after receiving the first packet, the server 1 obtains the URL path /index.html of the resource requested by the host from the request line "GET /index.html HTTP/1.1" of the first packet. If a resource with the path /index.html exists, the server carries the status line HTTP/1.1 200 OK and other information in the second packet to describe the server and the policy to apply when the resource is accessed again in the future.
  • Step 910: on the condition that the data flow identifier of the second packet is not stored in the traffic distribution table of the traffic distribution unit LB 1 of the SLB device, the LB 1 inputs the second packet into the traffic distribution algorithm and, according to the output result of the algorithm, selects the processing unit SPU 3 from the plurality of processing units of the SLB device.
  • the data stream identifier of the second packet is <11.11.11.11, 33.33.33.33, 1111, 3330, TCP>.
  • the offloading unit LB 1 on the SLB device inputs the data stream identifier into the shunting algorithm in the first manner; the processing unit corresponding to the output result 2 is the SPU 3.
  • step 911 the offloading unit LB1 on the SLB device sends the second packet to the processing unit SPU3.
  • Step 912 The processing unit SPU 3 searches the address mapping table, and performs address translation on the second packet according to the found entry.
  • because the data stream identifier of the second packet, <11.11.11.11, 33.33.33.33, 1111, 3330, TCP>, exists in an entry on the processing unit SPU 3, the SPU 3 can perform address translation on the second packet according to the corresponding data stream identifier in the entry, converting the destination address of the second packet to 33.33.33.33 and the destination port to 3333, thereby obtaining the address-translated second packet.
  • the processing unit SPU 3 may further modify the session state recorded in the corresponding session entry according to other information carried in the second packet, for example, the content of the status line.
  • step 913 the processing unit SPU 3 sends the second packet after the address conversion to the splitting unit LB 2 .
  • step 914 the offloading unit LB 2 sends the second packet after the address translation to the host in the Internet, where the host is the host that sent the first packet.
  • the embodiment of the invention also provides a distributed device.
  • FIG. 10 is a schematic structural diagram of the distributed device, which includes a first shunting unit 101, a second shunting unit 102, and at least two processing units including a first processing unit 103.
  • the first shunt unit 101 includes a receiving subunit 1011, a processing subunit 1012, and a transmitting subunit 1013.
  • the second offload unit 102 includes a receiving subunit 1021, a processing subunit 1022, and a transmitting subunit 1023.
  • the first processing unit 103 includes a receiving subunit 1031, a processing subunit 1032, and a transmitting subunit 1033.
  • the receiving subunit 1011 in the first offloading unit 101 is configured to receive the first packet from the first network.
  • the processing sub-unit 1012 in the first offloading unit 101 is configured to select a first processing unit from the at least two processing units according to the data flow identifier of the first packet and the first offloading algorithm, where the first The data stream identifier of the packet includes at least a source Internet Protocol IP address, a destination IP address, a source port number, and a destination port number of the first packet.
  • the transmitting subunit 1013 in the first offloading unit 101 is configured to send the first packet to the first processing unit 103.
  • the receiving subunit 1031 in the first processing unit 103 is configured to receive the first packet sent by the first offloading unit 101.
  • the processing subunit 1032 in the first processing unit 103 is configured to allocate an IP address and a port number to the first packet. The allocated port number satisfies the following condition: the second processing unit selected from the at least two processing units according to the data stream identifier of the reverse packet of the address-translated first packet and a second offload algorithm is the same processing unit as the first processing unit, where the source port number of the address-translated first packet is the allocated port number, one of the source IP address and the destination IP address of the address-translated first packet is the allocated IP address, and the second offload algorithm is the algorithm used by the second offloading unit to distribute the reverse packet of the address-translated first packet. The processing subunit 1032 is further configured to perform address translation on the first packet to obtain the address-translated first packet, where the address translation includes replacing the source port number of the first packet with the allocated port number.
  • the sending subunit 1033 in the first processing unit 103 is configured to send the address-translated first packet to the second offloading unit.
  • the receiving sub-unit 1021 in the second shunting unit 102 is configured to receive the first packet after the address conversion sent by the first processing unit 103.
  • the processing subunit 1022 in the second offloading unit 102 is configured to instruct the sending subunit 1023 in the second offloading unit 102 to send the address translated first message to the second network.
  • the computer program product includes one or more computer instructions.
  • the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • the computer instructions may be stored in a computer readable storage medium or transferred from one computer readable storage medium to another; for example, the computer instructions may be transferred from one website, computer, server, or data center to another website, computer, server, or data center by wire (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wirelessly (for example, infrared, radio, or microwave).
  • the computer readable storage medium can be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that includes one or more available media.
  • the usable medium may be a magnetic medium (eg, a floppy disk, a hard disk, a magnetic tape), an optical medium (eg, a DVD), or a semiconductor medium (such as a solid state disk (SSD)).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application discloses a packet processing method in a distributed device, used to solve the problem of packet processing failures. The distributed device includes a first offloading unit, a second offloading unit, and at least two processing units. The first offloading unit receives a first packet from a first network and sends the first packet to a first processing unit; the first processing unit allocates an IP address and a port number to the first packet, where the allocated port number satisfies the condition that the second processing unit selected from the at least two processing units according to the data stream identifier of the reverse packet of the address-translated first packet and a second offload algorithm is the same processing unit as the first processing unit, the source port number of the address-translated first packet is the allocated port number, and the second offload algorithm is the algorithm used by the second offloading unit to distribute the reverse packet; after the first processing unit performs address translation on the first packet, the second offloading unit sends the address-translated first packet to the second network.

Description

Packet processing method in a distributed device, and distributed device

This application claims priority to Chinese Patent Application No. 201810399892.5, filed with the State Intellectual Property Office of China on April 28, 2018 and entitled "Packet processing method in a distributed device, and distributed device", which is incorporated herein by reference in its entirety.

Technical Field

This application relates to the field of computer and communication technologies, and in particular, to a packet processing method in a distributed device and to a distributed device.

Background
To improve the processing capability of packet forwarding devices, more and more network devices serving as packet forwarding nodes adopt a distributed processing architecture. A distributed processing architecture scales performance horizontally, that is, it improves performance by increasing the number of parallel processing units with similar functions in the device. Network devices adopting a distributed processing architecture include, but are not limited to, routers, switches, firewalls, and server load balancers.

However, the application scenarios of the distributed processing architecture have certain limitations. The limitation is that the performance gain is strongly correlated with the nature of the data the network device depends on for its work. For a router or switch, the data it depends on is mainly the routing table (Routing Information Base, RIB) and the forwarding table (Forwarding Information Base, FIB). The routing table and forwarding table mainly store network topology information. The data stored in them changes slowly (generally on the order of seconds) and is not large (the number of entries in a routing table ranges from tens of thousands to hundreds of thousands, and a forwarding table has even fewer entries). A router or switch can work normally by synchronously replicating a copy of the routing table, forwarding table, or MAC forwarding table on each of its processing units.

However, for network devices whose working data changes very quickly or whose data volume is very large, the effect of horizontal scaling is very limited. This is because if such a network device adopts a distributed processing architecture, it needs to synchronously replicate a copy of the data on each of its processing units, a process that consumes a large amount of processing and storage resources for synchronizing and maintaining the data separately stored by the multiple processing units. For example, for a firewall device or a server load balancer (SLB), the working data is mainly the session table. It can be understood that, with the network topology unchanged, devices may repeatedly or simultaneously establish a large number of different sessions within a period of time, involving the creation, refreshing, and deletion of session entries. In terms of both the rate of change and the volume of data, the session table therefore far exceeds the routing table and forwarding table described above. Because the session table is large and changes quickly, the network device cannot synchronously replicate a complete copy on each processing unit and can only store the session table distributed across different processing units. In this case, if an uplink data stream and the corresponding downlink data stream are distributed by the offloading units in the network device to different processing units, the processing unit handling the downlink data stream cannot process it because it has not stored the corresponding session entry.

How to design a universally applicable packet processing scheme for distributed network devices whose working data changes very quickly or is very large, so that a data stream and the corresponding reverse data stream can be distributed to the same processing unit of the network device, has become a key issue affecting the processing performance of distributed devices.
Summary

This application provides a packet processing method in a distributed device, to solve the packet processing failure problem in the prior art caused by the difficulty of ensuring that a data stream and the corresponding reverse data stream are distributed to the same processing unit of the distributed device.
According to a first aspect, a packet processing method in a distributed device is provided. The distributed device is deployed between a first network and a second network and includes a first offloading unit, a second offloading unit, and at least two processing units, where each of the at least two processing units is connected to the first offloading unit and the second offloading unit. The method includes:

The first offloading unit receives a first packet from the first network and selects a first processing unit from the at least two processing units according to a data stream identifier of the first packet and a first offload algorithm, where the data stream identifier of the first packet includes at least a source Internet Protocol (IP) address, a destination IP address, a source port number, and a destination port number of the first packet.

The first offloading unit sends the first packet to the first processing unit.

The first processing unit allocates an IP address and a port number to the first packet, where the allocated port number satisfies the following condition: a second processing unit selected from the at least two processing units according to a data stream identifier of a reverse packet of the address-translated first packet and a second offload algorithm is the same processing unit as the first processing unit, the source port number of the address-translated first packet is the allocated port number, one of the source IP address and the destination IP address of the address-translated first packet is the allocated IP address, and the second offload algorithm is the algorithm used by the second offloading unit to distribute the reverse packet of the address-translated first packet.

The first processing unit performs address translation on the first packet to obtain the address-translated first packet, where the address translation includes replacing the source port number of the first packet with the allocated port number.

The first processing unit sends the address-translated first packet to the second offloading unit.

The second offloading unit sends the address-translated first packet to the second network.
When allocating a source address to a packet, a processing unit in the distributed device provided in the embodiments of this application chooses the allocated source port number such that the reverse packet of the address-translated packet will also be distributed by an offloading unit to this same processing unit, thereby ensuring that bidirectional packets belonging to the same session are processed by the same processing unit in the distributed device. This avoids the packet processing failures caused by bidirectional packets of one session being processed by different processing units of the distributed device.
In a possible implementation, the first network is a private network and the second network is a public network. The second offload algorithm inputs the source IP address, destination IP address, source port number, and protocol number of an input packet, together with a first port number, into a predetermined hash algorithm in the order indicated by a first manner, sums the result of the hash algorithm with an adjustment value, performs a modulo-N operation on the sum, and takes the processing unit corresponding to the modulo result as the selected processing unit. The adjustment value is the difference obtained by subtracting the first port number from the destination port number of the input packet, N is the number of processing units in the distributed device, and each of the at least two processing units corresponds to one value in 0 to N-1.

That the first processing unit allocates an IP address and a port number to the first packet includes:

The first processing unit allocates a pre-configured public network IP address to the first packet as the allocated IP address.

The first processing unit selects one port number from the port segment corresponding to the first processing unit as a second port number, where the IP address allocated by the first processing unit is the same as those allocated by the other processing units of the at least two processing units, and the port segment corresponding to the first processing unit does not overlap with the port segments corresponding to the other processing units.

The first processing unit inputs the source IP address, destination IP address, and destination port number of the address-translated first packet, together with the second port number, into the predetermined hash algorithm in the order indicated by a second manner to obtain a hash result, where the source IP address of the address-translated first packet is the allocated IP address, and in the order indicated by the second manner the source IP address occupies the position of the destination IP address in the order indicated by the first manner, the destination IP address occupies the position of the source IP address in the first manner, and the destination port number occupies the position of the source port number in the first manner.

The first processing unit computes the result of the modulo-N operation on the hash result.

The first processing unit computes the difference between the value in 0 to N-1 corresponding to the first processing unit and the modulo result.

The first processing unit allocates the sum of the second port number and the difference to the first packet as the allocated port number.
When designing the second offload algorithm, the embodiments of this application introduce the concept of an adjustment value. When allocating a port number, the first processing unit uses a method corresponding to the offload algorithm to derive, from the known offload result, an adjustment value satisfying the condition, and then determines the qualifying port number from the adjustment value. Introducing the adjustment value into the offload algorithm is an innovation at the level of technical implementation and improves the efficiency of computing a qualifying port number. Furthermore, the processing units in the distributed device can share the same public network IP address, thereby saving address resources.
In a possible implementation, selecting one port number from the corresponding port segment as the second port number includes:

the first processing unit divides the port segment into at least one port number group, each group containing a predetermined number of port numbers;

randomly selecting one port number group from the at least one port number group, and determining that the port number at a predetermined location in the selected group is the second port number.

In a possible implementation, selecting one port number from the corresponding port segment as the second port number includes:

dividing consecutive port numbers starting from a reserved port number into at least one port number group, where the first port number of each of the at least one port number group has its last three digits equal to 0 when expressed in hexadecimal;

randomly selecting one port number group from the at least one port number group, and determining that the port number at a predetermined location in the selected group is the second port number.
In a possible implementation, in the scenario where the first network is a private network and the second network is a public network, the address translation further includes replacing the source IP address of the first packet with the allocated IP address.

In a possible implementation, the first offload algorithm is the same as the second offload algorithm. In this way the two offloading units of the distributed device use the same algorithm, so that the same algorithm is used when distributing the first packet and when distributing the reverse packet of the address-translated first packet. This avoids the storage resources occupied when different offloading units store multiple offload algorithms and simplifies the distribution process.
In a possible implementation, the input packet of the first offload algorithm is the first packet, and selecting the first processing unit from the at least two processing units according to the data stream identifier of the first packet and the first offload algorithm includes:

the first offloading unit determines the first port number and the adjustment value;

inputs the source IP address, destination IP address, and source port number of the first packet, together with the first port number, into the predetermined hash algorithm in the order indicated by the first manner;

sums the result of the hash algorithm with the adjustment value, performs a modulo-N operation on the sum, and takes the processing unit corresponding to the modulo result as the first processing unit.
In a possible implementation, determining the first port number and the adjustment value includes:

dividing the consecutive port numbers from a reserved port number up to the destination port number of the first packet into at least one port number group, each group containing a predetermined number of port numbers;

determining that the port number at a predetermined location before the destination port number, within the port number group containing the destination port number of the first packet, is the first port number;

determining that the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the first packet.
In a possible implementation, determining the first port number and the adjustment value includes:

dividing consecutive port numbers starting from a reserved port number into at least one port number group, where the first port number of each group has its last three digits equal to 0 when expressed in hexadecimal;

determining that the port number at a predetermined location before the destination port number, within the port number group containing the destination port number of the first packet, is the first port number;

determining that the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the first packet.
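The derivation of the first port number and the adjustment value described in the two implementations above might look like this in Python (a sketch; the names and group parameters are assumptions):

```python
def split_dest_port(dport, reserved=2000, group=8):
    """First scheme: ports from `reserved` upward are grouped `group`
    at a time; the first port of dport's group is the first port
    number, and the remainder is the adjustment value."""
    port1 = reserved + ((dport - reserved) // group) * group
    return port1, dport - port1

def split_dest_port_hex(dport):
    """Second scheme: group boundaries are ports whose last three hex
    digits are zero, i.e. groups of 0x1000 ports."""
    port1 = dport & ~0xFFF
    return port1, dport - port1
```

Both schemes are deterministic, so the offloading unit derives exactly the same (first port number, adjustment value) pair from a destination port that the processing unit used when it allocated that port.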
In a possible implementation, in the scenario where the first network is a public network and the second network is a private network, the first packet is a packet from the public network,

and the second offload algorithm inputs the source IP address, destination IP address, source port number, and protocol number of an input packet, together with a first port number, into a predetermined hash algorithm in the order indicated by a first manner, sums the result of the hash algorithm with an adjustment value, performs a modulo-N operation on the sum, and takes the processing unit corresponding to the modulo result as the selected processing unit, where the adjustment value is the difference obtained by subtracting the first port number from the destination port number of the input packet, N is the number of processing units in the distributed device, and each of the at least two processing units corresponds to one value in 0 to N-1.

That the first processing unit allocates an IP address and a port number to the first packet includes:

the first processing unit selects, according to a load balancing policy, a first address-port pair from at least one stored address-port pair, where the first address-port pair includes one private network IP address and one port number;

the first processing unit allocates the private network IP address in the first address-port pair to the first packet as the allocated IP address;

the first processing unit determines the second port number according to the source port number of the first packet;

the first processing unit inputs the source IP address, destination IP address, and destination port number of the address-translated first packet, together with the second port number, into the predetermined hash algorithm in the order indicated by a second manner to obtain a hash result, where the destination IP address of the address-translated first packet is the allocated IP address, the destination port number of the address-translated first packet is the port number in the first address-port pair, and in the order indicated by the second manner the source IP address occupies the position of the destination IP address in the order indicated by the first manner, the destination IP address occupies the position of the source IP address in the first manner, and the destination port number occupies the position of the source port number in the first manner;

the first processing unit computes the result of the modulo-N operation on the hash result;

the first processing unit computes the difference between the value in 0 to N-1 corresponding to the first processing unit and the modulo result;

the first processing unit allocates the sum of the second port number and the difference to the first packet as the allocated port number.
In a possible implementation, the input packet of the first offload algorithm is the first packet, and selecting the first processing unit from the at least two processing units according to the data stream identifier of the first packet and the first offload algorithm includes:

the first offloading unit determines the first port number and the adjustment value;

inputs the source IP address, destination IP address, and source port number of the first packet, together with the first port number, into the predetermined hash algorithm in the order indicated by the first manner;

sums the result of the hash algorithm with the adjustment value, performs a modulo-N operation on the sum, and takes the processing unit corresponding to the modulo result as the first processing unit.
In a possible implementation, in the scenario where the first network is a public network and the second network is a private network, the address translation further includes: replacing the destination IP address of the first packet with the private network IP address in the first address-port pair, and replacing the destination port number of the first packet with the port number in the first address-port pair.

In a possible implementation, a third processing unit is the same processing unit as the first processing unit, where the third processing unit is the processing unit selected from the at least two processing units according to the data stream identifier of the reverse packet of the first packet and the second offload algorithm. In other words, the second offload algorithm is a symmetric algorithm, so that when allocating a port number to the first packet the first processing unit does not need to determine the scenario, that is, it does not need to determine whether the first packet is a packet sent from the private network to the public network or a packet sent from the public network to the private network. This further simplifies the distribution process and improves distribution efficiency.
In a possible implementation, the second offload algorithm inputs the source IP address, destination IP address, source port number, and protocol number of an input packet, together with a first port number, into a predetermined hash algorithm in the order indicated by a first manner, sums the result of the hash algorithm with an adjustment value, performs a modulo-N operation on the sum, and takes the processing unit corresponding to the modulo result as the selected processing unit, where the sum of the first port number and the adjustment value is the destination port number of the input packet, N is the number of processing units in the distributed device, and each of the at least two processing units corresponds to one value in 0 to N-1.

The hash algorithm performs a logical exclusive OR on the input source IP address, destination IP address, source port number, protocol number, and first port number and outputs the XOR result; or the hash algorithm performs a byte-wise summation on the input source IP address, destination IP address, source port number, protocol number, and first port number and outputs the byte-wise sum.
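Both hash variants can be sketched in a few lines of Python. Because XOR and addition over bytes are order-independent, swapping the source and destination fields leaves the result unchanged, which is part of what makes the overall algorithm symmetric:

```python
def hash_xor(*fields):
    # variant 1: logical XOR over the bytes of all input fields
    h = 0
    for f in fields:
        for b in str(f).encode():
            h ^= b
    return h

def hash_bytesum(*fields):
    # variant 2: byte-wise summation over all input fields
    return sum(b for f in fields for b in str(f).encode())

# swapping source/destination IP and port leaves both hashes unchanged
a = hash_xor("1.1.1.1", "2.2.2.2", 80, 1234, "TCP")
b = hash_xor("2.2.2.2", "1.1.1.1", 1234, 80, "TCP")
```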
In a possible implementation, after the first processing unit allocates an IP address and a port number to the first packet, the method further includes: the first processing unit creates a mapping entry, where the mapping entry includes the data stream identifier of the first packet before address translation and the data stream identifier of the first packet after address translation, and the mapping entry is used to perform address translation on a second packet subsequently received by the first processing unit. Based on the established mapping entry, the first processing unit performs address translation on subsequent packets of the data stream to which the first packet belongs, without performing the port number allocation process for every packet, which improves processing efficiency.
According to a second aspect, a distributed device is provided that performs the method of the first aspect or any possible implementation of the first aspect. Specifically, the distributed device includes units for performing the method of the first aspect or any possible implementation of the first aspect. These units may be implemented by program modules, or by hardware or firmware; for details, see the detailed description in the embodiments, which is not repeated here.
According to a third aspect, a distributed device is further provided. The distributed device is deployed between a first network and a second network and includes a first offloading unit, a second offloading unit, and at least two processing units.

The first offloading unit includes a network interface and a processor that cooperate to implement the steps performed by the first offloading unit in the first aspect or any possible implementation of the first aspect; for details, see the detailed description in the method examples, which is not repeated here.

A first processing unit of the at least two processing units includes a network interface and a processor that cooperate to implement the steps performed by the first processing unit in the first aspect or any possible implementation of the first aspect; for details, see the detailed description in the method examples, which is not repeated here.

The second offloading unit includes a network interface and a processor that cooperate to implement the steps performed by the second offloading unit in the first aspect or any possible implementation of the first aspect; for details, see the detailed description in the method examples, which is not repeated here.
According to a fourth aspect, a computer storage medium is provided for storing computer software instructions used by the above distributed device, including a program designed to perform the first aspect or any possible implementation of the first aspect.

According to a fifth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the method of the first aspect or any possible implementation of the first aspect.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following description show some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a schematic diagram of the deployment and structure of a typical NAT device according to an embodiment of this application;

FIG. 2 is a schematic diagram of an application scenario of the packet processing method in a distributed device according to an embodiment of this application;

FIG. 3 is a schematic structural diagram of an offloading unit in a distributed device according to an embodiment of this application;

FIG. 4 is a schematic structural diagram of a processing unit in a distributed device according to an embodiment of this application;

FIG. 5 is a flowchart of the packet processing method according to an embodiment of this application;

FIG. 6 is a schematic diagram of a scenario in which a NAT device applies the packet processing method provided in the embodiments of this application to process an uplink packet and the corresponding downlink packet while a local area network user accesses a service provided on the Internet;

FIG. 7 is a schematic diagram of the NAT device performing the packet processing method in the scenario shown in FIG. 6;

FIG. 8 is a schematic diagram of a scenario in which an SLB device applies the packet processing method provided in the embodiments of this application to process an uplink packet and the corresponding downlink packet while a user on the Internet accesses a service provided by a server in a local area network;

FIG. 9 is a schematic diagram of the SLB device performing the packet processing method in the scenario shown in FIG. 8;

FIG. 10 is a schematic structural diagram of a distributed device according to an embodiment of this application.
Detailed Description

The reason the distributed processing architecture is unsuitable for network devices such as firewall devices and SLBs is that the data these devices depend on changes very quickly or is very large, so the network device can hardly replicate a complete synchronized copy on each of its processing units. Moreover, the traditional offload algorithms applied in such network devices can hardly guarantee that different offloading units assign an uplink data stream and the corresponding downlink data stream to the same processing unit, so a processing unit may fail to process packets because it has not stored the corresponding session entry.
Traditional offload algorithms are relatively simple; for example, they hash the elements of the data stream identifier (for example, a 5-tuple) of the packet to be distributed and take the remainder of the result. The reason traditional algorithms can hardly guarantee that different offloading units assign an uplink data stream and the corresponding downlink data stream to the same processing unit is that, in the above scenarios, the processing unit performs address translation on the data stream, and the 5-tuple of the data stream changes before and after address translation. Therefore, the result of one offloading unit distributing the data stream before address translation is very likely different from the result of another offloading unit distributing the data stream after address translation. FIG. 1 is a schematic diagram of the deployment and structure of a typical network address translation (NAT) device. The NAT device includes two offloading units, LB 1 and LB 2, and several processing units, SPU 1, SPU 2, ..., SPU n. LB 1 is connected to a private network, which may be a local area network (LAN). LB 2 is connected to a public network, which may be the Internet. When a host in the private network wants to access a server in the public network, it sends an access request packet toward the public network. According to the offload algorithm, LB 1 distributes the access request packet to SPU 1 for processing. SPU 1 performs address translation on the access request packet, establishes an address mapping entry and a session entry, and then sends the address-translated request packet to LB 2, which sends it on to the public network. The address mapping table stores the correspondence between the data stream identifiers of the access request packet before and after address translation.
When LB 2 receives the downlink packet corresponding to the access request packet, it distributes the downlink packet to SPU 2 for processing according to the offload algorithm. In the embodiments of this application, to simply distinguish data streams in different directions, a packet sent from the private network to the public network is referred to as an uplink packet, and a packet sent from the public network to the private network is referred to as a downlink packet. However, because SPU 2 has not stored the above address mapping, SPU 2 cannot perform address translation on the downlink packet and therefore cannot send the address-translated downlink packet through LB 1 to the host in the private network that initiated the access.

The embodiments of this application provide a packet processing method in a distributed device. The method improves the packet processing procedure and the offload algorithm applied in it, so that an uplink packet and the corresponding downlink packet are distributed to the same processing unit for processing, thereby avoiding packet processing failures caused by a processing unit not having stored the address mapping entry or session entry. Specifically, after one offloading unit distributes a received session packet to a processing unit, when that processing unit allocates a port number as the post-translation source port number during address translation, the allocated port number must satisfy the condition that when the other offloading unit distributes the reverse packet using the same distribution scheme, the result of the distribution is still this processing unit.
The main implementation principles and specific implementations of the technical solutions of the embodiments of the present invention, and the beneficial effects they can achieve, are described in detail below with reference to the accompanying drawings.

The network scenario to which the embodiments of this application apply is shown in FIG. 2. The distributed device in FIG. 2 may be any device adopting a distributed processing architecture, including but not limited to a NAT device, a firewall device, an SLB, a carrier-grade NAT (CGN) device, or an IPv6 transition address translation device. The distributed device includes two offloading units, LB 1 and LB 2, and several processing units, SPU 1, SPU 2, ..., SPU n. LB 1 is connected to a private network, which may be a local area network (LAN). LB 2 is connected to a public network, which may be the Internet. As shown in FIG. 3 or FIG. 4, the offloading units and processing units in the distributed device may be different boards inserted into the same chassis, connected to one another through the chassis switching fabric, for example a backplane crossbar switch, where the switching fabric acts as a switch to exchange packets among the boards in the chassis. In addition, when the offloading units and processing units are different boards, they may also be connected through standard Ethernet, that is, Ethernet whose physical layer and data link layer implementations conform to the Institute of Electrical and Electronics Engineers (IEEE) 802.3 series of standards.
In a traditional scheme, the offload algorithm applied in an offloading unit (LB 1 or LB 2 in FIG. 1 or FIG. 2) may be a hash algorithm, denoted HASH1; specifically:

target processing unit identifier = HASH1(X1, X2, X3, X4, X5) MOD(N), where X1, X2, X3, X4, X5 are the elements of the data stream identifier. For example, assume the data stream identifier is the 5-tuple <source IP address, destination IP address, source port, destination port, protocol type>, where X1 is the source IP address, X2 the destination IP address, X3 the source port, X4 the destination port, and X5 the transport-layer protocol type in the 5-tuple, and N is the number of processing units in the distributed device. The algorithm indicated by HASH1 may be any ordinary hash algorithm that satisfies basic dispersion requirements and can map different inputs to a hash value in a finite value space. Because different inputs may map to the same hash value, the unique input cannot be determined from the hash value. For example, the hash algorithm may XOR the elements X1 to X5 and take the remainder of the XOR result.
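The traditional algorithm, and the divergence it causes after address translation, can be illustrated with a toy XOR hash in Python (reusing the NAT example's addresses; the concrete hash values are artifacts of this toy hash):

```python
N = 8  # number of processing units

def hash1(*fields):
    # toy hash: XOR over the bytes of all fields
    h = 0
    for f in fields:
        for b in str(f).encode():
            h ^= b
    return h

def classic_shunt(src_ip, dst_ip, sport, dport, proto):
    return hash1(src_ip, dst_ip, sport, dport, proto) % N

# uplink flow before NAT, and the downlink (reverse) flow after NAT:
up = classic_shunt("10.10.10.10", "30.30.30.30", 10000, 30000, "TCP")
down = classic_shunt("30.30.30.30", "20.20.20.20", 30000, 2008, "TCP")
# the two results generally differ, so the downlink packet may reach an
# SPU that holds no session entry for this flow
```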
The packet processing method provided in the embodiments of this application includes at least two improvements. First, the offload algorithm is applied to both the offloading units and the processing units, rather than only to the offloading units as in the traditional scheme. Further, the offload algorithm itself is improved; the improved algorithm is denoted HASH2. When allocating a source port number to a packet, the processing unit simulates how an offloading unit would distribute the reverse packet of the address-translated packet, to ensure that the allocated source port number causes the reverse packet to also be distributed to this processing unit, and thereby computes a source port number satisfying this condition.

The improved offload algorithm inputs the source IP address, destination IP address, source port number, and protocol number of the input packet, together with a first port number, into a predetermined hash algorithm in the order indicated by the first manner, sums the result of the hash algorithm with an adjustment value, and performs a modulo-N operation on the sum. The processing unit corresponding to the modulo result is the selected processing unit. The adjustment value is the difference obtained by subtracting the first port number from the destination port number of the input packet, and N is the number of processing units in the distributed device. Because the final output of the improved algorithm is the result of a modulo-N operation, its output is a value in 0 to N-1, that is, the value space of the output is 0 to N-1. Each processing unit in the distributed network device corresponds to one value in 0 to N-1.

The original intent of the improved offload algorithm's design is computational convenience when finding a qualifying port number:

The improved algorithm splits the destination port number into a first port number and an adjustment value, which is equivalent, when the same algorithm is used to distribute the reverse packet of the input packet, to splitting the destination port number of the reverse packet (the source port number of the translated input packet) into a second port number and an adjustment value. The process of allocating a port number is thus actually converted into deriving, from the reverse packet of the input packet and the known distribution result, an adjustment value satisfying the condition. The port number to be allocated is then computed from the derived adjustment value, so it can be computed quickly by formula. Compared with the exhaustive test of substituting every possible port number into the offload algorithm in turn and checking whether it satisfies the condition, the scheme provided in the embodiments of this application is significantly more efficient.
Assuming HASH2 denotes the improved offload algorithm and HASH1 denotes an existing hash algorithm, the working principle of the offload algorithm can be described as follows:

For a packet input to the offload algorithm, assume the data stream identifier extracted from the packet header and expressed as a 5-tuple is <source IP address, destination IP address, source port, destination port, protocol type>; then

value corresponding to the target processing unit = HASH2(X1, X2, X3, X4, X5) MOD(N) = (HASH1(X1, X2, X3, NewPort1, X5) + NewPort2) MOD(N), where X1 to X5 still denote the elements of the 5-tuple serving as the data stream identifier: X1 is the source IP address, X2 the destination IP address, X3 the source port number, X4 the destination port number, X5 the protocol type, and N is the number of processing units in the distributed device.

NewPort1 is the first port number, NewPort2 is the adjustment value, and X4 = NewPort1 + NewPort2.

Optionally, if NewPort1 is computed from X4, it is denoted NewPort1(X4).

Optionally, if the data stream identifier contains more optional elements, the offload algorithm can be extended accordingly. For example, the data stream identifier may be the 6-tuple <source IP address, destination IP address, source port, destination port, protocol type, extension element>, where the extension element may be a local area network identifier or a virtual private network (VPN) identifier; the LAN identifier may be expressed as a VLAN-ID and the VPN identifier as a VPN-ID.

Correspondingly, target processing unit identifier = HASH2(X1, X2, X3, X4, X5, X6) MOD(N) = (HASH1(X1, X2, X3, NewPort1, X5, X6) + NewPort2) MOD(N), where X1 to X5 have the same meanings as above and X6 is the VLAN-ID or VPN-ID.

Similarly, if the data stream identifier contains an even larger number of optional elements, the offload algorithm can be extended in the same way.
The structure of the offloading unit LB 1 or LB 2 in FIG. 2 is shown in FIG. 3. The offloading unit includes a processor 31, a memory 32, and a network interface 33, which are interconnected through a data bus 34.

Optionally, the processor 31 is one or more processors and may also be a network processor (NP).

The memory 32 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), or portable read-only memory (CD-ROM). The memory 32 stores an offload table, which records the correspondence between data stream identifiers and processing unit identifiers. After receiving a packet from another network device, the offloading unit first extracts the packet's data stream identifier from the packet header and checks whether the identifier already exists in the offload table. If it exists, the offloading unit obtains the corresponding processing unit identifier from the offload table and sends the packet to the processing unit indicated by that identifier. If it does not exist, the offloading unit obtains the output of the offload algorithm from the data stream identifier and the algorithm; the output indicates one processing unit of the distributed device. The offloading unit then sends the packet to the indicated processing unit and stores the correspondence between the packet's data stream identifier and the output in the offload table, so that after the network device receives subsequent packets of the same data stream from other network devices, it can distribute them according to the offload table.
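A minimal Python sketch of the offload-table logic just described (look up first, compute and cache on a miss); the class name and the toy algorithm are illustrative:

```python
class OffloadTable:
    """Sketch of an offloading unit's table: look the flow ID up first,
    and only on a miss run the offload algorithm and cache the result."""
    def __init__(self, algorithm):
        self.table = {}        # data stream identifier -> processing unit
        self.algorithm = algorithm

    def dispatch(self, flow_id):
        if flow_id not in self.table:
            self.table[flow_id] = self.algorithm(flow_id)
        return self.table[flow_id]

# toy algorithm: byte sum of the flow identifier, modulo 8 units
lb = OffloadTable(lambda flow: sum(str(flow).encode()) % 8)
```

Caching the result guarantees that all subsequent packets of the same data stream go to the same processing unit without recomputing the algorithm.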
处理器31用于在分流表中未保存报文的数据流标识时,执行分流过程,以及控制向处理单元发送报文。处理器31基于分流算法执行分流的详细过程请参考后面实施例中的描述。
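示例性地,分流单元“先查分流表、未命中再执行分流算法并回填表项”的流程可以用如下代码示意(其中类名、函数名均为示例性假设):

```python
class LoadBalancer:
    """分流单元的分流逻辑示意。hash_func为分流算法,返回0至n_spu-1中的一个值。"""

    def __init__(self, n_spu, hash_func):
        self.n_spu = n_spu
        self.hash_func = hash_func
        self.flow_table = {}   # 分流表:数据流标识 -> 处理单元对应的取值

    def dispatch(self, flow_id):
        # flow_id为五元组(源IP地址, 目的IP地址, 源端口号, 目的端口号, 协议号)
        if flow_id in self.flow_table:
            return self.flow_table[flow_id]          # 分流表命中,直接返回
        spu = self.hash_func(*flow_id, self.n_spu)   # 未命中,执行分流算法
        self.flow_table[flow_id] = spu               # 回填分流表,供同一数据流的后续报文使用
        return spu
```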
附图3中的网络接口33可以是例如标准的以太网接口,也可以是框式设备中的板间信元交换接口,例如HiGig。
附图2中处理单元SPU 1~SPU n中一个处理单元的结构如附图4所示。处理单元包括处理器41、存储器42和网络接口43,处理器41、存储器42和网络接口43通过数据总线44相互连接。可选地,处理器41可以为一个或多个处理器,处理器可以是单核处理器,也可以是多核处理器,例如Intel公司的X86处理器。
存储器42包括随机存取存储器、快闪存储器、内容可寻址存储器(content addressable memory,CAM)或其任意组合。存储器42用于保存公网地址池(或负载均衡表)和地址映射表。公网地址池用于保存预先配置给本处理单元使用的公网IP地址、以及该公网地址上配置的端口号。负载均衡表用于在SLB场景下保存私网中用于提供指定服务的服务器的私网IP地址和端口号。
地址映射表用于保存一个报文在地址转换之前的数据流标识与地址转换之后的数据流标识之间的对应关系。可选地,存储器42中还保存会话表,所述会话表用于保存数据流标识与数据流状态的对应关系。
处理器41用于在接收到分流单元发送的报文时,根据该报文的数据流标识在地址映射表中查询对应的数据流标识。如果查询到对应的数据流标识,则用查找到的数据流标识对报文进行地址转换,向另一分流单元发送地址转换后的报文。还用于在地址映射表中未查询到对应的数据流标识时,根据公网地址池、或者负载均衡表为所述报文分配一个数据流标识,根据分配的数据流标识对报文进行地址转换,向另一分流单元发送地址转换后的报文。并且在地址映射表中创建一个表项,该表项用于记录地址转换之前的数据流标识与地址转换之后的数据流标识之间的对应关系。可选地,处理器41还用于更新会话表,例如创建会话表项或者更新已保存的会话表项中的会话状态、命中时间等等。
其中,处理器41在地址映射表中未查询到对应的数据流标识时,基于分流算法为报文分配数据流标识的详细过程请参考后面实施例中的描述。
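示例性地,处理单元“先查地址映射表、未命中再分配并建立表项”的逻辑可以用如下代码示意(函数名与数据结构均为示例性假设,具体的分配逻辑由调用方以函数形式传入):

```python
def translate(flow_id, addr_map, allocate):
    """返回flow_id对应的地址转换后的数据流标识。

    addr_map为地址映射表(转换前数据流标识 -> 转换后数据流标识),
    allocate为依据公网地址池或负载均衡表分配新数据流标识的函数。
    """
    if flow_id not in addr_map:
        addr_map[flow_id] = allocate(flow_id)  # 未命中:分配并创建表项
    return addr_map[flow_id]                   # 命中:直接使用已保存的映射
```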
下面通过结合附图2、附图3和附图4对本申请实施例提供的报文处理方法进行简单说明。附图5是本申请实施例提供的报文处理方法的流程图。
步骤51,分布式设备的第一分流单元从第一网络接收第一报文,根据所述第一报文的数据流标识和第一分流算法,从该分布式设备的至少两个处理单元中选择第一处理单元。第一分流算法的输出结果用于指示至少两个处理单元中的一个处理单元。在本申请实施例中将第一分流算法的输出结果指示的处理单元称为目标处理单元。在本实施例中假定第一分流算法的输出结果指示的目标处理单元是第一处理单元。其中第一网络可以是私网或公网。如果第一网络是以局域网为例的私网,则后面出现的第二网络为以互联网为例的公网。如果第一网络是以互联网为例的公网,则后面出现的第二网络为以局域网为例的私网。
本实施例中分布式设备的结构如附图2所示,第一分流单元可以为附图2中的LB 1或LB 2。如果第一分流单元是附图2中的LB 1,则后面出现的第二分流单元为附图2中的LB 2。如果第一分流单元是附图2中的LB 2,则第二分流单元为附图2中的LB 1。
可选地,第一分流算法可以是上述举例现有分流算法,也可以是上述描述的改进的分流算法。
在本申请实施例中,“第一报文”这种命名方式是为了与后面出现的其他报文相区分。
步骤52,第一分流单元将第一报文发送给第一处理单元。
步骤53,第一处理单元为第一报文分配一个IP地址、以及一个端口号,分配的所述端口号满足条件:根据地址转换后的所述第一报文的反向报文的数据流标识和第二分流算法,从所述至少两个处理单元中选择出的第二处理单元与所述第一处理单元为同一处理单元。所述第二分流算法是所述第二分流单元对地址转换后的所述第一报文的反向报文进行分流时使用的算法。其中所述地址转换后的所述第一报文的源端口号为所述分配的端口号,所述地址转换后的所述第一报文的源IP地址和目的IP地址中的一个为所述分配的IP地址。
在不同场景中,地址转换后的所述第一报文的源IP地址和目的IP地址中的一个为所述分配的IP地址。例如,在NAT场景中,地址转换后的所述第一报文的源IP地址为所述分配的IP地址;在SLB场景中,地址转换后的所述第一报文的目的IP地址为所述分配的IP地址。后面将结合不同实例进行详细说明。
在本实施例中第一分流单元对第一报文分流使用的第一分流算法和第二分流单元对第一报文的反向报文分流使用的第二分流算法可以是不同的分流算法。在实际应用场景中,为了部署成本的考虑,同一个网络设备可能需要适用于多种不同场景,因此出于配置简便的考虑,第一分流算法可以与第二分流算法相同。下面假定第一分流单元和第二分流单元均使用上面提及的改进的分流算法,这里对第一分流单元和第二分流单元的分流过程进行简单描述。
为了描述方便,本实施例中用五元组<源IP地址,目的IP地址,源端口,目的端口,协议类型>表示数据流标识。
在步骤51中,第一分流单元的分流过程包括:
第一分流单元按照第一方式将第一报文的五元组中的各个元素输入预定的分流算法。第一方式指示了各元素输入分流算法时的先后顺序。
可选地,第一方式包括但不限于按照“源IP地址、目的IP地址、源端口号、目的端口号、协议类型”的顺序输入HASH2。换句话说,第一方式等同于按照“源IP地址、目的IP地址、源端口号、第一端口号、协议类型、调整值”的顺序输入HASH1,其中X1为五元组中的源IP地址,X2为五元组中的目的IP地址,X3为五元组中的源端口,X4为五元组中的目的端口,X5为五元组中的协议类型,NewPort1为第一端口号,NewPort2为调整值,则第一方式表示为“X1,X2,X3,NewPort1,X5,NewPort2”,其中X4=NewPort1+NewPort2。
目标处理单元的计算过程可以用以下公式表示:
目标处理单元标识=HASH2(第一报文的源IP地址,第一报文的目的IP地址,第一报文的源端口号,第一报文的目的端口号,第一报文的协议类型)MOD(N)=(HASH1(第一报文的源IP地址,第一报文的目的IP地址,第一报文的源端口号,第一端口号,第一报文的协议类型)+调整值)MOD(N)        公式(1)
对于第一分流单元而言,在计算分流算法的输出结果时,首先确定满足条件“X4=NewPort1+NewPort2”的第一端口号和调整值,然后再将第一报文的源IP地址、第一报文的目的IP地址、第一报文的源端口号、第一端口号、第一报文的协议类型和调整值依次输入分流算法,得到输出结果。
第一分流单元确定第一端口号和调整值的方法有多种,只要满足条件“X4=NewPort1+NewPort2”即可。后面将结合附图6和附图8中的具体场景对计算第一端口号和调整值的方法进行详细说明。
与步骤51类似的,步骤53中第一处理单元分配端口号的过程如下:
设计目标是:假设第一处理单元按照第一方式将地址转换后的第一报文的反向报文的数据流标识中的各个元素输入第二分流算法,得到的分流结果应该是第一处理单元对应的取值。或者,也等价于:假设第一处理单元按照第二方式将地址转换后的第一报文的数据流标识中的各个元素输入预定的分流算法,得到的分流结果应该是第一处理单元对应的取值。这里第二方式是与第一方式对应的方式。
反向报文的数据流标识与地址转换后的第一报文的数据流标识具有对应性:反向报文的源IP地址为地址转换后的第一报文的目的IP地址,反向报文的源端口号为地址转换后的第一报文的目的端口号,反向报文的目的IP地址为地址转换后的第一报文的源地址,反向报文的目的端口为地址转换后的第一报文的源端口。
正是由于上述对应性,在实际处理过程中,第一处理单元可以将地址转换后的第一报文对应的数据流标识按照第二方式指示的顺序输入第二分流算法。在第二方式指示的顺序中,源IP地址所处的位置与第一方式指示的顺序中目的IP地址所处的位置相同。在第二方式指示的顺序中,目的IP地址所处的位置与第一方式指示的顺序中源IP地址所处的位置相同。在第二方式指示的顺序中,源端口号所处的位置与第一方式指示的顺序中目的端口号所处的位置相同。在第二方式指示的顺序中,目的端口号所处的位置与第一方式指示的顺序中源端口号所处的位置相同。
例如,第一方式指示的顺序是“源IP地址、目的IP地址、源端口号、目的端口号、协议类型”,那么第二方式指示的顺序是“目的IP地址、源IP地址、目的端口号、源端口号、协议类型”。其中地址转换并不改变第一报文的协议类型,因此地址转换后的第一报文的协议类型与第一报文的协议类型相同。地址转换后的第一报文的源端口号即为分配的端口号,因此第二方式指示的顺序也可以表示为“地址转换后的第一报文的目的IP地址、地址转换后的第一报文的源IP地址、地址转换后的第一报文的目的端口号、第二端口号、所述第一报文的协议类型、调整值”,其中第二端口号和所述调整值之和为分配的所述端口号。即第一处理单元计算第一报文的反向报文的分流结果的过程也可以表示为:
目标处理单元标识=HASH2(地址转换后的第一报文的目的IP地址,地址转换后的第一报文的源IP地址,地址转换后的第一报文的目的端口号,地址转换后的第一报文的源端口号,第一报文的协议类型)MOD(N)=(HASH1(地址转换后的第一报文的目的IP地址,地址转换后的第一报文的源IP地址,地址转换后的第一报文的目的端口号,第二端口号,第一报文的协议类型)+调整值)MOD(N)         公式(2)
对于第一处理单元而言,本步骤相当于分流结果是已知的,即分流结果为第一处理单元,求取分配的端口号。对上述公式(2)进行变形,得到公式(3)
调整值=目标处理单元标识-HASH1(地址转换后的第一报文的目的IP地址,地址转换后的第一报文的源IP地址,地址转换后的第一报文的目的端口号,第二端口号,第一报文的协议类型)MOD(N)。          公式(3)
可以理解,第一处理单元在分配端口号作为转换后的第一报文的源端口号时,为了保证反向流量能够被分流到本处理单元上,对端口号进行了特殊设计。这种特殊设计就是为了计算未知的待分配的源端口号,将待分配的源端口号拆分为设定部分和补偿部分,设定部分是上述第二端口号,补偿部分为上述调整值。设定部分是指根据预先的确定方式能够在代入上述公式之前知晓的一个端口号。第一处理单元通过将已知的目标处理单元标识和第二端口号输入已知的分流算法,能够求取调整值。再对第二端口号和调整值求和,将求和结果作为分配的满足要求的源端口号。
在第一处理单元根据公式(3)得到调整值之前,先要确定第二端口号。第一处理单元确定第二端口号的方式是从第一处理单元对应的端口号段中选择一个端口号作为第二端口号。
第一处理单元确定第二端口号的方法有多种,只要确定出了第二端口号,代入公式(3)就可以得到调整值。第一处理单元将第二端口号和调整值之和作为分配的端口号,该分配的端口号作为地址转换后的第一报文的源端口号。
后面将结合附图6和附图8中的具体场景对计算第二端口号和调整值的方法进行详细说明。
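示例性地,上述“在已知分流结果的前提下反推调整值、进而求出待分配端口号”的过程可以用如下Python代码片段示意。其中HASH1采用按字节异或的示例性实现,并假设端口号组的大小不小于处理单元数目N,以保证分配的端口号与第二端口号落在同一端口号组内:

```python
def hash1(src_ip, dst_ip, sport, dport, proto):
    # 示例性的HASH1:将各字段的字节逐一异或成一个单字节
    result = 0
    for octet in src_ip.split('.') + dst_ip.split('.'):
        result ^= int(octet)
    for port in (sport, dport):
        result ^= (port >> 8) ^ (port & 0xFF)
    return result ^ proto

def allocate_source_port(target, rev_src_ip, rev_dst_ip, rev_sport, proto, base, n):
    """为地址转换后的报文分配源端口号。

    target为本处理单元对应的取值(0至n-1);base为选出的第二端口号(组首端口号);
    rev_*为反向报文的字段:反向报文的源IP/源端口即转换后报文的目的IP/目的端口,
    反向报文的目的IP即转换后报文的源IP(分配的IP地址)。
    """
    h = hash1(rev_src_ip, rev_dst_ip, rev_sport, base, proto) % n
    adjust = (target - h) % n    # 调整值(按模N求差,保证非负)
    return base + adjust         # 分配的端口号 = 第二端口号 + 调整值
```

以后文附图6实施例中的数字为例,allocate_source_port(4, "30.30.30.30", "20.20.20.20", 30000, 6, 2008, 8)的返回值为2008。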
步骤54,第一处理单元对第一报文进行地址转换,从而得到转换后的第一报文,所述地址转换包括:将所述第一报文的源端口号替换为所述分配的所述端口号。
步骤55,第一处理单元向第二分流单元发送所述地址转换后的第一报文。
步骤56,第二分流单元向所述第二网络发送所述地址转换后的第一报文。
本申请实施例提供了一种分布式设备中的报文处理方法,分布式设备中包括两个分流单元以及若干个处理单元。处理单元从一个分流单元接收到报文后,在为该报文分配IP地址以及端口号(作为地址转换后的源端口号)时,分配的端口号要满足:另一个分流单元采用分流算法对地址转换后的所述报文的反向报文进行分流,反向报文仍然能被分流到本处理单元上。在端口分配的过程中将待分配的源端口号拆分为两部分,根据改进的分流算法在已知分流结果的条件下,反推满足条件的端口号。采用提供的报文处理方法后,分布式设备无需耗费资源在多个处理单元间同步并维护所有会话表数据,摆脱了工作所依赖的数据的特性对于分布式设备应用场景造成的制约。与此同时,本申请实施例提供的报文处理方法能够保证会话双方的双向数据流被分流到分布式设备的同一处理单元上,因而不再出现因处理单元未存储地址映射关系而造成的业务处理失败问题,从而提高了分布式设备的工作可靠性。
附图6为在局域网用户发起对互联网提供的服务的访问过程中,NAT设备应用附图5所示的报文处理方法,对上行报文和对应的下行报文进行处理的场景示意图。如附图6,分布式设备具体为NAT设备。该NAT设备中包含2个分流单元,分别为LB 1和LB 2,该NAT设备还包括8个处理单元,分别为SPU 1~SPU 8。SPU 1~SPU 8与第二分流算法的输出结果取值之间的对应关系如表1。
表1
处理单元的名称 分流算法输出结果的取值
SPU 1 0
SPU 2 1
SPU 3 2
SPU 4 3
SPU 5 4
SPU 6 5
SPU 7 6
SPU 8 7
附图7为在附图6所示的场景中,NAT设备对报文进行处理的流程图。附图7所示的报文处理方法包括以下步骤。
步骤71,局域网中的主机发送第一报文,第一报文是上行报文。
具体地,第一报文是因网络访问行为而触发的访问请求报文。用户使用局域网中的主机上运行的浏览器访问网站abc.com,则主机生成一个访问请求报文并向NAT设备发送该访问请求报文。该访问请求报文的源地址是局域网主机的私网地址10.10.10.10,源端口号为10000。该访问请求报文的目的地址是Web服务器的公网地址30.30.30.30,目的端口号为30000。Web服务器通过公网地址30.30.30.30上的端口30000提供网站abc.com的网页服务。
可选地,对于NAT设备而言,可以根据预先配置的端口与网络的对应关系区分上行报文和下行报文,例如NAT设备的第一端口段与局域网连接,第二端口段与互联网连接。NAT设备的第一端口段中的端口接收到的报文是上行报文,NAT设备的第二端口段中的端口接收到的报文是下行报文。在本实施例中,第一报文是NAT设备的第一端口段中的端口接收到的报文,因而确定第一报文是上行报文。
步骤72,NAT设备中的分流单元LB 1接收到主机发送的第一报文。在分流单元LB 1的分流表中未存储第一报文的数据流标识的条件下,将第一报文输入分流算法,根据分流算法的输出结果,从NAT设备的8个处理单元中选择处理单元SPU 5。
为了描述简洁,在本实施例中,LB 1使用的分流算法与LB 2使用的分流算法相同,都称为“分流算法”,实际上LB 1使用的分流算法也可以与LB 2使用的分流算法不同,只要处理单元和LB 2均使用相同的分流算法即可。
前面提及的公式(1)和公式(2)中均涉及一个哈希算法HASH1。首先,对本实施例中分流时使用的哈希算法进行简单介绍。哈希算法的设计方式有很多种,可以分为对称算法和非对称算法。在本申请实施例中对称算法是指将一个报文的五元组中的各元素输入该算法时,如果对调源IP地址和目的IP地址在输入顺序中的位置、同时对调源端口号和目的端口号在输入顺序中的位置,调换位置之后算法的输出结果与调换位置之前算法的输出结果相同。相应地,非对称算法是指调换位置之后算法的输出结果与调换位置之前算法的输出结果不同。
相比较非对称算法,对称算法的优点在于虽然算法规定了将待分流的报文的五元组中各元素输入算法时的顺序,但是无需事先区分待分流的报文是上行报文还是下行报文。为了便于理解,这里结合实例给出对称算法和非对称算法的具体例子。
假定一个待分流的报文的源IP地址为10.10.10.10,目的IP地址为30.30.30.30,源端口号是10000,目的端口号是30000,协议为TCP。TCP在相关标准中的协议号为6。
非对称算法示例:
HASH(源IP地址,目的IP地址,源端口号,目的端口号,协议号)=(源IP地址-目的IP地址)XOR(源端口号-目的端口号)XOR协议号,其中XOR表示十六进制按字节异或。该算法先求取源IP地址与目的IP地址的十六进制按字节的差值、源端口号与目的端口号的十六进制按字节的差值,再将两个差值与协议号按字节异或成一个单字节。
在对调位置之前,按照“源IP地址、目的IP地址、源端口号、目的端口号、协议类型”的顺序将上述报文的数据流标识输入HASH算法的过程表示为:
目标SPU对应的取值=HASH(10.10.10.10,30.30.30.30,10000,30000,6)MOD(8)=((10.10.10.10-30.30.30.30)XOR(10000-30000)XOR(6))MOD(8),
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
目标SPU对应的取值=(0xEC^0xEC^0xEC^0xEC^0xB1^0xE0^0x6)MOD(8)=0x57 MOD(8)=7。
在对调位置之后,按照“目的IP地址、源IP地址、目的端口号、源端口号、协议类型”的顺序将上述报文的数据流标识输入分流算法的过程表示为:
目标SPU对应的取值=HASH(30.30.30.30,10.10.10.10,30000,10000,6)MOD(8)= ((30.30.30.30-10.10.10.10)XOR(30000-10000)XOR(6))MOD(8),
用十六进制表示上述算法涉及的异或运算,则上述计算过程为
目标SPU对应的取值=(0x14^0x14^0x14^0x14^0x4E^0x20^0x6)MOD(8)=0x68 MOD(8)=0。
可见,对调位置之前算法的输出结果是7,对调位置之后算法的输出结果是0,二者不同。
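示例性地,上述非对称算法可以用如下Python代码复算。其中IP地址按字节求差、端口号按16位求差后拆分为两个字节,差值均按模运算处理为非负数,属于示例性的实现假设:

```python
def hash_asym(src_ip, dst_ip, sport, dport, proto):
    # 非对称示例:(源IP-目的IP) XOR (源端口-目的端口) XOR 协议号
    result = 0
    for a, b in zip(src_ip.split('.'), dst_ip.split('.')):
        result ^= (int(a) - int(b)) % 256      # IP地址逐字节求差
    diff = (sport - dport) % 65536             # 端口号按16位求差
    result ^= (diff >> 8) ^ (diff & 0xFF)
    return result ^ proto

fwd = hash_asym('10.10.10.10', '30.30.30.30', 10000, 30000, 6) % 8  # 7
rev = hash_asym('30.30.30.30', '10.10.10.10', 30000, 10000, 6) % 8  # 0
```

在上述示例性实现下,两个方向得到的取值不同,因此使用非对称算法时需要事先区分上行报文和下行报文。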
对称算法示例:
HASH(源IP地址,目的IP地址,源端口号,目的端口号,协议号)=源IP地址XOR目的IP地址XOR源端口号XOR目的端口号XOR协议号。该算法是将五元组中的五个元素按字节异或为一个单字节。
在对调位置之前,按照“源IP地址、目的IP地址、源端口号、目的端口号、协议类型”的顺序将上述报文的数据流标识输入分流算法的过程表示为:
目标SPU对应的取值=HASH(10.10.10.10,30.30.30.30,10000,30000,6)MOD(8)=((10.10.10.10 XOR 30.30.30.30)XOR(10000 XOR 30000)XOR(6))MOD(8),
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
目标SPU对应的取值=(0xA^0xA^0xA^0xA^0x1E^0x1E^0x1E^0x1E^0x27^0x10^0x75^0x30^0x6)MOD(8)=0x74 MOD(8)=4。
在对调位置之后,按照“目的IP地址、源IP地址、目的端口号、源端口号、协议类型”的顺序将上述报文的数据流标识输入分流算法的过程表示为:
目标SPU对应的取值=HASH(30.30.30.30,10.10.10.10,30000,10000,6)MOD(8)=((30.30.30.30 XOR 10.10.10.10)XOR(30000 XOR 10000)XOR(6))MOD(8),
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
目标SPU对应的取值=(0x1E^0x1E^0x1E^0x1E^0xA^0xA^0xA^0xA^0x75^0x30^0x27^0x10^0x6)MOD(8)=0x74 MOD(8)=4。
可见,对调位置之前算法的输出结果是4,对调位置之后算法的输出结果也是4,二者相同。
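示例性地,上述对称算法可以用如下Python代码复算。由于异或运算满足交换律和结合律,对调源、目的在输入顺序中的位置不会改变输出结果:

```python
def hash_sym(src_ip, dst_ip, sport, dport, proto):
    # 对称示例:五元组的所有字节按字节异或成一个单字节
    result = 0
    for octet in src_ip.split('.') + dst_ip.split('.'):
        result ^= int(octet)
    for port in (sport, dport):
        result ^= (port >> 8) ^ (port & 0xFF)
    return result ^ proto

fwd = hash_sym('10.10.10.10', '30.30.30.30', 10000, 30000, 6) % 8
rev = hash_sym('30.30.30.30', '10.10.10.10', 30000, 10000, 6) % 8
# fwd与rev相等,因此无需事先区分上行报文和下行报文
```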
显然,对称算法和非对称算法不止上述列举的几种。另一个对称算法还可以是
HASH(源IP地址,目的IP地址,源端口号,目的端口号,协议号)=(源IP地址+目的IP地址)XOR(源端口号+目的端口号)XOR协议号。该算法先求取源IP地址与目的IP地址的十六进制按字节的和值、源端口号与目的端口号的十六进制按字节的和值,再将两个和值与协议号按字节异或成一个单字节。其他的对称算法和非对称算法在此不再一一列举。
后面以本实施例使用的哈希算法为上述列举的第一个对称算法为例,对分流过程进行描述。显然,由于本实施例中NAT设备能够区分上行报文和下行报文,因此本实施例中也可以使用非对称算法。
示例性地,假设数据流标识是五元组<源IP地址,目的IP地址,源端口,目的端口,协议类型>。附图2中的LB 1接收到主机发送的第一报文,在LB 1的分流表中未存储数据流标识<10.10.10.10,30.30.30.30,10000,30000,TCP>的情况下,将数据流标识按照第一方式输入第一分流算法。在本实施例中,以第一方式为“源IP地址、目的IP地址、源端口号、第一端口号、协议类型、调整值”为例,那么
目标SPU对应的取值=HASH2(10.10.10.10,30.30.30.30,10000,30000,TCP)
=(HASH1(10.10.10.10,30.30.30.30,10000,NewPort1,TCP)+NewPort2)MOD(8)
LB 1在计算目标SPU之前,需要先计算NewPort1和NewPort2。LB 1首先确定NewPort1,再根据NewPort2=目的端口号-NewPort1来确定NewPort2。LB 1确定NewPort1的方法有很多种,下面仅进行举例说明。
方法一、将从预留端口号开始的连续端口号划分为至少一个端口号组,至少一个端口号组中的第一个端口号用十六进制表示时后三位为0。确定第一报文的目的端口号所在的端口号组中目的端口号之前预定位置的端口号为所述第一端口号。
例如预留端口号为1~999,则从端口号1000开始分组,按照上述分组方式得到的端口号组中,目的端口号30000所在的端口号组为[30000,30007]。假定预定位置的端口号为每组中的第一个端口号,NewPort1=30000,则NewPort2=30000-30000=0。
方法二、将从预留端口号开始至所述第一报文的目的端口号为止的连续端口号,按照预定数量的端口号划分为一组的方式,划分为至少一个端口号组;确定所述第一报文的目的端口号所在的端口号组中所述目的端口号之前指定位置的端口号为所述第一端口号
例如,预留端口号为1~999,将1000开始的连续端口号进行分组,每8个连续端口号为一组。则端口号30000所在的端口号组为[30000,30007]。确定[30000,30007]中的第一个端口号为NewPort1,即NewPort1=30000,则NewPort2=30000-30000=0。
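示例性地,方法二中确定NewPort1的分组计算可以用如下代码片段示意(预留端口号1~999、每组8个端口号均为示例性假设):

```python
def new_port1(dport, group_size=8, first_port=1000):
    # 从first_port开始,每group_size个连续端口号为一组,
    # 返回dport所在端口号组的第一个端口号作为NewPort1
    return first_port + (dport - first_port) // group_size * group_size

# 调整值:NewPort2 = 目的端口号 - NewPort1
```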
继续上面的例子,假定LB 1得到了NewPort1为30000,NewPort2为0,将NewPort1和NewPort2代入上述公式后得到
目标SPU对应的取值=(HASH1(10.10.10.10,30.30.30.30,10000,NewPort1,TCP)+NewPort2)MOD(8)
=(HASH1(10.10.10.10,30.30.30.30,10000,30000,6)+0)MOD(8)
=(10.10.10.10 XOR 30.30.30.30 XOR 10000 XOR 30000 XOR 6)MOD(8)
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
目标SPU对应的取值
=(0xA^0xA^0xA^0xA^0x1E^0x1E^0x1E^0x1E^0x27^0x10^0x75^0x30^0x6)MOD(8)
=0x74 MOD(8)=4,即目标SPU为SPU 5。
步骤73,NAT设备中的分流单元LB 1将第一报文发送给处理单元SPU 5。
步骤74,处理单元SPU 5为第一报文分配一个IP地址、以及一个端口号。分配的新的IP地址作为地址转换后的源IP地址,分配的端口号作为地址转换后的源端口号。分配的端口号满足条件:SPU 5将地址转换后的第一报文的数据流标识按照与第一方式对应的第二方式输入所述预定的分流算法,根据所述分流算法的输出结果,从所述至少两个处理单元中选择出的处理单元仍然是SPU 5。
假定在本实施例中,各处理单元配置的公网地址均为20.20.20.20,即各处理单元共享相同的公网地址,但各处理单元分别配置20.20.20.20上的不同端口号范围。这样可以节约有限的公网地址资源。
例如,SPU 5上配置的端口号范围为2000-9999共计8000个端口号。SPU 5对第一报文进行地址转换后,第一报文的数据流标识为<20.20.20.20,30.30.30.30,新源端口号,30000,TCP>。
SPU 5将转换后的第一报文的数据流标识按照第二方式输入分流算法。在本实施例中,以第二方式为“目的IP地址、源IP地址、目的端口号、第二端口号、协议类型、调整值”为例,那么
目标SPU对应的取值=HASH2(30.30.30.30,20.20.20.20,30000,新源端口号,TCP)=(HASH1(30.30.30.30,20.20.20.20,30000,NewPort1,TCP)+NewPort2)MOD(8)=4。
由于目标SPU对应的分流算法的输出结果为4是已知的,那么计算新源端口号的过程实际上就转化为计算NewPort1和NewPort2的过程。SPU 5首先确定出NewPort1,再根据确定出的NewPort1计算出NewPort2,最后根据新源端口号=NewPort1+NewPort2确定出新源端口号。SPU 5确定NewPort1的方法有很多种,下面仅进行举例说明。
SPU 5从SPU 5对应的端口号段中选择出一个端口号作为NewPort1。可选地,选择的方式是SPU 5按照预定数量的端口号划分为一组的方式,将端口号段2000-9999划分为至少一个端口号组;SPU 5随机从至少一个端口号组中选取一个端口号组,确定选取的端口号组中指定位置的端口号为所述第二端口号。
例如,SPU 5按照每8个端口号划分为一组的方式,将端口号段2000-9999划分为1000组,从1000组中随机选择一组,例如选择第2个端口组,第2个端口组包含的端口号是2008~2015。选取第2个端口组中的第1个端口号2008作为NewPort1。
可选地,选择方式也可以是SPU 5将端口号段2000-9999中端口号用十六进制数表示,以其中十六进制表示时后三位为0的端口号作为每个分组的第一个端口号,从而将端口号段2000-9999进行分组。SPU 5随机从至少一个端口号组中选取一个端口号组,确定选取的端口号组中预定位置的端口号为所述第二端口号。
SPU 5在确定出NewPort1后,再进一步计算NewPort2。
NewPort2=4-HASH1(30.30.30.30,20.20.20.20,30000,NewPort1,TCP)MOD(8)=4-HASH1(30.30.30.30,20.20.20.20,30000,2008,TCP)MOD(8)
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
NewPort2=
4-(0x1E^0x1E^0x1E^0x1E^0x14^0x14^0x14^0x14^0x75^0x30^0x7^0xD8^0x6)MOD(8)=4-0x9C MOD(8)=4-4=0。
再进一步根据计算出的NewPort1和NewPort2确定新源端口号,新源端口号=NewPort1+NewPort2=2008。
为了避免同时为来自局域网中不同主机的报文分配相同的端口号,第一处理单元还可以在对应的端口段中为每个端口号设置一个使用状态标识,该使用状态标识用于记录该端口号是否已被使用。例如该标识取值为1时表示已被使用,该标识取值为0时表示未被使用。根据该使用状态标识,当第一处理单元为第一报文分配的端口号已被使用时,采用上述方法重新确定第二端口号,进而重新计算调整值,对重新确定的第二端口号和调整值求和,从而再次分配一个端口号。当第一处理单元为第一报文分配一个端口号后,将该端口号对应的使用状态标识设置为已使用对应的值。当第一处理单元根据会话表中记录的会话状态确定使用已被分配的端口号建立的会话结束时,重新将已被分配的端口号的使用状态标识设置为未使用对应的值,以便于该端口号可以被重复利用。例如,当第一处理单元为第一报文分配端口号2008后,将端口号2008的状态设置为1。当第一处理单元根据会话表中的会话状态确定第一报文所属的会话结束后,第一处理单元将端口号2008的状态设置为0。
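示例性地,端口号使用状态标识的维护可以用如下代码示意(类名与实现细节均为示例性假设):

```python
class PortPool:
    """为处理单元对应端口段中的每个端口号维护使用状态标识。"""

    def __init__(self, start, end):
        self.start, self.end = start, end
        self.in_use = set()     # 已被使用的端口号集合

    def try_take(self, port):
        # 端口号空闲则标记为已使用并返回True;
        # 已被占用或不在端口段内则返回False,调用方需重新确定第二端口号并重算调整值
        if port < self.start or port > self.end or port in self.in_use:
            return False
        self.in_use.add(port)
        return True

    def release(self, port):
        # 会话结束后将使用状态置回未使用,端口号可被重复利用
        self.in_use.discard(port)
```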
步骤75,SPU 5对所述第一报文进行地址转换。地址转换包括:将第一报文的源IP地址10.10.10.10替换为新IP地址20.20.20.20,将源端口号10000替换为分配的端口号2008,从而得到转换后的第一报文。地址转换后的第一报文的数据流标识是<20.20.20.20,30.30.30.30,2008,30000,TCP>。
步骤76,SPU 5向分流单元LB 2发送地址转换后的第一报文。
步骤77,LB 2向互联网中的服务器发送地址转换后的第一报文。
可选地,在步骤74之后,还包括步骤78,SPU 5在地址映射表中建立一个新的表项,该表项中包括地址转换之前第一报文的数据流标识<10.10.10.10,30.30.30.30,10000,30000,TCP>和地址转换之后所述第一报文的数据流标识<20.20.20.20,30.30.30.30,2008,30000,TCP>。
通过步骤71~步骤77,NAT设备能够保证在LB 2接收到来自于Web服务器的第二报文后,LB 2能够将第二报文分流到SPU 5上进行处理。在本实施例中第二报文是对应访问请求报文的响应报文,也是第一报文的反向报文。
可选地,附图7的处理方法还包括步骤79~714。
步骤79,NAT设备上的LB 2接收到第二报文,第二报文是Web服务器发送的第一报文的响应报文。
具体地,Web服务器接收到第一报文后,从第一报文的请求行“GET /index.html HTTP/1.1”中得到主机请求的资源的URL路径/index.html。如果存在路径为/index.html的资源,则服务器在第二报文中携带状态行HTTP/1.1 200 OK,以及其他用于说明服务器信息和后续访问该资源时的策略的内容。
步骤710,NAT设备上的LB 2在分流单元LB 2的分流表中未存储第二报文的数据流标识的条件下,将第二报文输入分流算法,根据分流算法的输出结果,从NAT设备的多个处理单元中选择处理单元SPU 5。
第二报文的数据流标识是<30.30.30.30,20.20.20.20,30000,2008,TCP>。LB 2按照第一方式将数据流标识输入分流算法。
目标SPU对应的取值=HASH2(30.30.30.30,20.20.20.20,30000,2008,TCP)
=(HASH1(30.30.30.30,20.20.20.20,30000,2008,6)+0)MOD(8),
用十六进制表示上述算法涉及的异或运算,则上述计算过程为
目标SPU对应的取值=
((0x1E^0x1E^0x1E^0x1E^0x14^0x14^0x14^0x14^0x75^0x30^0x7^0xD8^0x6)+0)MOD(8)=4
输出结果4对应的处理单元是SPU 5。
步骤711,NAT设备上的分流单元LB 2将第二报文发送给处理单元SPU 5。
步骤712,处理单元SPU 5查找地址映射表,根据查找到的表项对第二报文进行地址转换。
由于此时处理单元SPU 5的地址映射表中存在包含第二报文的数据流标识<30.30.30.30,20.20.20.20,30000,2008,TCP>的表项,因此处理单元SPU 5可以根据地址映射表项中对应的数据流标识对第二报文进行地址转换,将第二报文的目的地址转换为10.10.10.10,目的端口转换为10000,从而得到地址转换后的第二报文。
可选地,处理单元SPU 5还可以根据第二报文中携带的其他信息,例如状态行的内容修改对应的会话表项中记录的会话状态。
步骤713,处理单元SPU 5将地址转换后的第二报文发送给分流单元LB 1。
步骤714,分流单元LB 1将地址转换后的第二报文发送给局域网中的主机,该主机为此前发送第一报文的主机。
通过上述方案,一方面能够保证局域网的主机向互联网服务器发送的第一报文被分流单元分流到处理单元SPU 5,并由处理单元SPU 5进行NAT转换后,互联网服务器发送的、作为地址转换后的第一报文的响应报文的第二报文仍由处理单元SPU 5进行处理。由于处理单元SPU 5在进行NAT转换后已建立了地址映射表项以及会话表项,在接收到第二报文后,处理单元SPU 5能够根据地址映射表项以及会话表项对第二报文进行处理,从而避免了处理失败。同时,在该方案中各处理单元在进行地址转换时可以使用同一公网IP地址,避免地址资源浪费,节约了有限的地址资源。
附图8为互联网中的用户访问局域网中的服务器提供的服务的过程中,SLB设备应用附图5所示的报文处理方法,对下行报文和对应的上行报文进行处理的场景示意图。如附图8,分布式设备具体为SLB设备,SLB设备中包含2个分流单元,分别为LB 1和LB 2,SLB设备中还包含8个处理单元,分别为SPU 1~SPU 8。SPU 1~SPU 8分别对应的0-7的取值仍然请参照表1。
附图9为在附图8所示的场景中,SLB设备对报文进行处理的流程图。附图9所示的报文处理方法包括以下步骤。
步骤91,互联网中的主机发送第一报文,第一报文是下行报文。
局域网中的服务器集群提供网站cd.com的网页服务。互联网中的主机经过域名系统(英文:Domain Name System,DNS)解析得到提供网站cd.com的网页服务的虚拟IP地址是22.22.22.22,端口号是2222。互联网中的主机发送的第一报文是访问请求报文。该访问请求报文的目的地址是上述虚拟IP地址22.22.22.22,目的端口号是2222,源IP地址是互联网中的主机的IP地址33.33.33.33,源端口号是3333。
步骤92,SLB设备中的分流单元LB 2接收到第一报文。分流单元LB 2在分流表中未存储第一报文的数据流标识的条件下,将第一报文输入预定的分流算法,根据分流算法的输出结果,从SLB设备的8个处理单元中选择处理单元SPU 3。
在本实施例中,分流单元LB 2使用的分流算法与LB 1、以及各处理单元使用的分流算法相同,因此不区分地称为分流算法。
关于分流算法中涉及的哈希算法的说明请参照前面实施例中附图7步骤72的说明。本实施例中仍以上面实施例中的对称算法为例,对分流过程进行说明。由于本实施例中SLB设备也能够通过预先配置的端口信息区分上行报文和下行报文,因此本实施例中也可以使用非对称算法。
具体地,LB 2将数据流标识按照第一方式输入分流算法。本实施例中的第一方式与附图7所示的实施例相同。
目标SPU对应的取值=HASH2(33.33.33.33,22.22.22.22,3333,2222,TCP)=(HASH1(33.33.33.33,22.22.22.22,NewPort1,2222,TCP)+NewPort2)MOD(8)。
LB 2采用与附图7步骤72类似的方法,确定出NewPort1为3328,NewPort2为5,具体过程在这里不再重复。LB 2将NewPort1和NewPort2代入上述公式后得到
目标SPU对应的取值=(HASH1(33.33.33.33,22.22.22.22,NewPort1(3333),2222,TCP)+NewPort2)MOD(8)
=(HASH1(33.33.33.33,22.22.22.22,3328,2222,6)+5)MOD(8)
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
目标SPU对应的取值
=((0x21^0x21^0x21^0x21^0x16^0x16^0x16^0x16^0xD^0x00^0x8^0xAE^0x6)+5)MOD(8)
=(0xAD+5)MOD(8)=2
在本实施例中LB 2计算出的分流算法的输出结果为2,其指示的处理单元为SPU 3。
步骤93,SLB设备中的分流单元LB 2将第一报文发送给处理单元SPU 3。
步骤94,处理单元SPU 3为第一报文分配一个IP地址、以及一个端口号。分配的新的IP地址作为地址转换后的目的IP地址,分配的端口号作为地址转换后的源端口号。分配的端口号满足条件:SPU 3将地址转换后的第一报文的数据流标识按照与第一方式对应的第二方式输入所述预定的分流算法,根据所述分流算法的输出结果,从所述至少两个处理单元中选择出的处理单元仍然是SPU 3。
在本实施例中,SPU 3根据预定的负载均衡算法,选择实际处理访问请求的服务器的真实地址和端口号。例如,在局域网中,4个服务器共同分担网站cd.com的网页服务。4个服务器的地址和端口号如表2所示。
表2
服务器名称 服务器地址 端口号
Server 1 11.11.11.11 1111
Server 2 11.11.11.12 1111
Server 3 11.11.11.13 1111
Server 4 11.11.11.14 1111
可选地,负载均衡算法可以随机从4个服务器中选择出一个服务器,也可以根据4个服务器的当前资源占用率,选择一个当前资源占用率最低的服务器来提供服务,在这里不再一一列举。
假定在本实施例中,SPU 3选择出的服务器是服务器1。那么地址转换之后第一报文的目的IP地址是11.11.11.11、目的端口号是1111。SPU 3对第一报文进行地址转换后,第一报文的数据流标识为<33.33.33.33,11.11.11.11,新源端口号,1111,TCP>。
SPU 3将转换后的第一报文的数据流标识按照第二方式输入分流算法。在本实施例中,以第二方式为“目的IP地址、源IP地址、目的端口号、第二端口号、协议类型、调整值”为例,那么
目标SPU对应的取值=HASH2(11.11.11.11,33.33.33.33,1111,新源端口号,TCP)=(HASH1(11.11.11.11,33.33.33.33,1111,NewPort1,TCP)+NewPort2)MOD(8)=2。
由于目标SPU对应的分流算法的输出结果为2是已知的,那么计算新源端口号的过程实际上就转化为计算NewPort1和NewPort2的过程。SPU 3首先确定出NewPort1,再根据确定出的NewPort1计算出NewPort2,最后根据新源端口号=NewPort1+NewPort2确定出新源端口号。SPU 3确定NewPort1的方法有很多种,与附图7步骤72中确定NewPort1的方法类似,请参照附图7中的说明。在本实施例中,SPU 3根据第一报文的源端口号3333确定出NewPort1为3328。
SPU 3在确定出NewPort1后,再采用附图7步骤74中计算NewPort2的类似方法,进一步计算NewPort2。
NewPort2=2-HASH1(11.11.11.11,33.33.33.33,1111,NewPort1,TCP)MOD(8)
=2-HASH1(11.11.11.11,33.33.33.33,1111,3328,TCP)MOD(8)
用十六进制表示上述算法中涉及的异或运算,则上述计算过程为
NewPort2=
2-(0xB^0xB^0xB^0xB^0x21^0x21^0x21^0x21^0x4^0x57^0xD^0x0^0x6)MOD(8)
=2-0x58 MOD(8)=2-0=2
再进一步根据计算出的NewPort1和NewPort2确定新源端口号,新源端口号=NewPort1+NewPort2=3328+2=3330。
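示例性地,将上述反推过程代入本实施例中的数字,可以用如下代码片段复算(HASH1沿用按字节异或的示例性实现):

```python
def hash1(src_ip, dst_ip, sport, dport, proto):
    # 示例性的HASH1:按字节异或成一个单字节
    result = 0
    for octet in src_ip.split('.') + dst_ip.split('.'):
        result ^= int(octet)
    for port in (sport, dport):
        result ^= (port >> 8) ^ (port & 0xFF)
    return result ^ proto

target = 2          # SPU 3对应的取值
new_port1 = 3328    # 根据源端口号3333确定的第二端口号
new_port2 = (target - hash1('11.11.11.11', '33.33.33.33', 1111, new_port1, 6) % 8) % 8
new_src_port = new_port1 + new_port2   # 3328 + 2 = 3330
```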
步骤95,SPU 3对第一报文进行地址转换。地址转换包括:将第一报文的目的IP地址22.22.22.22替换为新IP地址11.11.11.11,将第一报文的目的端口替换为端口号1111,将源端口号3333替换为分配的3330,从而得到转换后的第一报文。转换后的第一报文的数据流标识是<33.33.33.33,11.11.11.11,3330,1111,TCP>。
步骤96,SPU 3向分流单元LB1发送地址转换后的第一报文。
步骤97,分流单元LB1向局域网中的服务器发送地址转换后的第一报文。
可选地,在步骤94之后,还包括步骤98,SPU 3在地址映射表中建立一个新的表项,该表项中包括地址转换之前第一报文的数据流标识<33.33.33.33,22.22.22.22,3333,2222,TCP>和地址转换之后所述第一报文的数据流标识<33.33.33.33,11.11.11.11,3330,1111,TCP>。
通过步骤91~97,SLB设备能够保证在LB 1接收到来自于局域网中的服务器1的第二报文后,LB 1能够将第二报文分流到SPU 3上进行处理。在本实施例中第二报文是对应访问请求的响应报文,也是第一报文的反向报文。
可选地,附图9的处理方法还包括步骤99~914。
步骤99,SLB设备的LB1接收到第二报文,第二报文是服务器1发送的第一报文的响应报文。
具体地,服务器1接收到第一报文后,从第一报文的请求行“GET /index.html HTTP/1.1”中得到主机请求的资源的URL路径/index.html。如果存在路径为/index.html的资源,则服务器在第二报文中携带状态行HTTP/1.1 200 OK,以及其他用于说明服务器信息和后续访问该资源时的策略的内容。
步骤910,SLB设备上的LB 1在分流单元LB 1的分流表中未存储第二报文的数据流标识的条件下,将第二报文输入分流算法,根据分流算法的输出结果,从SLB设备的多个处理单元中选择处理单元SPU 3。
第二报文的数据流标识是<11.11.11.11,33.33.33.33,1111,3330,TCP>。SLB设备上的分流单元均是按照第一方式将数据流标识输入分流算法。
目标SPU对应的取值=HASH2(11.11.11.11,33.33.33.33,1111,3330,TCP)
=(HASH1(11.11.11.11,33.33.33.33,1111,3328,TCP)+2)MOD(8)
用十六进制表示上述算法涉及的异或运算,则上述计算过程为
目标SPU对应的取值
=((0xB^0xB^0xB^0xB^0x21^0x21^0x21^0x21^0x4^0x57^0xD^0x0^0x6)+2)MOD(8)
=(0x58+2)MOD(8)=2。
输出结果2对应的处理单元是SPU 3。
步骤911,SLB设备上的分流单元LB 1将第二报文发送给处理单元SPU 3。
步骤912,处理单元SPU 3查找地址映射表,根据查找到的表项对第二报文进行地址转换。
由于此时处理单元SPU 3的地址映射表中存在包含第二报文的数据流标识<11.11.11.11,33.33.33.33,1111,3330,TCP>的表项,因此处理单元SPU 3可以根据表项中对应的数据流标识对第二报文进行地址转换,将第二报文的目的地址转换为33.33.33.33,目的端口转换为3333,从而得到地址转换后的第二报文。
可选地,处理单元SPU 3还可以根据第二报文中携带的其他信息,例如状态行的内容修改对应的会话表项中记录的会话状态。
步骤913,处理单元SPU 3将地址转换后的第二报文发送给分流单元LB 2。
步骤914,分流单元LB 2向互联网中的主机发送地址转换后的第二报文,该主机为此前发送第一报文的主机。
本发明实施例还提供了一种分布式设备。附图10为该分布式设备的结构示意图,该分布式设备包括第一分流单元101、第二分流单元102、以及包括第一处理单元103在内的至少两个处理单元。
第一分流单元101包括接收子单元1011,处理子单元1012和发送子单元1013。
第二分流单元102包括接收子单元1021,处理子单元1022和发送子单元1023。
第一处理单元103包括接收子单元1031,处理子单元1032和发送子单元1033。
第一分流单元101中的接收子单元1011,用于从所述第一网络接收第一报文。
第一分流单元101中的处理子单元1012,用于根据所述第一报文的数据流标识和第一分流算法,从所述至少两个处理单元中选择第一处理单元,所述第一报文的数据流标识至少包括所述第一报文的源互联网协议IP地址、目的IP地址、源端口号和目的端口号;
第一分流单元101中的发送子单元1013,用于将所述第一报文发送给所述第一处理单元103。
第一处理单元103中的接收子单元1031,用于接收第一分流单元101发送的所述第一报文。
第一处理单元103中的处理子单元1032,用于为所述第一报文分配一个IP地址、以及一个端口号,分配的所述端口号满足条件:根据地址转换后的所述第一报文的反向报文的数据流标识和第二分流算法,从所述至少两个处理单元中选择出的第二处理单元与所述第一处理单元为同一处理单元,所述地址转换后的所述第一报文的源端口号为所述分配的端口号,所述地址转换后的所述第一报文的源IP地址和目的IP地址中的一个为所述分配的IP地址,所述第二分流算法是所述第二分流单元对所述地址转换后的所述第一报文的反向报文进行分流时使用的算法;对所述第一报文进行地址转换,从而得到所述地址转换后的第一报文,所述地址转换包括将所述第一报文的源端口号替换为所述分配的所述端口号。
第一处理单元103中的发送子单元1033,用于向所述第二分流单元发送所述地址转换后的第一报文。
第二分流单元102中的接收子单元1021,用于接收第一处理单元103发送的地址转换后的第一报文。
第二分流单元102中的处理子单元1022,用于指示第二分流单元102中的发送子单元1023向所述第二网络发送所述地址转换后的第一报文。
上述第一分流单元101、第二分流单元102、第一处理单元103、以及各个子单元的具体工作流程请参照前面实施例中的说明,在这里不再详述。这些单元可以全部或部分地通过软件、硬件、固件或者其任意组合来实现,当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。
所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘Solid State Disk(SSD))等。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的范围。这样,倘若本申请的这些修改和变型属于本发明权利要求的范围之内,则本发明也意图包括这些改动和变型在内。

Claims (24)

  1. 一种分布式设备中的报文处理方法,所述分布式设备部署于第一网络和第二网络之间,所述分布式设备包括第一分流单元、第二分流单元、以及至少两个处理单元,所述至少两个处理单元中的每个处理单元与所述第一分流单元和所述第二分流单元连接,其特征在于,所述方法包括:
    所述第一分流单元从所述第一网络接收第一报文,根据所述第一报文的数据流标识和第一分流算法,从所述至少两个处理单元中选择第一处理单元,所述第一报文的数据流标识至少包括所述第一报文的源互联网协议IP地址、目的IP地址、源端口号和目的端口号;
    所述第一分流单元将所述第一报文发送给所述第一处理单元;
    所述第一处理单元为所述第一报文分配一个IP地址、以及一个端口号,分配的所述端口号满足条件:根据地址转换后的所述第一报文的反向报文的数据流标识和第二分流算法,从所述至少两个处理单元中选择出的第二处理单元与所述第一处理单元为同一处理单元,所述地址转换后的所述第一报文的源端口号为所述分配的端口号,所述地址转换后的所述第一报文的源IP地址和目的IP地址中的一个为所述分配的IP地址,所述第二分流算法是所述第二分流单元对所述地址转换后的所述第一报文的反向报文进行分流时使用的算法;
    所述第一处理单元对所述第一报文进行地址转换,从而得到所述地址转换后的第一报文,所述地址转换包括将所述第一报文的源端口号替换为所述分配的所述端口号;
    所述第一处理单元向所述第二分流单元发送所述地址转换后的第一报文;
    所述第二分流单元向所述第二网络发送所述地址转换后的第一报文。
  2. 根据权利要求1所述的方法,其特征在于,所述第一网络是私网,所述第二网络是公网,
    所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述调整值为所述输入报文的目的端口号减去第一端口号得到的差值,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值;
    所述第一处理单元为所述第一报文分配一个IP地址、以及一个端口号,包括:
    所述第一处理单元为所述第一报文分配预先配置的公网IP地址作为分配的IP地址;
    所述第一处理单元从所述第一处理单元对应的端口段中选择一个端口号作为第二端口号,所述第一处理单元与所述至少两个处理单元中的其他处理单元分配的IP地址相同、且第一处理单元与所述至少两个处理单元中的其他处理单元对应的端口段互不重叠;
    所述第一处理单元按照第二方式指示的顺序,将所述地址转换后的所述第一报文的源IP地址、目的IP地址、目的端口号,以及所述第二端口号输入所述预定哈希算法,得到哈希运算结果,所述地址转换后的所述第一报文的源IP地址为所述分配的IP地址,所述第二方式指示的顺序中源IP地址与所述第一方式指示的顺序中目的IP地址的位置相同、目的IP地址与所述第一方式指示的顺序中源IP地址的位置相同、目的端口号与所述第一方式指示的顺序中源端口号的位置相同;
    所述第一处理单元求取所述哈希运算结果对N的取余运算结果;
    所述第一处理单元求取所述第一处理单元对应的0至N-1中的值与取余运算结果之间的差值;
    所述第一处理单元为所述第一报文分配所述第二端口号和所述差值之和作为所述分配的端口号。
  3. 根据权利要求2所述的方法,其特征在于,所述从对应的端口段中选择一个端口号作为第二端口号,包括:
    所述第一处理单元按照预定数量的端口号划分为一组的方式,将所述端口号段划分为至少一个端口号组;
    随机从所述至少一个端口号组中选取一个端口号组,确定选取的端口号组中预定位置的端口号为所述第二端口号。
  4. 根据权利要求2所述的方法,其特征在于,所述从对应的端口段中选择一个端口号作为第二端口号,包括:
    将从预留端口号开始的连续端口号划分为至少一个端口号组,所述至少一个端口号组中的第一个端口号用十六进制表示时后三位为0;
    随机从所述至少一个端口号组中选取一个端口号组,确定选取的端口号组中预定位置的端口号为所述第二端口号。
  5. 根据权利要求2至4任一所述的方法,其特征在于,所述第一分流算法与所述第二分流算法相同。
  6. 根据权利要求5所述的方法,其特征在于,所述第一分流算法的输入报文是所述第一报文,所述根据所述第一报文的数据流标识和第一分流算法,从所述至少两个处理单元中选择第一处理单元,包括:
    所述第一分流单元确定所述第一端口号和所述调整值;
    按照第一方式指示的顺序,将所述第一报文的源IP地址、目的IP地址、源端口号、以及所述第一端口号输入所述预定哈希算法;
    将所述哈希算法的结果与所述调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为所述第一处理单元。
  7. 根据权利要求6所述的方法,其特征在于,所述确定所述第一端口号和所述调整值,包括:
    将从预留端口号开始至所述第一报文的目的端口号为止的连续端口号,按照预定数量的端口号划分为一组的方式,划分为至少一个端口号组;
    确定所述第一报文的目的端口号所在的端口号组中所述目的端口号之前预定位置的端口号为所述第一端口号;
    确定所述调整值为所述第一报文的目的端口号减去所述第一端口号得到的差值。
  8. 根据权利要求6所述的方法,其特征在于,所述确定所述第一端口号和所述调整值,包括:
    将从预留端口号开始的连续端口号划分为至少一个端口号组,所述至少一个端口号组中的第一个端口号用十六进制表示时后三位为0;
    确定所述第一报文的目的端口号所在的端口号组中所述目的端口号之前预定位置的端口号为所述第一端口号;
    确定所述调整值为所述第一报文的目的端口号减去所述第一端口号得到的差值。
  9. 根据权利要求1所述的方法,其特征在于,所述第一网络是公网,所述第二网络是私网,所述第一报文为来自所述公网的报文,
    所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述调整值为所述输入报文的目的端口号减去第一端口号得到的差值,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值;
    所述第一处理单元为所述第一报文分配一个IP地址、以及一个端口号,包括:
    所述第一处理单元按照负载均衡策略,从保存的至少一个地址端口对中选择出第一地址端口对,所述第一地址端口对包括一个私网IP地址和一个端口号;
    所述第一处理单元为所述第一报文分配所述第一地址端口对中的私网IP地址作为分配的IP地址;
    所述第一处理单元根据所述第一报文的源端口号确定所述第二端口号;
    所述第一处理单元按照第二方式指示的顺序,将所述地址转换后的所述第一报文的源IP地址、目的IP地址、目的端口号、以及所述第二端口号输入所述预定哈希算法,得到哈希运算结果,所述地址转换后的所述第一报文的目的IP地址为所述分配的IP地址,所述地址转换后的所述第一报文的目的端口号是所述第一地址端口对中的端口号,所述第二方式指示的顺序中源IP地址与所述第一方式指示的顺序中目的IP地址的位置相同、目的IP地址与所述第一方式指示的顺序中源IP地址的位置相同、目的端口号与所述第一方式指示的顺序中源端口号的位置相同;
    所述第一处理单元求取所述哈希运算结果对N的取余运算结果;
    所述第一处理单元计算所述第一处理单元对应的0至N-1中的值与所述取余运算结果之间的差值;
    所述第一处理单元为所述第一报文分配所述第二端口号和所述差值之和作为所述分配的端口号。
  10. 根据权利要求9所述的方法,其特征在于,所述第一分流算法的输入报文是所述第一报文,所述根据所述第一报文的数据流标识和第一分流算法,从所述至少两个处理单元中选择第一处理单元,包括:
    所述第一分流单元确定所述第一端口号和所述调整值;
    按照所述第一方式指示的顺序,将所述第一报文的源IP地址、目的IP地址、源端口号、以及所述第一端口号输入所述预定哈希算法;
    将所述哈希算法的结果与所述调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为所述第一处理单元。
  11. 根据权利要求9-10任一所述的方法,其特征在于,所述地址转换还包括:将所述第一报文的目的IP地址替换为所述第一地址端口对中的私网IP地址,将所述第一报文的目的端口号替换为所述第一地址端口对中的端口号。
  12. 根据权利要求1所述的方法,其特征在于,第三处理单元与所述第一处理单元为同一处理单元,所述第三处理单元是根据所述第一报文的反向报文的数据流标识和所述第二分流算法从所述至少两个处理单元中选择出的处理单元。
  13. 根据权利要求12所述的方法,其特征在于,所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述第一端口号和所述调整值之和为所述输入报文的目的端口号,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值,
    所述哈希算法对输入的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号进行逻辑异或,输出逻辑异或的结果,或者所述哈希算法对输入的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号进行按字节求和,输出按字节求和的结果。
  14. 根据权利要求1-13任一所述的方法,其特征在于,所述第一处理单元为所述第一报文分配一个IP地址、以及一个端口号之后,所述方法还包括:
    所述第一处理单元建立映射表项,所述映射表项中包括地址转换之前所述第一报文的数据流标识以及地址转换之后所述第一报文的数据流标识,所述映射表项用于对所述第一处理单元后续接收到的第二报文进行地址转换。
  15. 一种分布式设备,所述分布式设备部署于第一网络和第二网络之间,所述分布式设备包括,第一分流单元、第二分流单元、以及至少两个处理单元,其特征在于,
    所述第一分流单元包括网络接口和处理器,
    所述第一分流单元的网络接口,用于从所述第一网络接收第一报文;
    所述第一分流单元的处理器,用于根据所述第一报文的数据流标识和第一分流算法,从所述至少两个处理单元中选择第一处理单元,所述第一报文的数据流标识至少包括所述第一报文的源互联网协议IP地址、目的IP地址、源端口号和目的端口号;
    所述第一分流单元的网络接口,还用于将所述第一报文发送给所述第一处理单元;
    所述第一处理单元包括网络接口和处理器,
    所述第一处理单元的网络接口,用于接收所述第一报文;
    所述第一处理单元的处理器,用于为所述第一报文分配一个IP地址、以及一个端口号,分配的所述端口号满足条件:根据地址转换后的所述第一报文的反向报文的数据流标识和第二分流算法,从所述至少两个处理单元中选择出的第二处理单元与所述第一处理单元为同一处理单元,所述地址转换后的所述第一报文的源端口号为所述分配的端口号,所述地址转换后的所述第一报文的源IP地址和目的IP地址中的一个为所述分配的IP地址,所述第二分流算法是所述第二分流单元对所述地址转换后的所述第一报文的反向报文进行分流时使用的算法;对所述第一报文进行地址转换,从而得到所述地址转换后的第一报文,所述地址转换包括将所述第一报文的源端口号替换为所述分配的所述端口号;
    所述第一处理单元的网络接口,还用于向所述第二分流单元发送所述地址转换后的第一报文;
    所述第二分流单元包括网络接口和处理器,
    所述第二分流单元的网络接口,用于接收所述地址转换后的第一报文;
    所述第二分流单元的处理器,用于指示所述第二分流单元的网络接口向所述第二网络发送所述地址转换后的第一报文。
  16. 根据权利要求15所述的分布式设备,其特征在于,所述第一网络是私网,所述第二网络是公网,
    所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述调整值为所述输入报文的目的端口号减去第一端口号得到的差值,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值;
    所述第一处理单元的处理器,用于为所述第一报文分配预先配置的公网IP地址作为分配的IP地址;
    从所述第一处理单元对应的端口段中选择一个端口号作为第二端口号,所述第一处理单元与所述至少两个处理单元中的其他处理单元分配的IP地址相同、且第一处理单元与所述至少两个处理单元中的其他处理单元对应的端口段互不重叠;
    按照第二方式指示的顺序,将所述地址转换后的所述第一报文的源IP地址、目的IP地址、目的端口号,以及所述第二端口号输入所述预定哈希算法,得到哈希运算结果,所述地址转换后的所述第一报文的源IP地址为所述分配的IP地址,所述第二方式指示的顺序中源IP地址与所述第一方式指示的顺序中目的IP地址的位置相同、目的IP地址与所述第一方式指示的顺序中源IP地址的位置相同、目的端口号与所述第一方式指示的顺序中源端口号的位置相同;
    求取所述哈希运算结果对N的取余运算结果;
    求取所述第一处理单元对应的0至N-1中的值与取余运算结果之间的差值;
    为所述第一报文分配所述第二端口号和所述差值之和作为所述分配的端口号。
  17. 根据权利要求16所述的分布式设备,其特征在于,
    所述第一处理单元的处理器,用于按照预定数量的端口号划分为一组的方式,将所述端口号段划分为至少一个端口号组;
    随机从所述至少一个端口号组中选取一个端口号组,确定选取的端口号组中预定位置的端口号为所述第二端口号。
  18. 根据权利要求16所述的分布式设备,其特征在于,所述第一分流算法与所述第二分流算法相同,所述第一分流算法的输入报文是所述第一报文,
    所述第一分流单元的处理器,用于确定所述第一端口号和所述调整值;
    按照第一方式指示的顺序,将所述第一报文的源IP地址、目的IP地址、源端口号、以及所述第一端口号输入所述预定哈希算法;
    将所述哈希算法的结果与所述调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为所述第一处理单元。
  19. 根据权利要求18所述的分布式设备,其特征在于,
    所述第一分流单元的处理器,用于将从预留端口号开始至所述第一报文的目的端口号为止的连续端口号,按照预定数量的端口号划分为一组的方式,划分为至少一个端口号组;
    确定所述第一报文的目的端口号所在的端口号组中所述目的端口号之前预定位置的端口号为所述第一端口号;
    确定所述调整值为所述第一报文的目的端口号减去所述第一端口号得到的差值。
  20. 根据权利要求15所述的分布式设备,其特征在于,所述第一网络是公网,所述第二网络是私网,所述第一报文为来自所述公网的报文,
    所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述调整值为所述输入报文的目的端口号减去第一端口号得到的差值,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值;
    所述第一处理单元的处理器,用于按照负载均衡策略,从保存的至少一个地址端口对中选择出第一地址端口对,所述第一地址端口对包括一个私网IP地址和一个端口号;
    为所述第一报文分配所述第一地址端口对中的私网IP地址作为分配的IP地址;
    根据所述第一报文的源端口号确定所述第二端口号;
    按照第二方式指示的顺序,将所述地址转换后的所述第一报文的源IP地址、目的IP地址、目的端口号、以及所述第二端口号输入所述预定哈希算法,得到哈希运算结果,所述地址转换后的所述第一报文的目的IP地址为所述分配的IP地址,所述地址转换后的所述第一报文的目的端口号是所述第一地址端口对中的端口号,所述第二方式指示的顺序中源IP地址与所述第一方式指示的顺序中目的IP地址的位置相同、目的IP地址与所述第一方式指示的顺序中源IP地址的位置相同、目的端口号与所述第一方式指示的顺序中源端口号的位置相同;
    求取所述哈希运算结果对N的取余运算结果;
    计算所述第一处理单元对应的0至N-1中的值与所述取余运算结果之间的差值;
    为所述第一报文分配所述第二端口号和所述差值之和作为所述分配的端口号。
  21. 根据权利要求20所述的分布式设备,其特征在于,所述第一分流算法的输入报文是所述第一报文,
    所述第一分流单元的处理器,用于确定所述第一端口号和所述调整值;
    按照所述第一方式指示的顺序,将所述第一报文的源IP地址、目的IP地址、源端口号、以及所述第一端口号输入所述预定哈希算法;
    将所述哈希算法的结果与所述调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为所述第一处理单元。
  22. 根据权利要求20、或21所述的分布式设备,其特征在于,
    所述第一处理单元的处理器,还用于将所述第一报文的目的IP地址替换为所述第一地址端口对中的私网IP地址,将所述第一报文的目的端口号替换为所述第一地址端口对中的端口号。
  23. 根据权利要求15所述的分布式设备,其特征在于,第三处理单元与所述第一处理单元为同一处理单元,所述第三处理单元是根据所述第一报文的反向报文的数据流标识和所述第二分流算法从所述至少两个处理单元中选择出的处理单元。
  24. 根据权利要求23所述的分布式设备,其特征在于,所述第二分流算法是指将输入报文的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号按照第一方式指示的顺序输入预定哈希算法后,将所述哈希算法的结果与调整值求和后,对求和结果执行对N取余运算,将取余运算结果对应的处理单元作为选择出的处理单元,所述第一端口号和所述调整值之和为所述输入报文的目的端口号,N为所述分布式设备包含的处理单元的数目,所述至少两个处理单元中的每个处理单元分别对应0至N-1中的一个值,
    所述哈希算法对输入的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号进行逻辑异或,输出逻辑异或的结果,或者所述哈希算法对输入的源IP地址、目的IP地址、源端口号、协议号,以及第一端口号进行按字节求和,输出按字节求和的结果。
PCT/CN2019/080706 2018-04-28 2019-04-01 分布式设备中的报文处理方法和分布式设备 WO2019205892A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP19792098.6A EP3780552B1 (en) 2018-04-28 2019-04-01 Message processing method in distributed device and distributed device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810399892.5 2018-04-28
CN201810399892.5A CN110417924B (zh) 2018-04-28 2018-04-28 分布式设备中的报文处理方法和分布式设备

Publications (1)

Publication Number Publication Date
WO2019205892A1 true WO2019205892A1 (zh) 2019-10-31

Also Published As

Publication number Publication date
CN110417924A (zh) 2019-11-05
EP3780552A4 (en) 2021-06-16
EP3780552B1 (en) 2023-03-22
EP3780552A1 (en) 2021-02-17
CN110417924B (zh) 2021-10-01

Similar Documents

Publication Publication Date Title
WO2019205892A1 (zh) Packet processing method in a distributed device, and distributed device
US10320683B2 (en) Reliable load-balancer using segment routing and real-time application monitoring
US9397946B1 (en) Forwarding to clusters of service nodes
CN109937401B (zh) 经由业务旁路进行的负载均衡虚拟机的实时迁移
US11223565B2 (en) Designs of an MPTCP-aware load balancer and load balancer using the designs
JP6169251B2 (ja) Asymmetric packet flow in a distributed load balancer
US9231871B2 (en) Flow distribution table for packet flow load balancing
US20140153577A1 (en) Session-based forwarding
US11743230B2 (en) Network address translation (NAT) traversal and proxy between user plane function (UPF) and session management function (SMF)
WO2019237288A1 (zh) Domain name resolution method and apparatus, and computer-readable storage medium
US20090063706A1 (en) Combined Layer 2 Virtual MAC Address with Layer 3 IP Address Routing
US9350666B2 (en) Managing link aggregation traffic in a virtual environment
TWI661698B (zh) Method and device for forwarding Ethernet packets
WO2016134624A1 (zh) Routing method, apparatus and system, and gateway scheduling method and apparatus
US20130089092A1 (en) Method for preventing address conflict, and access node
US9923814B2 (en) Media access control address resolution using internet protocol addresses
US10764241B2 (en) Address assignment and data forwarding in computer networks
US20230283589A1 (en) Synchronizing dynamic host configuration protocol snoop information
US20230275868A1 (en) Anonymizing server-side addresses
CN114827015B (zh) Data forwarding method and virtualized cloud network architecture
EP3026851B1 (en) Apparatus, network gateway, method and computer program for providing information related to a specific route to a service in a network
Yamanaka et al. OpenFlow networks with limited L2 functionality
WO2020254838A1 (en) Large scale nat system
US11902158B2 (en) System and method for forwarding packets in a hierarchical network architecture using variable length addresses
WO2023161052A1 (en) Ip packet load balancer based on hashed ip addresses
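The documents above cluster around one recurring problem: steering both directions of a NAT'ed flow to the same processing unit in a distributed device. As an illustration only (not the method claimed in this application), a common approach is to stripe the NAT port space across units, so that the owning unit can be recovered from the destination port of a reverse-direction packet. The names `NUM_UNITS`, `owning_unit`, and `port_pool` below are hypothetical:

```python
NUM_UNITS = 4  # hypothetical number of processing units in the distributed device

def owning_unit(port: int) -> int:
    # Reverse-direction packets are dispatched by destination port:
    # the unit index is simply the translated port modulo the unit count.
    return port % NUM_UNITS

def port_pool(unit: int, lo: int = 1024, hi: int = 65535):
    # Each unit allocates NAT source ports only from the stripe of the
    # port space that maps back to itself, so return traffic lands on
    # the unit that holds the translation state.
    return (p for p in range(lo, hi + 1) if p % NUM_UNITS == unit)

# Unit 2 translates an outbound flow using the first port in its stripe;
# the reply, addressed to that port, is dispatched back to unit 2.
p = next(port_pool(2))
assert owning_unit(p) == 2
```

The design choice this sketch captures is that no shared mapping table is consulted on the reverse path; the dispatch decision is computable from the packet header alone.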

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 19792098; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

ENP Entry into the national phase
Ref document number: 2019792098; Country of ref document: EP; Effective date: 20201103