WO2015067118A1 - Multiple protocol stack load balancing method and apparatus - Google Patents

Multiple protocol stack load balancing method and apparatus

Info

Publication number
WO2015067118A1
Authority
WO
WIPO (PCT)
Prior art keywords
socket
network card
protocol stack
protocol
data packet
Prior art date
Application number
PCT/CN2014/088442
Other languages
French (fr)
Chinese (zh)
Inventor
文刘飞
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2015067118A1 publication Critical patent/WO2015067118A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/16 Implementation or adaptation of Internet protocol [IP], of transmission control protocol [TCP] or of user datagram protocol [UDP]
    • H04L 69/161 Implementation details of TCP/IP or UDP/IP stack architecture; Specification of modified or new header fields
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering

Definitions

  • The present invention relates to the field of communications technologies, and in particular to a multi-protocol stack load balancing method and apparatus.
  • The rapid development of cloud computing has concentrated more and more computing work in the data center, and the terminal increasingly uses the network to dispatch requested tasks to the data center quickly; the terminal's demand for computing power is therefore decreasing, while its demand for network capability is increasing.
  • The protocol stack, however, has not developed as quickly and has gradually become a bottleneck between the two, so it has become inevitable for multiple protocol stacks to cooperate in handling the traffic expansion of single or multiple ports.
  • In one existing approach, a distribution algorithm is used to forward the data packets belonging to the same connection to the different protocol stacks. Because all protocol stacks share one distribution module, processing is not truly parallel, and a performance bottleneck easily arises at the distribution module.
  • In the prior art shown in FIG. 1, each network card interface 100 is served by multiple protocol stacks, such as protocol stack 0, protocol stack 1, protocol stack 2, and protocol stack 3. Each protocol stack is bound to at least one RSS network card receive and send queue, and each RSS network card receive queue is processed by its corresponding protocol stack. Packets sent through a firewall gateway, for example, usually carry the same IP (Internet Protocol) address.
  • The RSS offloading of the network card performs hashing based only on the triple of source IP address, destination IP address, and protocol.
  • As a result, data packets from the same gateway are likely to be allocated to the same RSS network card receive queue, which may overload the protocol stack bound to that queue. Simple hash-based distribution over packet triples or quintuples therefore cannot sense the real load of each protocol stack and cannot perform flexible load-balanced distribution.
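To make the limitation concrete, here is a minimal sketch (not the patent's implementation) of hash-based RSS steering over the source IP/destination IP/protocol triple. The hash function and queue count are illustrative assumptions; real NICs typically use a Toeplitz hash. Packets arriving from one gateway share the same triple and therefore always map to the same queue index, regardless of how loaded the stack behind that queue is.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative 3-tuple used by the default RSS rule described above. */
struct rss_triple {
    uint32_t src_ip;    /* IPv4 source address */
    uint32_t dst_ip;    /* IPv4 destination address */
    uint8_t  protocol;  /* e.g. 6 = TCP, 17 = UDP */
};

/* Toy hash standing in for the NIC's RSS hash (real hardware uses Toeplitz). */
static uint32_t rss_hash(const struct rss_triple *t)
{
    uint32_t h = 2166136261u;                 /* FNV-1a style mixing */
    h = (h ^ t->src_ip) * 16777619u;
    h = (h ^ t->dst_ip) * 16777619u;
    h = (h ^ t->protocol) * 16777619u;
    return h;
}

/* The queue index depends only on the triple: every packet from the same
 * gateway IP lands in the same RSS receive queue, regardless of stack load. */
static unsigned pick_rx_queue(const struct rss_triple *t, unsigned num_queues)
{
    return rss_hash(t) % num_queues;
}

int main(void)
{
    struct rss_triple from_gateway = { 0x0A000001u, 0x0A000002u, 6 };
    printf("queue = %u\n", pick_rx_queue(&from_gateway, 4));
    return 0;
}
```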
  • The embodiments of the present invention provide a multi-protocol stack load balancing method and apparatus which, in a multi-protocol-stack environment, combine load sensing of the protocol stacks and the application with the RSS network card receive/send queues and flow table matching, thereby achieving load balancing of the protocol stacks and reducing the CPU overhead of data distribution.
  • A first aspect provides a multi-protocol stack load balancing method. The method includes: creating a first socket in response to a request of an application and deploying it on all protocol stacks; receiving a data packet requesting a connection; and determining the protocol type of the data packet requesting the connection. If the protocol type is the Transmission Control Protocol: a second socket is created to establish a session connection; a protocol stack is selected for the second socket according to the load condition of each protocol stack; if the data packets of the second socket cannot be offloaded by the default distribution rule of the network card to the RSS network card receive queue bound to the selected protocol stack, a matching flow table is created on the network card according to the distribution policy of the network card, and subsequently received data packets of the second socket are offloaded to that RSS network card receive queue; and data packet distribution is performed between the second socket and the selected protocol stack.
  • The method further includes: after the session ends, releasing the second socket and deleting the matching flow table created on the network card.
  • If the protocol type is the User Datagram Protocol, protocol processing is performed by the protocol stack that received the data packet requesting the connection.
  • Before the step of creating the first socket in response to the request of the application and deploying it on all protocol stacks, the network card and all protocol stacks are initially configured. The initial configuration includes: reading and storing the hardware configuration information of the network card; obtaining user configuration information and, in combination with the hardware configuration information, forming a network card configuration policy that is written to the network card; and starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • Creating the first socket in response to the request of the application and deploying it on all protocol stacks includes: calling the application programming interface to create the first socket; after the first socket is created, calling the bind function to bind the first socket to a specific IP address, and calling the listen function to listen for packet requests on the specified port; and, when the listen call of the first socket is received, deploying the first socket on all protocol stacks.
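As a concrete illustration of the create/bind/listen sequence just described, here is a minimal POSIX sketch of the server-side first socket. The port number and backlog are illustrative assumptions, and the patent's deployment of the socket onto every protocol stack happens behind this API and is not shown.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Create the first socket through the socket API. */
    int first_socket = socket(AF_INET, SOCK_STREAM, 0);
    if (first_socket < 0) { perror("socket"); return EXIT_FAILURE; }

    /* Bind it to a specific IP address and port. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* "any" address: server side */
    addr.sin_port = htons(8080);                /* illustrative port */
    if (bind(first_socket, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        close(first_socket);
        return EXIT_FAILURE;
    }

    /* Listen for packet requests on the specified port; at this point the
     * scheme described above would deploy the socket on all protocol stacks. */
    if (listen(first_socket, 128) < 0) {
        perror("listen");
        close(first_socket);
        return EXIT_FAILURE;
    }

    puts("first socket is listening");
    close(first_socket);
    return 0;
}
```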
  • The step of creating a second socket to establish a session connection includes: creating the second socket according to the actual network running state of each protocol stack.
  • Alternatively, the step of creating a second socket to establish a session connection includes: forwarding the received data packet of the connection request sent by the peer end to the application, and creating the second socket after the application confirms.
  • The end of the session includes receiving and responding to a request, issued by the application, to release the second socket, or receiving and responding to a connection release request sent by the peer end.
  • A second aspect provides a multi-protocol stack load balancing method. The method includes: creating a first socket, and selecting a protocol stack for the first socket according to the load condition of each protocol stack to establish a session connection;
  • if the data packets of the first socket cannot be offloaded by the default distribution rule of the network card to the receive side scaling (RSS) network card receive queue bound to the selected protocol stack, creating a matching flow table on the network card according to the distribution policy of the network card, and offloading subsequently received data packets to that RSS network card receive queue; and performing data packet distribution between the first socket and the selected protocol stack.
  • The method further includes: after the session ends, releasing the first socket and deleting the matching flow table created on the network card.
  • The network card and all protocol stacks are initially configured before the first socket is created, including: reading and storing the hardware configuration information of the network card; obtaining user configuration information and, in combination with the hardware configuration information, forming a network card configuration policy that is written to the network card; and starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • The end of the session includes receiving and responding to a request, issued by the application, to release the first socket, or receiving and responding to a connection release request sent by the peer end.
  • A third aspect provides a multi-protocol stack load balancing apparatus. The apparatus includes a protocol stack module, a network card, a data distribution module, and a load balancing module, where the protocol stack module includes multiple protocol stacks. The data distribution module is configured to create a first socket in response to a request of an application and deploy it on all protocol stacks. The protocol stack module is configured to receive a data packet requesting a connection and determine the protocol type of the data packet requesting the connection. The data distribution module is further configured to, if the protocol type is the Transmission Control Protocol, create a second socket to establish a session connection. The load balancing module is configured to, if the protocol type is the Transmission Control Protocol, select a protocol stack for the second socket according to the load condition of each protocol stack, and, if the data packets of the second socket cannot be offloaded by the default distribution rule of the network card to the RSS network card receive queue bound to the selected protocol stack, create a matching flow table on the network card according to the distribution policy of the network card and offload subsequently received data packets of the second socket to that RSS network card receive queue. The data distribution module is further configured to perform data packet distribution between the second socket and the selected protocol stack.
  • The protocol stack module is further configured to control the selected protocol stack to release the second socket, and the load balancing module is further configured to delete the matching flow table created on the network card.
  • The protocol stack module is further configured to, if the protocol type is the User Datagram Protocol, control the protocol stack that receives the data packet requesting the connection to perform the protocol processing.
  • The load balancing module is further configured to perform initial configuration of the network card and all protocol stacks: specifically, it reads and stores the hardware configuration information of the network card, obtains user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy to the network card. The protocol stack module is also configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • That the data distribution module is configured to create a first socket and deploy it on all protocol stacks in response to the request of the application specifically means: the data distribution module creates the first socket in response to a notification that the application has invoked the application programming interface, and receives the listen call of the first socket; the application calls the bind function to bind the first socket to a specific IP address and calls the listen function to listen for packet requests on the specified port; and the load balancing module is further configured to notify each protocol stack so that the first socket is deployed on all protocol stacks.
  • That the data distribution module is configured to create a second socket to establish a session connection specifically means: the second socket is created according to the actual network running state of each protocol stack.
  • Alternatively, the protocol stack module is configured to forward the received data packet of the connection request sent by the peer end to the application, and the data distribution module creates the second socket after the application confirms.
  • The data distribution module receiving and responding to a request, issued by the application, to release the second socket, or the protocol stack module receiving and responding to a connection release request sent by the peer end, indicates the end of the session.
  • A fourth aspect provides a multi-protocol stack load balancing apparatus. The apparatus includes a protocol stack module, a network card, a data distribution module, and a load balancing module, where the protocol stack module includes multiple protocol stacks. The data distribution module is configured to create a first socket. The load balancing module is configured to select a protocol stack for the first socket according to the load condition of each protocol stack to establish a session connection, and, if the data packets of the first socket cannot be offloaded by the default distribution rule of the network card to the receive side scaling (RSS) network card receive queue bound to the selected protocol stack, to create a matching flow table on the network card according to the distribution policy of the network card and offload subsequently received data packets to that RSS network card receive queue.
  • The data distribution module is further configured to perform data packet distribution between the first socket and the selected protocol stack.
  • The protocol stack module is configured to control the selected protocol stack to release the first socket, and the load balancing module is further configured to delete the matching flow table created on the network card.
  • The load balancing module is further configured to perform initial configuration of the network card and all protocol stacks: specifically, it reads and stores the hardware configuration information of the network card, obtains user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy to the network card. The protocol stack module is also configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • The data distribution module receiving and responding to a request, issued by the application, to release the first socket, or the protocol stack module receiving and responding to a connection release request sent by the peer end, indicates the end of the session.
  • The multi-protocol stack load balancing method and apparatus create a first socket in response to the request of the application and deploy it on all protocol stacks. After a data packet requesting a connection is received, if the protocol type of the data packet is the Transmission Control Protocol, a second socket is created to establish a session connection; a protocol stack is selected for the second socket according to the load condition of each protocol stack; and, when the data packets of the second socket cannot be offloaded by the default distribution rule of the network card, a matching flow table is created on the network card according to the distribution policy of the network card and the received data packets of the second socket are offloaded to the RSS network card receive queue. Thus, by combining the load sensing of the protocol stacks and the application with the RSS network card receive/send queues and flow table matching, load balancing of the protocol stacks is achieved and the CPU overhead of data distribution is reduced.
  • FIG. 1 is a schematic structural diagram of a multi-protocol stack load balancing apparatus in the prior art
  • FIG. 2 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a first embodiment of the present invention
  • FIG. 3 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a second embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a multi-protocol stack load balancing method according to a first embodiment of the present invention
  • FIG. 5 is a schematic flowchart of initialization of a multi-protocol stack load balancing method according to a first embodiment of the present invention
  • FIG. 6 is a schematic diagram of a multi-protocol stack load balancing method according to a second embodiment of the present invention.
  • FIG. 7 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a third embodiment of the present invention.
  • FIG. 8 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a fourth embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a first embodiment of the present invention.
  • The multi-protocol stack load balancing apparatus 10 includes a protocol stack module 12, a data distribution module 13, a load balancing module 14, a network card 16, and a network card driver 17, where the protocol stack module 12 includes a plurality of protocol stacks 15.
  • The network card 16 includes RSS network card receive/send queues 18 and a matching flow table 19.
  • The RSS network card receive/send queues 18 include an RSS network card receive queue and an RSS network card send queue.
  • the application 11 invokes the application programming interface to notify the data distribution module 13 to create the first socket.
  • the data distribution module 13 is operative to create a first socket in response to the request of the application 11 and deploy it on all of the protocol stacks 15.
  • The protocol stack module 12 is configured to receive the data packet requesting the connection and determine the protocol type of the data packet requesting the connection; if the protocol type is UDP (User Datagram Protocol), it controls the protocol stack that received the data packet requesting the connection to perform the protocol processing.
  • If the protocol type is TCP (Transmission Control Protocol), the data distribution module 13 is further configured to create a second socket to establish a session connection, and the load balancing module 14 is configured to select a protocol stack 15 for the second socket according to the load of each protocol stack and, when the data packets of the second socket cannot be offloaded by the default distribution rule of the network card 16 to the RSS network card receive queue bound to the selected protocol stack 15, to create a matching flow table 19 on the network card 16 according to the distribution policy of the network card 16 and offload subsequently received data packets to that RSS network card receive queue.
  • In this way, an appropriate protocol stack is selected for data processing, so that protocol processing is fully parallelized and the protocol processing capability is improved.
  • the data distribution module 13 is also used to perform packet distribution between the second socket and the selected protocol stack. After the session ends, the protocol stack module 12 is further configured to control the selected protocol stack 15 to release the second socket.
  • the load balancing module 14 is further configured to delete the matching flow table 19 created on the network card 16.
  • By combining the load sensing of the protocol stacks 15 and the application with the RSS network card receive/send queues 18 and the matching flow table 19 of the network card 16, load balancing of the protocol stacks is achieved and the CPU (Central Processing Unit) packet distribution overhead is reduced.
  • the peer end may be other clients or servers in the network.
  • The load balancing module 14 is further configured to perform initial configuration of the network card 16 and all the protocol stacks 15: specifically, it reads and stores the hardware configuration information of the network card 16 through the network card driver 17, obtains user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy to the network card 16 through the network card driver 17. The protocol stack module 12 is further configured to start multiple protocol stacks 15 and, according to the network card configuration policy, bind at least one RSS network card receive queue and one RSS network card send queue to each protocol stack 15.
  • The hardware configuration information of the network card 16 includes the number of the RSS network card receive/send queues 18 and the maximum number of flow table matches supported.
  • The user configuration information includes the number of network card hardware queues to be enabled and the distribution policy for data packets on the network card 16.
  • The destination address of the first socket is "any", indicating that this is a server-side socket. After the first socket is successfully created, the application 11 calls the bind function to bind the first socket to the specified IP address and listens for packet requests on the specified port by calling the listen function.
  • The data distribution module 13 notifies the load balancing module 14 that the socket is a server socket.
  • The load balancing module 14 notifies each protocol stack 15 so that the first socket is deployed on all the protocol stacks 15, and each protocol stack 15 then holds a PCB (Protocol Control Block) for the first socket.
  • The PCB includes the various variables involved in establishing the connection and processing data packets.
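The patent does not enumerate the PCB fields. The struct below is only a hypothetical sketch of the kind of per-socket state a protocol control block might carry (connection endpoints, TCP state, sequence numbers), to make the "one PCB per protocol stack" idea concrete; all field names are illustrative.

```c
#include <stdint.h>

/* Hypothetical protocol control block (PCB): per-socket state kept by each
 * protocol stack for connection establishment and packet processing.
 * Field names are illustrative, not taken from the patent. */
struct pcb {
    uint32_t local_ip;      /* bound local IPv4 address */
    uint16_t local_port;    /* bound local port */
    uint32_t remote_ip;     /* peer IPv4 address (0 while listening) */
    uint16_t remote_port;   /* peer port (0 while listening) */
    uint8_t  protocol;      /* 6 = TCP, 17 = UDP */
    uint8_t  tcp_state;     /* e.g. LISTEN, SYN_RCVD, ESTABLISHED */
    uint32_t snd_next;      /* next sequence number to send */
    uint32_t rcv_next;      /* next sequence number expected */
    void    *owner_stack;   /* protocol stack instance that owns this PCB */
};
```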
  • The protocol stack module 12 receives the data packet of the connection request sent by the peer end, and the data distribution module 13 creates a second socket according to the actual network running state of each protocol stack 15 and notifies the peer end whether the second socket was created successfully. If the creation succeeds, the session connection is established and the session can proceed; if the creation fails, establishment of the session connection fails and the connection is broken.
  • The actual network running state of a protocol stack 15 includes whether a socket for the same port has already been created, whether the number of sockets in the protocol stack 15 has reached the upper limit of sockets that can be created, and the like.
  • In other embodiments of the present invention, the protocol stack module 12 forwards the received data packet of the connection request sent by the peer end to the application 11; after the application 11 confirms, the data distribution module 13 creates the second socket and returns the result to the peer end.
  • After the load balancing module 14 selects a protocol stack 15 for the second socket, the protocol stack 15 is notified to create a corresponding PCB for the second socket.
  • The data packets of the second socket are preferentially distributed, by the default distribution rule of the network card 16, to the RSS network card receive queue bound to the protocol stack 15.
  • Otherwise, the load balancing module 14 creates a matching flow table 19 on the network card 16 according to the distribution policy of the network card 16, and subsequently received data packets of the second socket are offloaded to the RSS network card receive queue for processing, that is, for the session with the peer end.
  • Packet distribution is preferably performed based on a quintuple or a triple, and the default distribution rule is preferably a hash rule.
  • In other embodiments, packet distribution may also be based on other tuples, such as a 2-tuple or a 4-tuple.
  • The triple information includes the destination port, the destination IP address, and the protocol; the quintuple information includes the source port, the destination port, the source IP address, the destination IP address, and the protocol.
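For reference, the sketch below shows how the triple and quintuple described above can be read out of an IPv4/TCP packet on Linux, using the struct iphdr and struct tcphdr definitions from <netinet/ip.h> and <netinet/tcp.h>. It assumes a plain IPv4 packet and does only minimal length checks; it is illustrative, not the patent's parsing code.

```c
#include <netinet/ip.h>
#include <netinet/tcp.h>
#include <stddef.h>
#include <stdint.h>

/* Quintuple as described above: source/destination port and IP, plus protocol. */
struct five_tuple {
    uint32_t src_ip, dst_ip;     /* network byte order */
    uint16_t src_port, dst_port; /* network byte order */
    uint8_t  protocol;
};

/* Extract the quintuple from a raw IPv4/TCP packet buffer (Linux headers).
 * Returns 0 on success, -1 if the packet is too short or not TCP. */
static int extract_five_tuple(const uint8_t *pkt, size_t len, struct five_tuple *out)
{
    if (len < sizeof(struct iphdr))
        return -1;
    const struct iphdr *ip = (const struct iphdr *)pkt;
    size_t ip_hdr_len = (size_t)ip->ihl * 4;
    if (ip->protocol != IPPROTO_TCP || len < ip_hdr_len + sizeof(struct tcphdr))
        return -1;
    const struct tcphdr *tcp = (const struct tcphdr *)(pkt + ip_hdr_len);

    out->src_ip   = ip->saddr;
    out->dst_ip   = ip->daddr;
    out->protocol = ip->protocol;
    out->src_port = tcp->source;
    out->dst_port = tcp->dest;
    /* The triple mentioned above is just {dst_port, dst_ip, protocol}. */
    return 0;
}
```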
  • The data distribution module 13 further receives data transmission requests of the second socket and distributes them to the corresponding protocol stack 15; after the second socket is created, it selects a protocol stack 15, in combination with the load balancing information, to process the data packets, and distributes the processed network data packets to the second socket.
  • The data distribution module 13 receiving and responding to a request, issued by the application 11, to release the second socket, or the protocol stack 15 receiving and responding to a connection release request sent by the peer end, indicates that the session ends. If the data distribution module 13 receives and responds to the request issued by the application 11 to release the second socket, it notifies the corresponding protocol stack 15 to release the second socket and its associated PCB, and notifies the load balancing module 14 that the second socket has been released; after receiving the release notification from the data distribution module 13, the load balancing module 14 confirms whether a matching flow table 19 was created on the network card for the second socket, and if so, deletes the matching flow table 19 by calling the network card driver 17.
  • If the protocol stack 15 receives and responds to the connection release request sent by the peer end, the corresponding protocol stack 15 releases the second socket, and the data distribution module 13 notifies the application 11 and the load balancing module 14 that the second socket has been released.
  • The load balancing module 14 then confirms whether a matching flow table 19 was created on the network card for the second socket, and if so, deletes the matching flow table 19 by calling the network card driver 17.
  • FIG. 3 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a second embodiment of the present invention.
  • The multi-protocol stack load balancing apparatus includes a protocol stack module 22, a data distribution module 23, a load balancing module 24, a network card 26, and a network card driver 27.
  • The protocol stack module 22 includes a plurality of protocol stacks 25.
  • the network card 26 includes an RSS network card receiving and sending queue 28 and a matching flow table 29, and the RSS network card receiving and sending queue 28 includes an RSS network card receiving queue and an RSS network card sending queue.
  • The data distribution module 23 is configured to create a first socket in response to a notification that the application 21 has invoked the application programming interface; each application 21 includes at least one first socket.
  • The load balancing module 24 is configured to select a protocol stack 25 for the first socket according to the load condition of each protocol stack 25 so as to establish a session connection with the peer end. If the data packets of the first socket cannot be offloaded by the default distribution rule of the network card 26 to the RSS network card receive queue bound to the selected protocol stack 25, a matching flow table 29 is created on the network card 26 according to the distribution policy of the network card 26, and subsequently received data packets are offloaded to that RSS network card receive queue.
  • The data distribution module 23 is also configured to perform packet distribution between the first socket and the selected protocol stack 25.
  • the protocol stack module 22 is configured to control the selected protocol stack 25 to release the first socket, and the load balancing module 24 is further configured to delete the matching flow table 29 created on the network card 26.
  • the peer end may be a server in the network.
  • The load balancing module is further configured to initialize the network card and all the protocol stacks: specifically, it reads and stores the hardware configuration information of the network card 26, acquires user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy into the network card 26 through the network card driver 27. The protocol stack module 22 is also configured to start multiple protocol stacks 25 and, according to the network card configuration policy, bind at least one RSS network card receive queue and one RSS network card send queue to each protocol stack 25.
  • The hardware configuration information of the network card 26 includes the number of RSS network card receive queues and the maximum number of flow table matches supported; the user configuration information includes the number of network card hardware queues to be enabled and the data packet distribution policy on the network card 26.
  • The protocol stack module 22 receives the data packet of the connection request sent by the peer end, and the data distribution module 23 returns a result to the application 21 according to the actual network running state of each protocol stack 25 and notifies the peer end whether the first socket was created successfully. If the creation succeeds, a session connection is established and the session can proceed; if the creation fails, establishment of the session connection fails and the connection is broken. In other embodiments of the present invention, the protocol stack module 22 forwards the received data packet of the connection request sent by the peer end to the application 21; after the application 21 confirms, the data distribution module 23 creates the first socket and returns the result to the peer end. When creating the first socket, the data distribution module 23 also creates a corresponding PCB.
  • The actual network running state of a protocol stack 25 includes whether a socket for the same port has already been created, whether the number of sockets in the protocol stack 25 has reached the upper limit of sockets that can be created, and the like.
  • The PCB includes the various variables involved in establishing the connection and processing data packets.
  • The application 21 calls the connect function to connect to the IP address and port of a server to establish a connection; in this case the application acts as the client.
  • The data packets of the first socket are preferentially distributed, by the default distribution rule of the network card 26, to the RSS network card receive queue bound to the protocol stack 25. If the hash rule of the network card 26 cannot offload the data packets to the RSS network card receive queue bound to the protocol stack 25, the load balancing module 24 creates a matching flow table 29 on the network card 26 according to the distribution policy of the network card 26, and subsequently received data packets are offloaded to that RSS network card receive queue for processing, that is, for the session with the peer end.
  • Packet distribution is preferably performed based on a quintuple or a triple, and the default distribution rule is preferably a hash rule.
  • In other embodiments, packet distribution may also be based on other tuples, such as a 2-tuple or a 4-tuple.
  • The triple information includes the destination port, the destination IP address, and the protocol; the quintuple information includes the source port, the destination port, the source IP address, the destination IP address, and the protocol.
  • The data distribution module 23 further receives data transmission requests of the first socket and distributes them to the corresponding protocol stack 25; after the first socket is created, it selects a protocol stack 25, in combination with the load balancing information, to process the data packets, and distributes the processed network data packets to the first socket.
  • The data distribution module 23 receiving and responding to a request, issued by the application 21, to release the first socket, or the protocol stack 25 receiving and responding to a connection release request sent by the peer end, indicates that the session ends. If the data distribution module 23 receives and responds to the request issued by the application 21 to release the first socket, it notifies the selected protocol stack 25 to release the first socket and its associated PCB, and notifies the load balancing module 24 that the first socket has been released; after receiving the release notification from the data distribution module 23, the load balancing module 24 confirms whether a matching flow table 29 was created on the network card 26 for the first socket, and if so, deletes the matching flow table 29 by calling the network card driver 27.
  • If the protocol stack 25 receives and responds to the connection release request sent by the peer end, the protocol stack 25 releases the first socket, and the data distribution module 23 notifies the application 21 and the load balancing module 24 that the first socket has been released.
  • The load balancing module 24 then confirms whether a matching flow table 29 was created on the network card for the first socket, and if so, deletes the matching flow table 29 by calling the network card driver 27.
  • The data distribution module 23 creates a first socket to establish a session connection; the load balancing module 24 selects a protocol stack 25 for the first socket according to the load condition of each protocol stack 25; and, when the data packets of the first socket cannot be offloaded by the default distribution rule of the network card 26 to the RSS network card receive queue bound to the protocol stack 25, the load balancing module 24 creates a matching flow table 29 on the network card 26 according to the distribution policy of the network card 26, and the received data packets are offloaded to that RSS network card receive queue for packet processing.
  • In this way, an appropriate protocol stack is selected for data processing, protocol processing is fully parallelized, the protocol processing capability is improved, load balancing of the protocol stacks is implemented in the multi-protocol-stack environment, and the data distribution overhead of the CPU is reduced.
  • FIG. 4 is a schematic diagram of a multi-protocol stack load balancing method according to a first embodiment of the present invention. As shown in FIG. 4, the multi-protocol stack load balancing method includes:
  • S10: Create a first socket in response to the request of the application and deploy it on all protocol stacks.
  • S101: Read and store the hardware configuration information of the network card.
  • The hardware configuration information includes the number of RSS queues and the maximum number of flow table matches that can be supported.
  • The hardware configuration information needs to be read through the network card driver.
  • S102: Acquire user configuration information, form a network card configuration policy in combination with the hardware configuration information, and write it to the network card.
  • The user configuration information includes the number of network card hardware queues to be enabled and the distribution policy for data packets on the network card.
  • The network card configuration policy is likewise written into the network card through the network card driver.
  • S103: Start multiple protocol stacks, and bind at least one RSS network card receive queue and one RSS network card send queue to each protocol stack according to the network card configuration policy.
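A minimal sketch of the S101 to S103 initialization flow follows, under the assumption of stand-in driver helpers (nic_read_hw_info, nic_write_policy, stack_bind_queue); none of these names or fields come from the patent, they only illustrate reading the hardware limits, merging them with user configuration, and binding one receive and one send queue to each protocol stack.

```c
#include <stdio.h>

/* S101: hardware limits reported by the NIC (illustrative fields only). */
struct nic_hw_info { int rss_queue_count; int max_flow_rules; };

/* S102: user configuration merged with the hardware limits. */
struct nic_policy { int queues_to_enable; int default_rule_is_hash; };

/* Stand-ins for driver calls; a real system would go through the NIC driver. */
static int nic_read_hw_info(struct nic_hw_info *info)
{
    info->rss_queue_count = 8;
    info->max_flow_rules = 1024;
    return 0;
}

static int nic_write_policy(const struct nic_policy *p)
{
    printf("NIC policy: %d queues, hash default = %d\n",
           p->queues_to_enable, p->default_rule_is_hash);
    return 0;
}

static void stack_bind_queue(int stack_id, int rx_queue, int tx_queue)
{
    /* S103: bind one RSS receive queue and one send queue to this stack. */
    printf("stack %d <-> rx queue %d, tx queue %d\n", stack_id, rx_queue, tx_queue);
}

static int init_multi_stack(int requested_queues, int stack_count)
{
    struct nic_hw_info hw;
    if (nic_read_hw_info(&hw) != 0)          /* S101 */
        return -1;

    struct nic_policy policy = {
        /* Never enable more queues than the hardware supports. */
        .queues_to_enable = requested_queues < hw.rss_queue_count
                                ? requested_queues : hw.rss_queue_count,
        .default_rule_is_hash = 1,
    };
    if (nic_write_policy(&policy) != 0)      /* S102 */
        return -1;

    for (int s = 0; s < stack_count; s++)    /* S103 */
        stack_bind_queue(s, s % policy.queues_to_enable,
                            s % policy.queues_to_enable);
    return 0;
}

int main(void)
{
    return init_multi_stack(4, 4) == 0 ? 0 : 1;
}
```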
  • After the first socket is created, the application calls the bind function to bind the first socket to the specified IP address and listens for packet requests on the specified port by calling the listen function.
  • When the listen call of the first socket is received, the first socket is deployed on all protocol stacks, and each protocol stack holds a PCB for the first socket.
  • The PCB includes the various variables involved in establishing the connection and processing data packets.
  • S11: Receive a data packet requesting a connection.
  • S12: Determine the protocol type of the data packet requesting the connection. If the protocol type is the UDP protocol, S13 is performed; if the protocol type is the TCP protocol, S14 is performed.
  • S13: Protocol processing is performed by the protocol stack that received the data packet requesting the connection.
  • In other embodiments, if the protocol type is the UDP protocol, the data packet may also be processed by other protocol stacks.
  • S14: The data packet of the connection request sent by the peer end is received, a second socket is created according to the actual network running state of each protocol stack, and the peer end is notified whether the second socket was created successfully. If the creation succeeds, the session connection is established and the session can proceed; if the creation fails, establishment of the session connection fails and the connection is broken. In other embodiments of the present invention, the received data packet of the connection request sent by the peer end is forwarded to the application; after the application confirms, the second socket is created and the result is returned to the peer end.
  • S15: Select a protocol stack for the second socket according to the load condition of each protocol stack. At the same time, the protocol stack is notified to create a corresponding PCB for the second socket, thereby establishing a session connection with the peer end.
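The patent does not specify the load metric used in S15. The sketch below assumes a simple per-stack counter of active sockets plus bytes in flight and picks the stack with the smallest values, which is one straightforward way to realize "select a protocol stack according to the load condition"; the metric and struct are illustrative only.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative per-stack load counters; the real metric is not specified
 * in the patent (it could be sockets, packets, bytes, or CPU time). */
struct stack_load {
    uint32_t active_sockets;
    uint64_t bytes_in_flight;
};

/* Pick the least-loaded protocol stack for a newly created second socket. */
static size_t select_stack(const struct stack_load *stacks, size_t count)
{
    size_t best = 0;
    for (size_t i = 1; i < count; i++) {
        if (stacks[i].active_sockets < stacks[best].active_sockets ||
            (stacks[i].active_sockets == stacks[best].active_sockets &&
             stacks[i].bytes_in_flight < stacks[best].bytes_in_flight))
            best = i;
    }
    return best;   /* the selected stack then creates the PCB for the socket */
}
```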
  • A matching flow table is created on the network card according to the distribution policy of the network card, and after data packets are received, the received data packets of the second socket are offloaded to the RSS network card receive queue.
  • Packet distribution is preferably performed based on a quintuple or a triple, and the default distribution rule is preferably a hash rule.
  • In other embodiments, packet distribution may also be based on other tuples, such as a 2-tuple or a 4-tuple.
  • The triple information includes the destination port, the destination IP address, and the protocol; the quintuple information includes the source port, the destination port, the source IP address, the destination IP address, and the protocol.
  • The data packets of the second socket are preferentially distributed, by the default distribution rule of the network card, to the RSS network card receive queue bound to the protocol stack. If the default distribution rule of the network card cannot offload the data packets of the second socket to the RSS network card receive queue bound to the selected protocol stack, a matching flow table is created on the network card according to the distribution policy of the network card, and subsequently received data packets of the second socket are offloaded to that RSS network card receive queue for processing, that is, for the session with the peer end.
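To illustrate the fall-back path just described, the sketch below models a matching flow table entry that steers one connection's quintuple to the RSS receive queue of the selected stack. The nic_flow_table structure and its add/remove helpers are hypothetical; real NICs expose similar steering through driver-specific interfaces (for example ethtool n-tuple filters), which the patent abstracts as "creating a matching flow table on the network card".

```c
#include <stdint.h>
#include <string.h>

#define MAX_FLOW_RULES 64   /* illustrative hardware limit */

/* Quintuple match key plus the RSS receive queue the packets should go to. */
struct flow_rule {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
    uint8_t  in_use;
    uint16_t target_rx_queue;
};

/* Hypothetical in-memory mirror of the NIC's matching flow table. */
struct nic_flow_table {
    struct flow_rule rules[MAX_FLOW_RULES];
};

/* Add a rule steering this connection to the queue bound to the chosen stack.
 * Returns the rule index, or -1 if the table is full. */
static int flow_table_add(struct nic_flow_table *t, const struct flow_rule *match)
{
    for (int i = 0; i < MAX_FLOW_RULES; i++) {
        if (!t->rules[i].in_use) {
            t->rules[i] = *match;
            t->rules[i].in_use = 1;
            return i;   /* a real driver would now program the NIC */
        }
    }
    return -1;
}

/* Remove the rule when the session ends and the socket is released. */
static void flow_table_remove(struct nic_flow_table *t, int index)
{
    if (index >= 0 && index < MAX_FLOW_RULES)
        memset(&t->rules[index], 0, sizeof(t->rules[index]));
}
```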
  • In this way, an appropriate protocol stack is selected for data processing, so that protocol processing is fully parallelized, the protocol processing capability is improved, load balancing of the protocol stacks is achieved, and the data distribution overhead of the CPU is reduced.
  • S17: Perform packet distribution between the second socket and the selected protocol stack. In S17, the correspondence between the second socket and the selected protocol stack is also recorded.
  • Receiving and responding to a request, issued by the application, to release the second socket, or receiving, through the selected protocol stack, and responding to a connection release request sent by the peer end, indicates that the session ends. If the request issued by the application to release the second socket is received and responded to, the protocol stack is notified to release the second socket and its associated PCB; it is then confirmed whether a matching flow table was created on the network card for the second socket, and if so, the matching flow table is deleted. If the connection release request sent by the peer end is received through the selected protocol stack, the selected protocol stack releases the second socket and notifies the application that the second socket has been released; it is then confirmed whether a matching flow table was created on the network card for the second socket, and if so, the matching flow table is deleted. The first socket is released only when there is no longer any communication connection between the client and the peer end.
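A compact sketch of the teardown order described above, assuming hypothetical helpers (stack_release_socket, nic_delete_flow_rule) and a per-connection record of whether a matching flow table entry was installed; the point is only that the flow rule is removed after the socket and its PCB are released, and only if such a rule exists.

```c
#include <stdio.h>

/* Minimal per-connection bookkeeping kept by the load balancing logic. */
struct session {
    int socket_id;
    int stack_id;
    int flow_rule_index;      /* -1 if the default RSS rule was sufficient */
};

/* Stand-ins for the protocol stack and NIC driver calls. */
static void stack_release_socket(int stack_id, int socket_id)
{
    printf("stack %d released socket %d (and its PCB)\n", stack_id, socket_id);
}

static void nic_delete_flow_rule(int index)
{
    printf("deleted matching flow table entry %d\n", index);
}

/* Teardown on session end, whether initiated by the application or the peer. */
static void end_session(struct session *s)
{
    stack_release_socket(s->stack_id, s->socket_id);
    if (s->flow_rule_index >= 0) {            /* only if a rule was installed */
        nic_delete_flow_rule(s->flow_rule_index);
        s->flow_rule_index = -1;
    }
}

int main(void)
{
    struct session s = { .socket_id = 2, .stack_id = 1, .flow_rule_index = 5 };
    end_session(&s);
    return 0;
}
```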
  • FIG. 6 is a schematic diagram of a multi-protocol stack load balancing method according to a second embodiment of the present invention. As shown in FIG. 6, the multi-protocol stack load balancing method includes:
  • S21: Create a first socket, and select a protocol stack for the first socket according to the load condition of each protocol stack to establish a session connection.
  • The initial configuration of the network card and all protocol stacks includes: reading and storing the hardware configuration information of the network card through the network card driver; obtaining user configuration information, forming a network card configuration policy in combination with the hardware configuration information, and writing it to the network card through the network card driver; and starting multiple protocol stacks and binding at least one RSS network card receive queue and one RSS network card send queue to each protocol stack according to the network card configuration policy.
  • The application calls the application programming interface to create the first socket and create the corresponding PCB.
  • The PCB includes the various variables involved in establishing the connection and processing data packets.
  • The application calls the connect function to connect to the IP address and port of a server to establish a connection; in this case the application acts as the client.
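For the client-side first socket described here, a minimal POSIX sketch of the connect call follows; the server address and port are illustrative, and the selection of a protocol stack happens below this API in the scheme described above.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Create the first socket via the socket API (a PCB would be created too). */
    int first_socket = socket(AF_INET, SOCK_STREAM, 0);
    if (first_socket < 0) { perror("socket"); return EXIT_FAILURE; }

    /* Connect to a server's IP address and port: the application is the client. */
    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);                     /* illustrative port */
    if (inet_pton(AF_INET, "192.0.2.10", &server.sin_addr) != 1) {
        close(first_socket);
        return EXIT_FAILURE;
    }

    if (connect(first_socket, (struct sockaddr *)&server, sizeof(server)) < 0)
        perror("connect");

    close(first_socket);
    return 0;
}
```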
  • S22: If the data packets of the first socket cannot be offloaded by the default distribution rule of the network card to the RSS network card receive queue bound to the protocol stack, create a matching flow table on the network card according to the distribution policy of the network card, and after data packets are received, offload the received data packets to that RSS network card receive queue.
  • Packet distribution is preferably performed based on a quintuple or a triple, and the default distribution rule is preferably a hash rule.
  • In other embodiments, packet distribution may also be based on other tuples, such as a 2-tuple or a 4-tuple.
  • The triple information includes the destination port, the destination IP address, and the protocol; the quintuple information includes the source port, the destination port, the source IP address, the destination IP address, and the protocol.
  • The data packets of the first socket are preferentially distributed, by the default distribution rule of the network card, to the RSS network card receive queue bound to the protocol stack. If the data packets of the first socket cannot be offloaded through the hash rule of the network card to the RSS network card receive queue bound to the protocol stack, a matching flow table is created on the network card according to the distribution policy of the network card, and subsequently received data packets are offloaded to that RSS network card receive queue for processing, that is, for the session with the peer end.
  • In this way, an appropriate protocol stack is selected for data processing, so that protocol processing is fully parallelized, the protocol processing capability is improved, load balancing of the protocol stacks is achieved, and the data distribution overhead of the CPU is reduced.
  • S23: Perform packet distribution between the first socket and the selected protocol stack. In S23, the correspondence between the first socket and the selected protocol stack is also recorded.
  • Receiving and responding to a request, issued by the application, to release the first socket, or the protocol stack receiving and responding to a connection release request sent by the peer end, indicates that the session ends. If the request issued by the application to release the first socket is received and responded to, the protocol stack is notified to release the first socket and its associated protocol control block; it is then confirmed whether a matching flow table was created on the network card for the first socket, and if so, the matching flow table is deleted. If the protocol stack receives and responds to the connection release request sent by the peer end, it releases the first socket and notifies the application that the first socket has been released; it is then confirmed whether a matching flow table was created on the network card for the first socket, and if so, the matching flow table is deleted.
  • FIG. 7 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a third embodiment of the present invention.
  • The multi-protocol stack load balancing apparatus 30 includes a processor 301, a memory 302, a receiver 303, and a bus 304.
  • The processor 301, the memory 302, and the receiver 303 are connected by the bus 304.
  • the processor 301 creates a first socket in response to the application's request and deploys the first socket on all of the protocol stacks.
  • the receiver 303 receives the data packet requesting the connection.
  • The processor 301 determines the protocol type of the data packet requesting the connection. If the protocol type is the TCP protocol, the processor 301 creates a second socket to establish a session connection, and selects a protocol stack for the second socket according to the load condition of each protocol stack.
  • When the data packets of the second socket cannot be offloaded by the default distribution rule of the network card to the RSS network card receive queue bound to the selected protocol stack, the processor 301 creates a matching flow table on the network card according to the distribution policy of the network card and offloads the received data packets of the second socket to that RSS network card receive queue.
  • the memory 302 records the correspondence between the second socket and the selected protocol stack.
  • the processor 301 performs data packet distribution between the second socket and the selected protocol stack; after the session is completed, the protocol stack releases the second socket, and the processor 301 deletes the matching flow table created on the network card.
  • the memory 302 reads and stores the hardware configuration information of the network card, including the number of RSS queues and the maximum number of flow table matches that can be supported.
  • The processor 301 acquires user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy to the network card.
  • The processor 301 starts a plurality of protocol stacks and, according to the network card configuration policy, binds at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • The user configuration information includes the number of network card hardware queues to be enabled and the distribution policy for data packets on the network card.
  • A corresponding PCB is also created, where the PCB includes the various variables involved in establishing the connection and processing data packets. If the processor 301 determines that the protocol type is the UDP protocol, protocol processing is performed by the protocol stack that received the data packet requesting the connection; in other embodiments of the present invention, other protocol stacks may also process it.
  • The receiver 303 receives the data packet of the connection request sent by the peer end, and the processor 301 creates a second socket according to the actual network running state of each protocol stack and notifies the peer end whether the second socket was created successfully. If the creation succeeds, the session connection is established and the session can proceed; if the creation fails, establishment of the session connection fails and the connection is broken. In other embodiments of the present invention, the receiver 303 forwards the received data packet of the connection request sent by the peer end to the application; after the application confirms, the second socket is created and the result is returned to the peer end.
  • The data packets of the second socket are preferentially offloaded, by the default distribution rule of the network card, to the RSS network card receive queue bound to the selected protocol stack; if the data packets of the second socket cannot be offloaded by the default distribution rule to the RSS network card receive queue bound to the selected protocol stack, the processor 301 creates a matching flow table on the network card according to the distribution policy of the network card, and after data packets are received, the receiver 303 offloads the received data packets of the second socket to that RSS network card receive queue.
  • Packet distribution is preferably performed based on a quintuple or a triple, and the default distribution rule is preferably a hash rule.
  • In other embodiments, packet distribution may also be based on other tuples, such as a 2-tuple or a 4-tuple.
  • The receiver 303 receiving a request, issued by the application, to release the second socket, or the connection release request sent by the peer end being received and responded to through the selected protocol stack, indicates that the session ends. If the receiver 303 receives the request issued by the application to release the second socket, the processor 301 responds to the request and notifies the protocol stack to release the second socket; the processor 301 then confirms whether a matching flow table was created on the network card for the second socket, and if so, deletes the matching flow table. If the connection release request sent by the peer end is received and responded to through the selected protocol stack, the selected protocol stack releases the second socket and notifies the application that the second socket has been released, and the processor 301 confirms whether a matching flow table was created on the network card for the second socket, and if so, deletes the matching flow table.
  • Processor 301 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 301 or an instruction in a form of software.
  • The processor 301 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • The processor may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • The steps of the methods disclosed in the embodiments of the present invention may be directly performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • The storage medium is located in the memory 302, and the processor 301 reads the information in the memory 302 and completes the steps of the above method in combination with its hardware.
  • Processor 301 may also be referred to as a CPU.
  • The memory 302 can include read-only memory and random access memory and provides instructions and data to the processor 301.
  • a portion of the memory 302 may also include a Non-Volatile Random Access Memory (NVRAM).
  • the various components of device 30 are coupled together by a bus 304, which may include, in addition to the data bus, a power bus, a control bus, a status signal bus, and the like.
  • the various buses are labeled as bus 304 in the figure.
  • FIG. 8 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a fourth embodiment of the present invention.
  • The multi-protocol stack load balancing apparatus 40 includes a processor 401, a memory 402, a receiver 403, a transmitter 405, and a bus 404.
  • The processor 401, the memory 402, the receiver 403, and the transmitter 405 are connected by the bus 404.
  • The processor 401 creates a first socket and selects a protocol stack for the first socket according to the load condition of each protocol stack to establish a session connection. If the data packets of the first socket cannot be offloaded by the default distribution rule of the network card to the RSS network card receive queue bound to the protocol stack, the processor 401 creates a matching flow table on the network card according to the distribution policy of the network card, and after data packets are received, the receiver 403 offloads the received data packets to that RSS network card receive queue. The memory 402 records the correspondence between the first socket and the selected protocol stack. The processor 401 performs packet distribution between the first socket and the selected protocol stack. After the session ends, the selected protocol stack releases the first socket, and the processor 401 deletes the matching flow table created on the network card.
  • the memory 402 reads and stores the hardware configuration information of the network card, including the number of RSS queues and the maximum number of flow table matches that can be supported.
  • The processor 401 acquires user configuration information, forms a network card configuration policy in combination with the hardware configuration information, and writes the policy to the network card.
  • The processor 401 starts a plurality of protocol stacks and, according to the network card configuration policy, binds at least one RSS network card receive queue and one RSS network card send queue to each protocol stack.
  • the user configuration information includes the number of network card hardware queues to be opened, and the distribution policy of data packets on the network card.
  • When the processor 401 creates the first socket, it also creates a corresponding PCB, which includes the various variables involved in establishing the connection and processing data packets. Specifically, the receiver 403 receives the data packet of the connection request sent by the peer end, and the processor 401 returns a result according to the actual network running state of each protocol stack and notifies the peer end whether the first socket was created successfully. In other embodiments of the present invention, the receiver 403 forwards the received data packet of the connection request sent by the peer end to the application; after the application confirms, the first socket is created and the result is returned to the peer end.
  • Data packets of the first socket are preferentially offloaded, by the network card's default offloading rule, to the RSS receive queue bound to the selected protocol stack; if the default offloading rule of the network card cannot deliver the data packets of the first socket to the RSS receive queue bound to the selected protocol stack, the processor 401 creates a matching flow table entry for these data packets on the network card, which offloads them to that RSS receive queue.
  • Packet offloading is preferably performed based on a 5-tuple or 3-tuple.
  • The default offloading rule is preferably a hash rule.
  • In other embodiments of the present invention, packet offloading may also be based on other tuples, such as a 2-tuple or 4-tuple.
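The tuple-based offloading mentioned above can be pictured with the following sketch, which maps a 5-tuple (or the 3-tuple of destination port, destination IP address and protocol defined later in this document) to a queue index. The FNV-1a hash is only a stand-in chosen for brevity; commercial RSS-capable network cards typically use a keyed Toeplitz hash.

```c
#include <stdint.h>

/* FNV-1a over a byte buffer (stand-in for the NIC's hardware hash). */
static uint32_t fnv1a(const uint8_t *p, unsigned len)
{
    uint32_t h = 2166136261u;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

/* 5-tuple: source/destination IP, source/destination port, protocol. */
int queue_for_5tuple(uint32_t sip, uint32_t dip,
                     uint16_t sport, uint16_t dport,
                     uint8_t protocol, int nqueues)
{
    uint8_t key[13] = {
        sip >> 24, sip >> 16, sip >> 8, sip,
        dip >> 24, dip >> 16, dip >> 8, dip,
        sport >> 8, sport, dport >> 8, dport,
        protocol
    };
    return (int)(fnv1a(key, sizeof(key)) % (uint32_t)nqueues);
}

/* 3-tuple variant: destination port, destination IP and protocol only. */
int queue_for_3tuple(uint32_t dip, uint16_t dport,
                     uint8_t protocol, int nqueues)
{
    uint8_t key[7] = {
        dip >> 24, dip >> 16, dip >> 8, dip,
        dport >> 8, dport, protocol
    };
    return (int)(fnv1a(key, sizeof(key)) % (uint32_t)nqueues);
}
```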
  • the transmitter 405 is used to send connection requests and data packets.
  • the receiver 403 is configured to receive a data packet.
  • The session ends when the receiver 403 receives a request issued by the application to release the second socket, or when the protocol stack receives and responds to a connection release request sent by the peer. If the receiver 403 receives the request issued by the application to release the second socket, the processor 401 responds to the request and notifies the protocol stack to release the second socket; the processor 401 then confirms whether a matching flow table entry has been created on the network card for the second socket and, if so, deletes it.
  • If the protocol stack receives and responds to the connection release request sent by the peer, the protocol stack releases the second socket and notifies the application that the second socket has been released; the processor 401 then confirms whether a matching flow table entry was created on the network card for the second socket and, if so, deletes it.
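One way the "delete the matching flow table entry" step could look on a Linux NIC that supports n-tuple filters is sketched below. The per-socket bookkeeping of the rule location is an assumption; ETHTOOL_SRXCLSRLDEL is the standard ethtool command for removing such a rule, but the embodiments do not mandate this interface.

```c
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/*
 * Remove a previously installed n-tuple steering rule when the socket
 * that needed it is released.  'rule_location' is assumed to have been
 * remembered when the rule was inserted.
 */
int delete_flow_rule(const char *ifname, unsigned rule_location)
{
    struct ethtool_rxnfc nfc;
    struct ifreq ifr;
    int ret, fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&nfc, 0, sizeof(nfc));
    nfc.cmd = ETHTOOL_SRXCLSRLDEL;   /* delete the classification rule */
    nfc.fs.location = rule_location;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&nfc;

    ret = ioctl(fd, SIOCETHTOOL, &ifr);
    close(fd);
    return ret;
}
```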
  • Processor 401 may be an integrated circuit chip with signal processing capabilities. In an implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 401 or by instructions in the form of software.
  • The processor 401 described above may be a general purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention may be implemented or carried out.
  • The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • the software module can be located in a conventional storage medium such as random access memory, flash memory, read only memory, programmable read only memory or electrically erasable programmable memory, registers, and the like.
  • the storage medium is located in the memory 402, and the processor 401 reads the information in the memory 402 and performs the steps of the above method in combination with its hardware.
  • the processor 401 may also be referred to as a Central Processing Unit (CPU).
  • Memory 402 can include read-only memory and random access memory, and provides instructions and data to processor 401.
  • a portion of memory 402 may also include non-volatile random access memory (NVRAM).
  • the various components of device 40 are coupled together by a bus 404, which may include, in addition to the data bus, a power bus, a control bus, a status signal bus, and the like.
  • the various buses are labeled as bus 404 in the figure.
  • In the present invention, a first socket is created in response to an application request and deployed on all protocol stacks; after a connection-request data packet is received, if its protocol type is the Transmission Control Protocol, a second socket is created to establish a session connection, and a protocol stack is selected for the second socket according to the load of each protocol stack; when the data packets of the second socket cannot be offloaded by the network card's default offloading rule to the RSS receive queue bound to the selected protocol stack, a matching flow table entry is created on the network card according to the network card's offloading policy, and the received data packets of the second socket are offloaded to that RSS receive queue. By combining load awareness of the protocol stacks and the application with the RSS receive/transmit queues and flow table matching, load balancing of the protocol stacks is achieved in a multi-protocol-stack environment and the CPU's data distribution overhead is reduced.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

Disclosed in the present invention are a multiple protocol stack load balancing method and apparatus. The method comprises: creating a first socket in response to an application request, and deploying the first socket on all protocol stacks; receiving data packets requesting a connection; judging the protocol type of the data packets requesting the connection, and creating a second socket to establish a session connection if the protocol type is TCP; choosing one protocol stack for the second socket on the basis of the load condition of each protocol stack, creating a matching flow table on a network card on the basis of the shunt policy of the network card when the data packets of the second socket cannot be shunted, through the default shunt rule of the network card, to the RSS network card reception queue bound to the chosen protocol stack, and, upon reception of the data packets, shunting the received data packets of the second socket to the RSS network card reception queue; and completing data packet distribution between the second socket and the chosen protocol stack. In this way, in a multiple-protocol-stack environment, the present invention balances the protocol stack load and reduces the data distribution overhead of the CPU by combining load awareness of the protocol stacks and the application with the RSS network card reception/transmission queues and flow table matching.

Description

Multi-protocol stack load balancing method and apparatus
Technical field
本发明涉及通信技术领域,特别是涉及一种多协议栈负载均衡方法及装置。The present invention relates to the field of communications technologies, and in particular, to a multi-protocol stack load balancing method and apparatus.
Background
云计算的快速发展,使得计算的工作越来越集中在数据中心完成,而终端更多的是利用网络快速的将请求的任务发送到数据中心,所以终端对计算能力的需求在降低,而对网络能力的需求在增加。而协议栈作为应用和物理网络之间的桥梁却没有得到快速发展,已经逐渐成为了两者之间的瓶颈。多个协议栈组合处理单个或者多个端口的扩展方式已经成为必然。此时,要采用分流算法将属于相同连接的数据包转发给不同的协议栈,由于所有协议栈共用一个分发模块,不算是真正的并行处理,容易在分发模块处出现性能瓶颈。The rapid development of cloud computing makes the computing work more and more concentrated in the data center, and the terminal is more to use the network to quickly send the requested task to the data center, so the terminal's demand for computing power is decreasing, but The demand for network capabilities is increasing. As a bridge between applications and physical networks, the protocol stack has not developed rapidly and has gradually become a bottleneck between the two. It has become inevitable that multiple protocol stacks combine to handle the expansion of single or multiple ports. At this time, the shunt algorithm is used to forward the data packets belonging to the same connection to different protocol stacks. Since all the protocol stacks share one distribution module, it is not really parallel processing, and it is easy to have performance bottlenecks at the distribution module.
现在商用10G网卡大部分具备有RSS(Receive Side Scaling,接收方扩展)等分流功能,通过对接收的网络数据包基于三元组/五元组进行哈希(hash),完成硬件分流的任务,将属于同一个连接的数据包发给网卡的同一个RSS网卡收队列,即发给同一个协议栈实例来处理。如图1所示,每个网卡接口100有多个协议栈,如协议栈0,协议栈1、协议栈2、协议栈3,每个协议栈绑定至少1个RSS网卡收、发队列,RSS网卡收队列由对应的协议栈处理。例如,通常通过防火墙网关发出的数据包具有相同的IP(Internet Protocol,网络协议),如果网卡的RSS分流只是基于源、目的IP以及协议三元组来进行哈希(hash)分流的话,这些通过同一个网关的数据包很可能分配到了同一个RSS网卡收队列,导致和该队列相连的协议栈可能存在过载的情况。因此,基于数据包的三元组/五元组进行简单hash分流,存在不能通过感知协议栈的真实负载情况进行灵活的负载均衡分发的缺点。Most commercial 10G network cards now have a shunt function such as RSS (Receive Side Scaling), which performs hardware shunting tasks by hashing received network packets based on triples/quintuples. The data packets belonging to the same connection are sent to the same RSS network card receiving queue of the network card, which is sent to the same protocol stack instance for processing. As shown in FIG. 1 , each network card interface 100 has multiple protocol stacks, such as protocol stack 0, protocol stack 1, protocol stack 2, and protocol stack 3. Each protocol stack is bound with at least one RSS network card receiving and sending queue. The RSS network card receiving queue is processed by the corresponding protocol stack. For example, a packet sent by a firewall gateway usually has the same IP (Internet Protocol). If the RSS offload of the network card is only based on source, destination IP, and protocol triplet, hashing is performed. The data packets of the same gateway are likely to be allocated to the same RSS network card receiving queue, which may cause overload of the protocol stack connected to the queue. Therefore, simple hash shunting based on packet-based triples/quintuples has the disadvantage of being unable to perform flexible load balancing distribution by sensing the real load situation of the protocol stack.
Summary of the invention
Embodiments of the present invention provide a multi-protocol stack load balancing method and apparatus, which, in a multi-protocol-stack environment, combine load awareness of the protocol stacks and the application with RSS network card receive/transmit queues and flow table matching to achieve load balancing of the protocol stacks and reduce the CPU's data distribution overhead.
A first aspect provides a multi-protocol stack load balancing method, the method comprising: creating a first socket in response to a request of an application and deploying it on all protocol stacks; receiving a data packet requesting a connection; determining the protocol type of the data packet requesting the connection and, if the protocol type is the Transmission Control Protocol: creating a second socket to establish a session connection; selecting a protocol stack for the second socket according to the load of each protocol stack; when the data packets of the second socket cannot be offloaded by the network card's default offloading rule to the Receive Side Scaling (RSS) receive queue bound to the selected protocol stack, creating a matching flow table entry on the network card according to the network card's offloading policy and, after data packets are received, offloading the received data packets of the second socket to that RSS receive queue; and performing data packet distribution between the second socket and the selected protocol stack.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the method further includes: after the session ends, releasing the second socket and deleting the matching flow table entry created on the network card.
With reference to the first aspect, in a second possible implementation manner of the first aspect, if the protocol type is the User Datagram Protocol, protocol processing is performed by the protocol stack that received the connection-request data packet.
With reference to the first aspect, in a third possible implementation manner of the first aspect, before the step of creating the first socket in response to the request of the application and deploying it on all protocol stacks, the network card and all protocol stacks are initialized, including: reading and storing the hardware configuration information of the network card; obtaining user configuration information and combining it with the hardware configuration information to form a network card configuration policy, which is written to the network card; and starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS receive queue and one RSS transmit queue to each protocol stack.
With reference to the first aspect, in a fourth possible implementation manner of the first aspect, creating the first socket in response to the request of the application and deploying it on all protocol stacks includes: calling an application programming interface to create the first socket; after the first socket is created, calling the bind function to bind the first socket to a specific IP address and calling the listen function to listen for data packet requests on a specified port; and, when the listen method call of the first socket is received, deploying the first socket on all protocol stacks.
With reference to the first aspect, in a fifth possible implementation manner of the first aspect, the step of creating a second socket to establish a session connection includes: creating the second socket according to the actual network state of each protocol stack.
With reference to the first aspect, in a sixth possible implementation manner of the first aspect, the step of creating a second socket to establish a session connection includes: forwarding the received connection-request data packet sent by the peer to the application; and creating the second socket after the application confirms.
With reference to the first aspect, in a seventh possible implementation manner of the first aspect, the session ends when a request issued by the application to release the second socket is received and responded to, or when a connection release request sent by the peer is received and responded to.
A second aspect provides a multi-protocol stack load balancing method, the method comprising: creating a first socket and, according to the load of each protocol stack, selecting a protocol stack for the first socket to establish a session connection; if the data packets of the first socket cannot be offloaded by the network card's default offloading rule to the RSS receive queue bound to the selected protocol stack, creating a matching flow table entry on the network card according to the network card's offloading policy and, after data packets are received, offloading them to that RSS receive queue; and performing data packet distribution between the first socket and the selected protocol stack.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the method further includes: after the session ends, releasing the first socket and deleting the matching flow table entry created on the network card.
With reference to the second aspect, in a second possible implementation manner of the second aspect, before the first socket is created, the network card and all protocol stacks are initialized, including: reading and storing the hardware configuration information of the network card; obtaining user configuration information and combining it with the hardware configuration information to form a network card configuration policy, which is written to the network card; and starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS receive queue and one RSS transmit queue to each protocol stack.
With reference to the second aspect, in a third possible implementation manner of the second aspect, the session ends when a request issued by the application to release the first socket is received and responded to, or when a connection release request sent by the peer is received and responded to.
A third aspect provides a multi-instance protocol stack load balancing apparatus, the apparatus comprising a protocol stack module, a network card, a data distribution module and a load balancing module, where the protocol stack module includes multiple protocol stacks. The data distribution module is configured to create a first socket in response to a request of an application and deploy it on all protocol stacks. The protocol stack module is configured to receive a data packet requesting a connection and determine the protocol type of the data packet requesting the connection. The data distribution module is further configured to, if the protocol type is the Transmission Control Protocol, create a second socket to establish a session connection. The load balancing module is configured to, if the protocol type is the Transmission Control Protocol, select a protocol stack for the second socket according to the load of each protocol stack and, when the data packets of the second socket cannot be offloaded by the network card's default offloading rule to the RSS receive queue bound to the selected protocol stack, create a matching flow table entry on the network card according to the network card's offloading policy and, after data packets are received, offload the received data packets of the second socket to that RSS receive queue. The data distribution module is further configured to perform data packet distribution between the second socket and the selected protocol stack.
With reference to the third aspect, in a first possible implementation manner of the third aspect, after the session ends, the protocol stack module is further configured to control the selected protocol stack to release the second socket, and the load balancing module is further configured to delete the matching flow table entry created on the network card.
With reference to the third aspect, in a second possible implementation manner of the third aspect, the protocol stack module is further configured to, if the protocol type is the User Datagram Protocol, control the protocol stack that received the connection-request data packet to perform protocol processing.
With reference to the third aspect, in a third possible implementation manner of the third aspect, the load balancing module is further configured to initialize the network card and all protocol stacks, and is specifically configured to read and store the hardware configuration information of the network card, obtain user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card; the protocol stack module is further configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue to each protocol stack.
With reference to the third aspect, in a fourth possible implementation manner of the third aspect, the data distribution module is configured to create the first socket in response to the request of the application and deploy it on all protocol stacks, and is specifically configured to create the first socket in response to a notification that the application has called an application programming interface and to receive the listen method call of the first socket, where, after the first socket is created, the application calls the bind function to bind the first socket to a specific IP address and calls the listen function to listen for data packet requests on a specified port; the load balancing module is further configured to notify the protocol stacks to deploy the first socket on all protocol stacks.
With reference to the third aspect, in a fifth possible implementation manner of the third aspect, the data distribution module is configured to create a second socket to establish a session connection, and is specifically configured to create the second socket according to the actual network state of each protocol stack.
With reference to the third aspect, in a sixth possible implementation manner of the third aspect, when a second socket is created to establish a session connection, the protocol stack module is configured to forward the received connection-request data packet sent by the peer to the application, and the data distribution module is configured to create the second socket after the application confirms.
With reference to the third aspect, in a seventh possible implementation manner of the third aspect, the session ends when the data distribution module receives and responds to a request issued by the application to release the second socket, or when the protocol stack module receives and responds to a connection release request sent by the peer.
A fourth aspect provides a multi-instance protocol stack load balancing apparatus, the apparatus comprising a protocol stack module, a network card, a data distribution module and a load balancing module, where the protocol stack module includes multiple protocol stacks. The data distribution module is configured to create a first socket. The load balancing module is configured to select a protocol stack for the first socket according to the load of each protocol stack so as to establish a session connection and, if the data packets of the first socket cannot be offloaded by the network card's default offloading rule to the RSS receive queue bound to the selected protocol stack, to create a matching flow table entry on the network card according to the network card's offloading policy and, after data packets are received, offload them to that RSS receive queue. The data distribution module is further configured to perform data packet distribution between the first socket and the selected protocol stack.
With reference to the fourth aspect, in a first possible implementation manner of the fourth aspect, after the session ends, the protocol stack module is configured to control the selected protocol stack to release the first socket, and the load balancing module is further configured to delete the matching flow table entry created on the network card.
With reference to the fourth aspect, in a second possible implementation manner of the fourth aspect, the load balancing module is further configured to initialize the network card and all protocol stacks, and is specifically configured to read and store the hardware configuration information of the network card, obtain user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card; the protocol stack module is further configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue to each protocol stack.
With reference to the fourth aspect, in a third possible implementation manner of the fourth aspect, the session ends when the data distribution module receives and responds to a request issued by the application to release the first socket, or when the protocol stack module receives and responds to a connection release request sent by the peer.
According to the multi-protocol stack load balancing method and apparatus provided by the embodiments of the present invention, a first socket is created in response to an application request and deployed on all protocol stacks; after a connection-request data packet is received, if its protocol type is the Transmission Control Protocol, a second socket is created to establish a session connection, and a protocol stack is selected for the second socket according to the load of each protocol stack; when the data packets of the second socket cannot be offloaded by the network card's default offloading rule to the RSS receive queue bound to the selected protocol stack, a matching flow table entry is created on the network card according to the network card's offloading policy, and the received data packets of the second socket are offloaded to that RSS receive queue. In this way, by combining load awareness of the protocol stacks and the application with the RSS receive/transmit queues and flow table matching, a suitable protocol stack is selected for data processing, protocol processing becomes fully parallel and protocol processing capability is improved; in a multi-protocol-stack environment, load balancing of the protocol stacks is achieved and the CPU's data distribution overhead is reduced.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort. In the drawings:
FIG. 1 is a schematic structural diagram of a multi-protocol stack load balancing apparatus in the prior art;
FIG. 2 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a first embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of a multi-protocol stack load balancing method according to a first embodiment of the present invention;
FIG. 5 is a schematic flowchart of the initialization of the multi-protocol stack load balancing method according to the first embodiment of the present invention;
FIG. 6 is a schematic diagram of a multi-protocol stack load balancing method according to a second embodiment of the present invention;
FIG. 7 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a third embodiment of the present invention;
FIG. 8 is still another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a fourth embodiment of the present invention.
Detailed description
The invention will now be described in detail in conjunction with the accompanying drawings and embodiments.
Referring first to FIG. 2, FIG. 2 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a first embodiment of the present invention. As shown in FIG. 2, the multi-protocol stack load balancing apparatus 10 includes a protocol stack module 12, a data distribution module 13, a load balancing module 14, a network card 16 and a network card driver 17, where the protocol stack module 12 includes a plurality of protocol stacks 15 and the network card 16 includes RSS receive/transmit queues 18 and a matching flow table 19; the RSS receive/transmit queues 18 include RSS receive queues and RSS transmit queues.
In this embodiment, the application 11 calls an application programming interface to notify the data distribution module 13 to create a first socket. The data distribution module 13 is configured to create the first socket in response to the request of the application 11 and deploy it on all protocol stacks 15. The protocol stack module 12 is configured to receive the data packet requesting a connection and determine its protocol type; if the protocol type is UDP (User Datagram Protocol), it controls the protocol stack 15 that received the connection-request packet to perform protocol processing, although in other embodiments of the present invention the packet may also be handled by another protocol stack. If the protocol type is TCP (Transmission Control Protocol), the data distribution module 13 is further configured to create a second socket to establish a session connection, and the load balancing module 14 is configured to select a protocol stack 15 for the second socket according to the load of each protocol stack and, when the data packets of the second socket cannot be offloaded by the default offloading rule of the network card 16 to the RSS receive queue bound to the selected protocol stack 15, to create a matching flow table 19 on the network card 16 according to the offloading policy of the network card 16 so that, after data packets are received, they are offloaded to that RSS receive queue. In this way, by sensing the load of the protocol stacks and the application and combining this with the RSS receive/transmit queues and flow table matching, a suitable protocol stack is selected for data processing, protocol processing becomes fully parallel and protocol processing capability is improved. The data distribution module 13 is further configured to perform data packet distribution between the second socket and the selected protocol stack. After the session ends, the protocol stack module 12 is further configured to control the selected protocol stack 15 to release the second socket, and the load balancing module 14 is further configured to delete the matching flow table 19 created on the network card 16. Thus, in a multi-protocol-stack environment, combining load awareness of the protocol stacks 15 and the application with the RSS receive/transmit queues 18 and matching flow table 19 of the network card 16 achieves load balancing of the protocol stacks and reduces the packet distribution overhead of the CPU (Central Processing Unit). The peer may be another client or a server in the network.
In this embodiment, the load balancing module 14 is further configured to initialize the network card 16 and all protocol stacks 15, and specifically to read and store the hardware configuration information of the network card 16 through the network card driver 17, obtain user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card 16 through the network card driver 17; the protocol stack module 12 is further configured to start the protocol stacks 15 and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue to each protocol stack 15. The hardware configuration information of the network card 16 includes the number of RSS receive/transmit queues 18 and the maximum number of flow table entries that can be supported. The user configuration information includes the number of network card hardware queues to be enabled, the distribution policy for data packets on the network card 16, and the like.
In this embodiment, if the destination address of the first socket is "any", the socket is a server-side socket. After the first socket is created successfully, the application 11 calls the bind function to bind the first socket to a specified IP address and listens for data packet requests arriving on a specified port by calling the listen function. When the bind and listen method calls are received, the data distribution module 13 notifies the load balancing module 14 that the socket is a server-side socket. The load balancing module 14 notifies the protocol stacks 15 to deploy the first socket on all protocol stacks 15, so that each protocol stack 15 holds a PCB (Protocol Control Block) for the first socket. The PCB contains the variables involved in establishing the connection and processing data packets. The standard server-side sequence is sketched below.
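The server-side sequence that triggers this deployment can be pictured with ordinary POSIX socket calls. The sketch below is illustrative only; `notify_load_balancer()` is a hypothetical placeholder for the notification from the data distribution module 13 to the load balancing module 14 described above.

```c
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical hook standing in for the data distribution module's
 * notification that this is a server-side (listening) socket, so that
 * its PCB can be replicated on every protocol stack. */
static void notify_load_balancer(int listen_fd)
{
    (void)listen_fd;   /* placeholder only */
}

int start_server(uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* destination address "any" */
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, 128) < 0) {
        close(fd);
        return -1;
    }

    notify_load_balancer(fd);   /* deploy the listening socket on all stacks */
    return fd;
}
```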
In this embodiment, the protocol stack module 12 receives the connection-request data packet sent by the peer, and the data distribution module 13 creates the second socket according to the actual network state of each protocol stack 15 and notifies the peer whether the second socket was created successfully. If the creation succeeds, the session connection is established and the session can proceed; if the creation fails, establishing the session connection fails and the connection is terminated. The actual network state of a protocol stack 15 includes information such as whether a socket with the same port has already been created and whether the number of sockets in the protocol stack 15 has reached the upper limit for socket creation. In other embodiments of the present invention, the protocol stack module 12 forwards the received connection-request data packet sent by the peer to the application 11; after the application 11 confirms, the data distribution module 13 creates the second socket and returns the result to the peer. When the load balancing module 14 selects a protocol stack 15 for the second socket, it notifies that protocol stack 15 to create a corresponding PCB for the second socket. Data packets of the second socket are preferentially offloaded by the default offloading rule of the network card 16 to the RSS receive queue bound to the selected protocol stack 15. If the data packets of the second socket cannot be offloaded by the default offloading rule of the network card 16 to that RSS receive queue, the load balancing module 14 creates a matching flow table 19 on the network card 16 according to the offloading policy of the network card 16 so that, after data packets are received, the data packets of the second socket are offloaded to that RSS receive queue for processing, that is, for the session with the peer. In the embodiments of the present invention, packet offloading is preferably based on a 5-tuple or 3-tuple, and the default offloading rule is preferably a hash rule; in other embodiments of the present invention, packet offloading may also be based on other tuples, such as a 2-tuple or 4-tuple. The 3-tuple information includes the destination port, the destination IP address and the protocol, and the 5-tuple information includes the source port, the destination port, the source IP address, the destination IP address and the protocol.
In this embodiment, the data distribution module 13 also receives data transmission requests of the second socket and distributes them to the corresponding protocol stack 15; after the second socket is created, a protocol stack 15 is selected, in combination with the load balancing information, to process the data packets, and the processed network data packets are distributed to the second socket. One concrete way such a steering entry could be installed on a Linux network card is sketched below.
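When the default hash cannot deliver a connection to the chosen stack's queue, one concrete way a matching flow table entry could be installed on a Linux system is an n-tuple steering rule set through the ethtool ioctl interface, assuming the network card and driver support n-tuple filters. The interface name, addresses and queue number are example parameters, not values taken from the embodiments.

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

/* Steer one TCP/IPv4 5-tuple to a specific hardware RX queue. */
int steer_flow_to_queue(const char *ifname, const char *sip, const char *dip,
                        uint16_t sport, uint16_t dport, int rx_queue)
{
    struct ethtool_rxnfc nfc;
    struct ifreq ifr;
    int ret, fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    memset(&nfc, 0, sizeof(nfc));
    nfc.cmd = ETHTOOL_SRXCLSRLINS;              /* insert a classification rule */
    nfc.fs.flow_type = TCP_V4_FLOW;
    nfc.fs.h_u.tcp_ip4_spec.ip4src = inet_addr(sip);
    nfc.fs.h_u.tcp_ip4_spec.ip4dst = inet_addr(dip);
    nfc.fs.h_u.tcp_ip4_spec.psrc   = htons(sport);
    nfc.fs.h_u.tcp_ip4_spec.pdst   = htons(dport);
    /* Assuming ethtool's convention that set mask bits mean "don't care":
     * ignore the ToS field, match the 5-tuple fields exactly. */
    nfc.fs.m_u.tcp_ip4_spec.tos = 0xff;
    nfc.fs.ring_cookie = (uint64_t)rx_queue;    /* deliver to this RX queue */
    nfc.fs.location = RX_CLS_LOC_ANY;           /* driver picks a free slot,
                                                   if it supports this */

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&nfc;

    ret = ioctl(fd, SIOCETHTOOL, &ifr);
    close(fd);
    return ret;   /* on success the assigned rule location is in nfc.fs.location */
}
```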
In this embodiment, the session ends when the data distribution module 13 receives and responds to a request issued by the application 11 to release the second socket, or when a protocol stack 15 receives and responds to a connection release request sent by the peer. If the data distribution module 13 receives and responds to the request issued by the application 11 to release the second socket, it notifies the corresponding protocol stack 15 to release the second socket and its associated PCB, and notifies the load balancing module 14 that the second socket has been released; after receiving the release notification from the data distribution module 13, the load balancing module 14 checks whether a matching flow table 19 was created on the network card for this second socket and, if so, deletes it by calling the network card driver 17. If a protocol stack 15 receives and responds to a connection release request sent by the peer, the corresponding protocol stack 15 releases the second socket and the data distribution module 13 notifies the application 11 and the load balancing module 14 that the second socket has been released; the load balancing module 14 then checks whether a matching flow table 19 was created on the network card for this second socket and, if so, deletes it by calling the network card driver 17.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a multi-protocol stack load balancing apparatus according to a second embodiment of the present invention. As shown in FIG. 3, the multi-protocol stack load balancing apparatus includes a protocol stack module 22, a data distribution module 23, a load balancing module 24, a network card 26 and a network card driver 27, where the protocol stack module 22 includes a plurality of protocol stacks 25 and the network card 26 includes RSS receive/transmit queues 28 and a matching flow table 29; the RSS receive/transmit queues 28 include RSS receive queues and RSS transmit queues.
In this embodiment, the data distribution module 23 is configured to create a first socket in response to a notification that the application 21 has called an application programming interface; each application 21 has at least one first socket. The load balancing module 24 is configured to select a protocol stack 25 for the first socket according to the load of each protocol stack 25 so as to establish a session connection with the peer and, if the data packets of the first socket cannot be offloaded by the default offloading rule of the network card 26 to the RSS receive queue bound to the selected protocol stack 25, to create a matching flow table 29 on the network card 26 according to the offloading policy of the network card 26 so that, after data packets are received, they are offloaded to that RSS receive queue. The data distribution module 23 is further configured to perform data packet distribution between the first socket and the selected protocol stack 25. After the session ends, the protocol stack module 22 is configured to control the selected protocol stack 25 to release the first socket, and the load balancing module 24 is further configured to delete the matching flow table 29 created on the network card 26. The peer may be a server in the network.
In this embodiment, the load balancing module is further configured to initialize the network card and all protocol stacks, and specifically to read and store the hardware configuration information of the network card 26, obtain user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card 26 through the network card driver 27; the protocol stack module 22 is further configured to start the protocol stacks 25 and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue to each protocol stack 25. The hardware configuration information of the network card 26 includes the number of RSS receive queues and the maximum number of flow table entries that can be supported, and the user configuration information includes the number of network card hardware queues to be enabled, the distribution policy for data packets on the network card 26, and the like.
In this embodiment, the protocol stack module 22 receives the connection-request data packet sent by the peer, and the data distribution module 23 returns a provisional (pseudo) result to the application 21 according to the actual network state of each protocol stack 25 and notifies the peer whether the first socket was created successfully. If the creation succeeds, the session connection is established and the session can proceed; if the creation fails, establishing the session connection fails and the connection is terminated. In other embodiments of the present invention, the protocol stack module 22 forwards the received connection-request data packet sent by the peer to the application 21; after the application 21 confirms, the data distribution module 23 creates the first socket and returns the result to the peer. When creating the first socket, the data distribution module 23 also creates a corresponding PCB. The actual network state of a protocol stack 25 includes information such as whether a socket with the same port has already been created and whether the number of sockets in the protocol stack 25 has reached the upper limit for socket creation. The PCB contains the variables involved in establishing the connection and processing data packets.
In this embodiment, after the first socket is created successfully, the application 21 calls the connect function to connect to the IP address and port of a server and establish a connection; this is the application acting as a client (a minimal client-side sketch follows). After the session connection with the peer is established and data packets are received, the data packets of the first socket are preferentially offloaded by the default offloading rule of the network card 26 to the RSS receive queue bound to the selected protocol stack 25. If the hash rule of the network card 26 cannot offload the data packets to the RSS receive queue bound to the selected protocol stack 25, the load balancing module 24 creates a matching flow table 29 on the network card 26 according to the offloading policy of the network card 26 and the received data packets are offloaded to that RSS receive queue for processing, that is, for the session with the peer. In the embodiments of the present invention, packet offloading is preferably based on a 5-tuple or 3-tuple, and the default offloading rule is preferably a hash rule; in other embodiments, packet offloading may also be based on other tuples, such as a 2-tuple or 4-tuple. The 3-tuple information includes the destination port, the destination IP address and the protocol, and the 5-tuple information includes the source port, the destination port, the source IP address, the destination IP address and the protocol. In this embodiment, the data distribution module 23 also receives data transmission requests of the first socket and distributes them to the corresponding protocol stack 25; after the first socket is created, a protocol stack 25 is selected, in combination with the load balancing information, to process the data packets, and the processed network data packets are distributed to the first socket.
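For the client-side case of this embodiment, the application simply creates a socket and calls connect(); in the framework described above, the data distribution module 23 would intercept this request and the load balancing module 24 would then pick a protocol stack 25 for it. The sketch below is plain POSIX code, included only to make the client-side flow concrete.

```c
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Client side: create a socket and connect to a server IP/port.
 * The server_ip and port are caller-supplied example parameters. */
int connect_to_server(const char *server_ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in peer = {
        .sin_family = AF_INET,
        .sin_port = htons(port),
    };
    if (inet_pton(AF_INET, server_ip, &peer.sin_addr) != 1 ||
        connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```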
In this embodiment, the session ends when the data distribution module 23 receives and responds to a request issued by the application 21 to release the first socket, or when the protocol stack 25 receives and responds to a connection release request sent by the peer. If the data distribution module 23 receives and responds to the request issued by the application 21 to release the first socket, it notifies the selected protocol stack 25 to release the first socket and its associated PCB, and notifies the load balancing module 24 that the first socket has been released; after receiving the release notification from the data distribution module 23, the load balancing module 24 checks whether a matching flow table 29 was created on the network card 26 for this first socket and, if so, deletes it by calling the network card driver 27. If the protocol stack 25 receives and responds to a connection release request sent by the peer, the protocol stack 25 releases the first socket and the data distribution module 23 notifies the application 21 and the load balancing module 24 that the socket has been released; the load balancing module 24 then checks whether a matching flow table 29 was created on the network card for this socket and, if so, deletes it by calling the network card driver 27.
In this embodiment, the data distribution module 23 creates the first socket to establish a session connection; the load balancing module 24 selects a protocol stack 25 for the first socket according to the load of each protocol stack 25; and, when the data packets of the first socket cannot be offloaded by the default offloading rule of the network card 26 to the RSS receive queue bound to the selected protocol stack 25, the load balancing module 24 creates a matching flow table 29 on the network card 26 according to the offloading policy of the network card 26 so that the received data packets are offloaded to that RSS receive queue for processing. In this way, by combining load awareness of the protocol stacks and the application with the RSS receive/transmit queues and flow table matching, a suitable protocol stack is selected for data processing, protocol processing becomes fully parallel and protocol processing capability is improved; in a multi-protocol-stack environment, load balancing of the protocol stacks is achieved and the CPU's data distribution overhead is reduced.
Referring to FIG. 4, FIG. 4 is a schematic diagram of a multi-protocol stack load balancing method according to a first embodiment of the present invention. As shown in FIG. 4, the multi-protocol stack load balancing method includes:
S10: Create a first socket in response to a request of an application and deploy it on all protocol stacks.
Before S10 is performed, the network card and all protocol stacks need to be initialized, as shown in FIG. 5, including:
S101: Read and store the hardware configuration information of the network card. The hardware configuration information includes the number of RSS queues and the maximum number of flow table entries that can be supported, and is read through the network card driver.
S102: Obtain user configuration information and combine it with the hardware configuration information to form a network card configuration policy, which is written to the network card. The user configuration information includes the number of network card hardware queues to be enabled, the distribution policy for data packets on the network card, and the like; the network card configuration policy is also written to the network card through the network card driver.
S103: Start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue to each protocol stack.
After the first socket is created successfully, the application calls the bind function to bind the first socket to a specified IP address and listens for data packet requests arriving on a specified port by calling the listen function.
When the listen method call of the first socket is received, the first socket is deployed on all protocol stacks, so that each protocol stack holds a PCB for the first socket. The PCB contains the variables involved in establishing the connection and processing data packets.
S11: Receive a data packet requesting a connection.
S12: Determine the protocol type of the data packet requesting the connection. If the protocol type is UDP, perform S13; if the protocol type is TCP, perform S14.
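The protocol-type check in S12 amounts to inspecting the protocol field of the connection-request packet's IP header. The sketch below is a minimal illustration for IPv4 only and is not part of the claimed method.

```c
#include <stdint.h>
#include <stddef.h>
#include <netinet/in.h>   /* IPPROTO_TCP, IPPROTO_UDP */

enum dispatch { DISPATCH_TCP, DISPATCH_UDP, DISPATCH_OTHER };

/* Classify a raw IPv4 packet by its protocol field (S12). */
enum dispatch classify_packet(const uint8_t *pkt, size_t len)
{
    if (len < 20 || (pkt[0] >> 4) != 4)   /* too short or not IPv4 */
        return DISPATCH_OTHER;

    switch (pkt[9]) {                     /* IPv4 protocol field */
    case IPPROTO_TCP: return DISPATCH_TCP;   /* S14: create second socket */
    case IPPROTO_UDP: return DISPATCH_UDP;   /* S13: handled by receiving stack */
    default:          return DISPATCH_OTHER;
    }
}
```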
S13: The protocol stack that received the data packet requesting the connection performs the protocol processing. In other embodiments of the present invention, if the protocol type is UDP, the packet may also be processed by another protocol stack.
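A minimal sketch of the dispatch in S12, assuming the received packet begins with an IPv4 header as defined in <netinet/ip.h>; the handler functions are hypothetical placeholders.

```c
/* Sketch of S12: dispatch on the IP protocol field of a received packet. */
#include <netinet/ip.h>
#include <netinet/in.h>

static void handle_udp(const struct iphdr *ip) { (void)ip; /* processed by the receiving stack (S13) */ }
static void handle_tcp(const struct iphdr *ip) { (void)ip; /* create second socket, pick a stack (S14-S15) */ }

static void dispatch(const void *pkt)
{
    const struct iphdr *ip = (const struct iphdr *)pkt;
    if (ip->protocol == IPPROTO_UDP)
        handle_udp(ip);
    else if (ip->protocol == IPPROTO_TCP)
        handle_tcp(ip);
}
```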
S14: Create a second socket to establish a session connection.
In S14, the data packet requesting a connection sent by the peer is received, and the second socket is created according to the actual running state of the network on each protocol stack. The peer is then notified whether the second socket was created successfully: if it was, the session connection is established and the session can proceed; if not, establishing the session connection fails and the connection is terminated. In other embodiments of the present invention, the received connection request sent by the peer is forwarded to the application, the second socket is created after the application confirms it, and the result is returned to the peer.
S15: Select a protocol stack for the second socket according to the load of each protocol stack. The selected protocol stack is also notified to create a corresponding PCB for the second socket, thereby establishing the session connection with the peer.
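One plausible reading of "according to the load of each protocol stack" is a least-loaded selection. The sketch below assumes a per-stack load counter; the load metric itself (queued packets, open connections, CPU time) is not specified by the patent and is an assumption here.

```c
/* Sketch of S15: pick the least-loaded protocol stack for the new socket.
 * stack_load[i] is an assumed per-stack load counter. */
#include <stddef.h>

static int select_stack(const unsigned long *stack_load, size_t nstacks)
{
    size_t best = 0;
    for (size_t i = 1; i < nstacks; i++)
        if (stack_load[i] < stack_load[best])
            best = i;
    return (int)best;
}
```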
S16: When the data packets of the second socket cannot be steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack, create a matching flow table on the network card according to the steering policy of the network card and, after data packets are received, steer the received data packets of the second socket onto that RSS receive queue. In the embodiments of the present invention, packet steering is preferably based on the five-tuple or three-tuple, and the default steering rule is preferably a hash rule; in other embodiments of the present invention, steering may also be based on other tuples, such as a two-tuple or a four-tuple. The three-tuple consists of the destination port, the destination IP address, and the protocol; the five-tuple consists of the source port, the destination port, the source IP address, the destination IP address, and the protocol.
In S16, the data packets of the second socket are preferentially steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack. If the default steering rule cannot steer the second socket's data packets onto that queue, a matching flow table is created on the network card according to the steering policy of the network card, and the received data packets of the second socket are steered onto the RSS receive queue for processing, that is, for the session with the peer. In this way, in a multi-protocol-stack environment, by sensing the load of the protocol stacks and of the application and combining this with the RSS receive and transmit queues and the matching flow table, a suitable protocol stack is selected for data processing, protocol processing runs fully in parallel, protocol processing capability is improved, load balancing among the protocol stacks is achieved, and the CPU's data distribution overhead is reduced.
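The interplay between the default hash rule and the matching flow table in S16 can be sketched as follows. The hash shown is a simplified stand-in (real NICs typically use a Toeplitz hash), and install_flow_table_entry is a hypothetical driver call, not a real NIC API.

```c
/* Sketch of S16: first try the NIC's default hash-based steering over the
 * five-tuple; if the hash does not land on the queue bound to the chosen
 * stack, install a matching flow-table entry instead. */
#include <stdint.h>
#include <stdbool.h>

struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* Simplified stand-in for the NIC's default hash rule. */
static uint32_t default_hash(const struct five_tuple *t)
{
    uint32_t h = t->src_ip ^ t->dst_ip;
    h ^= ((uint32_t)t->src_port << 16) | t->dst_port;
    h ^= t->protocol;
    return h;
}

/* Hypothetical driver call: steer packets matching the tuple to rx_queue. */
static void install_flow_table_entry(const struct five_tuple *t, int rx_queue)
{
    (void)t; (void)rx_queue; /* would program the NIC's matching flow table */
}

/* Returns true if the default rule already delivers to the chosen queue. */
static bool steer_to_stack(const struct five_tuple *t, int chosen_rx_queue, int nqueues)
{
    if ((int)(default_hash(t) % (uint32_t)nqueues) == chosen_rx_queue)
        return true;                              /* default steering is enough */
    install_flow_table_entry(t, chosen_rx_queue); /* otherwise add a flow-table match */
    return false;
}
```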
S17: Distribute data packets between the second socket and the selected protocol stack. In S17, the correspondence between the second socket and the selected protocol stack is also recorded.
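Recording the socket-to-stack correspondence in S17 can be as simple as a lookup table keyed by the socket descriptor. The fixed-size array below is an illustrative assumption.

```c
/* Sketch of S17: remember which protocol stack was selected for each socket
 * so that later packets and control calls go to the same stack. */
#define MAX_SOCKETS 1024

static int socket_to_stack[MAX_SOCKETS];   /* fd -> index of the selected stack */

static void record_binding(int fd, int stack_idx)
{
    if (fd >= 0 && fd < MAX_SOCKETS)
        socket_to_stack[fd] = stack_idx;
}

static int lookup_stack(int fd)
{
    return (fd >= 0 && fd < MAX_SOCKETS) ? socket_to_stack[fd] : -1;
}
```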
S18: After the session ends, release the second socket and delete the matching flow table created on the network card.
In S18, the session ends when a request issued by the application to release the second socket is received and responded to, or when a connection release request sent by the peer is received and responded to through the selected protocol stack. If the application's request to release the second socket is received and responded to, the protocol stack is notified to release the second socket and its associated PCB, and it is checked whether a matching flow table was created on the network card for the second socket; if so, the matching flow table is deleted. If the connection release request sent by the peer is received and responded to through the selected protocol stack, the selected protocol stack releases the second socket and notifies the application that the second socket has been released; it is then checked whether a matching flow table was created on the network card for the second socket, and if so, the matching flow table is deleted. The first socket is released only when the client no longer maintains any communication connection with the peer.
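A sketch of the teardown in S18, assuming a small per-connection record that remembers whether a matching flow-table entry was installed; delete_flow_table_entry is a hypothetical driver call.

```c
/* Sketch of S18: on session end, release the second socket and, if a
 * matching flow-table entry was created for it, delete that entry. */
#include <stdbool.h>
#include <unistd.h>

struct conn_state {
    int  fd;                 /* second socket */
    bool has_flow_entry;     /* was a matching flow-table entry installed? */
    int  flow_entry_id;      /* NIC flow-table entry to remove */
};

static void delete_flow_table_entry(int entry_id) { (void)entry_id; /* hypothetical driver call */ }

static void close_session(struct conn_state *c)
{
    close(c->fd);                            /* release the socket and its PCB */
    if (c->has_flow_entry) {                 /* only if an entry was created */
        delete_flow_table_entry(c->flow_entry_id);
        c->has_flow_entry = false;
    }
}
```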
Referring to FIG. 6, FIG. 6 is a schematic diagram of a multi-protocol stack load balancing method according to a second embodiment of the present invention. As shown in FIG. 6, the multi-protocol stack load balancing method includes:
S21: Create a first socket and, according to the load of each protocol stack, select a protocol stack for the first socket to establish a session connection.
Before S21 is performed, the network card and all protocol stacks are initialized and configured, including: reading and storing the hardware configuration information of the network card through the network card driver; acquiring user configuration information, combining it with the hardware configuration information to form a network card configuration policy, and writing the policy to the network card through the network card driver; and starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack.
The application calls an application programming interface to create the first socket and the corresponding PCB. The PCB contains the variables involved in establishing the connection and in processing data packets. After the first socket is created successfully, the application calls the connect function to connect to the IP address and port of a server and establish a connection; in this case the application acts as the client.
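The client-side setup in this embodiment uses the standard POSIX connect call. The sketch below shows the usual socket/connect sequence; the server address and port are placeholders, and the selection of a protocol stack by the load balancer happens outside this application-level code.

```c
/* Client-side counterpart of S21: the application creates the first socket
 * and connects to a server with the standard POSIX calls. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(80);                       /* server port (placeholder) */
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr); /* server IP (documentation address) */

    if (connect(fd, (struct sockaddr *)&srv, sizeof(srv)) < 0) {
        perror("connect");
        return EXIT_FAILURE;
    }
    /* The session then proceeds on the protocol stack selected by the load balancer. */
    return EXIT_SUCCESS;
}
```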
S22: If the data packets of the first socket cannot be steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack, create a matching flow table on the network card according to the steering policy of the network card and, after data packets are received, steer the received data packets onto that RSS receive queue. In the embodiments of the present invention, packet steering is preferably based on the five-tuple or three-tuple, and the default steering rule is preferably a hash rule; in other embodiments of the present invention, steering may also be based on other tuples, such as a two-tuple or a four-tuple. The three-tuple consists of the destination port, the destination IP address, and the protocol; the five-tuple consists of the source port, the destination port, the source IP address, the destination IP address, and the protocol.
In S22, the data packets of the first socket are preferentially steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack. If the hash rule of the network card cannot steer the first socket's data packets onto that queue, a matching flow table is created on the network card according to the steering policy of the network card, and the received data packets are steered onto the RSS receive queue for processing, that is, for the session with the peer. In this way, in a multi-protocol-stack environment, by sensing the load of the protocol stacks and of the application and combining this with the RSS receive and transmit queues and the matching flow table, a suitable protocol stack is selected for data processing, protocol processing runs fully in parallel, protocol processing capability is improved, load balancing among the protocol stacks is achieved, and the CPU's data distribution overhead is reduced.
S23: Distribute data packets between the first socket and the selected protocol stack. In S23, the correspondence between the first socket and the selected protocol stack is also recorded.
S24: After the session ends, release the first socket and delete the matching flow table created on the network card.
In S24, the session ends when a request issued by the application to release the first socket is received and responded to, or when the protocol stack receives and responds to a connection release request sent by the peer. If the application's request to release the first socket is received and responded to, the protocol stack is notified to release the first socket and its associated protocol control block, and it is checked whether a matching flow table was created on the network card for the first socket; if so, the matching flow table is deleted. If the protocol stack receives and responds to the connection release request sent by the peer, the first socket is released and the application is notified that the first socket has been released; it is then checked whether a matching flow table was created on the network card for the first socket, and if so, the matching flow table is deleted.
Referring to FIG. 7, FIG. 7 is another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a third embodiment of the present invention. As shown in FIG. 7, the multi-protocol stack load balancing apparatus 30 includes a processor 301, a memory 302, a receiver 303, and a bus 304; the processor 301, the memory 302, and the receiver 303 are connected through the bus 304. Specifically:
The processor 301 creates a first socket in response to a request from the application and deploys the first socket on all protocol stacks. The receiver 303 receives a data packet requesting a connection. The processor 301 determines the protocol type of the data packet requesting the connection; if the protocol type is TCP: the processor 301 creates a second socket to establish a session connection; the processor 301 selects a protocol stack for the second socket according to the load of each protocol stack; and when the data packets of the second socket cannot be steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack, the processor 301 creates a matching flow table on the network card according to the steering policy of the network card and steers the received data packets of the second socket onto that RSS receive queue. The memory 302 records the correspondence between the second socket and the selected protocol stack. The processor 301 distributes data packets between the second socket and the selected protocol stack. After the session is completed, the protocol stack releases the second socket, and the processor 301 deletes the matching flow table created on the network card.
In this embodiment, the network card and all protocol stacks need to be initialized and configured. The memory 302 reads and stores the hardware configuration information of the network card, including the number of RSS queues and the maximum number of flow-table matches supported. The processor 301 acquires user configuration information, combines it with the hardware configuration information to form a network card configuration policy, and writes the policy to the network card. The processor 301 starts multiple protocol stacks and, according to the network card configuration policy, binds at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack. The user configuration information includes the number of network card hardware queues to enable and the distribution policy for data packets on the network card.
In this embodiment, when the processor 301 creates the first socket, it also creates the corresponding PCB, where the PCB contains the variables involved in establishing the connection and in processing data packets. If the processor 301 determines that the protocol type is UDP, the protocol stack that received the data packet requesting the connection performs the protocol processing; in other embodiments of the present invention, the packet may also be processed by another protocol stack.
The receiver 303 receives the data packet requesting a connection sent by the peer, and the processor 301 creates the second socket according to the actual running state of the network on each protocol stack and notifies the peer whether the second socket was created successfully: if it was, the session connection is established and the session can proceed; if not, establishing the session connection fails and the connection is terminated. In other embodiments of the present invention, the receiver 303 forwards the received connection request sent by the peer to the application, the second socket is created after the application confirms it, and the result is returned to the peer. The data packets of the second socket are preferentially steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack; if the default steering rule cannot steer the second socket's data packets onto that queue, the processor 301 creates a matching flow table on the network card according to the steering policy of the network card and, after the receiver 303 receives data packets, steers the received data packets of the second socket onto the RSS receive queue. In the embodiments of the present invention, packet steering is preferably based on the five-tuple or three-tuple, and the default steering rule is preferably a hash rule; in other embodiments of the present invention, steering may also be based on other tuples, such as a two-tuple or a four-tuple.
In this embodiment, the session ends when the receiver 303 receives a request issued by the application to release the second socket, or when a connection release request sent by the peer is received and responded to through the selected protocol stack. If the receiver 303 receives the application's request to release the second socket, the processor 301 responds to the request and notifies the protocol stack to release the second socket; the processor 301 then checks whether a matching flow table was created on the network card for the second socket and, if so, deletes it. If the connection release request sent by the peer is received and responded to through the selected protocol stack, the selected protocol stack releases the second socket and notifies the application that the second socket has been released; the processor 301 checks whether a matching flow table was created on the network card for the second socket and, if so, deletes it.
The methods disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 301. The processor 301 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing methods may be completed by integrated logic circuits of hardware in the processor 301 or by instructions in the form of software. The processor 301 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 302; the processor 301 reads the information in the memory 302 and completes the steps of the foregoing methods in combination with its hardware.
The processor 301 may also be referred to as a CPU. The memory 302 may include a read-only memory and a random access memory, and provides instructions and data packets to the processor 301. A part of the memory 302 may further include a non-volatile random access memory (NVRAM). The components of the apparatus 30 are coupled together through the bus 304, where the bus 304 may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity, the various buses are all labeled as the bus 304 in the figure.
Referring to FIG. 8, FIG. 8 is another schematic structural diagram of a multi-protocol stack load balancing apparatus according to a fourth embodiment of the present invention. As shown in FIG. 8, the multi-protocol stack load balancing apparatus 40 includes a processor 401, a memory 402, a receiver 403, a bus 404, and a transmitter 405; the processor 401, the memory 402, the receiver 403, and the transmitter 405 are connected through the bus 404.
In this embodiment, the processor 401 creates a first socket and, according to the load of each protocol stack, selects a protocol stack for the first socket to establish a session connection. If the data packets of the first socket cannot be steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack, the processor 401 creates a matching flow table on the network card according to the steering policy of the network card and, after the receiver 403 receives data packets, steers the received data packets onto that RSS receive queue. The memory 402 records the correspondence between the first socket and the selected protocol stack. The processor 401 distributes data packets between the first socket and the selected protocol stack. After the session ends, the selected protocol stack releases the first socket, and the processor 401 deletes the matching flow table created on the network card.
In this embodiment, the network card and all protocol stacks need to be initialized and configured. The memory 402 reads and stores the hardware configuration information of the network card, including the number of RSS queues and the maximum number of flow-table matches supported. The processor 401 acquires user configuration information, combines it with the hardware configuration information to form a network card configuration policy, and writes the policy to the network card. The processor 401 starts multiple protocol stacks and, according to the network card configuration policy, binds at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack. The user configuration information includes the number of network card hardware queues to enable and the distribution policy for data packets on the network card.
When the processor 401 creates the first socket, it also creates the corresponding PCB, where the PCB contains the variables involved in establishing the connection and in processing data packets. Specifically, the receiver 403 receives the data packet requesting a connection sent by the peer, and the processor 401 returns a pseudo result according to the actual running state of the network on each protocol stack, notifying the peer whether the first socket was created successfully. In other embodiments of the present invention, the receiver 403 forwards the received connection request sent by the peer to the application, the first socket is created after the application confirms it, and the result is returned to the peer. After the receiver 403 receives data packets, the data packets of the first socket are preferentially steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack; if the data packets of the first socket cannot be steered onto that queue by the default steering rule, the processor 401 creates a matching flow table for them on the network card and steers them onto the RSS receive queue. In the embodiments of the present invention, packet steering is preferably based on the five-tuple or three-tuple, and the default steering rule is preferably a hash rule; in other embodiments of the present invention, steering may also be based on other tuples, such as a two-tuple or a four-tuple.
In this embodiment, the transmitter 405 is configured to send connection requests and data packets, and the receiver 403 is configured to receive data packets. The session ends when the receiver 403 receives a request issued by the application to release the first socket, or when the protocol stack receives and responds to a connection release request sent by the peer. If the receiver 403 receives the application's request to release the first socket, the processor 401 responds to the request and notifies the protocol stack to release the first socket; the processor 401 then checks whether a matching flow table was created on the network card for the first socket and, if so, deletes it. If the protocol stack receives and responds to the connection release request sent by the peer, the protocol stack releases the first socket and notifies the application that the first socket has been released; the processor 401 checks whether a matching flow table was created on the network card for the first socket and, if so, deletes it.
The methods disclosed in the foregoing embodiments of the present invention may be applied to, or implemented by, the processor 401. The processor 401 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing methods may be completed by integrated logic circuits of hardware in the processor 401 or by instructions in the form of software. The processor 401 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the methods disclosed in the embodiments of the present invention may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 402; the processor 401 reads the information in the memory 402 and completes the steps of the foregoing methods in combination with its hardware.
The processor 401 may also be referred to as a central processing unit (CPU). The memory 402 may include a read-only memory and a random access memory, and provides instructions and data packets to the processor 401. A part of the memory 402 may further include a non-volatile random access memory (NVRAM). The components of the apparatus 40 are coupled together through the bus 404, where the bus 404 may include, in addition to a data bus, a power bus, a control bus, a status signal bus, and the like. For clarity, the various buses are all labeled as the bus 404 in the figure.
In summary, the present invention creates a first socket in response to a request from an application and deploys it on all protocol stacks. After a data packet requesting a connection is received, if the protocol type of the data packet is the Transmission Control Protocol: a second socket is created to establish a session connection; a protocol stack is selected for the second socket according to the load of each protocol stack; and when the data packets of the second socket cannot be steered by the default steering rule of the network card onto the RSS receive queue bound to the selected protocol stack, a matching flow table is created on the network card according to the steering policy of the network card, and the received data packets of the second socket are steered onto that RSS receive queue. In this way, by sensing the load of the protocol stacks and of the application, and combining this with the RSS receive and transmit queues and flow-table matching, a suitable protocol stack is selected for data processing, protocol processing runs fully in parallel, protocol processing capability is improved, load balancing among the protocol stacks is achieved in a multi-protocol-stack environment, and the CPU's data distribution overhead is reduced.
The foregoing descriptions are merely embodiments of the present invention and are not intended to limit the patent scope of the present invention. Any equivalent structural or process transformation made using the content of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, shall likewise fall within the patent protection scope of the present invention.

Claims (24)

  1. A multi-protocol stack load balancing method, wherein the method comprises:
    creating a first socket in response to a request from an application and deploying it on all protocol stacks;
    receiving a data packet requesting a connection;
    determining a protocol type of the data packet requesting the connection, and if the protocol type is the Transmission Control Protocol:
    creating a second socket to establish a session connection;
    selecting a protocol stack for the second socket according to a load of each protocol stack;
    when data packets of the second socket cannot be steered by a default steering rule of a network card onto a receive side scaling (RSS) receive queue of the network card bound to the selected protocol stack, creating a matching flow table on the network card according to a steering policy of the network card and, after data packets are received, steering the received data packets of the second socket onto the RSS receive queue; and
    distributing data packets between the second socket and the selected protocol stack.
  2. The method according to claim 1, further comprising:
    after the session ends, releasing the second socket and deleting the matching flow table created on the network card.
  3. The method according to claim 1, wherein, if the protocol type is the User Datagram Protocol:
    the protocol stack that received the data packet requesting the connection performs the protocol processing.
  4. The method according to claim 1, wherein, before the step of creating the first socket in response to the request from the application and deploying it on all protocol stacks, the network card and all protocol stacks are initialized and configured, comprising:
    reading and storing hardware configuration information of the network card;
    acquiring user configuration information, combining it with the hardware configuration information to form a network card configuration policy, and writing the policy to the network card; and
    starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack.
  5. The method according to claim 1, wherein creating the first socket in response to the request from the application and deploying it on all protocol stacks comprises:
    calling an application programming interface to create the first socket;
    after the first socket is created, calling a bind function to bind the first socket to a specific IP address, and calling a listen function to listen for data packet requests on a specified port; and
    when the listen method call on the first socket is received, deploying the first socket on all protocol stacks.
  6. The method according to claim 1, wherein the step of creating the second socket to establish the session connection comprises:
    creating the second socket according to an actual running state of the network on each protocol stack.
  7. The method according to claim 1, wherein the step of creating the second socket to establish the session connection comprises:
    forwarding the received data packet requesting the connection sent by a peer to the application; and
    creating the second socket after the application confirms it.
  8. The method according to claim 1, wherein the end of the session comprises receiving and responding to a request issued by the application to release the second socket, or receiving and responding to a connection release request sent by a peer.
  9. A multi-protocol stack load balancing method, wherein the method comprises:
    creating a first socket and, according to a load of each protocol stack, selecting a protocol stack for the first socket to establish a session connection;
    if data packets of the first socket cannot be steered by a default steering rule of a network card onto a receive side scaling (RSS) receive queue of the network card bound to the selected protocol stack, creating a matching flow table on the network card according to a steering policy of the network card and, after data packets are received, steering the received data packets onto the RSS receive queue; and
    distributing data packets between the first socket and the selected protocol stack.
  10. The method according to claim 9, further comprising:
    after the session ends, releasing the first socket and deleting the matching flow table created on the network card.
  11. The method according to claim 9, wherein, before the creating of the first socket, the network card and all protocol stacks are initialized and configured, comprising:
    reading and storing hardware configuration information of the network card;
    acquiring user configuration information, combining it with the hardware configuration information to form a network card configuration policy, and writing the policy to the network card; and
    starting multiple protocol stacks and, according to the network card configuration policy, binding at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack.
  12. The method according to claim 9, wherein the end of the session comprises receiving and responding to a request issued by an application to release the first socket, or receiving and responding to a connection release request sent by a peer.
  13. A multi-instance protocol stack load balancing apparatus, wherein the apparatus comprises a protocol stack module, a network card, a data distribution module, and a load balancing module, the protocol stack module comprising multiple protocol stacks, wherein:
    the data distribution module is configured to create a first socket in response to a request from an application and deploy it on all protocol stacks;
    the protocol stack module is configured to receive a data packet requesting a connection and determine a protocol type of the data packet requesting the connection;
    the data distribution module is configured to create a second socket to establish a session connection if the protocol type is the Transmission Control Protocol;
    the load balancing module is configured to, if the protocol type is the Transmission Control Protocol, select a protocol stack for the second socket according to a load of each protocol stack and, when data packets of the second socket cannot be steered by a default steering rule of the network card onto a receive side scaling (RSS) receive queue of the network card bound to the selected protocol stack, create a matching flow table on the network card according to a steering policy of the network card and, after data packets are received, steer the received data packets of the second socket onto the RSS receive queue; and
    the data distribution module is further configured to distribute data packets between the second socket and the selected protocol stack.
  14. The apparatus according to claim 13, wherein, after the session ends:
    the protocol stack module is further configured to control the selected protocol stack to release the second socket; and
    the load balancing module is further configured to delete the matching flow table created on the network card.
  15. The apparatus according to claim 13, wherein the protocol stack module is further configured to, if the protocol type is the User Datagram Protocol, control the protocol stack that received the data packet requesting the connection to perform the protocol processing.
  16. The apparatus according to claim 13, wherein the load balancing module is further configured to initialize and configure the network card and all protocol stacks, and is specifically configured to read and store hardware configuration information of the network card, acquire user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card; and
    the protocol stack module is further configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack.
  17. The apparatus according to claim 13, wherein the data distribution module being configured to create the first socket in response to the request from the application and deploy it on all protocol stacks specifically comprises: the data distribution module is configured to create the first socket in response to a notification that the application has called an application programming interface, and to receive a listen method call on the first socket, wherein, after the first socket is created, the application calls a bind function to bind the first socket to a specific IP address and calls a listen function to listen for data packet requests on a specified port; and the load balancing module is further configured to notify each protocol stack to deploy the first socket on all protocol stacks.
  18. The apparatus according to claim 13, wherein the data distribution module being configured to create the second socket to establish the session connection specifically comprises: being configured to create the second socket according to an actual running state of the network on each protocol stack.
  19. The apparatus according to claim 13, wherein the protocol stack module being configured to create the second socket to establish the session connection specifically comprises: being configured to forward the received data packet requesting the connection sent by a peer to the application; and the data distribution module is configured to create the second socket after the application confirms it.
  20. The apparatus according to claim 13, wherein the session ends when the data distribution module receives and responds to a request issued by the application to release the second socket, or when the protocol stack module receives and responds to a connection release request sent by the peer.
  21. A multi-instance protocol stack load balancing apparatus, wherein the apparatus comprises a protocol stack module, a network card, a data distribution module, and a load balancing module, the protocol stack module comprising multiple protocol stacks, wherein:
    the data distribution module is configured to create a first socket;
    the load balancing module is configured to select a protocol stack for the first socket according to a load of each protocol stack to establish a session connection and, if data packets of the first socket cannot be steered by a default steering rule of the network card onto a receive side scaling (RSS) receive queue of the network card bound to the selected protocol stack, create a matching flow table on the network card according to a steering policy of the network card and, after data packets are received, steer the received data packets onto the RSS receive queue; and
    the data distribution module is further configured to distribute data packets between the first socket and the selected protocol stack.
  22. The apparatus according to claim 21, wherein, after the session ends:
    the protocol stack module is configured to control the selected protocol stack to release the first socket; and
    the load balancing module is further configured to delete the matching flow table created on the network card.
  23. The apparatus according to claim 21, wherein the load balancing module is further configured to initialize and configure the network card and all protocol stacks, and is specifically configured to read and store hardware configuration information of the network card, acquire user configuration information, combine it with the hardware configuration information to form a network card configuration policy, and write the policy to the network card; and
    the protocol stack module is further configured to start multiple protocol stacks and, according to the network card configuration policy, bind at least one RSS receive queue and one RSS transmit queue of the network card to each protocol stack.
  24. The apparatus according to claim 21, wherein the session ends when the data distribution module receives and responds to a request issued by the application to release the first socket, or when the protocol stack module receives and responds to a connection release request sent by the peer.
PCT/CN2014/088442 2013-11-08 2014-10-13 Multiple protocol stack load balancing method and apparatus WO2015067118A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310554861.X 2013-11-08
CN201310554861.XA CN104639578B (en) 2013-11-08 2013-11-08 Multi-protocol stack load-balancing method and device

Publications (1)

Publication Number Publication Date
WO2015067118A1 true WO2015067118A1 (en) 2015-05-14

Family

ID=53040885

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/088442 WO2015067118A1 (en) 2013-11-08 2014-10-13 Multiple protocol stack load balancing method and apparatus

Country Status (2)

Country Link
CN (1) CN104639578B (en)
WO (1) WO2015067118A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017028399A1 (en) * 2015-08-18 2017-02-23 北京百度网讯科技有限公司 Communication data transmission method and system
US9934033B2 (en) 2016-06-13 2018-04-03 International Business Machines Corporation Operation of a multi-slice processor implementing simultaneous two-target loads and stores
US9983875B2 (en) 2016-03-04 2018-05-29 International Business Machines Corporation Operation of a multi-slice processor preventing early dependent instruction wakeup
US10037211B2 (en) 2016-03-22 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor with an expanded merge fetching queue
US10037229B2 (en) 2016-05-11 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US10042647B2 (en) 2016-06-27 2018-08-07 International Business Machines Corporation Managing a divided load reorder queue
CN109039771A (en) * 2018-09-04 2018-12-18 山东浪潮云投信息科技有限公司 A kind of more network card binding configuration methods and system
US10318419B2 (en) 2016-08-08 2019-06-11 International Business Machines Corporation Flush avoidance in a load store unit
US10346174B2 (en) 2016-03-24 2019-07-09 International Business Machines Corporation Operation of a multi-slice processor with dynamic canceling of partial loads
US10761854B2 (en) 2016-04-19 2020-09-01 International Business Machines Corporation Preventing hazard flushes in an instruction sequencing unit of a multi-slice processor
CN116668375A (en) * 2023-07-31 2023-08-29 新华三技术有限公司 Message distribution method, device, network equipment and storage medium

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789152A (en) * 2016-11-17 2017-05-31 东软集团股份有限公司 Processor extended method and device based on many queue network interface cards
CN107317759A (en) * 2017-06-13 2017-11-03 国家计算机网络与信息安全管理中心 A kind of thread-level dynamic equalization dispatching method of network interface card
CN110022330B (en) * 2018-01-09 2022-01-21 阿里巴巴集团控股有限公司 Processing method and device for network data packet and electronic equipment
CN109165100A (en) * 2018-09-06 2019-01-08 郑州云海信息技术有限公司 A kind of network interface card RSS configuration device and method
CN109586965A (en) * 2018-12-04 2019-04-05 郑州云海信息技术有限公司 A kind of network interface card RSS method of automatic configuration, device, terminal and storage medium
CN111294293B (en) * 2018-12-07 2021-08-10 网宿科技股份有限公司 Network isolation method and device based on user mode protocol stack
CN109451045A (en) * 2018-12-12 2019-03-08 成都九洲电子信息系统股份有限公司 A kind of high-speed message acquisition network card control method can configure customized Ethernet header
CN109617833B (en) * 2018-12-25 2021-12-31 深圳市任子行科技开发有限公司 NAT data auditing method and system of multi-thread user mode network protocol stack system
CN112217772B (en) * 2019-07-11 2022-07-01 中移(苏州)软件技术有限公司 Protocol stack implementation method, device and storage medium
CN112291181B (en) * 2019-07-23 2023-03-10 腾讯科技(深圳)有限公司 Data transmission method based on multiple network cards and related device
CN111143062A (en) * 2019-12-19 2020-05-12 上海交通大学 Balanced partitioning strategy for external load process by user mode protocol stack
CN113395293B (en) * 2021-07-13 2023-09-15 上海睿赛德电子科技有限公司 Network socket realizing method based on RPC
CN113726611A (en) * 2021-09-01 2021-11-30 深圳市大洲智创科技有限公司 Method for flow control based on protocol
CN116192524B (en) * 2023-03-06 2024-03-12 北京亿赛通科技发展有限责任公司 Application firewall based on serial traffic

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005038615A2 (en) * 2003-10-16 2005-04-28 Adaptec, Inc. Methods and apparatus for offloading tcp/ip processing using a protocol driver interface filter driver
US20070297334A1 (en) * 2006-06-21 2007-12-27 Fong Pong Method and system for network protocol offloading
CN101778048A (en) * 2010-02-22 2010-07-14 浪潮(北京)电子信息产业有限公司 Data forwarding method, load balance scheduler and load balance system
CN102970244A (en) * 2012-11-23 2013-03-13 上海寰创通信科技股份有限公司 Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7424710B1 (en) * 2002-12-18 2008-09-09 Vmware, Inc. TCP/IP offloading for virtual machines
US8849972B2 (en) * 2008-11-25 2014-09-30 Polycom, Inc. Method and system for dispatching received sessions between a plurality of instances of an application using the same IP port
CN103049336A (en) * 2013-01-06 2013-04-17 浪潮电子信息产业股份有限公司 Hash-based network card soft interrupt and load balancing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005038615A2 (en) * 2003-10-16 2005-04-28 Adaptec, Inc. Methods and apparatus for offloading tcp/ip processing using a protocol driver interface filter driver
US20070297334A1 (en) * 2006-06-21 2007-12-27 Fong Pong Method and system for network protocol offloading
CN101778048A (en) * 2010-02-22 2010-07-14 浪潮(北京)电子信息产业有限公司 Data forwarding method, load balance scheduler and load balance system
CN102970244A (en) * 2012-11-23 2013-03-13 上海寰创通信科技股份有限公司 Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017028399A1 (en) * 2015-08-18 2017-02-23 北京百度网讯科技有限公司 Communication data transmission method and system
US10609125B2 (en) 2015-08-18 2020-03-31 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and system for transmitting communication data
US9983875B2 (en) 2016-03-04 2018-05-29 International Business Machines Corporation Operation of a multi-slice processor preventing early dependent instruction wakeup
US10564978B2 (en) 2016-03-22 2020-02-18 International Business Machines Corporation Operation of a multi-slice processor with an expanded merge fetching queue
US10037211B2 (en) 2016-03-22 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor with an expanded merge fetching queue
US10346174B2 (en) 2016-03-24 2019-07-09 International Business Machines Corporation Operation of a multi-slice processor with dynamic canceling of partial loads
US10761854B2 (en) 2016-04-19 2020-09-01 International Business Machines Corporation Preventing hazard flushes in an instruction sequencing unit of a multi-slice processor
US10042770B2 (en) 2016-05-11 2018-08-07 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US10255107B2 (en) 2016-05-11 2019-04-09 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US10268518B2 (en) 2016-05-11 2019-04-23 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US10037229B2 (en) 2016-05-11 2018-07-31 International Business Machines Corporation Operation of a multi-slice processor implementing a load/store unit maintaining rejected instructions
US9940133B2 (en) 2016-06-13 2018-04-10 International Business Machines Corporation Operation of a multi-slice processor implementing simultaneous two-target loads and stores
US9934033B2 (en) 2016-06-13 2018-04-03 International Business Machines Corporation Operation of a multi-slice processor implementing simultaneous two-target loads and stores
US10042647B2 (en) 2016-06-27 2018-08-07 International Business Machines Corporation Managing a divided load reorder queue
US10318419B2 (en) 2016-08-08 2019-06-11 International Business Machines Corporation Flush avoidance in a load store unit
CN109039771A (en) * 2018-09-04 2018-12-18 山东浪潮云投信息科技有限公司 Multi-network-card binding configuration method and system
CN116668375A (en) * 2023-07-31 2023-08-29 新华三技术有限公司 Message distribution method, device, network equipment and storage medium
CN116668375B (en) * 2023-07-31 2023-11-21 新华三技术有限公司 Message distribution method, device, network equipment and storage medium

Also Published As

Publication number Publication date
CN104639578A (en) 2015-05-20
CN104639578B (en) 2018-05-11

Similar Documents

Publication Title
WO2015067118A1 (en) Multiple protocol stack load balancing method and apparatus
US10694005B2 (en) Hardware-based packet forwarding for the transport layer
US11277313B2 (en) Data transmission method and corresponding device
US10129216B2 (en) Low latency server-side redirection of UDP-based transport protocols traversing a client-side NAT firewall
US11277341B2 (en) Resilient segment routing service hunting with TCP session stickiness
US10050870B2 (en) Handling multipath flows in service function chaining
WO2017050117A1 (en) Network load balance processing system, method, and apparatus
US20160352870A1 (en) Systems and methods for offloading inline ssl processing to an embedded networking device
US10530644B2 (en) Techniques for establishing a communication connection between two network entities via different network flows
US10367893B1 (en) Method and apparatus of performing peer-to-peer communication establishment
CA2968964A1 (en) Source ip address transparency systems and methods
KR101938623B1 (en) Openflow communication method, system, controller, and service gateway
US10129372B2 (en) Transferring multiple data sets using a multipath connection
CN113228571B (en) Method and apparatus for network optimization for accessing cloud services from a premise network
US11218570B2 (en) Network packet processing method and apparatus and network server
US20080205388A1 (en) Discovery of network devices logically located between a client and a service
WO2024011854A1 (en) Message transmission method and apparatus
US7907603B2 (en) Acceleration of label distribution protocol (LDP) session setup
CN110417632B (en) Network communication method, system and server
US10904207B2 (en) Intelligently routing a response packet along a same connection as a request packet
CN108064441B (en) Method and system for accelerating network transmission optimization
WO2015113437A1 (en) Data packet processing method and device based on parallel protocol stack instances
CN112838983B (en) Data transmission method, system, device, proxy server and storage medium
CN110602262A (en) Router and method for processing data message thereof
WO2019196853A1 (en) Tcp acceleration method and apparatus

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 14859882
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: PCT application non-entry in European phase
    Ref document number: 14859882
    Country of ref document: EP
    Kind code of ref document: A1