WO2015096655A1 - Data distribution method and splitter (数据分流方法及分流器) - Google Patents

Data distribution method and splitter

Info

Publication number
WO2015096655A1
Authority
WO
WIPO (PCT)
Prior art keywords
thread
data packet
transport layer
data
communication protocol
Prior art date
Application number
PCT/CN2014/094180
Other languages
English (en)
French (fr)
Inventor
唐继元
黄彬
陈克平
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to EP14875040.9A (EP3079313B1)
Publication of WO2015096655A1
Priority to US15/190,774 (US10097466B2)

Classifications

    • H - ELECTRICITY
      • H04 - ELECTRIC COMMUNICATION TECHNIQUE
        • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L12/00 - Data switching networks
            • H04L12/02 - Details
              • H04L12/10 - Current supply arrangements
          • H04L45/00 - Routing or path finding of packets in data switching networks
            • H04L45/74 - Address processing for routing
              • H04L45/745 - Address table lookup; Address filtering
          • H04L47/00 - Traffic control in data switching networks
            • H04L47/10 - Flow control; Congestion control
              • H04L47/12 - Avoiding congestion; Recovering from congestion
                • H04L47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
              • H04L47/19 - Flow control; Congestion control at layers above the network layer
                • H04L47/193 - Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related
              • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS
                • H04L47/2483 - Traffic characterised by specific attributes, e.g. priority or QoS involving identification of individual flows
            • H04L47/50 - Queue scheduling
              • H04L47/62 - Queue scheduling characterised by scheduling criteria
                • H04L47/625 - Queue scheduling characterised by scheduling criteria for service slots or service orders
                  • H04L47/6255 - Queue scheduling characterised by scheduling criteria for service slots or service orders; queue load conditions, e.g. longest queue first
          • H04L49/00 - Packet switching elements
            • H04L49/90 - Buffering arrangements
          • H04L67/00 - Network arrangements or protocols for supporting network services or applications
            • H04L67/01 - Protocols
              • H04L67/10 - Protocols in which an application is distributed across nodes in the network
                • H04L67/1001 - Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
                  • H04L67/1027 - Persistence of sessions during load balancing
            • H04L67/50 - Network services
              • H04L67/56 - Provisioning of proxy services
                • H04L67/568 - Storing data temporarily at an intermediate stage, e.g. caching
          • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
            • H04L69/22 - Parsing or analysis of headers
            • H04L69/30 - Definitions, standards or architectural aspects of layered protocol stacks
              • H04L69/32 - Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
                • H04L69/322 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
                  • H04L69/326 - Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the transport layer [OSI layer 4]
    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F9/00 - Arrangements for program control, e.g. control units
            • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
              • G06F9/46 - Multiprogramming arrangements
                • G06F9/54 - Interprogram communication
                  • G06F9/544 - Buffers; Shared memory; Pipes

Definitions

  • The present invention relates to the field of computer technologies, and in particular to a data distribution method and a splitter.
  • Processors have entered the multi-core era. Under a multi-core architecture, parallel processing is usually implemented by deploying multiple threads on multiple cores, so data packets must be distributed to different threads so that the cores can process them in parallel.
  • In the existing data distribution method, the distribution granularity is the process: a data packet is sent to the buffer queue of the process to which the packet corresponds.
  • A thread in that process takes the packet from the process's buffer queue, and the connection information of the packet may be shared with other threads. If it is shared, the thread must use inter-thread mutual exclusion and synchronization mechanisms to guarantee consistent access when it accesses the connection information corresponding to the packet. If another thread happens to be accessing the connection information at that moment, the thread has to wait until the other thread finishes before it can continue.
  • The existing data distribution method therefore has the following disadvantage: multiple threads sharing the buffer queue of the same process causes a large amount of inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation, so the processing capability of the multi-core processor cannot be fully exploited.
  • The embodiments of the present invention provide a data distribution method and a splitter to improve the processing capability of a multi-core processor.
  • In a first aspect, an embodiment of the present invention provides a data distribution method, applied to a data distribution system. The data distribution system includes a splitter, a memory, and multiple threads for processing data, and each thread corresponds to one buffer queue. The memory stores a correspondence between transport layer communication protocols and distribution tables as well as the distribution table corresponding to each transport layer communication protocol, and each distribution table stores a correspondence between identification information of data flows and threads. The method includes:
  • the splitter parses a received data packet to determine the transport layer communication protocol to which the data packet belongs;
  • the splitter obtains, from the data packet, the identification information of the data flow to which the data packet belongs, corresponding to the determined transport layer communication protocol, where the identification information of the data flow is used to distinguish the data flow to which the data packet belongs;
  • the splitter obtains, from the memory and according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs;
  • the splitter determines, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs;
  • the splitter sends the data packet to the buffer queue of the thread corresponding to the data flow, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
  • In a first possible implementation of the first aspect, if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is the two-tuple of the data packet, where the two-tuple includes the destination IP address and destination port of the data packet; the distribution table corresponding to the connectionless transport layer communication protocol includes a correspondence between two-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • With reference to the first possible implementation of the first aspect, in a second possible implementation, the step in which the splitter determines the thread corresponding to the data flow to which the data packet belongs includes: the splitter looks up, in the correspondence between two-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the two-tuple of the data packet; and the splitter determines the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs.
  • In a third possible implementation of the first aspect, if the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the identification information of the data flow is the four-tuple of the data packet, where the four-tuple includes the source IP address, source port, destination IP address, and destination port of the data packet; the distribution table corresponding to the connection-oriented transport layer communication protocol includes a first distribution table and a second distribution table; the first distribution table includes a correspondence between four-tuples and thread identifiers, and each thread identifier corresponds to one thread; the second distribution table includes a correspondence between destination IP address plus destination port and thread identifiers, each thread identifier corresponds to one thread, and the load of each thread is also recorded; the threads corresponding to a destination IP address and destination port are threads in different processes.
  • With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the step in which the splitter determines the thread corresponding to the data flow to which the data packet belongs includes: the splitter looks up, in the correspondence between four-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the four-tuple of the data packet; if that thread identifier exists in the first distribution table, the splitter determines the thread corresponding to it as the thread corresponding to the data flow to which the data packet belongs; if it does not exist in the first distribution table, the splitter looks up, in the correspondence between destination IP address plus destination port and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifiers corresponding to the destination IP address and destination port of the data packet, and determines the thread with the smallest load among the threads corresponding to those thread identifiers as the thread corresponding to the data flow to which the data packet belongs.
  • the thread identifier is a buffer queue address corresponding to the thread.
  • In a further possible implementation, the method further includes: the splitter updates the distribution table according to the state of the threads.
  • In a second aspect, an embodiment of the present invention provides a splitter, applied to a data distribution system, where the data distribution system further includes a memory and multiple threads for processing data, and each thread corresponds to one buffer queue.
  • The memory stores a correspondence between transport layer communication protocols and distribution tables, and each distribution table stores a correspondence between identification information of data flows and threads. The splitter includes:
  • a parsing unit, configured to parse a received data packet to determine the transport layer communication protocol to which the data packet belongs;
  • a first obtaining unit, configured to obtain, from the data packet, the identification information of the data flow to which the data packet belongs, corresponding to the determined transport layer communication protocol, where the identification information of the data flow is used to distinguish the data flow to which the data packet belongs;
  • a second obtaining unit, configured to obtain, from the memory and according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs;
  • a determining unit, configured to determine, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs;
  • a sending unit, configured to send the data packet to the buffer queue of the thread corresponding to the data flow, so that the thread corresponding to the data flow obtains the data packet from the buffer queue (a minimal code sketch of this unit structure is given below).
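  • For readers who prefer code to claim language, the following is a minimal C++ sketch of how the units of the splitter could be expressed as one interface. All type and method names (Packet, FlowKey, determineThread, and so on) are illustrative assumptions introduced here, not names used by the patent, and the second obtaining unit and determining unit are folded into a single method for brevity.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Illustrative placeholder types; none of these names come from the patent.
struct Packet { std::vector<uint8_t> bytes; };
enum class TransportProtocol { UDP, TCP, SCTP };
struct FlowKey { uint32_t src_ip = 0, dst_ip = 0; uint16_t src_port = 0, dst_port = 0; };
using ThreadId = uint32_t;

// Abstract view of the splitter of the second aspect: roughly one method per unit.
class Splitter {
public:
    virtual ~Splitter() = default;
    // parsing unit: determine the transport layer protocol of the packet
    virtual std::optional<TransportProtocol> parse(const Packet& p) const = 0;
    // first obtaining unit: extract the 2-tuple / 4-tuple flow identification
    virtual FlowKey flowKey(const Packet& p, TransportProtocol proto) const = 0;
    // second obtaining unit + determining unit: map the flow to a thread via the
    // distribution table stored in memory for this protocol
    virtual std::optional<ThreadId> determineThread(TransportProtocol proto,
                                                    const FlowKey& key) = 0;
    // sending unit: push the packet into the chosen thread's own buffer queue
    virtual bool send(ThreadId thread, Packet&& p) = 0;
};
```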
  • In a first possible implementation of the second aspect, if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is the two-tuple of the data packet, where the two-tuple includes the destination IP address and destination port of the data packet; the distribution table corresponding to the connectionless transport layer communication protocol includes a correspondence between two-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • With reference to the first possible implementation of the second aspect, in a second possible implementation, the determining unit is specifically configured to: look up, in the correspondence between two-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the two-tuple of the data packet; and determine the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs.
  • In a third possible implementation of the second aspect, the distribution table corresponding to the connection-oriented transport layer communication protocol includes a first distribution table and a second distribution table; the first distribution table includes a correspondence between four-tuples and thread identifiers, and each thread identifier corresponds to one thread; the second distribution table includes a correspondence between destination IP address plus destination port and thread identifiers, each thread identifier corresponds to one thread, and the load of each thread is also recorded; the threads corresponding to a destination IP address and destination port are threads in different processes.
  • With reference to the third possible implementation of the second aspect, in a fourth possible implementation, the determining unit is specifically configured to: look up, in the correspondence between four-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the four-tuple of the data packet; if that thread identifier exists in the first distribution table, determine the thread corresponding to it as the thread corresponding to the data flow to which the data packet belongs; if it does not exist in the first distribution table, look up, in the correspondence between destination IP address plus destination port and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifiers corresponding to the destination IP address and destination port of the data packet, and determine the thread with the smallest load among the threads corresponding to those thread identifiers as the thread corresponding to the data flow to which the data packet belongs.
  • the thread identifier is a buffer queue address corresponding to the thread.
  • In a further possible implementation, the splitter further includes an updating unit, configured to update the distribution table according to the state of the threads.
  • With the above solutions, the splitter uses the thread as the distribution granularity, and different transport layer communication protocols correspond to different distribution tables. According to the distribution table corresponding to the transport layer communication protocol to which a data packet belongs, the splitter assigns the packet to the buffer queue of the thread corresponding to the data flow to which the packet belongs, so that the thread obtains the packet from its own buffer queue. Because every thread has its own independent buffer queue, neither the data packet nor its connection information is shared by multiple threads, which avoids inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation and thereby improves the processing capability of the multi-core processor.
  • FIG. 1 is a schematic flowchart of a data distribution method according to Embodiment 1 of the present invention;
  • FIG. 2 is a schematic diagram of a distribution table according to Embodiment 1 of the present invention;
  • FIG. 3 is a schematic diagram of another distribution table according to Embodiment 1 of the present invention;
  • FIG. 4 is a schematic flowchart of a method for determining the thread corresponding to the data flow to which a data packet belongs according to Embodiment 1 of the present invention;
  • FIG. 5 is a schematic flowchart of another method for determining the thread corresponding to the data flow to which a data packet belongs according to Embodiment 1 of the present invention;
  • FIG. 6 is a schematic structural diagram of a splitter according to Embodiment 2 of the present invention;
  • FIG. 7 is a schematic structural diagram of another splitter according to Embodiment 2 of the present invention.
  • FIG. 1 is a schematic flowchart of the data distribution method provided by Embodiment 1 of the present invention.
  • The data distribution method is applied to a data distribution system that includes a splitter, a memory, and multiple threads for processing data, and each thread corresponds to one buffer queue.
  • The memory stores the correspondence between transport layer communication protocols and distribution tables as well as the distribution table corresponding to each transport layer communication protocol.
  • Each distribution table stores the correspondence between the identification information of data flows and threads.
  • The memory may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
  • The splitter is the entity that performs the data distribution method provided by this embodiment. As shown in FIG. 1, the data distribution method includes the following steps:
  • Step S101: The splitter parses a received data packet to determine the transport layer communication protocol to which the data packet belongs.
  • If the received data packet is a complete packet, or the first fragment of a complete packet, the transport layer communication protocol to which it belongs can be read from the packet header. If the data packet is a fragment of a complete packet but not the first fragment, the splitter looks up the first fragment of that complete packet according to the IP header of the data packet; if the splitter has not yet received the first fragment, it must wait until the first fragment arrives. After the first fragment of the complete packet is found, the communication protocol to which the packet belongs is read from the header of the first fragment (a sketch of this fragment handling is given below).
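  • The wait-for-the-first-fragment behaviour can be sketched as follows for IPv4: non-first fragments carry no transport header, so they are buffered, keyed by the fields that identify the original datagram, until the first fragment arrives. The key layout, class name, and return convention are assumptions made for illustration only; the patent does not prescribe a particular data structure.

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <tuple>
#include <vector>

// Enough of the IPv4 header to match all fragments of one datagram.
struct FragmentKey {
    uint32_t src_ip, dst_ip;
    uint16_t ip_id;
    uint8_t  ip_proto;
    bool operator<(const FragmentKey& o) const {
        return std::tie(src_ip, dst_ip, ip_id, ip_proto) <
               std::tie(o.src_ip, o.dst_ip, o.ip_id, o.ip_proto);
    }
};

class FragmentCache {
public:
    // Returns the transport protocol number once the first fragment of this
    // datagram has been seen; otherwise stores the fragment and returns nothing,
    // i.e. the splitter "waits" for the first fragment to arrive.
    std::optional<uint8_t> classify(const FragmentKey& key,
                                    bool is_first_fragment,
                                    uint8_t ip_protocol,
                                    std::vector<uint8_t> frame) {
        if (is_first_fragment) {
            first_seen_[key] = ip_protocol;          // remember the datagram's protocol
            return ip_protocol;
        }
        auto it = first_seen_.find(key);
        if (it != first_seen_.end()) return it->second;
        pending_[key].push_back(std::move(frame));   // buffer until the first fragment arrives
        return std::nullopt;
    }

private:
    std::map<FragmentKey, uint8_t> first_seen_;
    std::map<FragmentKey, std::vector<std::vector<uint8_t>>> pending_;
};
```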
  • Transport layer communication protocols fall into two categories: connectionless transport layer communication protocols and connection-oriented transport layer communication protocols.
  • The connectionless transport layer communication protocol may specifically be the User Datagram Protocol (UDP), and the connection-oriented transport layer communication protocols may specifically be the Transmission Control Protocol (TCP) and the Stream Control Transmission Protocol (SCTP). A sketch of classifying a packet by its IP protocol field follows the glossary below.
  • UDP: User Datagram Protocol
  • TCP: Transmission Control Protocol
  • SCTP: Stream Control Transmission Protocol
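  • As a small illustration, the IANA protocol numbers carried in the IPv4 "protocol" (or IPv6 "next header") field map directly onto the two categories named above: 6 is TCP, 17 is UDP, and 132 is SCTP. The enum and function names below are illustrative assumptions, not part of the patent.

```cpp
#include <cstdint>
#include <optional>

enum class TransportProtocol { UDP, TCP, SCTP };

// True for the protocols treated as connection-oriented in this document.
inline bool is_connection_oriented(TransportProtocol p) {
    return p == TransportProtocol::TCP || p == TransportProtocol::SCTP;
}

// Map the IP protocol number to a transport layer protocol handled by the splitter.
inline std::optional<TransportProtocol> classify(uint8_t ip_protocol_number) {
    switch (ip_protocol_number) {
        case 6:   return TransportProtocol::TCP;   // IANA protocol number for TCP
        case 17:  return TransportProtocol::UDP;   // IANA protocol number for UDP
        case 132: return TransportProtocol::SCTP;  // IANA protocol number for SCTP
        default:  return std::nullopt;             // not handled in this sketch
    }
}
```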
  • Step S102: The splitter obtains, from the data packet, the identification information of the data flow to which the data packet belongs, corresponding to the determined transport layer communication protocol, where the identification information of the data flow is used to distinguish the data flow to which the data packet belongs.
  • If the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is the two-tuple of the data packet, which consists of the destination IP address and destination port of the packet, and the splitter obtains the two-tuple from the packet.
  • If the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the identification information of the data flow is the four-tuple of the data packet, which consists of the source IP address, source port, destination IP address, and destination port of the packet, and the splitter obtains the four-tuple from the packet (see the flow-key sketch below).
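  • The following is a minimal sketch of building the 2-tuple or 4-tuple flow key from an already-parsed packet. The ParsedPacket structure is an assumption; real code would read these fields out of the IP and transport headers.

```cpp
#include <cstdint>

// Fields assumed to have been extracted from the IP and transport headers.
struct ParsedPacket {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    bool connection_oriented;   // TCP/SCTP -> true, UDP -> false
};

// Flow identification information: a 2-tuple for connectionless protocols,
// a 4-tuple for connection-oriented protocols (unused fields stay zero).
struct FlowKey {
    uint32_t src_ip = 0, dst_ip = 0;
    uint16_t src_port = 0, dst_port = 0;
};

inline FlowKey make_flow_key(const ParsedPacket& p) {
    FlowKey k;
    k.dst_ip = p.dst_ip;          // 2-tuple part: destination IP address
    k.dst_port = p.dst_port;      // 2-tuple part: destination port
    if (p.connection_oriented) {  // the 4-tuple adds the source side
        k.src_ip = p.src_ip;
        k.src_port = p.src_port;
    }
    return k;
}
```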
  • Step S103: The splitter obtains, from the memory and according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs.
  • Optionally, as shown in FIG. 2, if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the distribution table includes a correspondence between two-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • The thread identifier may be the identification number (ID) of the thread, the address of the buffer queue corresponding to the thread, or any other identifier that uniquely determines the thread.
  • Optionally, as shown in FIG. 3, if the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the distribution table includes a first distribution table and a second distribution table.
  • The first distribution table includes a correspondence between four-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • The second distribution table includes a correspondence between destination IP address plus destination port and thread identifiers, each thread identifier corresponds to one thread, and the load of each thread is also recorded.
  • The multiple threads corresponding to a destination IP address and destination port are threads in different processes, so that threads in different processes are bound to the same destination IP address and destination port.
  • The thread identifier may be the identification number (ID) of the thread, the address of the buffer queue corresponding to the thread, or any other identifier that uniquely determines the thread. One possible in-memory layout of these tables is sketched below.
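  • The sketch below shows one possible in-memory layout for the tables of FIG. 2 and FIG. 3. The choice of std::map, the numeric thread identifier, and the struct names are assumptions made for illustration; the patent only requires that each entry map flow identification information to a thread identifier and, in the second table, a load value.

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

using ThreadId = uint32_t;   // could equally be the address of the thread's buffer queue

// Key of the connectionless table (FIG. 2) and of the second table in FIG. 3.
struct TwoTuple {
    uint32_t dst_ip; uint16_t dst_port;
    bool operator<(const TwoTuple& o) const {
        return std::tie(dst_ip, dst_port) < std::tie(o.dst_ip, o.dst_port);
    }
};

// Key of the first table in FIG. 3.
struct FourTuple {
    uint32_t src_ip, dst_ip; uint16_t src_port, dst_port;
    bool operator<(const FourTuple& o) const {
        return std::tie(src_ip, dst_ip, src_port, dst_port) <
               std::tie(o.src_ip, o.dst_ip, o.src_port, o.dst_port);
    }
};

// FIG. 2: connectionless distribution table, 2-tuple -> thread identifier.
using ConnectionlessTable = std::map<TwoTuple, ThreadId>;

// FIG. 3: one entry per listening thread; threads from different processes can
// be bound to the same destination IP address and port.
struct ListeningThread { ThreadId thread; uint64_t load; };

struct ConnectionOrientedTable {
    std::map<FourTuple, ThreadId> first;                      // established connections (4-tuple -> thread)
    std::map<TwoTuple, std::vector<ListeningThread>> second;  // listeners and their current load
};
```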
  • Step S104: The splitter determines, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs.
  • Optionally, when the transport layer communication protocol to which the data packet belongs is a connectionless communication protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 2. As shown in FIG. 4, the process includes the following steps:
  • Step S401: The splitter looks up, in the correspondence between two-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the two-tuple of the data packet.
  • Step S402: The splitter determines the thread corresponding to the thread identifier found for the two-tuple of the data packet as the thread corresponding to the data flow to which the data packet belongs (a sketch of this lookup follows).
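  • Steps S401 and S402 reduce to a single lookup in the connectionless distribution table. The generic helper below is an illustrative assumption and works for any associative container keyed by the 2-tuple, such as the ConnectionlessTable sketched above.

```cpp
#include <optional>

// Generic version of steps S401/S402: one lookup keyed by the flow's 2-tuple.
template <class Table, class Key>
std::optional<typename Table::mapped_type> lookup_flow(const Table& table, const Key& key) {
    auto it = table.find(key);                    // S401: look up the flow's identification information
    if (it == table.end()) return std::nullopt;   // no entry for this 2-tuple
    return it->second;                            // S402: thread identifier bound to this flow
}
```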
  • Optionally, when the transport layer communication protocol to which the data packet belongs is a connection-oriented communication protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 3. As shown in FIG. 5, the process includes the following steps:
  • Step S501: The splitter looks up, in the correspondence between four-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the four-tuple of the data packet.
  • The thread identifier corresponding to the four-tuple of the data packet is looked up in the first distribution table of the distribution table shown in FIG. 3. If that thread identifier exists in the first distribution table, step S502 is performed; if it does not exist, step S503 and step S504 are performed.
  • Step S502: If the thread identifier corresponding to the four-tuple of the data packet exists in the first distribution table, the splitter determines the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs.
  • From the first distribution table in FIG. 3 it can be seen that one combination of source IP address, source port, destination IP address, and destination port corresponds to exactly one thread identifier. Therefore, once the thread identifier corresponding to the four-tuple of the data packet is found in the first distribution table, the thread corresponding to that identifier is directly determined as the thread corresponding to the data flow to which the data packet belongs.
  • Step S503: If the thread identifier corresponding to the four-tuple of the data packet does not exist in the first distribution table, the splitter looks up, in the correspondence between destination IP address plus destination port and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifiers corresponding to the destination IP address and destination port of the data packet.
  • Step S504: The splitter determines the thread with the smallest load among the threads corresponding to the thread identifiers that match the destination IP address and destination port of the data packet as the thread corresponding to the data flow to which the data packet belongs.
  • In the second distribution table, each thread identifier is associated with the load of the corresponding thread. If, in the second distribution table shown in FIG. 3, the destination IP address and destination port of the data packet correspond to only one thread identifier, the thread corresponding to that identifier is directly determined as the thread corresponding to the data flow to which the data packet belongs. If they correspond to multiple thread identifiers, the thread with the smallest load among the threads corresponding to those identifiers is determined as the thread corresponding to the data flow to which the data packet belongs, which balances the load across the threads and thus across the cores of the multi-core processor (a sketch of this two-stage lookup follows).
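  • Steps S501 to S504 can be sketched as a two-stage lookup: first an exact match on the 4-tuple (established connections), then the destination IP/port listener table, picking the listener with the smallest load. The types below repeat the earlier table sketch so the example is self-contained; all names are assumptions for illustration.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <optional>
#include <tuple>
#include <vector>

using ThreadId = uint32_t;

struct FourTuple {
    uint32_t src_ip, dst_ip; uint16_t src_port, dst_port;
    bool operator<(const FourTuple& o) const {
        return std::tie(src_ip, dst_ip, src_port, dst_port) <
               std::tie(o.src_ip, o.dst_ip, o.src_port, o.dst_port);
    }
};
struct TwoTuple {
    uint32_t dst_ip; uint16_t dst_port;
    bool operator<(const TwoTuple& o) const {
        return std::tie(dst_ip, dst_port) < std::tie(o.dst_ip, o.dst_port);
    }
};
struct ListeningThread { ThreadId thread; uint64_t load; };

struct ConnectionOrientedTable {
    std::map<FourTuple, ThreadId> first;                      // FIG. 3, first distribution table
    std::map<TwoTuple, std::vector<ListeningThread>> second;  // FIG. 3, second distribution table
};

// S501-S504: exact connection match first, then the least-loaded listener on (dst ip, dst port).
inline std::optional<ThreadId> lookup_connection_oriented(const ConnectionOrientedTable& t,
                                                          const FourTuple& k) {
    if (auto it = t.first.find(k); it != t.first.end())
        return it->second;                                    // S502: established connection found
    auto it2 = t.second.find(TwoTuple{k.dst_ip, k.dst_port}); // S503: listeners on this address/port
    if (it2 == t.second.end() || it2->second.empty())
        return std::nullopt;
    auto least = std::min_element(it2->second.begin(), it2->second.end(),
                                  [](const ListeningThread& a, const ListeningThread& b) {
                                      return a.load < b.load; // S504: pick the smallest load
                                  });
    return least->thread;
}
```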
  • Step S105: The splitter sends the data packet to the buffer queue of the thread corresponding to the data flow to which the data packet belongs, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
  • After the splitter has determined, from the distribution table, the thread corresponding to the data flow to which the data packet belongs, it sends the data packet to the buffer queue of that thread. The thread then fetches the packet directly from its own buffer queue, so the packet is not shared by multiple threads, which avoids inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation (a sketch of such a single-producer single-consumer queue follows).
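  • Because exactly one producer (the splitter) writes into each buffer queue and exactly one consumer (the owning thread) reads from it, the queue can be a lock-free single-producer single-consumer ring. This is one common way to realise the "no mutual exclusion" property described above, not something prescribed by the patent; the class below is a textbook-style sketch and assumes the element type is default-constructible.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>
#include <utility>

// Minimal single-producer / single-consumer ring buffer.
// The splitter calls push(); only the owning worker thread calls pop().
template <typename T, std::size_t N>
class SpscQueue {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
public:
    bool push(T item) {                        // called by the splitter only
        const auto head = head_.load(std::memory_order_relaxed);
        const auto tail = tail_.load(std::memory_order_acquire);
        if (head - tail == N) return false;    // queue full: caller may retry or drop
        slots_[head & (N - 1)] = std::move(item);
        head_.store(head + 1, std::memory_order_release);
        return true;
    }
    std::optional<T> pop() {                   // called by the owning thread only
        const auto tail = tail_.load(std::memory_order_relaxed);
        const auto head = head_.load(std::memory_order_acquire);
        if (tail == head) return std::nullopt; // queue empty
        T item = std::move(slots_[tail & (N - 1)]);
        tail_.store(tail + 1, std::memory_order_release);
        return item;
    }
private:
    std::array<T, N> slots_{};
    std::atomic<std::size_t> head_{0};  // next slot to write (producer side)
    std::atomic<std::size_t> tail_{0};  // next slot to read (consumer side)
};
```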
  • In addition, the data distribution method provided by Embodiment 1 of the present invention may further include the following step: updating the distribution table according to the state of the threads.
  • Specifically, for the distribution table shown in FIG. 2, when a thread establishes a connection, the thread identifier of that thread and the destination IP address and destination port corresponding to it are added to the distribution table.
  • When the thread corresponding to a thread identifier in the distribution table changes from the connected state to the idle state (in this embodiment, the idle state means that the thread itself is not terminated but its connection is closed or abnormally interrupted), the thread identifier of that thread and the destination IP address and destination port corresponding to it are deleted from the distribution table.
  • For the distribution table shown in FIG. 3, when the thread corresponding to a thread identifier in the first distribution table changes from the connected state to the listening state, the thread identifier of that thread and the source IP address, source port, destination IP address, and destination port corresponding to it are deleted from the first distribution table, and the thread identifier, the destination IP address and destination port corresponding to the thread, and the load of the thread are added to the second distribution table.
  • When the thread corresponding to a thread identifier in the second distribution table changes from the listening state to the connected state, the thread identifier of that thread, the destination IP address and destination port corresponding to it, and its load are deleted from the second distribution table, and the thread identifier and the source IP address, source port, destination IP address, and destination port corresponding to the thread are added to the first distribution table.
  • When a thread changes from the idle state to the listening state, the thread identifier of that thread, the destination IP address and destination port corresponding to it, and its load are added to the second distribution table.
  • When the thread corresponding to a thread identifier in the first or second distribution table becomes idle, the thread identifier of that thread and the source IP address, source port, destination IP address, and destination port corresponding to it are deleted from the first distribution table, or the thread identifier, the destination IP address and destination port corresponding to the thread, and its load are deleted from the second distribution table (a sketch of these state-driven updates follows).
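  • The update rules above form a small state machine over the idle, listening, and connected states. The sketch below shows three of the transitions for the connection-oriented case (FIG. 3), moving a thread between the first and second distribution tables; it repeats the table layout assumed earlier, and the function names and the ListeningThread record are illustrative only.

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

using ThreadId = uint32_t;
struct TwoTuple {
    uint32_t dst_ip; uint16_t dst_port;
    bool operator<(const TwoTuple& o) const { return std::tie(dst_ip, dst_port) < std::tie(o.dst_ip, o.dst_port); }
};
struct FourTuple {
    uint32_t src_ip, dst_ip; uint16_t src_port, dst_port;
    bool operator<(const FourTuple& o) const {
        return std::tie(src_ip, dst_ip, src_port, dst_port) < std::tie(o.src_ip, o.dst_ip, o.src_port, o.dst_port);
    }
};
struct ListeningThread { ThreadId thread; uint64_t load; };
struct ConnectionOrientedTable {
    std::map<FourTuple, ThreadId> first;                      // established connections
    std::map<TwoTuple, std::vector<ListeningThread>> second;  // listeners and their load
};

// Listening -> connected: remove the thread from the second table for
// (dst_ip, dst_port) and record its established 4-tuple in the first table.
inline void on_connected(ConnectionOrientedTable& t, ThreadId id, const FourTuple& conn) {
    auto& listeners = t.second[TwoTuple{conn.dst_ip, conn.dst_port}];
    for (auto it = listeners.begin(); it != listeners.end(); ++it)
        if (it->thread == id) { listeners.erase(it); break; }
    t.first[conn] = id;
}

// Connected -> listening: drop the 4-tuple entry and register the thread again
// as a listener together with its current load.
inline void on_listening(ConnectionOrientedTable& t, ThreadId id, const FourTuple& conn, uint64_t load) {
    t.first.erase(conn);
    t.second[TwoTuple{conn.dst_ip, conn.dst_port}].push_back(ListeningThread{id, load});
}

// Thread became idle: remove it from whichever table still references it.
inline void on_idle(ConnectionOrientedTable& t, ThreadId id, const FourTuple& conn) {
    t.first.erase(conn);
    auto it = t.second.find(TwoTuple{conn.dst_ip, conn.dst_port});
    if (it == t.second.end()) return;
    auto& listeners = it->second;
    for (auto l = listeners.begin(); l != listeners.end(); ++l)
        if (l->thread == id) { listeners.erase(l); break; }
}
```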
  • With the data distribution method provided by Embodiment 1 of the present invention, the splitter uses the thread as the distribution granularity, and different transport layer communication protocols correspond to different distribution tables. According to the distribution table corresponding to the transport layer communication protocol to which a data packet belongs, the splitter assigns the packet to the buffer queue of the thread corresponding to the data flow to which the packet belongs, so that the thread obtains the packet from its own buffer queue. Because every thread has its own independent buffer queue, neither the data packet nor its connection information is shared by multiple threads, which avoids inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation. Moreover, for connection-oriented transport layer communication protocols, multiple threads can listen on the same port, which effectively balances the load among multiple cores and thereby improves the processing capability of the multi-core processor.
  • FIG. 6 is a schematic structural diagram of a splitter provided by Embodiment 2 of the present invention.
  • The splitter is applied to a data distribution system and implements the data distribution method provided by Embodiment 1 of the present invention.
  • The data distribution system further includes a memory and multiple threads for processing data, and each thread corresponds to one buffer queue.
  • The memory stores the correspondence between transport layer communication protocols and distribution tables as well as the distribution table corresponding to each transport layer communication protocol.
  • Each distribution table stores the correspondence between the identification information of data flows and threads.
  • The memory may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one disk memory.
  • As shown in FIG. 6, the splitter includes a parsing unit 610, a first obtaining unit 620, a second obtaining unit 630, a determining unit 640, and a sending unit 650.
  • The parsing unit 610 is configured to parse a received data packet to determine the transport layer communication protocol to which the data packet belongs.
  • After the splitter receives a data packet, if the packet is a complete packet or the first fragment of a complete packet, the parsing unit 610 can read the transport layer communication protocol to which the packet belongs from the packet header. If the packet is a fragment of a complete packet but not the first fragment, the parsing unit 610 looks up the first fragment of that complete packet according to the IP header of the packet; if the splitter has not yet received the first fragment, it must wait until the first fragment arrives. After finding the first fragment of the complete packet, the parsing unit 610 reads the communication protocol to which the packet belongs from the header of the first fragment.
  • Transport layer communication protocols fall into two categories: connectionless transport layer communication protocols and connection-oriented transport layer communication protocols.
  • The connectionless transport layer communication protocol may specifically be UDP, and the connection-oriented transport layer communication protocols may specifically be TCP and SCTP.
  • The first obtaining unit 620 is configured to obtain, from the data packet, the identification information of the data flow to which the data packet belongs, corresponding to the determined transport layer communication protocol, where the identification information of the data flow is used to distinguish the data flow to which the data packet belongs.
  • If the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is the two-tuple of the data packet, which consists of the destination IP address and destination port of the packet, and the first obtaining unit 620 obtains the two-tuple from the packet.
  • If the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the identification information of the data flow is the four-tuple of the data packet, which consists of the source IP address, source port, destination IP address, and destination port of the packet, and the first obtaining unit 620 obtains the four-tuple from the packet.
  • The second obtaining unit 630 is configured to obtain, from the memory and according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs.
  • Optionally, as shown in FIG. 2, if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the distribution table includes a correspondence between two-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • The thread identifier may be the identification number (ID) of the thread, the address of the buffer queue corresponding to the thread, or any other identifier that uniquely determines the thread.
  • Optionally, as shown in FIG. 3, if the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the distribution table includes a first distribution table and a second distribution table.
  • The first distribution table includes a correspondence between four-tuples and thread identifiers, and each thread identifier corresponds to one thread.
  • The second distribution table includes a correspondence between destination IP address plus destination port and thread identifiers, each thread identifier corresponds to one thread, and the load of each thread is also recorded.
  • The multiple threads corresponding to a destination IP address and destination port are threads in different processes, so that threads in different processes are bound to the same destination IP address and destination port.
  • The thread identifier may be the identification number (ID) of the thread, the address of the buffer queue corresponding to the thread, or any other identifier that uniquely determines the thread.
  • The determining unit 640 is configured to determine, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs.
  • Optionally, when the transport layer communication protocol to which the data packet belongs is a connectionless communication protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 2.
  • In this case the determining unit 640 is specifically configured to: look up, in the correspondence between two-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the two-tuple of the data packet, and determine the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs.
  • Optionally, when the transport layer communication protocol to which the data packet belongs is a connection-oriented communication protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 3.
  • In this case the determining unit 640 is specifically configured to: look up, in the correspondence between four-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifier corresponding to the four-tuple of the data packet; if that thread identifier exists in the first distribution table, determine the thread corresponding to it as the thread corresponding to the data flow to which the data packet belongs; if it does not exist in the first distribution table, look up, in the correspondence between destination IP address plus destination port and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, the thread identifiers corresponding to the destination IP address and destination port of the data packet, and determine the thread with the smallest load among the threads corresponding to those thread identifiers as the thread corresponding to the data flow to which the data packet belongs.
  • In the second distribution table, each thread identifier is associated with the load of the corresponding thread. If, in the second distribution table shown in FIG. 3, the destination IP address and destination port of the data packet correspond to only one thread identifier, the thread corresponding to that identifier is directly determined as the thread corresponding to the data flow to which the data packet belongs. If they correspond to multiple thread identifiers, the thread with the smallest load among the threads corresponding to those identifiers is determined as the thread corresponding to the data flow to which the data packet belongs, which balances the load across the threads and thus across the cores of the multi-core processor.
  • The sending unit 650 is configured to send the data packet to the buffer queue of the thread corresponding to the data flow to which the data packet belongs, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
  • After the determining unit 640 has determined, from the distribution table, the thread corresponding to the data flow to which the data packet belongs, the sending unit 650 sends the data packet to the buffer queue of that thread.
  • The thread then fetches the packet directly from its own buffer queue, so the packet is not shared by multiple threads, which avoids inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation.
  • In addition, as shown in FIG. 7, the splitter provided by Embodiment 2 of the present invention may further include an updating unit 660.
  • The updating unit 660 is configured to update the distribution table according to the state of the threads.
  • Specifically, for the distribution table shown in FIG. 2, when a thread establishes a connection, the updating unit 660 adds the thread identifier of that thread and the destination IP address and destination port corresponding to it to the distribution table.
  • When the thread corresponding to a thread identifier in the distribution table changes from the connected state to the idle state (in this embodiment, the idle state means that the thread itself is not terminated but its connection is closed or abnormally interrupted), the updating unit 660 deletes the thread identifier of that thread and the destination IP address and destination port corresponding to it from the distribution table.
  • For the distribution table shown in FIG. 3, when the thread corresponding to a thread identifier in the first distribution table changes from the connected state to the listening state, the updating unit 660 deletes the thread identifier of that thread and the source IP address, source port, destination IP address, and destination port corresponding to it from the first distribution table, and adds the thread identifier, the destination IP address and destination port corresponding to the thread, and the load of the thread to the second distribution table.
  • When the thread corresponding to a thread identifier in the second distribution table changes from the listening state to the connected state, the updating unit 660 deletes the thread identifier of that thread, the destination IP address and destination port corresponding to it, and its load from the second distribution table, and adds the thread identifier and the source IP address, source port, destination IP address, and destination port corresponding to the thread to the first distribution table.
  • When a thread changes from the idle state to the listening state, the updating unit 660 adds the thread identifier of that thread, the destination IP address and destination port corresponding to it, and its load to the second distribution table.
  • When the thread corresponding to a thread identifier in the first or second distribution table becomes idle, the updating unit 660 deletes the thread identifier of that thread and the source IP address, source port, destination IP address, and destination port corresponding to it from the first distribution table, or deletes the thread identifier, the destination IP address and destination port corresponding to the thread, and its load from the second distribution table.
  • With the splitter provided by Embodiment 2 of the present invention, the thread is used as the distribution granularity, and different transport layer communication protocols correspond to different distribution tables. According to the distribution table corresponding to the transport layer communication protocol to which a data packet belongs, the splitter assigns the packet to the buffer queue of the thread corresponding to the data flow to which the packet belongs, so that the thread obtains the packet from its own buffer queue. Because every thread has its own independent buffer queue, neither the data packet nor its connection information is shared by multiple threads, which avoids inter-thread mutual exclusion and synchronization overhead as well as inter-core cache invalidation. Moreover, for connection-oriented transport layer communication protocols, multiple threads can listen on the same port, which effectively balances the load among multiple cores and thereby improves the processing capability of the multi-core processor. An end-to-end sketch that ties these pieces together is given below.
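  • Putting the pieces together, one iteration of a splitter's main loop might look like the sketch below: classify the packet, build the flow key, determine the thread via the distribution table for that protocol, and enqueue to the chosen thread's own queue. Everything here (the types, the SplitterOps record, the drop fallback) is an illustrative assumption layered on the earlier sketches, not the patent's prescribed implementation.

```cpp
#include <cstdint>
#include <optional>

// The concrete types come from the earlier sketches; here they are only named.
struct Packet;
enum class TransportProtocol { UDP, TCP, SCTP };
struct FlowKey { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; };
using ThreadId = uint32_t;

// Operations the dispatch loop relies on; they correspond to the parsing unit
// (610), first/second obtaining units (620/630), determining unit (640), and
// sending unit (650).
struct SplitterOps {
    std::optional<TransportProtocol> (*classify)(const Packet&);
    FlowKey (*flow_key)(const Packet&, TransportProtocol);
    std::optional<ThreadId> (*determine_thread)(TransportProtocol, const FlowKey&);
    bool (*enqueue)(ThreadId, const Packet&);   // push into the thread's own buffer queue
    void (*drop)(const Packet&);                // fallback when no thread matches or the queue is full
};

// One iteration of the splitter for a single received packet.
inline void dispatch_one(const SplitterOps& ops, const Packet& pkt) {
    auto proto = ops.classify(pkt);                  // S101: transport layer protocol
    if (!proto) { ops.drop(pkt); return; }
    FlowKey key = ops.flow_key(pkt, *proto);         // S102: 2-tuple or 4-tuple
    auto thread = ops.determine_thread(*proto, key); // S103 + S104: distribution table lookup
    if (!thread || !ops.enqueue(*thread, pkt))       // S105: send to the thread's queue
        ops.drop(pkt);
}
```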
  • The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two.
  • The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a data distribution method and a splitter. The data distribution method includes: a splitter parses a received data packet to determine the transport layer communication protocol to which the data packet belongs; the splitter obtains, from the data packet, the identification information of the data flow to which the data packet belongs, corresponding to the determined transport layer communication protocol; the splitter obtains, from the memory and according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs; the splitter determines, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs; and the splitter sends the data packet to the buffer queue of the thread corresponding to the data flow, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.

Description

数据分流方法及分流器 技术领域
本发明涉及计算机技术领域,尤其涉及一种数据分流方法及分流器。
背景技术
当今,处理器已经迈入多核时代,在多核架构下,往往以多线程部署在多个核心的方式实现并行处理,那么就需要通过分流的方式将数据包发送至不同线程,以使多个核心实现并行处理。
现有的数据分流方法分流粒度是进程,即将数据包发送至该数据包对应的进程的缓冲队列中。该进程中的线程从该进程的缓冲列队中获取数据包,该数据包的连接信息可能被其他线程共享。如果共享,则该线程访问该数据包对应的连接信息时需要通过线程间互斥和同步机制保证数据的一致性访问。如果此时恰好另一个线程正在访问该连接信息,则该线程需要一直等待,直到另一个线程访问完毕才能继续访问该连接信息。
因此,现有的数据分流方法存在以下缺点:多个线程共享同一个进程的缓冲队列,将引起大量的线程间互斥和同步开销以及核心间cache失效,从而导致多核处理器的处理能力不能充分发挥。
发明内容
有鉴于此,本发明实施例提供一种数据分流方法及分流器,以提高多核处理器的处理能力。
在第一方面,本发明实施例提供一种数据分流方法,应用于数据分流系统中,所述数据分流系统中包括分流器,存储器和用于处理数据的多个线程,每个线程对应一个缓冲队列,所述存储器存储有传输层通信协议和分流表的对应关系及每个传输层通信协议对应的分流表,每个分流表中设置有数据流 的标识信息和线程的对应关系,所述方法包括:
分流器对所接收的数据包进行解析确定所述数据包所属的传输层通信协议;
所述分流器从所述数据包中,获取与确定的所述传输层通信协议对应的所述数据包所属的数据流的标识信息,所述数据流的标识信息用于区分所述数据包所属的数据流;
所述分流器根据所述传输层通信协议和分流表的对应关系,从所述存储器中获取所述数据包所属的传输层通信协议对应的分流表;
所述分流器根据所述数据包所属的传输层通信协议对应的分流表中的数据流的标识信息和线程的对应关系,确定所述数据包所属的数据流对应的线程;
所述分流器将所述数据包发送至所述数据流对应的线程的缓冲队列,以使所述数据流对应的线程从所述缓冲队列获取数据包。
在第一方面的第一种可能实现的方式中,若所述数据包所属的传输层通信协议为面向无连接的传输层通信协议,则所述数据流的标识信息为所述数据包的二元组,所述二元组包括:所述数据包的目的IP地址和目的端口;所述面向无连接的传输层通信协议对应的分流表包括:二元组与线程标示符的对应关系,每个线程标示符对应一个线程。
结合第一方面的第一种可能实现的方式,在第二种可能实现的方式中,所述分流器根据所述数据包所属的传输层通信协议对应的分流表中的数据流的标识信息和线程的对应关系,确定所述数据包所属的数据流对应的线程包括:所述分流器在所述数据包所属面向无连接的传输层通信协议对应的分流表中的二元组与线程标示符的对应关系中,查找所述数据包的二元组对应的线程标示符;所述分流器将所述数据包的二元组对应的线程标示符对应的线程确定为所述数据包所属的数据流对应的线程。
在第一方面的第三种可能实现的方式中,若所述数据包所属的传输层通 信协议为面向连接的传输层通信协议,则所述数据流的标识信息为所述数据包的四元组,所述四元组包括:所述数据包的源IP地址,源端口、目的IP地址和目的端口;所述面向连接的传输层通信协议对应的分流表包括:第一分流表和第二分流表;所述第一分流表包括:四元组与线程标示符的对应关系,每个线程标示符对应一个线程;所述第二分流表包括:目的IP地址和目的端口与线程标示符的对应关系,每个线程标示符对应一个线程,及每个线程的负载;其中,目的IP地址和目的端口对应的线程为不同进程中的线程。
结合第一方面的第三种可能实现的方式,在第四种可能实现的方式中,所述分流器根据所述数据包所属的传输层通信协议对应的分流表中的数据流的标识信息和线程的对应关系,确定所述数据包所属的数据流对应的线程包括:所述分流器在所述数据包所属的面向连接的传输层通信协议对应的第一分流表中的四元组与线程标示符的对应关系中,查找所述数据包的四元组对应的线程标示符;若所述第一分流表中存在所述数据包的四元组对应的线程标识符,则所述分流器将所述数据包的四元组对应的线程标示符对应的线程确定为所述数据包所属的数据流对应的线程;若所述第一分流表中不存在所述数据包的四元组对应的线程标示符,则所述分流器在所述数据包所属的面向连接的传输层通信协议对应的第二分流表中的目的IP地址和目的端口与线程标示符的对应关系中,查找所述数据包的目的IP地址和目的端口对应的线程标示符;所述分流器将所述数据包的目的IP地址和目的端口对应的线程标识符对应的线程中负载最小的线程确定为所述数据包所属的数据流对应的线程。
结合第一方面的第一种可能实现的方式或第一方面的第二种可能实现的方式或第一方面的第三种可能实现的方式或第一方面的第四种可能实现的方式,在第五种可能实现的方式中,所述线程标识符为线程对应的缓冲队列地址。
结合第一方面或第一方面的第一种可能实现的方式或第一方面的第二种 可能实现的方式或第一方面的第三种可能实现的方式或第一方面的第四种可能实现的方式或第一方面的第五种可能实现的方式,在第六种可能实现的方式中,所述方法还包括:分流器根据线程的状态更新分流表。
在第二方面,本发明实施例提供一种分流器,应用于数据分流系统中,所述数据分流系统中还包括存储器和用于处理数据的多个线程,每个线程对应一个缓冲队列,所述存储器存储有传输层通信协议和分流表的对应关系,每个分流表中设置有数据流的标识信息和线程的对应关系,所述分流器包括:
解析单元,用于对所接收的数据包进行解析确定所述数据包所属的传输层通信协议;
第一获取单元,用于从所述数据包中,获取与确定的所述传输层通信协议对应的所述数据包所属的数据流的标识信息,所述数据流的标识信息用于区分所述数据包所属的数据流;
第二获取单元,用于根据所述传输层通信协议和分流表的对应关系中,从所述存储器中获取所述数据包所属的传输层通信协议对应的分流表;
确定单元,用于根据所述数据包所属的传输层通信协议对应的分流表中的数据流的标识信息和线程的对应关系,确定所述数据包所属的数据流对应的线程;
发送单元,用于将所述数据包发送至所述数据流对应的线程的缓冲队列,以使所述数据流对应的线程从所述缓冲队列获取数据包。
在第二方面的第一种可能实现的方式中,若所述数据包所属的传输层通信协议为面向无连接的传输层通信协议,则所述数据流的标识信息为所述数据包的二元组,所述二元组包括:所述数据包的目的IP地址和目的端口;所述面向无连接的传输层通信协议对应的分流表包括:二元组与线程标示符的对应关系,每个线程标示符对应一个线程。
结合第二方面的第一种可能实现的方式,在第二种可能实现的方式中,所述确定单元具体用于:在所述数据包所属面向无连接的传输层通信协议对 应的分流表中的二元组与线程标示符的对应关系中,查找所述数据包的二元组对应的线程标示符;将所述数据包的二元组对应的线程标示符对应的线程确定为所述数据包所属的数据流对应的线程。
在第二方面的第三种可能实现的方式中,所述面向连接的传输层通信协议对应的分流表包括:第一分流表和第二分流表;所述第一分流表包括:四元组与线程标示符的对应关系,每个线程标示符对应一个线程;所述第二分流表包括:目的IP地址和目的端口与线程标示符的对应关系,每个线程标示符对应一个线程,及每个线程的负载;其中,目的IP地址和目的端口对应的线程为不同进程中的线程。
结合第二方面的第三种可能实现的方式,在第四种可能实现的方式中,所述确定单元具体用于:在所述数据包所属的面向连接的传输层通信协议对应的第一分流表中的四元组与线程标示符的对应关系中,查找所述数据包的四元组对应的线程标示符;若所述第一分流表中存在所述数据包的四元组对应的线程标识符,则将所述数据包的四元组对应的线程标示符对应的线程确定为所述数据包所属的数据流对应的线程;若所述第一分流表中不存在所述数据包的四元组对应的线程标示符,则在所述数据包所属的面向连接的传输层通信协议对应的第二分流表中的目的IP地址和目的端口与线程标示符的对应关系中,查找所述数据包的目的IP地址和目的端口对应的线程标示符;将所述数据包的目的IP地址和目的端口对应的线程标识符对应的线程中负载最小的线程确定为所述数据包所属的数据流对应的线程。
结合第二方面的第一种可能实现的方式或第二方面的第二种可能实现的方式或第二方面的第三种可能实现的方式或第二方面的第四种可能实现的方式,在第五种可能实现的方式中,所述线程标识符为线程对应的缓冲队列地址。
结合第二方面或第二方面的第一种可能实现的方式或第二方面的第二种可能实现的方式或第二方面的第三种可能实现的方式或第二方面的第四种可 能实现的方式第二方面的第五种可能实现的方式,在第六种可能实现的方式中,所述分流器还包括:更新单元,用于根据线程的状态更新分流表。
通过上述方案,分流器以线程为分流粒度,并且不同的传输层通信协议对应不同的分流表,分流器根据数据包所属的传输层通信协议对应的分流表将数据包分配到该数据包所属的数据流对应的线程的缓冲队列,以使线程从该线程对应的缓冲队列中获取数据包。由于每个线程都有其独立的缓冲队列,因此数据包和数据包的连接信息不会被多个线程共享,可避免线程间互斥和同步开销以及核心间cache失效,从而提高了多核处理器的处理能力。
附图说明
图1为本发明实施例一提供的一种数据分流方法的流程示意图;
图2为本发明实施例一提供的一种分流表的示意图;
图3为本发明实施例一提供的另一种分流表的示意图;
图4为本发明实施例一提供的一种确定数据包所属的数据流对应的线程的方法的流程示意图;
图5为本发明实施例一提供的另一种确定数据包所属的对应的线程的方法的流程示意图;
图6为本发明实施例二提供的一种分流器的结构示意图;
图7为本发明实施例二提供的另一种分流器的结构示意图。
具体实施方式
为了使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明作进一步地详细描述,显然,所描述的实施例仅仅是本发明一部份实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其它实施例,都属于本发明保护的范围。
下面以图1为例详细说明本发明实施例一提供的一种数据分流方法,图1为本发明实施例一提供的一种数据分流方法的流程示意图。该数据分流方法应用于数据分流系统中,该数据分流系统包括分流器,存储器和用于处理数据的多个线程,每个线程对应一个缓冲队列。其中,存储器用于传输层通信协议和分流表的对应关系及每个传输层通信协议对应的分流表,每个分流表中设置有数据流的标识信息和线程的对应关系。该存储器可以为包含高速RAM存储器,也可以还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。分流器为本实施例提供的数据分流方法的执行主体,如图1所示,该数据分流方法包括以下步骤:
步骤S101,分流器对所接收的数据包进行解析确定该数据包所属的传输层通信协议。
分流器在接收到数据包之后,若该数据包是完整的数据包或完整的数据包的第一个分片,则从数据包头部可以获知该数据包所属的传输层通信协议。若该数据包是完整的数据包的分片且不是第一个分片时,则根据该数据包的IP头查找该完整的数据包的第一个分片,如果分流器还没有接收到该完整的数据包的第一个分片,则需要等待至分流器接收到该完整的数据包的第一个分片。在查找到该完整的数据包的第一个分片后,从该第一个分片头部获知该数据包所属的通信协议。
其中,传输层通信协议分为两种,一种是面向非连接的传输层通信协议,另一种是面向连接的传输层通信协议。面向非连接的传输层通信协议可以具体为用户数据包协议(User Datagram Protocol,UDP),面向非连接的传输层通信协议可以具体为传输控制协议(Transmission Control Protocol,TCP)和流控制传输协议(Stream Control Transmission Protocol,SCTP)。
步骤S102,分流器从该数据包中,获取与确定的传输层通信协议对应的数据包所属的数据流的标识信息,该数据流的标识信息用于区分数据包所属的数据流。
如果数据包所属的传输层通信协议为面向非连接的传输层通信协议,则数据流的标识信息为该数据包的二元组,二元组包括:数据包的目的IP地址和目的端口,那么分流器从该数据包中获取该数据包的二元组。
如果数据包所属的传输层通信协议为面向连接的传输层通信协议,则数据流的标识信息为该数据包的四元组,四元组包括:数据包的源IP地址、源端口、目的IP地址和目的端口,那么分流器从该数据包中获取该数据包的四元组。
步骤S103,分流器根据传输层通信协议和分流表的对应关系,从存储器中获取该数据包所属的传输层通信协议对应的分流表。
可选地,如图2所示,如果数据包所属的传输层通信协议为面向非连接的传输层通信协议,则分流表包括:二元组与线程标示符的对应关系,每个线程标示符对应一个线程。其中,线程标识符可以为线程的身份标识号码ID或线程对应的缓冲队列地址或其他能够唯一确定线程的标识。
可选地,如图3所述,如果数据包所属的传输层通信协议为面向连接的传输层通信协议,则分流表包括:第一分流表和第二分流表。第一分流表包括:四元组与线程标示符的对应关系,每个线程标示符对应一个线程;第二分流表包括:目的IP地址和目的端口与线程标示符的对应关系,每个线程标示符对应一个线程,及每个线程的负载。其中,目的IP地址和目的端口对应的多个线程为不同进程中的线程,从而实现不同进程中的线程与同一组目的IP地址和目的端口绑定。线程标识符可以为线程的身份标识号码ID或线程对应的缓冲队列地址或其他能够唯一确定线程的标识。
步骤S104,分流器根据该数据包所属的传输层通信协议对应的分流表中的数据流的标识信息和线程的对应关系,确定该数据包所属的数据流对应的线程。
可选地,数据包所属的传输层通信协议为面向非连接的通信协议时,则根据图2所示的分流表确定数据包所属的数据流对应的线程,如图4所示, 根据图2所示的分流表确定数据包所属的数据流对应的线程的过程包括以下步骤:
步骤S401,分流器在数据包所属面向无连接的传输层通信协议对应的分流表中的二元组与线程标示符的对应关系中,查找该数据包的二元组对应的线程标示符。
步骤S402,分流器将数据包的二元组对应的线程标示符对应的线程确定为该数据包所属的数据流对应的线程。
由图2所示的分流表可知一组目的IP地址和目的端口只对应一个线程标识符。因此,在确定分流表中存在该数据包的二元组只对应的线程标识符后,就直接将该线程标示符对应的线程确定为该数据包所属的数据流对应的线程。
可选地,数据包所属的传输层通信协议为面向连接的通信协议时,则根据图3所示的分流表确定数据包所属的数据流对应的线程,如图5所示,根据图3所示的分流表确定数据包所属的数据流对应的线程的过程包括以下步骤:
步骤S501,分流器在数据包所属的面向连接的传输层通信协议对应的第一分流表中的四元组与线程标示符的对应关系中,查找该数据包的四元组对应的线程标示符。
在图3所示的分流表中的第一分流表中查找数据包的四元组对应的线程标识符。若第一分流表中存在数据包的四元组对应的线程标示符,则执行步骤S502。若第一分流表中不存在数据包的四元组对应的线程标示符,则执行步骤S503和步骤S504。
步骤S502,若第一分流表中存在数据包的四元组对应的线程标识符,则分流器将数据包的四元组对应的线程标示符对应的线程确定为数据包所属的数据流对应的线程。
由图3所示的分流表中的第一分流表可知一组源IP地址,源端口,目的 IP地址和目的端口只对应一个线程标识符。因此,在确定第一分流表中存在该数据包的四元组只对应的线程标识符后,就直接将该线程标示符对应的线程确定为该数据包所属的数据流对应的线程。
步骤S503,若第一分流表中不存在该数据包的四元组对应的线程标示符,则分流器在该数据包所属的面向连接的传输层通信协议对应的第二分流表中的目的IP地址和目的端口与线程标示符的对应关系中,查找该数据包的目的IP地址和目的端口对应的线程标示符。
步骤S504,分流器将数据包的目的IP地址和目的端口对应的线程标识符对应的线程中负载最小的线程确定为该数据包所属的数据流对应的线程。
从图3所示的分流表可知第二分流表中的每个线程标识符都对应有该线程标识符对应的线程的负载。如果在图4所示的分流表中的第二分流表中,该数据包的目的IP地址和目的端口只对应一个线程标识符,则直接将该线程标示符对应的线程确定为该数据包所属的数据流对应的线程。如果在图3所示的分流表中的第二分流表中,该数据包的目的IP地址和目的端口对应多个线程标识符,则将多个线程标识符对应的线程中负载最小的线程确定为该数据包所属的数据流对应的线程,以实现线程的负载均衡,从而使得多核处理器负载均衡。
步骤S105,分流器将该数据包发送至该数据包所属的数据流对应的线程的缓冲队列,以使该数据流对应的线程从该缓冲队列获取数据包。
在分流器根据分流表确认该数据包所属的数据流对应的线程之后,将该数据包发送至该数据包所属的数据流对应的线程的缓冲队列。然后该线程直接从其对应的缓冲队列获取数据包,因此数据包不会被多个线程共享,可避免线程间互斥和同步开销以及核心间cache失效。
另外,本发明实施例一提供的数据分流方法还可以包括以下步骤:
根据线程的状态更新分流表。
具体的,对于图2所示的分流表,当有线程建立连接时,将该建立连 接的线程对应的线程标示符和该线程对应的目的IP地址和目的端口添加到分流表中。当分流表中的线程标示符对应的线程从连接状态变为空闲状态(本实施例中的空闲状态是指线程未关闭的情况下,线程的连接状态关闭,或线程的连接状态异常中断)时,将该从连接状态变为空闲状态的线程对应的线程标识符及该线程对应的目的IP地址和目的端口从分流表中删除。
对于图3所示的分流表,当第一分流表中的线程标示符对应的线程从连接状态变为监听状态时,将第一分流表中的该线程的线程标示符及该线程对应的源IP地址,源端口、目的IP地址和目的端口的删除,并将该线程的线程标示符、该线程对应的目的IP地址和目的端口及该线程的负载添加到第二分流表中。当第二分流表中的线程标示符对应的线程从监听状态变为连接状态时,将第二分流表中的该线程的线程标示符、该线程对应的目的IP地址和目的端口及该线程的负载删除,并将该线程的线程标示符及该线程对应的源IP地址,源端口、目的IP地址和目的端口添加到第一分流表中。当有线程从空闲状态变为监听状态时,将该从空闲状态变为监听状态的线程对应的线程标示符、该线程对应的目的IP地址和目的端口及该线程的负载添加到第二分流表中。当第一分流表或第二分流表中的线程标示符对应的线程变为空闲状态时,将该变为空闲状态线程的线程标示符及该线程对应的源IP地址,源端口、目的IP地址和目的端口从第一分流表删除,或将该变为空闲状态线程对应的线程标示符,该线程对应的目的IP地址和目的端口及该线程的负载从第二分流表中删除。
利用本发明实施例一提供的数据分流方法,分流器以线程为分流粒度,并且不同的传输层通信协议对应不同的分流表,分流器根据数据包所属的传输层通信协议对应的分流表将数据包分配到该数据包所属的数据流对应的线程的缓冲队列,以使线程从该线程对应的缓冲队列中获取数据包。由于每个线程都有其独立的缓冲队列,因此数据包和数据包的连接信息不会被多个线 程共享,可避免线程间互斥和同步开销以及核心间cache失效。并且针对面向连接的传输层通信协议实现了多个线程对同一个端口进行监听,可有效均衡多个核心之间的负载,从而提高了多核处理器的处理能力。
下面以图6为例详细说明本发明实施例二提供的一种分流器,图6为本发明实施例二提供的一种分流器的结构示意图。该分流器应用于数据分流系统中,用以实现本发明实施例一提供的数据分流方法。该数据分流系统还包括存储器和用于处理数据的多个线程,每个线程对应一个缓冲队列。其中,存储器用于传输层通信协议和分流表的对应关系及每个传输层通信协议对应的分流表,每个分流表中设置有数据流的标识信息和线程的对应关系。该存储器可以为包含高速RAM存储器,也可以还包括非易失性存储器,例如至少一个磁盘存储器。
如图6所示,该分流器包括:解析单元610,第一获取单元620、第二获取单元630,确定单元640和发送单元650。
解析单元610用于对所接收的数据包进行解析确定该数据包所属的传输层通信协议。
分流器在接收到数据包之后,若该数据包是完整的数据包或完整的数据包的第一个分片,则解析单元610从数据包头部可以获知该数据包所属的传输层通信协议。若该数据包是完整的数据包的分片且不是第一个分片时,则解析单元610根据该数据包的IP头查找该完整的数据包的第一个分片,如果分流器还没有接收到该完整的数据包的第一个分片,则需要等待至分流器接收到该完整的数据包的第一个分片。解析单元610在查找到该完整的数据包的第一个分片后,从该第一个分片头部获知该数据包所属的通信协议。
Transport layer communication protocols fall into two categories: connectionless transport layer communication protocols and connection-oriented transport layer communication protocols. A connectionless transport layer communication protocol may specifically be UDP, and a connection-oriented transport layer communication protocol may specifically be TCP or SCTP.
The first obtaining unit 620 is configured to obtain, from the data packet, the identification information, corresponding to the determined transport layer communication protocol, of the data flow to which the data packet belongs, where the identification information of the data flow is used to distinguish the data flow to which the data packet belongs.
If the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is the 2-tuple of the data packet, where the 2-tuple includes the destination IP address and destination port of the data packet; the first obtaining unit 620 then obtains the 2-tuple of the data packet from the data packet.
If the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the identification information of the data flow is the 4-tuple of the data packet, where the 4-tuple includes the source IP address, source port, destination IP address, and destination port of the data packet; the first obtaining unit 620 then obtains the 4-tuple of the data packet from the data packet.
The second obtaining unit 630 is configured to obtain, from the memory, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, according to the correspondence between transport layer communication protocols and distribution tables.
Optionally, as shown in FIG. 2, if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the distribution table includes a correspondence between 2-tuples and thread identifiers, where each thread identifier corresponds to one thread. The thread identifier may be the identity number (ID) of a thread, the address of the buffer queue corresponding to a thread, or any other identifier that uniquely determines a thread.
Optionally, as shown in FIG. 3, if the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the distribution table includes a first distribution table and a second distribution table. The first distribution table includes a correspondence between 4-tuples and thread identifiers, where each thread identifier corresponds to one thread. The second distribution table includes a correspondence between destination IP address and destination port pairs and thread identifiers, where each thread identifier corresponds to one thread, together with the load of each thread. The multiple threads corresponding to one destination IP address and destination port are threads in different processes, so that threads in different processes can be bound to the same destination IP address and destination port. The thread identifier may be the identity number (ID) of a thread, the address of the buffer queue corresponding to a thread, or any other identifier that uniquely determines a thread.
The determining unit 640 is configured to determine the thread corresponding to the data flow to which the data packet belongs, according to the correspondence between the identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs.
Optionally, when the transport layer communication protocol to which the data packet belongs is a connectionless protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 2. The determining unit 640 is specifically configured to search, in the correspondence between 2-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 2-tuple of the data packet, and to determine the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs.
Optionally, when the transport layer communication protocol to which the data packet belongs is a connection-oriented protocol, the thread corresponding to the data flow to which the data packet belongs is determined according to the distribution table shown in FIG. 3. The determining unit 640 is specifically configured to: search, in the correspondence between 4-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 4-tuple of the data packet; if the first distribution table contains a thread identifier corresponding to the 4-tuple of the data packet, determine the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs; if the first distribution table contains no thread identifier corresponding to the 4-tuple of the data packet, search, in the correspondence between destination IP address and destination port pairs and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifiers corresponding to the destination IP address and destination port of the data packet, and determine, among the threads corresponding to those thread identifiers, the thread with the smallest load as the thread corresponding to the data flow to which the data packet belongs.
As can be seen from the distribution table shown in FIG. 3, each thread identifier in the second distribution table is associated with the load of the corresponding thread. If, in the second distribution table shown in FIG. 3, the destination IP address and destination port of the data packet correspond to only one thread identifier, the thread corresponding to that thread identifier is directly determined as the thread corresponding to the data flow to which the data packet belongs. If the destination IP address and destination port of the data packet correspond to multiple thread identifiers, the thread with the smallest load among the threads corresponding to those thread identifiers is determined as the thread corresponding to the data flow to which the data packet belongs, so as to balance the load among threads and, in turn, among the cores of a multi-core processor.
The sending unit 650 is configured to send the data packet to the buffer queue of the thread corresponding to the data flow to which the data packet belongs, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
After the determining unit 640 determines, according to the distribution table, the thread corresponding to the data flow to which the data packet belongs, the sending unit 650 sends the data packet to the buffer queue of that thread. The thread then obtains the data packet directly from its own buffer queue, so the data packet is never shared among multiple threads, which avoids the overhead of inter-thread mutual exclusion and synchronization as well as inter-core cache invalidation.
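Putting the units together, a minimal end-to-end sketch of the dispatch path corresponding to units 610 through 650; it reuses the hypothetical helpers and tables sketched for Embodiment 1 and is not a definitive implementation:

```python
class Splitter:
    """Hypothetical dispatch path: parse (610), obtain the flow ID (620),
    select the table (630), determine the thread (640), enqueue (650)."""

    def __init__(self, connectionless_table, first_table, second_table, buffer_queues):
        self.connectionless_table = connectionless_table
        self.first_table = first_table
        self.second_table = second_table
        self.buffer_queues = buffer_queues

    def handle(self, pkt):
        # Parsing is assumed to have filled in pkt.protocol and the addresses.
        if pkt.protocol in CONNECTIONLESS:
            tid = lookup_connectionless(self.connectionless_table, pkt)
        else:
            tid = lookup_connection_oriented(self.first_table, self.second_table, pkt)
        if tid is not None:
            self.buffer_queues[tid].put(pkt)
```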
In addition, as shown in FIG. 7, the splitter provided in Embodiment 2 of the present invention may further include an updating unit 660.
The updating unit 660 is configured to update the distribution table according to the state of a thread.
Specifically, for the distribution table shown in FIG. 2, when a thread establishes a connection, the updating unit 660 adds the thread identifier of that thread and the destination IP address and destination port corresponding to the thread to the distribution table. When a thread whose identifier is in the distribution table changes from the connected state to the idle state (in this embodiment, the idle state means that the thread is not closed but its connection is closed or abnormally interrupted), the updating unit 660 removes the thread identifier of that thread and the destination IP address and destination port corresponding to the thread from the distribution table.
For the distribution table shown in FIG. 3, when a thread whose identifier is in the first distribution table changes from the connected state to the listening state, the updating unit 660 removes the thread identifier of that thread and the corresponding source IP address, source port, destination IP address, and destination port from the first distribution table, and adds the thread identifier, the destination IP address and destination port corresponding to the thread, and the load of the thread to the second distribution table. When a thread whose identifier is in the second distribution table changes from the listening state to the connected state, the updating unit 660 removes the thread identifier of that thread, the destination IP address and destination port corresponding to the thread, and the load of the thread from the second distribution table, and adds the thread identifier and the corresponding source IP address, source port, destination IP address, and destination port to the first distribution table. When a thread changes from the idle state to the listening state, the updating unit 660 adds the thread identifier of that thread, the destination IP address and destination port corresponding to the thread, and the load of the thread to the second distribution table. When a thread whose identifier is in the first or second distribution table becomes idle, the updating unit 660 removes the thread identifier of that thread and the corresponding source IP address, source port, destination IP address, and destination port from the first distribution table, or removes the thread identifier, the destination IP address and destination port corresponding to the thread, and the load of the thread from the second distribution table.
With the splitter provided in Embodiment 2 of the present invention, data is distributed at thread granularity, and different transport layer communication protocols correspond to different distribution tables. The splitter assigns each data packet, according to the distribution table corresponding to the transport layer communication protocol to which the packet belongs, to the buffer queue of the thread corresponding to the data flow to which the packet belongs, so that each thread obtains data packets from its own buffer queue. Because every thread has an independent buffer queue, data packets and their connection information are never shared among multiple threads, which avoids the overhead of inter-thread mutual exclusion and synchronization as well as inter-core cache invalidation. Furthermore, for connection-oriented transport layer communication protocols, multiple threads can listen on the same port, which effectively balances the load among multiple cores and thereby improves the processing capability of a multi-core processor.
Those skilled in the art may further appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are performed by hardware or by software depends on the particular application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present invention.
The steps of the methods or algorithms described with reference to the embodiments disclosed herein may be implemented by hardware, by a software module executed by a processor, or by a combination of the two. The software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The specific embodiments described above further explain the objectives, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the foregoing is merely a description of specific embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (14)

  1. A data distribution method, applied to a data distribution system, wherein the data distribution system comprises a splitter, a memory, and a plurality of threads for processing data, each thread corresponding to one buffer queue, the memory stores a correspondence between transport layer communication protocols and distribution tables and the distribution table corresponding to each transport layer communication protocol, and each distribution table contains a correspondence between identification information of data flows and threads, the method comprising:
    parsing, by the splitter, a received data packet to determine the transport layer communication protocol to which the data packet belongs;
    obtaining, by the splitter from the data packet, identification information, corresponding to the determined transport layer communication protocol, of the data flow to which the data packet belongs, wherein the identification information of the data flow is used to distinguish the data flow to which the data packet belongs;
    obtaining, by the splitter from the memory according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs;
    determining, by the splitter according to the correspondence between identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs; and
    sending, by the splitter, the data packet to the buffer queue of the thread corresponding to the data flow, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
  2. The method according to claim 1, wherein if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is a 2-tuple of the data packet, the 2-tuple comprising the destination IP address and destination port of the data packet; and
    the distribution table corresponding to the connectionless transport layer communication protocol comprises:
    a correspondence between 2-tuples and thread identifiers, wherein each thread identifier corresponds to one thread.
  3. The method according to claim 2, wherein the determining, by the splitter according to the correspondence between identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs comprises:
    searching, by the splitter, in the correspondence between 2-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 2-tuple of the data packet; and
    determining, by the splitter, the thread corresponding to the thread identifier corresponding to the 2-tuple of the data packet as the thread corresponding to the data flow to which the data packet belongs.
  4. The method according to claim 1, wherein if the transport layer communication protocol to which the data packet belongs is a connection-oriented transport layer communication protocol, the identification information of the data flow is a 4-tuple of the data packet, the 4-tuple comprising the source IP address, source port, destination IP address, and destination port of the data packet;
    the distribution table corresponding to the connection-oriented transport layer communication protocol comprises a first distribution table and a second distribution table;
    the first distribution table comprises a correspondence between 4-tuples and thread identifiers, wherein each thread identifier corresponds to one thread;
    the second distribution table comprises a correspondence between destination IP address and destination port pairs and thread identifiers, wherein each thread identifier corresponds to one thread, and the load of each thread; and
    the threads corresponding to one destination IP address and destination port are threads in different processes.
  5. The method according to claim 4, wherein the determining, by the splitter according to the correspondence between identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs comprises:
    searching, by the splitter, in the correspondence between 4-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 4-tuple of the data packet;
    if the first distribution table contains a thread identifier corresponding to the 4-tuple of the data packet, determining, by the splitter, the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs;
    if the first distribution table contains no thread identifier corresponding to the 4-tuple of the data packet, searching, by the splitter, in the correspondence between destination IP address and destination port pairs and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifiers corresponding to the destination IP address and destination port of the data packet; and
    determining, by the splitter, among the threads corresponding to the thread identifiers corresponding to the destination IP address and destination port of the data packet, the thread with the smallest load as the thread corresponding to the data flow to which the data packet belongs.
  6. The method according to any one of claims 2 to 5, wherein the thread identifier is the address of the buffer queue corresponding to the thread.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    updating, by the splitter, the distribution table according to the state of a thread.
  8. A splitter, applied to a data distribution system, wherein the data distribution system further comprises a memory and a plurality of threads for processing data, each thread corresponding to one buffer queue, the memory stores a correspondence between transport layer communication protocols and distribution tables, and each distribution table contains a correspondence between identification information of data flows and threads, the splitter comprising:
    a parsing unit, configured to parse a received data packet to determine the transport layer communication protocol to which the data packet belongs;
    a first obtaining unit, configured to obtain, from the data packet, identification information, corresponding to the determined transport layer communication protocol, of the data flow to which the data packet belongs, wherein the identification information of the data flow is used to distinguish the data flow to which the data packet belongs;
    a second obtaining unit, configured to obtain, from the memory according to the correspondence between transport layer communication protocols and distribution tables, the distribution table corresponding to the transport layer communication protocol to which the data packet belongs;
    a determining unit, configured to determine, according to the correspondence between identification information of data flows and threads in the distribution table corresponding to the transport layer communication protocol to which the data packet belongs, the thread corresponding to the data flow to which the data packet belongs; and
    a sending unit, configured to send the data packet to the buffer queue of the thread corresponding to the data flow, so that the thread corresponding to the data flow obtains the data packet from the buffer queue.
  9. The splitter according to claim 8, wherein if the transport layer communication protocol to which the data packet belongs is a connectionless transport layer communication protocol, the identification information of the data flow is a 2-tuple of the data packet, the 2-tuple comprising the destination IP address and destination port of the data packet; and
    the distribution table corresponding to the connectionless transport layer communication protocol comprises:
    a correspondence between 2-tuples and thread identifiers, wherein each thread identifier corresponds to one thread.
  10. The splitter according to claim 9, wherein the determining unit is specifically configured to:
    search, in the correspondence between 2-tuples and thread identifiers in the distribution table corresponding to the connectionless transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 2-tuple of the data packet; and
    determine the thread corresponding to the thread identifier corresponding to the 2-tuple of the data packet as the thread corresponding to the data flow to which the data packet belongs.
  11. The splitter according to claim 8, wherein the distribution table corresponding to a connection-oriented transport layer communication protocol comprises a first distribution table and a second distribution table;
    the first distribution table comprises a correspondence between 4-tuples and thread identifiers, wherein each thread identifier corresponds to one thread;
    the second distribution table comprises a correspondence between destination IP address and destination port pairs and thread identifiers, wherein each thread identifier corresponds to one thread, and the load of each thread; and
    the threads corresponding to one destination IP address and destination port are threads in different processes.
  12. The splitter according to claim 11, wherein the determining unit is specifically configured to:
    search, in the correspondence between 4-tuples and thread identifiers in the first distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifier corresponding to the 4-tuple of the data packet;
    if the first distribution table contains a thread identifier corresponding to the 4-tuple of the data packet, determine the thread corresponding to that thread identifier as the thread corresponding to the data flow to which the data packet belongs;
    if the first distribution table contains no thread identifier corresponding to the 4-tuple of the data packet, search, in the correspondence between destination IP address and destination port pairs and thread identifiers in the second distribution table corresponding to the connection-oriented transport layer communication protocol to which the data packet belongs, for the thread identifiers corresponding to the destination IP address and destination port of the data packet; and
    determine, among the threads corresponding to the thread identifiers corresponding to the destination IP address and destination port of the data packet, the thread with the smallest load as the thread corresponding to the data flow to which the data packet belongs.
  13. The splitter according to any one of claims 9 to 12, wherein the thread identifier is the address of the buffer queue corresponding to the thread.
  14. The splitter according to any one of claims 8 to 13, wherein the splitter further comprises:
    an updating unit, configured to update the distribution table according to the state of a thread.
PCT/CN2014/094180 2013-12-24 2014-12-18 数据分流方法及分流器 WO2015096655A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP14875040.9A EP3079313B1 (en) 2013-12-24 2014-12-18 Data splitting method and splitter
US15/190,774 US10097466B2 (en) 2013-12-24 2016-06-23 Data distribution method and splitter

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310721545.7 2013-12-24
CN201310721545.7A CN104734993B (zh) 2013-12-24 2013-12-24 数据分流方法及分流器

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/190,774 Continuation US10097466B2 (en) 2013-12-24 2016-06-23 Data distribution method and splitter

Publications (1)

Publication Number Publication Date
WO2015096655A1 true WO2015096655A1 (zh) 2015-07-02

Family

ID=53458440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/094180 WO2015096655A1 (zh) 2013-12-24 2014-12-18 数据分流方法及分流器

Country Status (4)

Country Link
US (1) US10097466B2 (zh)
EP (1) EP3079313B1 (zh)
CN (1) CN104734993B (zh)
WO (1) WO2015096655A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120230341A1 (en) * 2009-04-27 2012-09-13 Lsi Corporation Multi-threaded processing with hardware accelerators
CN101656677A (zh) * 2009-09-18 2010-02-24 杭州迪普科技有限公司 一种报文分流处理方法及装置
US20130089109A1 (en) * 2010-05-18 2013-04-11 Lsi Corporation Thread Synchronization in a Multi-Thread, Multi-Flow Network Communications Processor Architecture
CN102377640A (zh) * 2010-08-11 2012-03-14 杭州华三通信技术有限公司 一种报文处理装置和报文处理方法、及预处理器
CN102497430A (zh) * 2011-12-13 2012-06-13 曙光信息产业(北京)有限公司 一种分流设备实现系统和方法
CN102811169A (zh) * 2012-07-24 2012-12-05 成都卫士通信息产业股份有限公司 采用哈希算法进行多核并行处理的vpn实现方法及系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3079313A4 *

Also Published As

Publication number Publication date
US20160308771A1 (en) 2016-10-20
US10097466B2 (en) 2018-10-09
EP3079313A1 (en) 2016-10-12
EP3079313A4 (en) 2016-11-30
CN104734993B (zh) 2018-05-18
EP3079313B1 (en) 2018-03-21
CN104734993A (zh) 2015-06-24

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14875040

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2014875040

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014875040

Country of ref document: EP