WO2014180110A1 - Data processing apparatus and data processing method - Google Patents

Data processing apparatus and data processing method

Info

Publication number
WO2014180110A1
Authority
WO
WIPO (PCT)
Prior art keywords
protocol
data packet
data
pointer
network interface
Prior art date
Application number
PCT/CN2013/087107
Other languages
English (en)
French (fr)
Other versions
WO2014180110A9 (zh)
Inventor
古强
文刘飞
施广宇
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Publication of WO2014180110A1 publication Critical patent/WO2014180110A1/zh
Publication of WO2014180110A9 publication Critical patent/WO2014180110A9/zh
Priority to US14/936,118 priority Critical patent/US10241830B2/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/545Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/12Protocol engines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/548Queue

Definitions

  • the present invention relates to the field of computers, and in particular, to a data processing apparatus and a data processing method.
  • the Linux operating system is divided into two parts: one is the core software, also called kernel space, and the other is the ordinary applications, also called user space.
  • in the prior art, there is only one protocol stack instance in the Linux system, and it runs in the kernel space.
  • a single protocol stack instance cannot perform protocol processing on data in parallel, so processing efficiency is low.
  • at the same time, applications have to be developed in user mode; therefore, when a user-mode application needs to access memory in kernel space, that memory must be copied to user space before the application can access the data, which leads to considerable resource consumption. Summary of the invention
  • an embodiment of the present invention provides a data processing apparatus, where the apparatus includes M protocol stacks and at least one distribution service module, where the M protocol stacks and at least one distribution service module are set in a user space of an operating system.
  • M is a positive integer greater than 1, where
  • the M protocol stacks respectively run on different logical cores of the processor, and the M protocol stacks are configured to process the data packets independently according to the protocol processing rules;
  • the distribution service module is configured to receive data packets from at least one input port of the at least one network interface according to a preconfigured rule, and store the data packets in a memory space, so that one of the M protocol stacks performs protocol processing on each data packet; and to receive the data packets processed by the M protocol stacks and send them to the outside through an output port of the network interface.
  • the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode.
  • the apparatus further includes a memory management module disposed in a user space of the operating system;
  • the memory management module includes: the memory space for storing a data packet, M input queues corresponding to the M protocol stacks, and an output queue corresponding to the output port on the network interface;
  • the M input queues are used to store pointers of data packets that need to be processed by the M protocol stacks, and the output queue is used to store pointers of data packets that need to be sent to the outside, where each pointer points to the address of the corresponding data packet in the memory space;
  • the distribution service module is specifically configured to: store a data packet received from the input port of the network interface in the memory space; read the packet header of the data packet; determine, according to the packet header, the protocol stack that is to perform protocol processing on the data packet; and insert a pointer of the data packet into the input queue corresponding to the determined protocol stack;
  • the protocol stack is specifically configured to:
  • read a data packet from the memory space according to a pointer in its corresponding input queue, and perform protocol processing on the data packet;
  • if the protocol-processed data packet needs to be submitted to a user application, copy the data packet to the user application;
  • if the protocol-processed data packet needs to be sent to the outside, insert a pointer of the data packet into an output queue of the memory management module;
  • the distribution service module is specifically configured to: send, according to a pointer in the output queue, a data packet corresponding to the pointer from an output port on a network interface corresponding to the output queue.
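To make the pointer-queue arrangement above easier to picture, here is a minimal C sketch of one way such a memory management module could be laid out: a shared packet pool that is memory-mapped into both kernel and user mode, plus single-producer/single-consumer rings that carry only packet pointers. All type names, sizes, and the ring design are illustrative assumptions, not structures defined by the patent, and the memory barriers needed for real cross-core use are omitted for brevity.

```c
#include <stddef.h>
#include <stdint.h>

#define PKT_BUF_SIZE   2048u     /* room for one Ethernet frame (assumed size)   */
#define POOL_PKTS      8192u     /* packets in the shared, memory-mapped pool    */
#define QUEUE_DEPTH    1024u     /* entries per pointer ring (power of two)      */
#define M_STACKS       4         /* M protocol stacks in this example            */
#define N_OUT_PORTS    2         /* n output ports on the network interface(s)   */

/* One packet buffer inside the memory space that is mapped into both kernel
 * mode and user mode, so no copy is needed between them.                        */
struct pkt_buf {
    uint32_t len;                      /* valid bytes in data[]                  */
    uint16_t in_port;                  /* input port the packet arrived on       */
    uint8_t  data[PKT_BUF_SIZE];       /* raw frame, headers included            */
};

/* Single-producer/single-consumer ring holding only pointers to packets in the
 * shared pool; the input queues and output queues are rings of this kind.       */
struct ptr_ring {
    volatile uint32_t head;            /* next slot the producer writes          */
    volatile uint32_t tail;            /* next slot the consumer reads           */
    struct pkt_buf   *slot[QUEUE_DEPTH];
};

/* The memory management module seen as one shared region. */
struct mem_mgmt {
    struct pkt_buf  pool[POOL_PKTS];         /* the memory space itself          */
    struct ptr_ring input_q[M_STACKS];       /* one input queue per stack        */
    struct ptr_ring output_q[N_OUT_PORTS];   /* one output queue per output port */
};

static inline int ring_push(struct ptr_ring *r, struct pkt_buf *p)
{
    uint32_t next = (r->head + 1) & (QUEUE_DEPTH - 1);
    if (next == r->tail)
        return -1;                     /* queue full: caller decides what to do  */
    r->slot[r->head] = p;
    r->head = next;
    return 0;
}

static inline struct pkt_buf *ring_pop(struct ptr_ring *r)
{
    if (r->tail == r->head)
        return NULL;                   /* queue empty                            */
    struct pkt_buf *p = r->slot[r->tail];
    r->tail = (r->tail + 1) & (QUEUE_DEPTH - 1);
    return p;
}
```

Because only pointers travel through the queues, the packet payload can stay in place in the shared pool from reception to transmission.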
  • in a possible implementation, there are M distribution service modules, corresponding one-to-one with the M protocol stacks, and the M distribution service modules run on different logical cores of the processor.
  • the device further includes:
  • a protocol configuration module, configured to configure protocol processing rules for the protocol stacks.
  • the apparatus further includes:
  • a general protocol processing module, configured to perform general protocol processing on data packets that require processing by a common protocol; the distribution service module is further configured to: when determining, according to the packet header of a received data packet, that the data packet requires general protocol processing, send the pointer of the data packet to the general protocol processing module, so that the general protocol processing module performs general protocol processing on the data packet corresponding to the pointer.
  • the apparatus also includes a network input/output module disposed in a kernel space of the operating system;
  • the distribution service module is specifically configured to: receive data packets from the input ports of the network interface through the network input/output module, and send data packets to the outside from the output ports of the network interface through the network input/output module.
  • the distribution service module receives data packets from the different input ports of the network interface through the network input/output module in a polling manner.
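As a rough illustration of this polling behavior, the loop below repeatedly asks a hypothetical kernel-resident network I/O module for packets on each input port, lets each packet land directly in the shared memory space, and then enqueues only its pointer for the selected protocol stack. It reuses the structures from the earlier sketch; `nio_poll_rx`, `pool_alloc`, `pool_free`, and `select_stack` are assumed helper names, not interfaces specified by the patent.

```c
struct nio_handle;                               /* opaque: kernel network I/O module */

/* Assumed helpers; these names are not defined by the patent. */
struct pkt_buf *pool_alloc(struct mem_mgmt *mm);
void            pool_free(struct mem_mgmt *mm, struct pkt_buf *p);
int  nio_poll_rx(struct nio_handle *nio, int port, struct pkt_buf *buf);
int  select_stack(const struct pkt_buf *p);      /* packet header -> stack index      */

void distribution_rx_loop(struct mem_mgmt *mm, struct nio_handle *nio, int n_in_ports)
{
    for (;;) {                                   /* poll forever, never block         */
        for (int port = 0; port < n_in_ports; port++) {
            struct pkt_buf *p = pool_alloc(mm);  /* buffer lives in the shared pool   */
            if (!p)
                continue;                        /* pool exhausted, try again later   */

            int len = nio_poll_rx(nio, port, p); /* packet is written straight into   */
            if (len <= 0) {                      /* the memory-mapped pool, no copy   */
                pool_free(mm, p);                /* nothing pending on this port      */
                continue;
            }
            p->len     = (uint32_t)len;
            p->in_port = (uint16_t)port;

            /* only the pointer travels to the chosen protocol stack */
            if (ring_push(&mm->input_q[select_stack(p)], p) != 0)
                pool_free(mm, p);                /* input queue full: drop the packet */
        }
    }
}
```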
  • an embodiment of the present invention provides a data processing method, which is applied to a user space of an operating system, where the method includes:
  • receiving a data packet from an input port of a network interface; storing the received data packet in a memory space, where the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode;
  • determining, according to an attribute of the data packet, a first protocol stack that needs to perform protocol processing on the data packet, where the first protocol stack is one of M protocol stacks, the M protocol stacks are set in a user space of the operating system, and M is a positive integer greater than 1;
  • inserting a pointer of the data packet into an input queue corresponding to the first protocol stack, where the pointer of the data packet points to the address of the data packet in the memory space, so that the first protocol stack, according to the pointer in the input queue, retrieves the data packet corresponding to the pointer from the memory space and performs protocol processing on it according to a protocol processing rule.
  • if the protocol-processed data packet needs to be handled by a user application, the first protocol stack copies it to the user application; if the protocol-processed data packet needs to be sent to the outside, the first protocol stack inserts a pointer of the data packet into an output queue;
  • the method further includes:
  • if the first protocol stack inserts the pointer of the data packet into the output queue, the data packet corresponding to the pointer is sent from the output port of the network interface corresponding to the output queue, according to the pointer in the output queue.
  • an embodiment of the present invention provides a data processing method, where the method includes: a first protocol stack stores a protocol-processed data packet in a memory space and, according to an attribute of the data packet, inserts a pointer of the data packet into an output queue, so that a distribution service module reads the data packet corresponding to the pointer from the output queue and sends the data packet corresponding to the pointer to the outside through an output port of the network interface corresponding to the output queue;
  • the first protocol stack is one of M protocol stacks located in user space; the M protocol stacks each run on a separate logical core of the processor, and the M protocol stacks and the distribution service module are set in a user space of the operating system; the output queue corresponds to an output port of the network interface; the pointer of the data packet points to the address of the data packet in the memory space; the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode; and M is a positive integer greater than 1.
  • sending the data packet corresponding to the pointer to the outside through the output port of the network interface corresponding to the output queue specifically includes: transmitting, by a network input/output module, the data packet corresponding to the pointer in the memory space to the output port of the network interface corresponding to the output queue;
  • the network input/output module is disposed in a kernel space of the operating system.
  • an embodiment of the present invention provides a computer host, where the computer host includes a hardware layer, an operating system layer running on a hardware layer, and the hardware layer includes at least one network interface and at least one processor.
  • the processor includes at least M logical cores, M is a positive integer greater than 1, the operating system is divided into a kernel space and a user space, and the following are set in the user space:
  • M protocol stacks respectively running on the processor's M logical cores, and the M protocol stacks are configured to process the data packets independently according to the protocol processing rules;
  • a distribution service module, configured to receive data packets from at least one input port of the at least one network interface according to a preconfigured rule, and store the data packets in a memory space, so that one of the M protocol stacks performs protocol processing on each data packet; and to receive the data packets processed by the M protocol stacks and send them to the outside through an output port of the network interface, where the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode.
  • the user space is further configured with: a memory management module, including the memory space for storing data packets, M input queues corresponding to the M protocol stacks, and output queues corresponding to the output ports of the network interface; the M input queues are used to store pointers of data packets that need to be processed by the M protocol stacks, and the output queues are used to store pointers of data packets that need to be sent to the outside, where each pointer points to the address of the corresponding data packet in the memory space;
  • the distribution service module is specifically configured to: store a data packet received from the input port of the network interface in the memory space; read the packet header of the data packet; determine, according to the packet header, the protocol stack that is to perform protocol processing on the data packet; and insert a pointer of the data packet into the input queue corresponding to that protocol stack;
  • the protocol stack is specifically configured to: read a data packet from the memory space according to a pointer in the input queue corresponding to the protocol stack, and perform protocol processing on the data packet;
  • if the protocol-processed data packet needs to be submitted to a user application, copy the data packet to the user application;
  • if the protocol-processed data packet needs to be sent to the outside, the protocol stack inserts a pointer of the data packet into an output queue of the memory management module.
  • the distribution service module is specifically configured to: send, according to a pointer in the output queue, a data packet corresponding to the pointer from an output port on a network interface corresponding to the output queue.
  • the number of the distribution service modules is M, respectively corresponding to the M protocol stacks, where the M The distribution service modules run on the M logical cores of the processor.
  • in the data processing apparatus provided by the embodiments of the present invention, M protocol stacks that are set in the user space of the operating system and run on different logical cores of the processor each independently perform protocol processing, according to preconfigured rules, on the data packets that the at least one distribution service module receives from the outside.
  • because the data packets are processed independently and in parallel, protocol processing efficiency is improved; and because the solution runs in user space, it is convenient for developers to develop applications, and almost all data processing in the embodiments of the present invention is concentrated in user-mode operation.
  • the kernel only needs to memory-map the data so that user mode can read it directly; the data does not need to be copied from kernel mode to user mode, which avoids resource consumption.
  • FIG. 1 is a structural diagram of an embodiment of a data processing apparatus according to an embodiment of the present invention.
  • FIG. 2 is a structural diagram of an embodiment of a data processing apparatus according to an embodiment of the present invention
  • FIG. 3 is a schematic diagram of the working principle of a memory management module in a data processing apparatus according to an embodiment of the present invention
  • FIG. 4 is a structural diagram of still another embodiment of a data processing apparatus according to an embodiment of the present invention
  • FIG. 5 is a structural diagram of another embodiment of a data processing apparatus according to an embodiment of the present invention
  • FIG. 6 is a structural diagram of still another embodiment of the data processing apparatus according to an embodiment of the present invention
  • FIG. 7 is a flowchart of an embodiment of a data processing method according to an embodiment of the present disclosure.
  • FIG. 8 is a flowchart of still another embodiment of a data processing method according to an embodiment of the present invention
  • FIG. 9 is a structural diagram of an embodiment of a computer host according to an embodiment of the present invention. Detailed description
  • an embodiment of the present invention provides a data processing apparatus, including M protocol stacks and at least one distribution service module, where the M protocol stacks and at least one distribution service module are set in a user space of an operating system.
  • M is a positive integer greater than 1, wherein
  • the M protocol stacks respectively run on different logical cores of the processor and are used to perform protocol processing on to-be-processed data packets independently of one another according to protocol processing rules.
  • the distribution service module receives input data packets from at least one input port of the at least one network interface according to a preconfigured rule, and stores the data packets in a memory space, so that one of the M protocol stacks performs protocol processing on each data packet; it also receives the data packets processed by the M protocol stacks and sends them to the outside through an output port of the network interface, where the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode.
  • there may be only one distribution service module, or there may be M distribution service modules corresponding one-to-one with the M protocol stacks, each running on a different logical core of the processor.
  • different logical cores of the processor refers to a multicore chip, that is, a single chip that integrates multiple complete computation engines, each of which is called a logical core.
  • more specifically, the M user-mode protocol stacks each execute on a separate logical core of the CPU and run in parallel in the multicore system.
  • the M protocol stacks perform protocol processing independently and in parallel, with no interaction between them.
  • in a possible implementation, the apparatus further includes a network input/output module located in kernel space; the distribution service module receives data packets from the input ports of the network interface through the network input/output module, and sends data packets to the outside from the output ports of the network interface through the network input/output module.
  • the device further includes a memory management module disposed in the user space.
  • the memory management module includes: the memory space for storing data packets; M input queues corresponding to the M protocol stacks, used to store pointers of data packets that need to be processed by the M protocol stacks; and n output queues corresponding to n output ports of the network interface, used to store pointers of data packets that need to be sent to the outside, where each pointer points to the address of the corresponding data packet in the memory space and n is a positive integer greater than 1.
  • more specifically, the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode.
  • optionally, the memory management module may further include i inter-process communication queues, where i is a positive integer greater than 0, used to store pointers of data packets transmitted between processes, which avoids wasting system resources.
  • the network input/output module stores a data packet received from the network interface in the memory space; the distribution service module reads the packet header of the data packet, determines, according to the packet header, the protocol stack that needs to perform protocol processing on the data packet, and inserts a pointer of the data packet into the input queue corresponding to that protocol stack; the protocol stack reads the data packet from the memory space according to the pointer in its corresponding input queue and performs protocol processing on the data packet.
  • after performing protocol processing on the data packet, the protocol stack determines, according to attributes such as the destination address of the data packet, whether the data packet needs to be submitted to a user application; if the processed data packet needs to be submitted to a user application, the protocol stack copies the data packet to the user application; if the processed data packet needs to be sent to the outside, the protocol stack inserts a pointer of the data packet into an output queue in the memory management module, and the distribution service module, according to the pointer in the output queue, sends the data packet corresponding to the pointer through the network input/output module from the output port of the network interface corresponding to the output queue, where the pointer of the data packet points to the address of the data packet in the memory space.
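The per-stack side of this flow could look like the following sketch: each user-mode protocol stack, pinned to its own logical core, drains its input queue, performs protocol processing, and then either copies the packet to a user application, parks it in the memory space, drops it, or forwards only its pointer to an output queue. It reuses the structures and helpers from the earlier sketches; `tcp_ip_process`, `deliver_to_app`, and `choose_out_port` are placeholders standing in for the stack's real protocol logic, not functions defined by the patent.

```c
/* Hypothetical main loop of one user-space protocol stack. */
enum verdict { TO_APP, TO_WIRE, DROP, KEEP };

enum verdict tcp_ip_process(struct pkt_buf *p);     /* protocol processing        */
void deliver_to_app(const struct pkt_buf *p);       /* copies data to a user app  */
int  choose_out_port(const struct pkt_buf *p);      /* e.g. by destination address */

void protocol_stack_loop(struct mem_mgmt *mm, int stack_id)
{
    struct ptr_ring *in = &mm->input_q[stack_id];

    for (;;) {
        struct pkt_buf *p = ring_pop(in);
        if (!p)
            continue;                          /* nothing queued, keep polling    */

        switch (tcp_ip_process(p)) {           /* independent per-core processing */
        case TO_APP:
            deliver_to_app(p);                 /* the only copy on the data path  */
            pool_free(mm, p);
            break;
        case TO_WIRE: {
            int port = choose_out_port(p);     /* packet stays in place; only its */
            ring_push(&mm->output_q[port], p); /* pointer moves to the output q   */
            break;
        }
        case KEEP:
            break;                             /* leave packet in the memory space */
        case DROP:
        default:
            pool_free(mm, p);
            break;
        }
    }
}
```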
  • the protocol processing rules followed by the M protocol stacks may come from the default system configuration, or may be configured through a separately provided protocol configuration module, as in the embodiment shown in FIG. 4; the protocol configuration module can configure system-related parameters, including protocol stack operating parameters, CPU logical core resource allocation, and the data distribution policy.
  • in a possible implementation, as shown in FIG. 4, a general protocol processing module may also be set in user space to perform general protocol processing on data packets that require it; when the distribution service module determines, according to the packet header of a received data packet, that the data packet requires general protocol processing, it sends the pointer of the data packet to the general protocol processing module, which performs general protocol processing on the data packet corresponding to the pointer.
  • for example, relatively simple protocols such as ARP (Address Resolution Protocol) or ICMP (Internet Control Message Protocol) can be processed by the general protocol processing module.
  • the general protocol processing module implements relatively simple, general protocol processing; the whole system may contain one general protocol processing module, and, depending on the system load, a specific protocol can be offloaded from a protocol stack to the general protocol processing module, or a specific protocol handled by this module can be offloaded to a protocol stack module for processing.
  • the generic protocol processing module is also located in the user space of the operating system.
  • through the above embodiment, the M protocol stacks that are set in the operating system user space and run on different logical cores of the processor each independently perform protocol processing, according to preconfigured rules, on the data packets that the at least one distribution service module receives from the outside.
  • this improves protocol processing efficiency, and because the solution runs in user space it is convenient for developers to develop applications; almost all data processing in the embodiments of the present invention is concentrated in user-mode operation.
  • the kernel only needs to transparently pass the data to user mode through memory mapping, thereby avoiding resource consumption.
  • the distribution service module fetches data packets from different input ports of multiple network interfaces in a polling manner, and, through the network input/output (I/O) module, stores the received data packets directly in the memory space of the memory management module in a memory-mapped manner.
  • the memory management module's memory space is mapped to user space by memory mapping.
  • after the network input/output module stores the received data packets in the memory space, the distribution service module reads the data packets from the memory space, determines, according to attributes such as the packet headers, which protocol stack should process each data packet, and then, according to the distribution policy, inserts the packet pointer into the input queue corresponding to that protocol stack.
  • each user-mode protocol stack corresponds to one input queue, and the pointers of all data packets distributed to that protocol stack are inserted into that input queue.
  • the protocol stack reads the data packet from the memory space according to the pointer in the input queue and performs protocol processing.
  • after protocol processing, if the processed data packet needs to be handled by a user application, the protocol stack copies it to the corresponding user application; otherwise the data packet is discarded as needed, or kept in the memory space.
  • if the protocol-processed data packet needs to be sent to the outside, the protocol stack inserts the pointer of the data packet into an output queue in the memory management module according to attributes such as the destination address of the data packet; the output queues have a configured correspondence with the output ports on the network interfaces.
  • then, because the distribution service module has a configured correspondence with the output ports on the network interface, it takes from the output queue the pointers belonging to its output ports, reads the data packets corresponding to those pointers, and, through the network input/output module, sends the data corresponding to the pointers in the output queue to the output ports on the corresponding network interfaces.
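A hedged sketch of the corresponding transmit side: the distribution service module walks the output ports it is configured for, drains their output queues, and hands each packet to the kernel network I/O module. It builds on the earlier sketches; `nio_tx` is an assumed interface name, not one defined by the patent.

```c
/* Hypothetical transmit side of a distribution service module. */
int nio_tx(struct nio_handle *nio, int out_port, const struct pkt_buf *p);

void distribution_tx_drain(struct mem_mgmt *mm, struct nio_handle *nio,
                           const int *my_ports, int n_my_ports)
{
    /* Each distribution service module only touches the output queues that
     * correspond to the output ports it was configured with.                */
    for (int i = 0; i < n_my_ports; i++) {
        int port = my_ports[i];
        struct pkt_buf *p;
        while ((p = ring_pop(&mm->output_q[port])) != NULL) {
            nio_tx(nio, port, p);       /* no copy: the kernel I/O module sees */
            pool_free(mm, p);           /* the same memory-mapped buffer       */
        }
    }
}
```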
  • through the above embodiment, the M protocol stacks that are set in the operating system user space and run on different logical cores of the processor each independently perform protocol processing, according to the configured protocol processing rules, on the data packets that the distribution service module receives from the outside, which improves protocol processing efficiency.
  • the solution runs in user space, which makes it convenient for developers to develop applications.
  • almost all data processing in the embodiments of the present invention is concentrated in user-mode operation; the kernel only needs to transparently pass the data to user mode through memory mapping, thereby avoiding resource consumption.
  • FIG. 6 is a structural diagram of another data processing apparatus according to an embodiment of the present invention; the difference from the embodiment shown in FIG. 5 is that the user space contains M distribution service modules, each corresponding to one protocol stack.
  • each distribution service module corresponds to a network interface and is only responsible for data transmission and reception of the network interface.
  • in another implementation, if the network interface supports RSS (Receive Side Scaling), an RSS-capable network interface can distribute the data packets it receives to different hardware queues according to their header contents.
  • a distribution service module can correspond to one or more hardware queues on one or more interface cards, collect data packets from these hardware queues, and distribute them.
  • similarly, the process of sending a data packet follows a similar flow.
  • FIG. 6 is a schematic diagram of a multi-distribution-service-module configuration used in conjunction with the RSS function of the network interface hardware.
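One way to express the FIG. 6 style deployment is a static mapping table that ties each distribution service module to a protocol stack, a logical core, and the RSS hardware queues it polls. The table below is purely illustrative; the queue counts, core numbers, and field names are invented for the example and reuse `M_STACKS` from the earlier sketch.

```c
/* Illustrative configuration for a multi-distribution-service-module setup:
 * each module owns a set of hardware RX queues that the NIC's RSS function
 * fills. All values are made-up examples, not a configuration from the patent. */
struct hwq_ref {
    int nic;                       /* which network interface card              */
    int queue;                     /* hardware RX queue index on that NIC       */
};

struct dsm_config {
    int            stack_id;       /* the protocol stack this module feeds      */
    int            core_id;        /* logical core the module is pinned to      */
    struct hwq_ref rx_queues[4];   /* hardware queues polled by this module     */
    int            n_rx_queues;
};

/* Example: two NICs, RSS spreading flows over four queues each; four
 * distribution service modules, one per protocol stack.                        */
static const struct dsm_config dsm_table[M_STACKS] = {
    { .stack_id = 0, .core_id = 4, .rx_queues = { {0, 0}, {1, 0} }, .n_rx_queues = 2 },
    { .stack_id = 1, .core_id = 5, .rx_queues = { {0, 1}, {1, 1} }, .n_rx_queues = 2 },
    { .stack_id = 2, .core_id = 6, .rx_queues = { {0, 2}, {1, 2} }, .n_rx_queues = 2 },
    { .stack_id = 3, .core_id = 7, .rx_queues = { {0, 3}, {1, 3} }, .n_rx_queues = 2 },
};
```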
  • when the system is configured with multiple distribution service modules, each distribution service module fetches data packets from one or more queues.
  • the distribution service module operates in polling mode, sequentially fetching data packets from multiple receive queues of multiple network interfaces through the network input/output module, and stores the fetched data packets in the memory space of the memory management module.
  • for each fetched data packet, the distribution service module sends the pointer of the data packet to the input queue corresponding to a specific protocol stack, according to the value of the packet header or of other parts of the packet.
  • the distribution service module identifies specific general protocols, such as ARP, ICMP, etc., which can be sent to the general protocol processing module for processing.
  • of course, depending on the configuration, protocols such as ARP can also be handled by the protocol stack modules.
  • through the above embodiment, the M protocol stacks each independently perform protocol processing on the data packets to be processed, which improves protocol processing efficiency; running in user space also makes it convenient for developers to develop applications, and almost all data processing in the embodiments of the present invention is concentrated in user-mode operation.
  • the kernel only needs to transparently pass the data to user mode through memory mapping, thereby avoiding resource consumption.
  • the embodiment of the present invention further provides a data processing method, which is applied to a user space of an operating system, where the method includes:
  • specifically, the distribution service module fetches data packets from the hardware input ports of the network interface in a polling manner, and multiple distribution service modules fetch data packets independently of one another and in parallel.
  • the received data packet is stored in the memory space of the memory management module; the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode; the network input/output module stores the received data packets in the memory space of the memory management module by means of memory mapping, and each data packet has a distinct address.
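The shared, user/kernel-accessible memory space could be obtained in user mode with an ordinary POSIX mmap() of a device exported by the kernel-resident network I/O module. The sketch below assumes a hypothetical /dev/nio_pool node, which the patent does not specify, and maps the `struct mem_mgmt` region from the earlier sketch.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the packet pool shared with the kernel into this user-space process. */
static struct mem_mgmt *map_shared_pool(void)
{
    int fd = open("/dev/nio_pool", O_RDWR);      /* hypothetical device node    */
    if (fd < 0) {
        perror("open");
        return NULL;
    }

    void *base = mmap(NULL, sizeof(struct mem_mgmt),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);                                   /* the mapping stays valid     */
    if (base == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }
    return (struct mem_mgmt *)base;              /* user-mode view of the same  */
}                                                /* pages the kernel writes     */
```

After this call, both kernel mode and user mode address the same physical pages, which is why packets never need to be copied between the two.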
  • specifically, if there is only one distribution service module, it needs to read the packet header of every received data packet and, according to the attributes carried in the packet header, determine which protocol stack should process the data packet; the first protocol stack is one of M protocol stacks, the M protocol stacks and the distribution service module are set in a user space of the operating system, and M is a positive integer greater than 1;
  • if the system contains multiple distribution service modules, each corresponding to one network interface, then each distribution service module only reads the data packets received on the input ports of its corresponding network interface, and determines, according to the information in the packet header, which protocol stack should process each data packet.
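The patent does not fix a particular distribution policy; one common choice, shown below only as an assumption, is to hash the IPv4 5-tuple of each packet so that all packets of a flow are handed to the same protocol stack, and to use the hash as the input-queue index. The function matches the `select_stack` helper assumed in the earlier receive-loop sketch.

```c
#include <arpa/inet.h>
#include <string.h>

/* One possible header-based distribution policy (an assumption, not mandated
 * by the patent). Assumes an untagged Ethernet II frame carrying IPv4.        */
int select_stack(const struct pkt_buf *p)
{
    if (p->len < 14 + 20)                    /* too short for Ethernet + IPv4   */
        return 0;

    const uint8_t *ip = p->data + 14;        /* skip the Ethernet header        */
    if ((ip[0] >> 4) != 4)                   /* not IPv4: give it to stack 0    */
        return 0;

    uint32_t saddr, daddr;
    memcpy(&saddr, ip + 12, 4);              /* source IPv4 address             */
    memcpy(&daddr, ip + 16, 4);              /* destination IPv4 address        */

    uint32_t ihl   = (uint32_t)(ip[0] & 0x0f) * 4;   /* IPv4 header length      */
    uint8_t  proto = ip[9];
    uint16_t sport = 0, dport = 0;
    if ((proto == 6 || proto == 17) && p->len >= 14 + ihl + 4) {
        memcpy(&sport, ip + ihl + 0, 2);     /* TCP/UDP source port             */
        memcpy(&dport, ip + ihl + 2, 2);     /* TCP/UDP destination port        */
    }

    uint32_t h = ntohl(saddr) ^ ntohl(daddr) ^ proto
               ^ ((uint32_t)ntohs(sport) << 16) ^ ntohs(dport);
    return (int)(h % M_STACKS);              /* index of the target input queue */
}
```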
  • the first protocol stack, according to the pointer in the input queue, retrieves the data packet corresponding to the pointer from the memory space and performs protocol processing on the data packet according to a protocol processing rule.
  • more specifically, each protocol stack only watches its own input queue: it takes a pointer from the input queue, retrieves the corresponding data packet from the memory space according to the pointer, and performs protocol processing.
  • after the first protocol stack has performed protocol processing on the data packet, if the protocol-processed data packet needs to be handled by a user application, the first protocol stack copies the protocol-processed data packet to that user application.
  • if no application processing is needed, then depending on the type of the data packet, a packet that needs to be kept remains stored in the memory space, and a packet that does not need to be kept is discarded by the protocol stack.
  • if the protocol-processed data packet needs to be sent to the outside, the protocol stack inserts a pointer of the data packet into an output queue of the memory management module, so that the distribution service module, according to the pointer in the output queue, sends the data packet corresponding to the pointer from the network interface output port corresponding to the output queue.
  • through the above embodiment, the M protocol stacks that are set in the operating system user space and run on different logical cores of the processor each independently perform protocol processing, according to preconfigured rules, on the data packets received from the outside by the at least one distribution service module.
  • this improves the efficiency of protocol processing.
  • the embodiment of the present invention further provides a data processing method, where the method includes:
  • the first protocol stack stores the protocol-processed data packet in a memory space and inserts a pointer of the data packet into an output queue according to an attribute of the data packet, so that the distribution service module reads the data packet corresponding to the pointer from the output queue and sends it to the outside through an output port of the network interface corresponding to the output queue;
  • the first protocol stack is one of M protocol stacks located in user space, each running on one logical core of the processor; the M protocol stacks and the distribution service module are set in a user space of the operating system; the output queue corresponds to an output port of the network interface; the pointer of the data packet points to the address of the data packet in the memory space; the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode; and M is a positive integer greater than 1.
  • specifically, a data packet processed by the protocol stack that does not need to be handled by a user application simply remains stored in the memory space; if the data packet needs to be sent to the outside, the first protocol stack also inserts the pointer of the data packet into the output queue corresponding to the appropriate output port, according to attributes such as the destination address of the data packet.
  • the distribution service module reads the data packet from the memory space based on its pointer; if there are multiple distribution service modules in the system, each with a configured correspondence to the network output ports on the network interfaces, then each distribution service module only needs to read the data packets corresponding to the pointers in its own output queues.
  • the distribution service module sends the data packet corresponding to the pointer to the outside through an output port of the network interface; after determining on which network interface output port the data packet should be output, it sends, through the network input/output module, the data packet corresponding to the pointer in the memory space to that output port of the network interface, where the output port corresponds to the output queue.
  • through the above embodiment, the M protocol stacks that are set in the operating system user space and run on different logical cores of the processor each independently perform protocol processing, according to preconfigured rules, on the data packets received from the outside by the at least one distribution service module, which improves protocol processing efficiency.
  • moreover, when a data packet needs to be sent, only pointers to the data packets need to be passed between the user-mode functional entities, with no data packet copying, which reduces resource consumption.
  • an embodiment of the present invention further provides a computer host, where the computer host includes a hardware layer and an operating system layer running on the hardware layer; the hardware layer includes at least one network interface and at least one processor; the processor includes at least M logical cores, where M is a positive integer greater than 1; the operating system is divided into a kernel space and a user space, and the following are set in the user space: M protocol stacks, respectively running on M logical cores of the processor, where the M protocol stacks are configured to perform protocol processing on to-be-processed data packets independently of one another according to protocol processing rules;
  • a distribution service module, configured to receive input data packets from at least one input port of the at least one network interface according to a preconfigured rule, and store the data packets in a memory space, so that one of the M protocol stacks performs protocol processing on each data packet; and to receive the data packets processed by the M protocol stacks and send them to the outside through an output port of the network interface, where the memory space is a memory-mapped storage space that is accessible from both user mode and kernel mode.
  • the user space is further provided with:
  • a memory management module, including the memory space for storing data packets, M input queues corresponding to the M protocol stacks, and output queues corresponding to the output ports of the network interface; the M input queues are used to store pointers of data packets that need to be processed by the M protocol stacks, and the output queues are used to store pointers of data packets that need to be sent to the outside, where each pointer points to the address of the corresponding data packet in the memory space and the memory space is a memory-mapped storage space accessible from both user mode and kernel mode;
  • the distribution service module is specifically configured to: store a data packet received from the input port of the network interface in the memory space; read the packet header of the data packet; determine, according to the packet header, the protocol stack that is to perform protocol processing on the data packet; and insert a pointer of the data packet into the input queue corresponding to that protocol stack;
  • the protocol stack is specifically configured to:
  • read a data packet from the memory space according to a pointer in the input queue corresponding to the protocol stack, and perform protocol processing on the data packet;
  • if the protocol-processed data packet needs to be submitted to a user application, copy the data packet to the user application;
  • if the protocol-processed data packet needs to be sent to the outside, insert a pointer of the data packet into an output queue of the memory management module;
  • the distribution service module is specifically configured to: send, according to a pointer in the output queue, a data packet corresponding to the pointer from an output port on a network interface corresponding to the output queue.
  • the number of the distribution service modules is M, respectively corresponding to the M protocol stacks, and the M distribution service modules respectively run on the M logical cores of the processor.
  • with the computer host provided by the above embodiments, multi-process parallel protocol processing can be implemented in the user space of the operating system in a multicore environment, exploiting the parallel-processing capability of the multicore system, and the resource consumption caused by data packet copying can be reduced.
  • the steps of a method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, a software module executed by a processor, or a combination of both.
  • the software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a data processing apparatus and a data processing method. The apparatus includes M protocol stacks and at least one distribution service module, where the M protocol stacks respectively run on different logical cores of a processor and are used to perform protocol processing on to-be-processed data packets independently of one another according to protocol processing rules; the distribution service module receives input data packets from a network interface and sends each data packet to one of the M protocol stacks for protocol processing, and also receives the data packets processed by the M protocol stacks and sends them to the outside through the network interface. The present invention can, in a multicore environment, exploit the parallel-processing capability of a multicore system to implement multi-process parallel protocol processing in the user space of an operating system, and reduce the resource consumption caused by data packet copying.

Description

数据处理装置及数据处理方法
技术领域
本发明涉及计算机领域, 具体涉及一种数据处理装置及数据处理方法。
背景技术
Linux操作系统自身分为两部分, 一部分为核心软件, 也称作内核空间, 另 一部分为普通应用程序, 也称为用户空间。 现有技术中, Linux系统中仅有一 个协议栈实例, 其运行在内核空间, 单个实施例协议栈无法实现对数据并行地 进行协议处理, 处理效率较低。 同时, 在开发人员在开发应用时, 需要在用户 态开发应用, 因此, 当用户态的应用需要访问内核态的内存空间时, 需要将内 存空间拷贝到用户空间, 用户态的应用再对相应的数据进行访问。 这导致了较 大的资源消耗。 发明内容
本发明的目的是提供一种数据处理装置, 以实现在多核的环境下, 利用多 核系统并行处理的特性, 在操作系统用户空间实现多进程并行协议处理的功能, 并通过内存映射技术, 减少数据包拷贝造成的资源消耗。
第一方面, 本发明实施例提供了一种数据处理装置, 所述装置包括 M个协 议栈和至少一个分发服务模块, 所述 M个协议栈和至少一个分发服务模块设置 在操作系统的用户空间, M为大于 1的正整数, 其中,
所述 M个协议栈, 分别运行于处理器的不同逻辑核, 所述 M个协议栈用于 根据协议处理规则, 各自独立地对待处理数据包进行协议处理;
所述分发服务模块, 用于根据预配置规则, 从至少一个网络接口上的至少 一个输入端口接收数据包, 并将所述数据包存储至内存空间, 以便于所述 M个 协议栈中的一个协议栈对所述数据包进行协议处理; 以及接收所述 M个协议栈 处理后的数据包, 并将所述数据包通过所述网络接口上的输出端口向外部发送, 其中, 所述内存空间是经过内存映射后的存储空间, 所述内存空间用户态和内 核态都可以访问。
结合第一方面, 在第一种可能的实施方式中, 所述装置还包括在所述操作 系统的用户空间设置的内存管理模块;
所述内存管理模块, 包括用于存储数据包的所述内存空间, 与所述 M个协 议栈对应的 M个输入队列, 以及与所述网络接口上的所述输出端口对应的输出 队列; 所述 M个输入队列用于存储需要所述 M个协议栈处理的数据包的指针, 所述输出队列用于存储需要向外部发送的数据包的指针, 所述指针指向所述数 据包在所述内存空间的地址;
所述分发服务模块具体用于: 从所述网络接口上的所述输入端口接收到的 数据包存储在所述内存空间; 读取所述数据包的包头, 根据所述包头确定需要 对该数据包进行协议处理的协议栈, 并将所述数据包的指针插入与所述确定的 所述协议栈对应的所述输入队列;
所述协议栈具体用于:
根据该协议栈对应的所述输入队列中的指针在所述内存空间中读取数据 包, 并对所述数据包进行协议处理;
若经过所述协议处理后的数据包需要提交给用户应用, 则将所述数据包拷 贝给所述用户应用;
若经过所述协议处理后的数据包需要向外部发送, 则所述协议栈将所述数 据包的指针插入所述内存管理模块的输出队列;
所述分发服务模块具体用于: 根据所述输出队列中的指针, 将与所述指针 对应的数据包从与所述输出队列对应的网络接口上的输出端口发送。
结合第一方面或第一种可能的实施方式, 在第二种可能的实施方式中, 所 述分发服务模块数目为 M个, 分别与所述 M个协议栈对应, 所述 M个分发服 务模块分别运行在处理器不同的逻辑核。
结合第一方面或第一种可能的实施方式, 或第二种可能的实施方式, 在第 三种可能的实施方式中, 所述装置还包括:
协议配置模块, 用于对所述协议栈进行协议处理规则的配置。
结合第一方面或第一种可能的实施方式, 或第二种可能的实施方式, 或第 三种可能的实施方式, 在第四种可能的实施方式中, 所述装置还包括:
通用协议处理模块, 用于对需要通用协议处理的数据包进行通用协议处理; 所述分发服务模块还用于: 根据接收到的数据包的所述包头确定需要对该 数据包需要进行通用协议处理时, 将所述数据包的指针发送给所述通用协议处 理模块, 以便于所述通用协议处理模块对所述指针对应的数据包进行通用协议 处理。
结合第一方面或第一种可能的实施方式, 或第二种可能的实施方式, 或第 三种可能的实施方式, 或第四种可能的实施方式, 在第五种可能的实施方式中, 所述装置还包括设置在所述操作系统的内核空间的网络输入 /输出模块;
所述分发服务模块具体用于: 通过所述网络输入 /输出模块从所述网络接口 的输入端口上接收数据包, 通过所述网络输入 /输出模块从所述网络接口的输出 端口上向外部发送数据包。
结合第一方面的第五种可能的实施方式, 在第六种可能的实施方式中, 所 述分发服务模块采用轮询的方式通过所述网络输入 /输出模块从所述网络接口的 不同输入端口接收数据包。
第二方面, 本发明实施例提供了一种数据处理方法, 应用于操作系统的用 户空间, 所述方法包括:
从网络接口的输入端口接收数据包;
将接收到的所述数据包存储到内存空间, 所述内存空间是经过内存映射后 的存储空间, 所述内存空间用户态和内核态都可以访问;
根据所述数据包的属性, 确定需要对所述数据包进行协议处理的第一协议 栈, 所述第一协议栈为 M个协议栈中的一个, 所述 M个协议栈设置在操作系统 的用户空间, M为大于 1的正整数; 将所述数据包的指针插入与所述第一协议栈对应的输入队列, 所述数据包 的指针指向所述数据包在所述内存空间中的地址, 以便于所述第一协议栈根据 所述输入队列中的所述指针, 从所述内存空间中取出所述指针对应的数据包, 根据协议处理规则对所述数据包进行协议处理。
结合第二方面, 在第一种实施方式中, 如果所述协议处理后的数据包需要 用户应用程序处理, 则所述第一协议栈将所述协议处理后的数据包拷贝给所述 用户应用程序; 如果所述协议处理后的所述数据包需要向外部发送, 则所述第 一协议栈将所述数据包的指针插入输出队列;
所述方法还包括:
若所述第一协议栈将所述数据包的指针插入输出队列, 则根据所述输出队 列中的指针, 将与所述指针对应的数据包从与所述输出队列对应的网络接口的 输出端口发送。
第三方面, 本发明实施例提供了一种数据处理方法, 所述方法包括: 第一协议栈将协议处理后的数据包存储到内存空间, 并根据所述数据包的 属性将所述数据包的指针插入输出队列, 以便于分发服务模块从所述输出队列 读取所述指针对应的所述数据包, 通过与所述输出队列对应的所述网络接口上 的输出端口将所述指针对应的数据包向外部发送;
所述第一协议栈为位于用户空间的 M个协议栈中的一个,所述 M个协议栈 分别运行在处理器一个逻辑核, 所述 M个协议栈和所述分发服务模块设置在操 作系统的用户空间; 所述输出队列与网络接口上的输出端口对应, 所述数据包 的指针指向所述数据包在所述内存空间中的地址, 所述内存空间是经过内存映 射后的存储空间, 所述内存空间用户态和内核态都可以访问, M为大于 1 的正 整数。
结合第三方面, 在第一种可能的实施方式中, 所述的通过与所述输出队列 对应的所述网络接口上的输出端口将所述指针对应的数据包向外部发送, 具体 包括: 通过网络输入 /输出模块将所述内存空间中与所述指针对应的数据包发送到 与所述输出队列对应的所述网络接口上的输出端口;
所述网络输入 /输出模块设置在所述操作系统的内核空间。
第四方面, 本发明实施例提供了一种计算机主机, 所述计算机主机包括硬 件层、 运行在硬件层之上的操作系统层, 所述硬件层包括至少一个网络接口和 至少一个处理器, 所述处理器包括至少 M个逻辑核, M为大于 1的正整数, 所 述操作系统分为内核空间和用户空间, 在所述用户空间设置有:
M个协议栈,分别运行于处理器的 M个逻辑核,所述 M个协议栈用于根据 协议处理规则, 各自独立地对待处理数据包进行协议处理;
分发服务模块, 用于根据预配置规则, 从至少一个网络接口上的至少一个 输入端口接收数据包, 并将所述数据包存储至内存空间, 以便于所述 M个协议 栈中的一个协议栈对所述数据包进行协议处理; 以及接收所述 M个协议栈处理 后的数据包, 并将所述数据包通过所述网络接口上的输出端口向外部发送, 所 述内存空间是经过内存映射后的存储空间, 所述内存空间用户态和内核态都可 以访问。
基于第四方面, 在第一种可能的实施方式中, 在所述用户空间还设置有: 内存管理模块, 包括用于存储数据包的所述内存空间, 与所述 M个协议栈 对应的 M个输入队列,以及与所述网络接口上的所述输出端口对应的输出队列; 所述 M个输入队列用于存储需要所述 M个协议栈处理的数据包的指针, 所述输 出队列用于存储需要向外部发送的数据包的指针, 所述指针指向所述数据包在 所述内存空间的地址;
所述分发服务模块具体用于: 将从所述网络接口上的所述输入端口接收到 的数据包存储在所述内存空间; 读取所述数据包的包头, 根据所述包头确定需 要对该数据包进行协议处理的协议栈, 并将所述数据包的指针插入与所述协议 栈对应的所述输入队列;
所述协议栈具体用于: 根据该协议栈对应的所述输入队列中的指针在所述内存空间中读取数据 包, 并对所述数据包进行协议处理。
若经过所述协议处理后的数据包需要提交给用户应用, 则将所述数据包拷 贝给所述用户应用;
若经过所述协议处理后的数据包需要向外部发送, 则所述协议栈将所述数 据包的指针插入所述内存管理模块的输出队列, 。
所述分发服务模块具体用于: 根据所述输出队列中的指针, 将与所述指针 对应的数据包从与所述输出队列对应的网络接口上的输出端口发送。
基于第四方面或第四方面的第一种可能的实施方式, 在第二种可能的实施 方式中, 所述分发服务模块数目为 M个, 分别与所述 M个协议栈对应, 所述 M 个分发服务模块分别运行在处理器的 M个逻辑核。
本发明实施例提供的数据处理装置, 通过设置在操作系统用户空间并且分 别运行处理器不同逻辑核的 M个协议栈对至少一个分发服务模块从外部接收到 的数据包, 根据预配置规则, 各自独立地对待处理数据包进行协议处理、 可以 在提高协议处理效率, 并且应用在用户空间, 可以方便开发人员开发应用, 并 且, 本发明实施例中的几乎全部数据处理过程都集中在用户态操作, 内核态只 需要通过内存映射即可使用户态直接读取数据, 不需要数据从内核态拷贝到用 户态, 从而避免资源消耗。 附图说明
为了更清楚地说明本发明实施例中的技术方案, 下面将对实施例或现有 技术描述中所需要使用的附图作简单地介绍, 显而易见地, 下面描述中的附 图仅仅是本发明的一些实施例, 对于本领域普通技术人员来讲, 在不付出创 造性劳动性的前提下, 还可以根据这些附图获得其他的附图。
图 1为本发明实施例提供的数据处理装置一种实施例的结构图;
图 2为本发明实施例提供的数据处理装置一种实施例的结构图; 图 3 为本发明实施例提供的数据处理装置中内存管理模块的工作原理示意 图;
图 4为本发明实施例提供的数据处理装置再一种实施例的结构图; 图 5为本发明实施例提供的数据处理装置再一种实施例的结构图; 图 6本发明实施例提供的数据处理装置再一种实施例的结构图;
图 7为本发明实施例提供的数据处理方法一种实施例的流程图;
图 8为本发明实施例提供的数据处理方法又一种实施例的流程图; 图 9为本发明实施例提供的计算机主机的一种实施例的结构图。 具体实施方式
下面将结合本发明实施例中的附图, 对本发明实施例中的技术方案进行清 楚、 完整地描述, 显然, 所描述的实施例仅仅是本发明一部分实施例, 而不是 全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有做出创造 性劳动前提下所获得的所有其他实施例, 都属于本发明保护的范围。 下面通过 附图和实施例, 对本发明的技术方案做进一步的详细描述。
如图 1所示, 本发明实施例提供了一种数据处理装置, 包括 M个协议栈和 至少一个分发服务模块, 所述 M个协议栈和至少一个分发服务模块设置在操作 系统的用户空间, M为大于 1的正整数, 其中,
所述 M个协议栈, 分别运行于处理器的不同逻辑核, 用于根据协议处理规 则, 各自独立地对待处理数据包进行协议处理
所述分发服务模块, 根据预配置规则, 从至少一个网络接口上的至少一个 输入端口接收输入的数据包, 并将所述数据包存储至内存空间, 以便于所述 M 个协议栈中的一个协议栈对所述数据包进行协议处理; 以及接收所述 M个协议 栈处理后的数据包, 并将所述数据包通过所述网络接口上的输出端口向外部发 送, 其中, 所述内存空间是经过内存映射后的存储空间, 所述内存空间用户态 和内核态都可以访问。 其中, 所述的分发服务模块可以只有一个也可以与 Μ个协议栈——对应, 设置 Μ个, 分别运行在处理器的不同逻辑核。
处理器的不同逻辑核( multicore chips )是指在一枚处理器( chip )中集成多 个完整的计算引擎, 每个计算引擎称为一个逻辑核。
更具体的,所述的 M个用户态协议栈分别在 CPU的一个独立的逻辑核上执 行, 在多核系统中并行运行。 M 个协议栈独立进行协议处理, 相互之间并行, 没有交集。
如图 2所示, 在一种可能的实施方式中, 所述装置还包括位于内核空间的 网络输入 /输出模块, 所述分发服务模块通过所述网络输入 /输出模块从所述网络 接口的输入端口上接收数据包, 通过所述网络输入 /输出模块从所述网络接口的 输出端口上向外部发送数据包。
所述装置还包括设置在用户空间的内存管理模块, 如图 2和图 3所示, 所 述内存管理模块包括用于存储数据包的内存空间, 与所述 M个协议栈对应的 M 个输入队列, 用于存储需要所述 M个协议栈处理的数据包的指针, 与所述网络 接口上的 n个输出端口对应的 n个输出队列, 用于存储需要向外部发送的数据 包的指针, 所述指针指向所述数据包在所述内存空间的地址, n为大于 1的正整 数。
更具体的, 所述的内存空间所述内存空间是经过内存映射后的存储空间, 用户态和内核态都可以访问。
可选的, 所述内存管理模块还可以包括 i个进程间通信队列, i为大于 0的 正整数, 用于存储在进程之间传递的数据包的指针, 避免系统资源损耗。
所述网络输入 /输出模块将从所述网络接口接收到的数据包存储在所述内存 空间, 所述分发服务模块读取所述数据包的包头, 根据所述包头确定需要对该 数据包进行协议处理的协议栈, 并将所述数据包的指针插入与所述协议栈对应 的输入队列, 所述协议栈根据该协议栈对应的输入队列中的指针在所述内存空 间中读取数据包, 并对所述数据包进行协议处理。 所述的协议栈对所述数据包进行协议处理后, 根据数据包的目的地址等属 性判断该数据包是否需要提交给用户应用, 如果处理后的所述数据包需要提交 给用户应用, 则将所述数据包拷贝给所述用户应用; 如果处理后的所述数据包 需要向外部发送, 则所述协议栈将所述数据包的指针插入所述内存管理模块中 的输出队列, 所述分发服务模块根据所述输出队列中的指针, 通过所述网络输 入 /输出模块将所述指针对应的数据包从与所述输出队列对应的网络接口上的输 出端口发送, 所述的数据包的指针指向所述数据包在所述内存空间的地址。
在本实施例中, 所述 M个协议栈中遵循的协议处理规则, 可以根据系统默 认配置, 也可以通过如图 4所示的实施例中另外设置的协议配置模块进行配置, 协议配置模块可对系统相关参数进行配置。 包括协议栈运行参数配置、 CPU逻 辑核资源分配、 数据分发策略配置等内容。
在一种可能的实施方式中, 如图 4 所示, 还可以在用户空间设置通用协议 处理模块, 用于对需要通用协议处理的数据包进行通用协议处理, 所述分发服 务模块根据接收到的数据包的所述包头确定需要对该数据包需要进行通用协议 处理时, 并将所述数据包的指针发送给所述通用协议处理模块对所述指针对应 的数据包进行通用协议处理。 例如 ARP (Address Resolution Protocol, 地址解析 协议)或者 ICMP ( Internet Control Message Protocol, Internet控制报文协议 )等处 理较为简单的协议, 可以通过通用协议处理模块进行协议处理。
该通用协议处理模块实现较为简单的、 通用的协议处理过程, 在整个系统 中可有一个通用协议处理模块, 并可根据系统的负载, 将特定的协议从所述协 议栈中卸载至通用协议处理模块, 或将本模块处理的特定协议卸载到协议栈模 块进行处理。 通用协议处理模块也位于操作系统的用户空间。
通过上述实施例, 在操作系统用户空间设置并且分别运行处理器不同逻辑 核的 M个协议栈对至少一个分发服务模块从外部接收到的数据包, 根据预配置 规则, 各自独立地对待处理数据包进行协议处理、 可以在提高协议处理效率, 并且应用在用户空间, 可以方便开发人员开发应用, 并且, 本发明实施例中的 几乎全部数据处理过程都集中在用户态操作, 内核态只需要通过内存映射, 将 数据通过内存映射透传到用户态, 从而避免资源消耗。
如图 5所示, 在本发明实施例提供的数据处理装置的一种实施例中, 本 发明技术实现的具体实施例中, 在用户空间只包含一个分发处理模块, 所述分 发处理模块可以运行于多核操作系统中处理器的任何一个逻辑核,
分发服务模块采用轮询( poll )方式从多个网络接口的不同输入端口抓取数 据包, 并通过网络输入 /输出 (I/O )模块采用内存映射的方式将接收到的数据包 直接存储在内存管理模块的内存空间中, 并通过内存映射的方式将该区域内存 映射至用户空间。
网络输入 /输出模块将接收到的数据包存储在内存空间后, 分发服务模块到 内存空间读取这些数据包, 并根据数据包的包头等属性, 确定该些数据包需要 哪个协议栈处理, 之后, 根据分发策略, 将数据包指针插入该协议栈对应的输 入队列。 每个用户态协议栈对应一个输入队列, 分发至该协议栈的数据包指针 均插入该接收队列。 协议栈根据该输入队列中的指针在内存空间读取数据包, 并进行协议处理。
在进行协议处理之后, 如果协议处理后的数据包需要用户应用处理, 则协 议栈将处理后的数据包拷贝给对应的用户应用, 反之则根据需要将数据包丟弃, 或者将数据包存储到内存空间。
如果协议处理后的数据包需要向外部发送, 则协议栈根据数据包的目的地 址等属性, 将数据包的指针插入到内存管理模块中的输出队列中, 输出队列与 网络接口上的输出端口具有配置的对应关系。 之后, 由于分发服务模块与网络 接口上的输出端口具有配置好的对应关系, 因此分发服务模块, 在输出队列中 取出与对应的输出端口相应的指针, 并读取指针对应的数据包, 通过网络输入 / 输出模块会将该输出队列中指针对应的数据发送给相应的网络接口上的输出端 口。
通过上述实施例, 通过上述实施例, 在操作系统用户空间设置并且分别运 行处理器不同逻辑核的 M 个协议栈对一个分发服务模块从外部接收到的数据 包, 根据配置好的协议处理规则, 各自独立地对待处理数据包进行协议处理、 可以在提高协议处理效率, 并且应用在用户空间, 可以方便开发人员开发应用, 并且, 本发明实施例中的几乎全部数据处理过程都集中在用户态操作, 内核态 只需要通过内存映射, 将数据通过内存映射透传到用户态, 从而避免资源消耗。
图 6为本发明实施例提供的另外一种数据处理装置的结构图, 与图 5所示 的实施例的区别在于, 在用户空间包含 M个分发服务模块, 每个分发服务模块 分别与一个协议栈对应。
在一种可能的是实施方式中, 每个分发服务模块对应一个网络接口, 只负 责该网络接口的数据收发。
在另一种实施方式中, 如果网络接口具有 RSS功能, 也就是说, 具有 RSS 功能的网络接口可将从网络接口收到的数据包, 根据其包头内容, 将其分发至 不同的硬件队列。 则在该种模式下, 一个分发服务模块可对应一个或多个接口 卡上的一个或多个硬件队列, 从这些硬件队列上收取数据包, 并进行分发。 类 似的, 发送数据包的过程也具有相似的过程。 与网络接口硬件 RSS功能配合使 用的多分发服务模块配置方式示意图, 如图 6所示。
在系统配置多分发服务模块的情况下, 分发服务模块从一个或多个队列抓 取数据包。 分发服务模块采用轮询模式, 通过网络输入 /输出模块依次从多个网 络接口的多接收队列抓取数据包, 将抓取的数据包存储在内存管理模块的内存 空间中。 分发服务模块根据抓取的数据包, 根据其数据包头或数据包其他部分 的值, 将数据包的指针发送至特定协议栈对应的输入队列。
分发服务模块识别特定的通用协议, 如 ARP、 ICMP等, 可将此类数据包 发送至通用协议处理模块处理。 当然, 可根据配置的不同, 也可将 ARP等协议 放在协议栈模块进行处理。
通过上述实施例, 在操作系统用户空间设置并且分别运行处理器不同逻辑 核的 M个协议栈对与 M个协议栈——对应的分发服务模块从外部接收到的数据 包, 根据预配置规则, 各自独立地对待处理数据包进行协议处理、 可以在提高 协议处理效率, 并且应用在用户空间, 可以方便开发人员开发应用, 并且, 本 发明实施例中的几乎全部数据处理过程都集中在用户态操作, 内核态只需要通 过内存映射, 将数据通过内存映射透传到用户态, 从而避免资源消耗。
相应的, 如图 7所示, 本发明实施例还提供了一种数据处理方法, 应用于 操作系统的用户空间, 所述方法包括:
701 , 从网络接口的输入端口接收数据包;
具体的, 分发服务模块采用轮询的方式向网络接口的硬件输入端口抓取数 据包, 每个分发服务模块之间抓取数据包彼此独立并行。
702, 将接收到的所述数据包存储到内存管理模块的内存空间;
具体的, 所述内存空间是经过内存映射后的存储空间, 所述内存空间用户 态和内核态都可以访问; 网络输入 /输出模块通过内存映射的方式将接收到的数 据包存储到内存管理模块的内存空间, 每个数据包具有不同的地址。
703 , 根据所述数据包的属性, 确定需要对所述数据包进行协议处理的第一 协议栈;
具体的, 如果只有一个分发服务模块, 则该分发服务模块需要将读取每个 接收到的数据包的包头, 并根据包头中携带的属性, 确定该数据包需要哪个协 议栈处理, 所述第一协议栈为 M个协议栈中的一个, 所述 M个协议栈和所述分 发服务模块设置在操作系统的用户空间, M为大于 1的正整数;
如果系统存在多个分发服务模块, 并且所述分发服务模块分别与一个网络 接口对应, 则一个分发服务模块只读取其对应的网络接口的输入端口上接收到 的数据包, 并根据该数据包的包头中的信息判断该数据包需要哪个协议栈处理。
704, 将所述数据包的指针插入与所述第一协议栈对应的输入队列, 所述数 据包的指针指向所述数据包在所述内存空间中的地址, 以便于所述第一协议栈 根据所述输入队列中的所述指针, 从所述内存空间中取出所述指针对应的数据 包, 根据协议处理规则对所述数据包进行协议处理。 具体的, 在分发服务模块为接收到的数据包确定完需要哪个协议栈进行协 议处理后, 将指向该数据包在所述内存空间中的地址的指针插入到该协议栈对 应的输入队列。
所述第一协议栈, 根据所述输入队列中的所述指针, 从所述内存空间中取 出所述指针对应的数据包, 根据协议处理规则对所述数据包进行协议处理。
更具体的, 每个协议栈只关注自身对应的输入队列中的指针, 在输入队列 中取出指针, 并根据指针到内存空间中获取数据包, 进行协议处理。
所述第一协议栈在对所述数据包协议处理后, 如果所述协议处理后的数据 包需要用户应用程序处理, 则所述第一协议栈将所述协议处理后的数据包拷贝 给所述用户应用程序。
如果不需要应用程序处理, 根据所述数据包的种类, 需要保存的数据包, 继续存储在内存空间, 不需要保存的数据包, 被协议栈丟弃。
如果所述协议处理后的所述数据包需要向外部发送, 则所述协议栈将所述 数据包的指针插入所述内存管理模块的输出队列, 以便于所述分发服务模块根 据所述输出队列中的指针, 将与所述指针对应的数据包从与所述输出队列对应 的网络接口输出端口发送。
通过上述实施例, 在操作系统用户空间设置并且分别运行处理器不同逻辑 核的 Μ个协议栈对至少一个分发服务模块从外部接收到的数据包, 根据预配置 规则, 各自独立地对待处理数据包进行协议处理、 可以在提高协议处理效率。
相应的, 如图 8所示, 本发明实施例还提供了一种数据处理方法, 所述的 方法包括:
801. 第一协议栈将协议处理后的数据包存储到内存空间, 并根据所述数据 包的属性将所述数据包的指针插入输出队列, 以便于分发服务模块从所述输出 队列读取所述指针对应的所述数据包, 通过与所述输出队列对应的所述网络接 口上的输出端口将所述指针对应的数据包向外部发送;
所述第一协议栈为位于用户空间的 Μ个协议栈中的一个,所述 Μ个协议栈 分别运行在处理器一个逻辑核, 所述 M个协议栈和所述分发服务模块设置在操 作系统的用户空间; 所述输出队列与网络接口上的输出端口对应, 所述数据包 的指针指向所述数据包在所述内存空间中的地址, 所述内存空间是经过内存映 射后的存储空间, 所述内存空间用户态和内核态都可以访问, M为大于 1 的正 整数。
具体而言, 被协议栈处理后的数据包, 在不需要被用户应用处理时, 直接 被存储在内存空间, 如果该数据包, 需要向外部发送, 则所述的第一协议栈还 需要将该数据包的指针根据数据包的目的地址等属性, 插入到相应的输出端口 对应的输出队列。
分发服务模块根据数据包的指针, 在内存空间中读取数据包。 如果系统中 存在多个分发服务模块, 每个分发服务模块与网络接口上的网络输出端口具有 配置好的对应关系, 则每个分发服务模块只需要读取与其对应的输出队列中的 指针对应的数据包。
分发服务模块通过所述网络接口上的输出端口将所述指针对应的数据包向 外部发送; 在确定该数据包需要在哪个网络接口的输出端口输出之后, 通过网 络输入 /输出模块将所述内存空间中与所述指针对应的数据包发送到所述网络接 口上的所述输出端口, 所述输出端口与所述输出队列对应。
通过上述实施例, 在操作系统用户空间设置并且分别运行处理器不同逻辑 核的 M个协议栈对至少一个分发服务模块从外部接收到的数据包, 根据预配置 规则, 各自独立地对待处理数据包进行协议处理、 可以在提高协议处理效率。 并且, 在需要发送数据包时, 用户态的各功能实体之间只需要传递指向数据包 的指针, 而无需数据包拷贝, 从而降低了资源消耗。
相应的, 如图 9所示, 本发明实施例还提供了一种计算机主机, 所述计算 机主机包括硬件层、 运行在硬件层之上的操作系统层, 所述硬件层包括至少一 个网络接口和至少一个处理器, 所述处理器包括至少 M个逻辑核, M为大于 1 的正整数, 所述操作系统分为内核空间和用户空间, 在所述用户空间设置有: M个协议栈,分别运行于处理器的 M个逻辑核,所述 M个协议栈用于根据 协议处理规则, 各自独立地对待处理数据包进行协议处理;
所述分发服务模块, 用于根据预配置规则, 从至少一个网络接口上的至少 一个输入端口接收输入的数据包, 并将所述数据包存储至内存空间, 以便于所 述 M个协议栈中的一个协议栈对所述数据包进行协议处理;以及接收所述 M个 协议栈处理后的数据包, 并将所述数据包通过所述网络接口上的输出端口向外 部发送, 所述内存空间是经过内存映射后的存储空间, 所述内存空间用户态和 内核态都可以访问。
在一种可选的是实施例中, 在所述用户空间还设置有:
内存管理模块, 包括用于存储数据包的所述内存空间, 与所述 M个协议栈 对应的 M个输入队列,以及与所述网络接口上的所述输出端口对应的输出队列; 所述 M个输入队列用于存储需要所述 M个协议栈处理的数据包的指针, 所述输 出队列用于存储需要向外部发送的数据包的指针, 所述指针指向所述数据包在 所述内存空间的地址, 所述内存空间是经过内存映射后的存储空间, 所述存储 空间用户态和内核态都可以访问;
所述分发服务模块具体用于: 将从所述网络接口上的所述输入端口接收到 的数据包存储在所述内存空间; 读取所述数据包的包头, 根据所述包头确定需 要对该数据包进行协议处理的协议栈, 并将所述数据包的指针插入与所述协议 栈对应的所述输入队列;
所述协议栈具体用于:
根据该协议栈对应的所述输入队列中的指针在所述内存空间中读取数据 包, 并对所述数据包进行协议处理。
若经过所述协议处理后的数据包需要提交给用户应用, 则将所述数据包拷 贝给所述用户应用;
若经过所述协议处理后的数据包需要向外部发送, 则所述协议栈将所述数 据包的指针插入所述内存管理模块的输出队列; 所述分发服务模块具体用于: 根据所述输出队列中的指针, 将与所述指针 对应的数据包从与所述输出队列对应的网络接口上的输出端口发送。
在一种可选的是实施例中, 所述分发服务模块数目为 M个, 分别与所述 M 个协议栈对应, 所述 M个分发服务模块分别运行在处理器的 M个逻辑核。
需说明的是, 本发明实施例采用递进描述, 各个实施例的相同或相似的部 分可以相互借鉴。
通过上述实施例提供的计算机主机, 可以实现在多核的环境下, 利用多核 系统并行处理的特性, 在操作系统用户空间实现多进程并行协议处理的功能, 并减少数据包拷贝造成的资源消耗。
专业人员应该还可以进一步意识到, 结合本文中所公开的实施例描述的各 示例的单元及算法步骤, 能够以电子硬件、 计算机软件或者二者的结合来实现, 为了清楚地说明硬件和软件的可互换性, 在上述说明中已经按照功能一般性地 描述了各示例的组成及步骤。 这些功能究竟以硬件还是软件方式来执行, 取决 于技术方案的特定应用和设计约束条件。 专业技术人员可以对每个特定的应用 来使用不同方法来实现所描述的功能, 但是这种实现不应认为超出本发明的范 围。
结合本文中所公开的实施例描述的方法或算法的步骤可以用硬件、 处理器 执行的软件模块, 或者二者的结合来实施。 软件模块可以置于随机存储器 ( RAM ) , 内存、只读存储器(ROM )、 电可编程 ROM、 电可擦除可编程 ROM、 寄存器、 硬盘、 可移动磁盘、 CD-ROM、 或技术领域内所公知的任意其它形式 的存储介质中。
以上所述的具体实施方式, 对本发明的目的、 技术方案和有益效果进行了 进一步详细说明, 所应理解的是, 以上所述仅为本发明的具体实施方式而已, 并不用于限定本发明的保护范围, 凡在本发明的精神和原则之内, 所做的任何 修改、 等同替换、 改进等, 均应包含在本发明的保护范围之内。

Claims

权 利 要 求
1、 一种数据处理装置, 其特征在于, 包括 M个协议栈和至少一个分发服 务模块, 所述 M个协议栈和至少一个分发服务模块设置在操作系统的用户空 间, M为大于 1的正整数, 其中,
所述 M个协议栈, 分别运行于处理器的不同逻辑核, 所述 M个协议栈用 于根据协议处理规则, 各自独立地对待处理数据包进行协议处理;
所述分发服务模块, 用于根据预配置规则, 从至少一个网络接口上的至少 一个输入端口接收数据包, 并将所述数据包存储至内存空间, 以便于所述 M个 协议栈中的一个协议栈对所述数据包进行协议处理; 以及接收所述 M个协议栈 处理后的数据包,并将所述数据包通过所述网络接口上的输出端口向外部发送, 其中, 所述内存空间是经过内存映射后的存储空间, 所述内存空间用户态和内 核态都可以访问。
2、 如权利要求 1所述的数据处理装置, 其特征在于, 还包括在所述操作系 统的用户空间设置的内存管理模块;
所述内存管理模块, 包括用于存储数据包的所述内存空间, 与所述 M个协 议栈对应的 M个输入队列,以及与所述网络接口上的所述输出端口对应的输出 队列; 所述 M个输入队列用于存储需要所述 M个协议栈处理的数据包的指针, 所述输出队列用于存储需要向外部发送的数据包的指针, 所述指针指向所述数 据包在所述内存空间的地址;
所述分发服务模块具体用于: 从所述网络接口上的所述输入端口接收到的 数据包存储在所述内存空间; 读取所述数据包的包头, 根据所述包头确定需要 对该数据包进行协议处理的协议栈, 并将所述数据包的指针插入与所述确定的 所述协议栈对应的所述输入队列;
所述协议栈具体用于:
根据该协议栈对应的所述输入队列中的指针在所述内存空间中读取数据 包, 并对所述数据包进行协议处理;
若经过所述协议处理后的数据包需要提交给用户应用, 则将所述数据包拷 贝给所述用户应用;
若经过所述协议处理后的数据包需要向外部发送, 则所述协议栈将所述数 据包的指针插入所述内存管理模块的输出队列;
所述分发服务模块具体用于: 根据所述输出队列中的指针, 将与所述指针 对应的数据包从与所述输出队列对应的网络接口上的输出端口发送。
3、 如权利要求 1或 2任一项所述的数据处理装置, 其特征在于, 所述分发 服务模块数目为 M个, 分别与所述 M个协议栈对应, 所述 M个分发服务模块 分别运行在处理器不同的逻辑核。
4、 如权利要求 1至 3任一项所述的数据处理装置, 其特征在于, 还包括: 协议配置模块, 用于对所述协议栈进行协议处理规则的配置。
5、 如权利要求 1至 4任一项所述的数据处理装置, 其特征在于, 还包括: 通用协议处理模块,用于对需要通用协议处理的数据包进行通用协议处理; 所述分发服务模块还用于: 根据接收到的数据包的所述包头确定需要对该 数据包需要进行通用协议处理时, 将所述数据包的指针发送给所述通用协议处 理模块, 以便于所述通用协议处理模块对所述指针对应的数据包进行通用协议 处理。
6、 如权利要求 1至 5任一项所述的数据处理装置, 其特征在于, 还包括设 置在所述操作系统的内核空间的网络输入 /输出模块;
所述分发服务模块具体用于: 通过所述网络输入 /输出模块从所述网络接口 的输入端口上接收数据包, 通过所述网络输入 /输出模块从所述网络接口的输出 端口上向外部发送数据包。
7、 如权利要求 6所述的数据处理装置, 其特征在于, 所述分发服务模块采 用轮询的方式通过所述网络输入 /输出模块从所述网络接口的不同输入端口接 收数据包。
8、 一种数据处理方法, 其特征在于, 应用于操作系统的用户空间, 所述方 法包括: 从网络接口的输入端口接收数据包;
将接收到的所述数据包存储到内存空间, 所述内存空间是经过内存映射后 的存储空间, 所述内存空间用户态和内核态都可以访问;
根据所述数据包的属性, 确定需要对所述数据包进行协议处理的第一协议 栈, 所述第一协议栈为 M个协议栈中的一个, 所述 M个协议栈设置在操作系 统的用户空间, M为大于 1的正整数;
将所述数据包的指针插入与所述第一协议栈对应的输入队列, 所述数据包 的指针指向所述数据包在所述内存空间中的地址, 以便于所述第一协议栈根据 所述输入队列中的所述指针, 从所述内存空间中取出所述指针对应的数据包, 根据协议处理规则对所述数据包进行协议处理。
9、 如权利要求 8所述的数据处理方法, 其特征在于, 如果所述协议处理后 的数据包需要用户应用程序处理, 则所述第一协议栈将所述协议处理后的数据 包拷贝给所述用户应用程序; 如果所述协议处理后的所述数据包需要向外部发 送, 则所述第一协议栈将所述数据包的指针插入输出队列;
所述方法还包括:
若所述第一协议栈将所述数据包的指针插入输出队列, 则根据所述输出队 列中的指针, 将与所述指针对应的数据包从与所述输出队列对应的网络接口的 输出端口发送。
10. A data processing method, characterized in that the method comprises:

storing, by a first protocol stack, a data packet after protocol processing in a memory space, and inserting, according to an attribute of the data packet, a pointer of the data packet into an output queue, so that a distribution service module reads, from the output queue, the data packet corresponding to the pointer and sends the data packet corresponding to the pointer externally through an output port on a network interface corresponding to the output queue;

wherein the first protocol stack is one of M protocol stacks located in a user space, each of the M protocol stacks runs on one logical core of a processor, and the M protocol stacks and the distribution service module are disposed in the user space of an operating system; the output queue corresponds to the output port on the network interface, the pointer of the data packet points to an address of the data packet in the memory space, the memory space is a storage space obtained after memory mapping and is accessible from both user mode and kernel mode, and M is a positive integer greater than 1.

11. The data processing method according to claim 10, characterized in that sending the data packet corresponding to the pointer externally through the output port on the network interface corresponding to the output queue specifically comprises:

sending, through a network input/output module, the data packet corresponding to the pointer in the memory space to the output port on the network interface corresponding to the output queue,

wherein the network input/output module is disposed in a kernel space of the operating system.
12. A computer, characterized by comprising a hardware layer and an operating system layer running on the hardware layer, wherein the hardware layer comprises at least one network interface and at least one processor, the processor comprises at least M logical cores, M is a positive integer greater than 1, the operating system is divided into a kernel space and a user space, and the user space is provided with:

M protocol stacks, running respectively on the M logical cores of the processor, wherein the M protocol stacks are configured to perform, according to protocol processing rules, protocol processing on to-be-processed data packets independently of one another; and

a distribution service module, configured to: receive, according to a preconfigured rule, data packets from at least one input port on the at least one network interface and store the data packets in a memory space, so that one of the M protocol stacks performs protocol processing on the data packets; and receive the data packets processed by the M protocol stacks and send the data packets externally through an output port on the network interface, wherein the memory space is a storage space obtained after memory mapping and is accessible from both user mode and kernel mode.

13. The computer according to claim 12, characterized in that the user space is further provided with a memory management module, wherein

the memory management module comprises the memory space for storing data packets, M input queues corresponding to the M protocol stacks, and an output queue corresponding to the output port on the network interface; the M input queues are configured to store pointers of data packets to be processed by the M protocol stacks, the output queue is configured to store pointers of data packets to be sent externally, and the pointers point to addresses of the data packets in the memory space;

the distribution service module is specifically configured to: store a data packet received from the input port on the network interface in the memory space; read the packet header of the data packet; determine, according to the packet header, the protocol stack that needs to perform protocol processing on the data packet; and insert a pointer of the data packet into the input queue corresponding to the protocol stack;

the protocol stack is specifically configured to:

read the data packet from the memory space according to the pointer in the input queue corresponding to that protocol stack, and perform protocol processing on the data packet;

if the data packet after the protocol processing needs to be delivered to a user application, copy the data packet to the user application; and

if the data packet after the protocol processing needs to be sent externally, insert the pointer of the data packet into the output queue of the memory management module; and

the distribution service module is further specifically configured to: send, according to the pointer in the output queue, the data packet corresponding to the pointer through the output port on the network interface corresponding to the output queue.

14. The computer according to claim 12 or 13, characterized in that the number of distribution service modules is M, the M distribution service modules correspond respectively to the M protocol stacks, and the M distribution service modules run respectively on the M logical cores of the processor.
PCT/CN2013/087107 2013-05-09 2013-11-14 Data processing apparatus and data processing method WO2014180110A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/936,118 US10241830B2 (en) 2013-05-09 2015-11-09 Data processing method and a computer using distribution service module

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310169222.1 2013-05-09
CN201310169222.1A CN104142867B (zh) 2013-05-09 2013-05-09 Data processing apparatus and data processing method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/936,118 Continuation US10241830B2 (en) 2013-05-09 2015-11-09 Data processing method and a computer using distribution service module

Publications (2)

Publication Number Publication Date
WO2014180110A1 true WO2014180110A1 (zh) 2014-11-13
WO2014180110A9 WO2014180110A9 (zh) 2015-02-12

Family

ID=51852048

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/087107 WO2014180110A1 (zh) 2013-05-09 2013-11-14 Data processing apparatus and data processing method

Country Status (3)

Country Link
US (1) US10241830B2 (zh)
CN (2) CN104142867B (zh)
WO (1) WO2014180110A1 (zh)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106469168B (zh) * 2015-08-19 2019-11-26 阿里巴巴集团控股有限公司 Method and apparatus for multi-type data processing in a data integration system
CN106339435B (zh) * 2016-08-19 2020-11-03 中国银行股份有限公司 Data distribution method, apparatus, and system
CN106502806B (zh) * 2016-10-31 2020-02-14 华为技术有限公司 Bus protocol command processing apparatus and related method
CN106850565B (zh) * 2016-12-29 2019-06-18 河北远东通信系统工程有限公司 High-speed network data transmission method
CN108270813B (zh) * 2016-12-30 2021-02-12 华为技术有限公司 Heterogeneous multi-protocol-stack method, apparatus, and system
CN108366018B (zh) * 2017-01-26 2020-11-27 普天信息技术有限公司 Network data packet processing method based on DPDK
CN107153527B (zh) * 2017-05-17 2020-10-13 北京环境特性研究所 Parallel radar data processing method based on message queues
US10332235B1 (en) 2018-05-01 2019-06-25 At&T Intellectual Property I, L.P. Direct memory access for graphics processing unit packet processing
CN109379303A (zh) * 2018-08-22 2019-02-22 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Parallel processing framework system and method for improving 10-Gigabit Ethernet performance
CN109343977B (zh) * 2018-09-21 2021-01-01 新华三技术有限公司成都分公司 Cross-mode communication method and channel driver apparatus
US11489791B2 (en) * 2018-10-31 2022-11-01 Intel Corporation Virtual switch scaling for networking applications
US10795840B2 (en) 2018-11-12 2020-10-06 At&T Intellectual Property I, L.P. Persistent kernel for graphics processing unit direct memory access network packet processing
CN109547580B (zh) * 2019-01-22 2021-05-25 网宿科技股份有限公司 Method and apparatus for processing data packets
CN111752604A (zh) * 2019-03-27 2020-10-09 阿里巴巴集团控股有限公司 Processor with multiple operating modes
CN110209434B (zh) * 2019-04-23 2022-04-22 努比亚技术有限公司 Memory management method and apparatus, and computer-readable storage medium
CN110278161B (zh) * 2019-05-06 2020-08-11 阿里巴巴集团控股有限公司 Message shunting method, apparatus and system based on a user-mode protocol stack
US10904719B2 (en) 2019-05-06 2021-01-26 Advanced New Technologies Co., Ltd. Message shunting method, device and system based on user mode protocol stack
CN110557369A (zh) * 2019-07-25 2019-12-10 中国航天系统科学与工程研究院 High-speed data processing platform based on the kernel mode of a domestic operating system
CN111600833B (zh) * 2019-07-30 2022-08-26 新华三技术有限公司 Network operating system and packet forwarding method
CN110417791A (zh) * 2019-08-02 2019-11-05 成都卫士通信息产业股份有限公司 Cryptographic device, and method and apparatus for processing network data
CN112437032B (zh) * 2019-08-24 2023-04-18 北京希姆计算科技有限公司 Data transceiving apparatus and method, storage medium, and electronic device
CN111182063B (zh) * 2019-12-30 2022-09-09 奇安信科技集团股份有限公司 Data processing method applied to an electronic device, electronic device, and medium
CN111404818B (zh) * 2020-03-12 2022-04-15 深圳市风云实业有限公司 Routing protocol optimization method for general-purpose multi-core network processors
US20220342655A1 (en) * 2021-04-22 2022-10-27 STMicroelectronics (Grand Ouest) SAS Microcontroller, computer program product, and method for adding an additional function to a computer program
CN113225257B (zh) * 2021-04-27 2022-04-12 深圳星耀智能计算技术有限公司 UPF data processing method, system, and storage medium
CN115001874A (zh) * 2022-08-04 2022-09-02 成都卫士通信息产业股份有限公司 Data transmission method, apparatus, device, and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060088009A1 (en) * 2004-10-22 2006-04-27 Fraser Gibbs Method for transferring data in a wireless network
CN101170511A (zh) * 2007-11-20 2008-04-30 中兴通讯股份有限公司 Apparatus and method for implementing multi-core processor communication in an embedded operating system
CN101867558A (zh) * 2009-04-17 2010-10-20 深圳市永达电子股份有限公司 User-mode network protocol stack system and packet processing method
CN102158414A (zh) * 2011-04-12 2011-08-17 中兴通讯股份有限公司 Protocol processing method and apparatus for an intermediate device

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6246683B1 (en) * 1998-05-01 2001-06-12 3Com Corporation Receive processing with network protocol bypass
US6675218B1 (en) 1998-08-14 2004-01-06 3Com Corporation System for user-space network packet modification
GB2353676A (en) * 1999-08-17 2001-02-28 Hewlett Packard Co Robust encryption and decryption of packetised data transferred across communications networks
FI115940B (fi) * 2002-06-10 2005-08-15 First Hop Oy Method and apparatus for implementing quality of service in data transmission
EP1387279B1 (en) * 2002-07-31 2008-05-14 Texas Instruments Inc. Cache coherency in a multi-processor system
JP4211374B2 (ja) * 2002-12-09 2009-01-21 ソニー株式会社 Communication processing apparatus, communication processing method, and computer program
US7319670B2 (en) * 2003-02-08 2008-01-15 Hewlett-Packard Development Company, L.P. Apparatus and method for transmitting data to a network based on retransmission requests
US20040176942A1 (en) * 2003-03-04 2004-09-09 International Business Machines Corporation Method, system and program product for behavioral simulation(s) of a network adapter within a computing node or across multiple nodes of a distributed computing environment
JP4492618B2 (ja) * 2007-01-18 2010-06-30 トヨタ自動車株式会社 Vehicle control system
CN101030975B (zh) * 2007-02-15 2010-05-26 重庆重邮信科通信技术有限公司 Processing method for improving the response speed of protocol stack AT commands
US7992153B2 (en) * 2007-05-30 2011-08-02 Red Hat, Inc. Queuing for thread pools using number of bytes
CN101971578B (zh) * 2007-12-28 2014-07-30 茨特里克斯系统公司 TCP packet spacing
US8141084B2 (en) * 2008-04-07 2012-03-20 International Business Machines Corporation Managing preemption in a parallel computing system
US20090296685A1 (en) 2008-05-29 2009-12-03 Microsoft Corporation User-Mode Prototypes in Kernel-Mode Protocol Stacks
US20100135179A1 (en) * 2008-11-28 2010-06-03 International Business Machines Corporation Communication device
CN101951378B (zh) * 2010-09-26 2013-09-18 北京品源亚安科技有限公司 Protocol stack system and data processing method for SSL VPN
US8839267B2 (en) * 2011-02-21 2014-09-16 Universidade Da Coruna-Otri Method and middleware for efficient messaging on clusters of multi-core processors
CN102801695B (zh) * 2011-05-27 2015-10-14 华耀(中国)科技有限公司 Virtual private network communication device and data packet transmission method thereof
CN102339234B (zh) 2011-07-12 2013-04-17 迈普通信技术股份有限公司 Protocol stack running apparatus and method
CN102662910B (zh) * 2012-03-23 2014-10-15 浙江大学 Network interaction architecture and network interaction method based on an embedded system

Also Published As

Publication number Publication date
CN108268328B (zh) 2022-04-22
CN104142867B (zh) 2018-01-09
CN104142867A (zh) 2014-11-12
WO2014180110A9 (zh) 2015-02-12
US10241830B2 (en) 2019-03-26
US20160077872A1 (en) 2016-03-17
CN108268328A (zh) 2018-07-10

Similar Documents

Publication Publication Date Title
WO2014180110A1 (zh) Data processing apparatus and data processing method
US9300578B2 (en) Large receive offload functionality for a system on chip
US8660133B2 (en) Techniques to utilize queues for network interface devices
TWI239164B (en) Multiprotocol decapsulation/encapsulation control structure and packet protocol conversion method
WO2023005773A1 (zh) Packet forwarding method and apparatus based on remote direct data storage, network interface card, and device
US9584628B2 (en) Zero-copy data transmission system
US11902184B2 (en) Methods and systems for providing a virtualized NVMe over fabric service
Rashti et al. iWARP redefined: Scalable connectionless communication over high-speed Ethernet
JP2022179412A (ja) Method and system for service distribution using data path state replication and intermediate device mapping
JP5479710B2 (ja) Processor-server hybrid system and method for processing data
US11593294B2 (en) Methods and systems for loosely coupled PCIe service proxy over an IP network
EP3466015B1 (en) Method and network node for handling sctp packets
CN113422792B (zh) Data transmission method and apparatus, electronic device, and computer storage medium
US20090285207A1 (en) System and method for routing packets using tags
EP4393131A1 (en) System for storage of received messages
Neeser et al. SoftRDMA: Implementing iWARP over TCP kernel sockets
Inoue et al. Low-latency and high bandwidth TCP/IP protocol processing through an integrated HW/SW approach
Batmaz et al. CoAP acceleration on FPSoC for resource constrained Internet of Things devices
Miura et al. Xmcapi: Inter-core communication interface on multi-chip embedded systems
MacArthur Userspace RDMA verbs on commodity hardware using DPDK
US12007921B2 (en) Programmable user-defined peripheral-bus device implementation using data-plane accelerator (DPA)
US20230060132A1 (en) Coordinating data packet processing between kernel space and user space
US11949589B2 (en) Methods and systems for service state replication using original data packets
Cascallana et al. Collecting packet traces at high speed
Kim et al. Offloading Socket Processing for Ubiquitous Services.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13884190

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13884190

Country of ref document: EP

Kind code of ref document: A1