CN106850565B - A high-speed network data transmission method - Google Patents

A high-speed network data transmission method

Info

Publication number
CN106850565B
CN106850565B (application CN201611245469.7A)
Authority
CN
China
Prior art keywords
data
interface
cache
dpaa
user
Prior art date
Legal status
Active
Application number
CN201611245469.7A
Other languages
Chinese (zh)
Other versions
CN106850565A (en)
Inventor
候春辉
张松轶
李学成
康志杰
Current Assignee
HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING Co Ltd
Original Assignee
HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING Co Ltd
Priority date
Filing date
Publication date
Application filed by HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING Co Ltd filed Critical HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING Co Ltd
Priority to CN201611245469.7A priority Critical patent/CN106850565B/en
Publication of CN106850565A publication Critical patent/CN106850565A/en
Application granted granted Critical
Publication of CN106850565B publication Critical patent/CN106850565B/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/06 Notations for structuring of protocol data, e.g. abstract syntax notation one [ASN.1]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/544 Buffers; Shared memory; Pipes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogram communication
    • G06F9/546 Message passing systems or structures, e.g. queues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/50 Network services
    • H04L67/56 Provisioning of proxy services
    • H04L67/568 Storing data temporarily at an intermediate stage, e.g. caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/54 Indexing scheme relating to G06F9/54
    • G06F2209/548 Queue

Abstract

The invention discloses a high-speed network data transmission method that removes the cumbersome processing a user-space application performs when sending and receiving network data. Built on Freescale's DPAA hardware, it designs a DPAA service process responsible for forwarding data between the Ethernet interface and the user-space application. Bypassing the Ethernet driver, the method calls the cache management and queue management interfaces in user space to operate the DPAA hardware for transmitting and receiving network data. A simplified network protocol stack, implemented on the network processor's hardware resources, accelerates packet processing. Because the DPAA service process shares memory with the user-space application, sending and receiving data only requires exchanging data pointers over a message queue, realizing zero-copy between kernel-space and user-space packets. This saves the cumbersome processing of the traditional Linux data path: the I/O system calls, the kernel thread context switches, and the I/O copies incurred when the user-space application accesses the data caches. The result is a very thin, very efficient layer between the user-space application and the hardware that improves network data processing speed.

Description

A high-speed network data transmission method
Technical field
The present invention relates to the field of computer network data transmission, and in particular to a scheme for realizing high-speed network data transmission.
Background technique
DPAA (Data Path Acceleration Architecture) is the data path acceleration architecture of Freescale's QorIQ platform. It processes specific data streams inside the platform and supports routing and managing the related stream processing tasks. DPAA's main modules are the queue manager, the cache manager, and the frame manager. The queue manager is responsible for the data queues between the CPUs, the network interfaces, and the hardware accelerators; it manages data as queues and provides congestion management, queue priority scheduling, packet ordering, and order restoration. The cache manager mainly provides buffer pool management: software allocates suitable caches and hands them to the cache manager. The frame manager is a hardware frame accelerator that supports parsing, classifying, and distributing (PCD) data packets; it is the entry and exit point of network data, helps realize load balancing and QoS, and aggregates the Ethernet functions.
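Purely as an illustration of how these three modules cooperate, the flow can be sketched as a frame descriptor, pointing into a cache-manager buffer, moving through a queue-manager frame queue. All type names, fields, and the queue depth below are invented for this sketch; they are not the actual DPAA driver API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the DPAA data path: the frame manager writes a
 * packet into a buffer owned by the cache manager, then enqueues a frame
 * descriptor to the queue manager, which steers it by frame queue ID. */
typedef struct {
    void    *buf;      /* buffer taken from a cache-manager pool */
    uint32_t len;      /* frame length in bytes */
    uint32_t fqid;     /* frame queue ID used for steering to a CPU */
} frame_desc;

#define QDEPTH 8u
typedef struct {       /* minimal queue-manager-like frame queue */
    frame_desc fd[QDEPTH];
    unsigned   head, tail;
} frame_queue;

static int fq_enqueue(frame_queue *q, frame_desc fd) {
    if (q->tail - q->head == QDEPTH) return -1;   /* queue full: congestion */
    q->fd[q->tail++ % QDEPTH] = fd;
    return 0;
}

static int fq_dequeue(frame_queue *q, frame_desc *out) {
    if (q->tail == q->head) return -1;            /* queue empty */
    *out = q->fd[q->head++ % QDEPTH];
    return 0;
}
```

In the real hardware the enqueue and dequeue happen through memory-mapped software portals rather than function calls, but the descriptor-through-queue shape is the same.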
At present, LTE base station equipment sends and receives network data through sockets. This traditional Linux approach mainly receives and transmits packets in kernel space and then hands them to the kernel protocol stack. In the process, the system must issue I/O system calls, switch kernel execution thread contexts, and perform I/O copies when the user-space application accesses the data caches. Each of these operations reduces, to varying degrees, the rate at which a user-space application can send and receive network data.
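For contrast, the conventional path criticized above is visible in a plain socket receive: every `recv()` is a system call that copies the payload from a kernel socket buffer into a user buffer. These are standard POSIX calls, unrelated to DPAA; a `socketpair` stands in for a real network socket here.

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Conventional Linux receive path: recv() crosses into the kernel and
 * copies the packet from a kernel socket buffer into user_buf. This is
 * exactly the per-packet system call + copy the DPAA scheme avoids. */
static ssize_t copy_receive(int fd, char *user_buf, size_t cap) {
    return recv(fd, user_buf, cap, 0);   /* kernel -> user copy */
}
```

Each received packet thus costs at least one syscall and one full payload copy, independent of packet size, which is the overhead the user-space DPAA path removes.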
Summary of the invention
To eliminate the cumbersome processing a user-space application performs when sending and receiving network data, the present invention, based on Freescale's DPAA hardware, provides a high-speed network data transmission method. It designs a DPAA service process responsible for forwarding data between the Ethernet interface and the user-space application, and realizes packet transmission and reception in user space, thereby accelerating network data processing.
The technical solution adopted by the present invention is as follows:
A high-speed network data transmission method, characterized by comprising three procedures, initialization, uplink data reception, and downlink data transmission, and specifically comprising the following steps:
Initialization:
Step 1: The initialization module of the DPAA service process obtains the configuration information and, through the frame management interface, configures the IP address of the Ethernet interface and the parse, classify, and distribute policy of the frame manager. The memory management module provides a memory pool for the cache manager and dynamically obtains part of the caches from the memory pool to form a cache pool. The configuration information includes the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager;
Uplink data reception:
Step 2: When the Ethernet interface of the frame manager receives network data addressed to its IP address, it applies, through the cache manager, for one or more caches from the memory pool to store the data. Following the parse, classify, and distribute policy, the frame manager parses and classifies the network data stored in the memory pool, passes the network data the user cares about to the queue manager, and initiates an enqueue request to the queue manager. The enqueue request contains a frame queue ID;
Step 3: On receiving the frame manager's enqueue request, the queue manager pushes the network data the user cares about, according to the frame queue ID, to the corresponding CPU through the queue manager's software portal;
Step 4: After the CPU locates the corresponding DPAA service process according to the CPU binding information, the Ethernet transceiver interface of the DPAA service process calls the queue management interface to receive the network data and passes it to the simplified network protocol stack of the DPAA service process;
Step 5: The simplified network protocol stack parses the headers of the network data and, using the DPAA platform's hardware resources, verifies the checksum of the parsed data to obtain the user data, which it passes to the user-space application interface of the DPAA service process;
Step 6: The user-space application interface assembles the user data into a message and sends the message to the user-space application through a message queue;
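In this uplink path, step 5's simplified stack only needs to strip headers and hand the application a pointer to the payload. The toy parse below illustrates that shape; the 8-byte header layout (type, length, flow ID) is invented for the sketch and is not the eGTP-U or DPAA frame format.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical 8-byte frame header used by a simplified stack:
 * 2-byte type, 2-byte payload length, 4-byte flow ID (no padding). */
typedef struct {
    uint16_t type;
    uint16_t len;
    uint32_t flow_id;
} frame_hdr;

/* Parse the header in place and return a pointer to the user data.
 * Only the header fields are copied out; the payload bytes stay where
 * the hardware wrote them (step 5 handing off to step 6). */
static const char *parse_frame(const char *frame, size_t frame_len,
                               frame_hdr *hdr) {
    if (frame_len < sizeof *hdr) return NULL;          /* runt frame */
    memcpy(hdr, frame, sizeof *hdr);
    if (sizeof *hdr + hdr->len > frame_len) return NULL; /* bad length */
    return frame + sizeof *hdr;                        /* payload, zero-copy */
}
```

The returned pointer is what would travel through the message queue to the user-space application, rather than the payload itself.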
Downlink data transmission:
Step 7: The user-space application applies for a cache from the cache pool to store the user data to be sent, and sends a request message to the user-space application interface;
Step 8: On receiving the request message from the user-space application, the user-space application interface extracts the user data pointer from the cache pool and passes it to the simplified network protocol stack;
Step 9: The simplified network protocol stack locates the corresponding user data through the pointer, computes the checksum using the DPAA platform's hardware resources, adds the protocol headers and checksum to the user data to complete its encapsulation, and passes the user data pointer to the Ethernet transceiver interface;
Step 10: The Ethernet transceiver interface locates the encapsulated user data through the pointer, packs it into a data frame in the format DPAA requires, and sends the frame to the queue manager through the queue management interface and the software portal;
Step 11: The queue manager passes the received data frame to the frame manager;
Step 12: The frame manager sends the data frame out through the Ethernet interface;
Step 13: The memory management module releases the caches occupied in the cache pool.
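Step 9's checksum is offloaded to DPAA hardware in the real design, but what that hardware computes for IP/UDP headers is the classic 16-bit ones'-complement Internet checksum (RFC 1071). A minimal software rendering, for reference only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* 16-bit ones'-complement checksum as used by IP and UDP (RFC 1071).
 * On DPAA this computation would be done by the hardware accelerator;
 * this software version only shows what value it produces. */
static uint16_t csum16(const uint8_t *data, size_t len) {
    uint32_t sum = 0;
    while (len > 1) {                       /* sum 16-bit big-endian words */
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len  -= 2;
    }
    if (len)                                /* odd trailing byte, zero-padded */
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)                       /* fold carries back in */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}
```

A useful property for step 5's verification side: summing a message together with its own checksum yields zero, so receive-side checking is the same routine.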
Wherein, step 1 specifically includes the following steps:
(101) The initialization module of the DPAA service process obtains the configuration information, including the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager;
(102) The initialization module calls the frame management interface and assigns the IP address of the Ethernet interface and the frame manager's parse, classify, and distribute policy to the specified Ethernet interface;
(103) The Ethernet transceiver interface of the DPAA service process calls the queue management interface and the cache management interface to obtain the software portals of the queue manager and the cache manager;
(104) The memory management module provides a memory pool for the cache manager through the cache management interface and the software portal, and dynamically obtains part of the caches from the memory pool, through the cache management interface, the software portal, and the cache manager, to form the cache pool.
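Steps (101) through (104) amount to carving one large memory pool into a pool of fixed-size caches that the cache manager then hands out. A minimal sketch under assumed sizes (the cache size, pool count, and all names here are invented for illustration, not the DPAA API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CACHE_SIZE  2048   /* assumed bytes per cache (one packet buffer) */
#define POOL_CACHES 256    /* assumed caches drawn from the memory pool */

typedef struct {
    char *mem;                        /* one contiguous memory pool (104) */
    void *free_list[POOL_CACHES];     /* the cache pool */
    int   nfree;                      /* caches currently available */
} cache_pool;

/* Initialize: allocate the memory pool and carve it into caches. */
static int cache_pool_init(cache_pool *cp) {
    cp->mem = malloc((size_t)CACHE_SIZE * POOL_CACHES);
    if (!cp->mem) return -1;
    for (int i = 0; i < POOL_CACHES; i++)
        cp->free_list[i] = cp->mem + (size_t)i * CACHE_SIZE;
    cp->nfree = POOL_CACHES;
    return 0;
}

/* Hand a cache to the application (step 7), or NULL if exhausted. */
static void *cache_alloc(cache_pool *cp) {
    return cp->nfree ? cp->free_list[--cp->nfree] : NULL;
}

/* Return a cache to the pool (step 13). */
static void cache_free(cache_pool *cp, void *c) {
    cp->free_list[cp->nfree++] = c;
}
```

Because every cache comes from one contiguous region that both the service process and the application can map, a cache can later be identified by a bare pointer, which is what makes the zero-copy message exchange possible.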
Wherein, in step 7, when the user-space application applies for a cache from the cache pool to store the user data to be sent, if the cache pool's utilization exceeds 70% of the preset cache quantity, the memory management module automatically applies for caches from the memory pool, through the cache manager, the cache manager's software portal, and the cache management interface, to refill the cache pool.
Compared with the prior art, the advantages of the present invention are:
(1) Unlike previous data transmission schemes, the present invention bypasses the Ethernet driver and instead calls the cache management and queue management interfaces in user space to operate the DPAA hardware, realizing the send and receive operations of network data in user space and avoiding the cumbersome transitions between kernel space and user space;
(2) The simplified network protocol stack is implemented on the network processor's hardware resources. Compared with a conventional network protocol stack, this user-space stack accelerates packet processing; moreover, protocol headers can be customized during parsing and encapsulation, which facilitates the distribution and forwarding of packets;
(3) The DPAA service process and the user-space application interact through a message queue. Because they share memory, they need only exchange data pointers in the message queue when sending and receiving data, without passing entire data caches through the queue, realizing zero-copy between kernel-space and user-space packets;
(4) By the above means, the DPAA service process provides a very thin, very efficient layer between the user-space application and the hardware, thereby improving network data processing speed.
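The pointer exchange of advantage (3) can be sketched as follows. Because both sides see the same memory, only a pointer-sized value crosses the message queue; a pipe stands in for the queue here, and a file-scope buffer stands in for the shared pool. This single-process demo only illustrates the pointer-sized transfer; a real implementation would use shared memory mapped into both processes and exchange offsets rather than raw pointers.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <unistd.h>

/* Zero-copy handoff: the packet stays in the shared buffer; only its
 * pointer (a handful of bytes) travels through the "message queue". */
static char shared_pool[1500];   /* stands in for the shared memory pool */

static int send_pointer(int wfd, char *pkt) {
    /* write the pointer value itself, never the payload */
    return write(wfd, &pkt, sizeof pkt) == (ssize_t)sizeof pkt ? 0 : -1;
}

static char *recv_pointer(int rfd) {
    char *pkt = NULL;
    return read(rfd, &pkt, sizeof pkt) == (ssize_t)sizeof pkt ? pkt : NULL;
}
```

The receiver ends up holding the very same buffer the sender filled, so a 1500-byte frame costs a pointer-sized queue message instead of a 1500-byte copy.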
Detailed description of the invention
Fig. 1 is a schematic diagram of a user-space application of the present invention obtaining network data.
Specific embodiment
To help those skilled in the art better understand the technical solutions in this application, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings, so that the advantages and features of the present invention can be more readily appreciated and the scope of protection can be more clearly delimited.
The hardware platform used by the system is Freescale's multi-core processor B4860, and the operating system is Linux. The present invention mainly simplifies the complex operations of the traditional Linux transmit and receive path, using the Data Path Acceleration Architecture to realize, in user space, the transmission and reception of network data and the parsing and encapsulation of the network protocol stack, so that LTE base station software can send and receive network data at high speed. The embodiment is a DPAA service process running in user space, mainly responsible for forwarding user-plane data (eGTP-U data) between the Ethernet interface and the LTE base station software, thereby accelerating the data path between the base station and the core network. The implementation of the DPAA service process is detailed below (see Fig. 1):
Initialization of the DPAA service process:
Step 1: The initialization module of the DPAA service process obtains the configuration information and, through the frame management interface, configures the IP address of the Ethernet interface and the parse, classify, and distribute policy of the frame manager. The memory management module provides a memory pool for the cache manager and dynamically obtains part of the caches from the memory pool to form a cache pool. The configuration information includes the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager. This specifically includes the following steps:
(101) The initialization module of the DPAA service process obtains the configuration information, including the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager;
(102) The initialization module calls the frame management interface and assigns the IP address of the Ethernet interface and the frame manager's parse, classify, and distribute policy to the specified Ethernet interface;
(103) The Ethernet transceiver interface of the DPAA service process calls the queue management interface and the cache management interface to obtain the software portals of the queue manager and the cache manager;
(104) The memory management module provides a memory pool for the cache manager through the cache management interface and the software portal, and dynamically obtains part of the caches from the memory pool, through the cache management interface, the software portal, and the cache manager, to form the cache pool.
Uplink data reception:
Step 2: When the Ethernet interface of the frame manager receives network data addressed to its IP address, it applies, through the cache manager, for one or more caches from the memory pool to store the data. Following the parse, classify, and distribute policy, the frame manager parses and classifies the network data stored in the memory pool, passes the network data the user cares about to the queue manager, and initiates an enqueue request to the queue manager. The enqueue request contains a frame queue ID;
Step 3: On receiving the frame manager's enqueue request, the queue manager pushes the network data the user cares about, according to the frame queue ID, to the corresponding CPU through the queue manager's software portal;
Step 4: After the CPU locates the corresponding DPAA service process according to the CPU binding information, the Ethernet transceiver interface of the DPAA service process calls the queue management interface to receive the network data and passes it to the simplified network protocol stack of the DPAA service process;
Step 5: The simplified network protocol stack parses the headers of the network data and, using the DPAA platform's hardware resources, verifies the checksum of the parsed data to obtain the user data, which it passes to the user-space application interface of the DPAA service process;
Step 6: The user-space application interface assembles the user data into a message and sends the message to the user-space application through a message queue;
Downlink data transmission:
Step 7: The user-space application applies for a cache from the cache pool to store the user data to be sent, and sends a request message to the user-space application interface. If the cache pool's utilization exceeds 70% of the preset cache quantity, the memory management module automatically applies for caches from the memory pool, through the cache manager, the cache manager's software portal, and the cache management interface, to refill the cache pool;
Step 8: On receiving the request message from the user-space application, the user-space application interface extracts the user data pointer from the cache pool and passes it to the simplified network protocol stack;
Step 9: The simplified network protocol stack locates the corresponding user data through the pointer, computes the checksum using the DPAA platform's hardware resources, adds the protocol headers and checksum to the user data to complete its encapsulation, and passes the pointer to the encapsulated user data to the Ethernet transceiver interface;
Step 10: The Ethernet transceiver interface locates the encapsulated user data through the pointer, packs it into a data frame in the format DPAA requires, and sends the frame to the queue manager through the queue management interface and the software portal;
Step 11: The queue manager passes the received data frame to the frame manager;
Step 12: The frame manager sends the data frame out through the Ethernet interface;
Step 13: The memory management module releases the caches occupied in the cache pool.
The frame management interface, the cache management interface, and the queue management interface run in the drivers of frame management, cache management, and queue management respectively.
The memory management module and the cache manager provide memory allocation and release operations for the user-space application interface and the frame manager respectively. The memory management module is responsible for initializing the memory pool managed by the cache manager and, through the cache manager interface, dynamically maintains a cache pool for the user-space application. The cache pool maintenance mechanism of the DPAA service process's memory management module is as follows:
When the memory management module starts, it first initializes a block of memory and hands it to the cache manager for management, then obtains a certain number of caches through the cache management interface for the user-space application to use. The memory management module receives real-time allocation and release messages from the user-space application: it distributes the requested number of caches from the cache pool to the application, or releases the specified caches back to the memory pool through the cache management interface. On receiving a real-time allocation message, the memory management module makes the following judgment: if the cache pool's utilization exceeds 70% of the preset cache quantity, it automatically applies for caches from the memory pool to refill the cache pool.
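The 70% rule in the maintenance mechanism above can be stated as: whenever allocations have consumed more than 70% of the preset cache quantity, top the cache pool back up from the memory pool. A sketch of that threshold check, with the preset count and refill batch size invented for illustration:

```c
#include <assert.h>

#define PRESET_CACHES 100   /* assumed preset cache quantity */
#define REFILL_BATCH  20    /* assumed caches fetched per refill */

typedef struct {
    int nfree;              /* free caches currently in the cache pool */
    int refills;            /* times the pool was topped up */
} pool_state;

/* Utilization = caches in use as a fraction of the preset quantity. */
static int need_refill(const pool_state *p) {
    int used = PRESET_CACHES - p->nfree;
    return used * 100 > 70 * PRESET_CACHES;   /* strictly above 70% */
}

/* Called on each real-time allocation message from the application. */
static void alloc_cache(pool_state *p) {
    p->nfree--;                     /* one cache handed to the application */
    if (need_refill(p)) {
        p->nfree += REFILL_BATCH;   /* fresh caches drawn from memory pool */
        p->refills++;
    }
}
```

Refilling proactively on the allocation path, rather than waiting for exhaustion, keeps a burst of downlink sends from ever seeing an empty cache pool.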
The above is only an embodiment of the present invention and is not intended to limit its scope. Any equivalent structure or equivalent process transformation made using the contents of this specification and the accompanying drawings, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.

Claims (3)

1. A high-speed network data transmission method, characterized by comprising three procedures, initialization, uplink data reception, and downlink data transmission, and specifically comprising the following steps:
Initialization:
Step 1: The initialization module of the DPAA service process obtains the configuration information and, through the frame management interface, configures the IP address of the Ethernet interface and the parse, classify, and distribute policy of the frame manager. The memory management module provides a memory pool for the cache manager and dynamically obtains part of the caches from the memory pool to form a cache pool. The configuration information includes the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager;
Uplink data reception:
Step 2: When the Ethernet interface of the frame manager receives network data addressed to its IP address, it applies, through the cache manager, for one or more caches from the memory pool to store the data. Following the parse, classify, and distribute policy, the frame manager parses and classifies the network data stored in the memory pool, passes the network data the user cares about to the queue manager, and initiates an enqueue request to the queue manager. The enqueue request contains a frame queue ID;
Step 3: On receiving the frame manager's enqueue request, the queue manager pushes the network data the user cares about, according to the frame queue ID, to the corresponding CPU through the queue manager's software portal;
Step 4: After the CPU locates the corresponding DPAA service process according to the CPU binding information, the Ethernet transceiver interface of the DPAA service process calls the queue management interface to receive the network data and passes it to the simplified network protocol stack of the DPAA service process;
Step 5: The simplified network protocol stack parses the headers of the network data and, using the DPAA platform's hardware resources, verifies the checksum of the parsed data to obtain the user data, which it passes to the user-space application interface of the DPAA service process;
Step 6: The user-space application interface assembles the user data into a message and sends the message to the user-space application through a message queue;
Downlink data transmission:
Step 7: The user-space application applies for a cache from the cache pool to store the user data to be sent, and sends a request message to the user-space application interface;
Step 8: On receiving the request message from the user-space application, the user-space application interface extracts the user data pointer from the cache pool and passes it to the simplified network protocol stack;
Step 9: The simplified network protocol stack locates the corresponding user data through the pointer, computes the checksum using the DPAA platform's hardware resources, adds the protocol headers and checksum to the user data to complete its encapsulation, and passes the user data pointer to the Ethernet transceiver interface;
Step 10: The Ethernet transceiver interface locates the encapsulated user data through the pointer, packs it into a data frame in the format DPAA requires, and sends the frame to the queue manager through the queue management interface and the software portal;
Step 11: The queue manager passes the received data frame to the frame manager;
Step 12: The frame manager sends the data frame out through the Ethernet interface;
Step 13: The memory management module releases the caches occupied in the cache pool.
2. The high-speed network data transmission method according to claim 1, characterized in that step 1 specifically includes the following steps:
(101) The initialization module of the DPAA service process obtains the configuration information, including the CPU binding information, the IP address of the Ethernet interface, and the parse, classify, and distribute policy of the frame manager;
(102) The initialization module calls the frame management interface and assigns the IP address of the Ethernet interface and the frame manager's parse, classify, and distribute policy to the specified Ethernet interface;
(103) The Ethernet transceiver interface of the DPAA service process calls the queue management interface and the cache management interface to obtain the software portals of the queue manager and the cache manager;
(104) The memory management module provides a memory pool for the cache manager through the cache management interface and the software portal, and dynamically obtains part of the caches from the memory pool, through the cache management interface, the software portal, and the cache manager, to form the cache pool.
3. The high-speed network data transmission method according to claim 1, characterized in that, in step 7, when the user-space application applies for a cache from the cache pool to store the user data to be sent, if the cache pool's utilization exceeds 70% of the preset cache quantity, the memory management module automatically applies for caches from the memory pool, through the cache manager, the cache manager's software portal, and the cache management interface, to refill the cache pool.
CN201611245469.7A 2016-12-29 2016-12-29 A high-speed network data transmission method Active CN106850565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611245469.7A CN106850565B (en) 2016-12-29 2016-12-29 A high-speed network data transmission method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611245469.7A CN106850565B (en) 2016-12-29 2016-12-29 A high-speed network data transmission method

Publications (2)

Publication Number Publication Date
CN106850565A (en) 2017-06-13
CN106850565B (en) 2019-06-18

Family

ID=59114657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611245469.7A Active CN106850565B (en) 2016-12-29 2016-12-29 A high-speed network data transmission method

Country Status (1)

Country Link
CN (1) CN106850565B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874128B (en) * 2017-01-22 2020-11-20 广州华多网络科技有限公司 Data transmission method and device
CN110753008A (en) * 2018-07-24 2020-02-04 普天信息技术有限公司 Network data processing device and method based on DPAA
CN109783250B (en) * 2018-12-18 2021-04-09 中兴通讯股份有限公司 Message forwarding method and network equipment
CN109688058B (en) * 2018-12-19 2021-03-02 迈普通信技术股份有限公司 Message processing method and device and network equipment
EP3694166B1 (en) 2019-02-06 2022-09-21 Hitachi Energy Switzerland AG Cyclic time-slotted operation in a wireless industrial network
US10904719B2 (en) 2019-05-06 2021-01-26 Advanced New Technologies Co., Ltd. Message shunting method, device and system based on user mode protocol stack
CN110278161B (en) * 2019-05-06 2020-08-11 阿里巴巴集团控股有限公司 Message distribution method, device and system based on user mode protocol stack
CN110557369A (en) * 2019-07-25 2019-12-10 中国航天系统科学与工程研究院 high-speed data processing platform based on domestic operating system kernel mode
CN111245809A (en) * 2020-01-07 2020-06-05 北京同有飞骥科技股份有限公司 Cross-layer data processing method and system
CN111273924B (en) * 2020-01-10 2023-07-25 杭州迪普科技股份有限公司 Software updating method and device
CN111277509B (en) * 2020-01-13 2023-12-05 奇安信科技集团股份有限公司 Flow guiding method and device for IPS engine
CN113485823A (en) * 2020-11-23 2021-10-08 中兴通讯股份有限公司 Data transmission method, device, network equipment and storage medium
CN113596171B (en) * 2021-08-04 2024-02-20 杭州网易数之帆科技有限公司 Cloud computing data interaction method, system, electronic equipment and storage medium
CN113891396B (en) * 2021-09-01 2022-07-26 深圳金信诺高新技术股份有限公司 Data packet processing method and device, computer equipment and storage medium
CN113986811B (en) * 2021-09-23 2022-05-10 北京东方通网信科技有限公司 High-performance kernel mode network data packet acceleration method
CN116521249B (en) * 2023-07-03 2023-10-10 北京左江科技股份有限公司 Kernel state message distribution method based on process file descriptor

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068229A (en) * 2007-06-08 2007-11-07 北京工业大学 Content filtering gateway realizing method based on network filter
CN101304373A (en) * 2008-06-25 2008-11-12 中兴通讯股份有限公司 Method and system for implementing high-efficiency transmission chunk data in LAN
CN101340574A (en) * 2008-08-04 2009-01-07 中兴通讯股份有限公司 Method and system realizing zero-copy transmission of stream media data
CN101727351A (en) * 2009-12-14 2010-06-09 北京航空航天大学 Multicore platform-orientated asymmetrical dispatcher for monitor of virtual machine and dispatching method thereof
CN101873337A (en) * 2009-04-22 2010-10-27 电子科技大学 Zero-copy data capture technology based on rt8169 gigabit net card and Linux operating system
CN102129531A (en) * 2011-03-22 2011-07-20 北京工业大学 Xen-based active defense method
CN104050101A (en) * 2014-05-29 2014-09-17 汉柏科技有限公司 Method for realizing user-state receiving and transmission of messages for ARM (Advanced RISC Machine) CPU (Central Processing Unit)
CN104142867A (en) * 2013-05-09 2014-11-12 华为技术有限公司 Data processing device and data processing method
EP2824892A1 (en) * 2013-07-09 2015-01-14 Alcatel Lucent Process for delivering an item from a data repository to a user through at least one node in a network

Non-Patent Citations (1)

Title
"Design and Implementation of an I/O Optimization Framework for User-Mode Click" (《面向用户态Click的I/O优化框架的设计与实现》); Liu Song et al.; Journal of Frontiers of Computer Science and Technology (《计算机科学与探索》); 2015-11-11; full text

Also Published As

Publication number Publication date
CN106850565A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN106850565B (en) A kind of network data transmission method of high speed
US20230231809A1 (en) Dynamic load balancing for multi-core computing environments
CN105681075B (en) Network Management System based on mixing cloud platform
CN105052081B (en) Communication flows processing framework and method
CN104521198B (en) For the system and method for Virtual Ethernet binding
CN103346981B (en) Virtual switch method, relevant apparatus and computer system
CN105556496B (en) Pass through the method and apparatus for the expansible direct inter-node communication that quick peripheral component interconnection high speed (Peripheral Component Interconnect-Express, PCIe) carries out
WO2019104090A1 (en) Work unit stack data structures in multiple core processor system for stream data processing
US8861434B2 (en) Method and system for improved multi-cell support on a single modem board
CN104220988B (en) The service of layer 3 in Cloud Server is realized and method
US7505410B2 (en) Method and apparatus to support efficient check-point and role-back operations for flow-controlled queues in network devices
CN103210619B (en) For nothing lock and the zero-copy messaging plan of communication network application
CN104395897B (en) Server node interconnection means and method
US20160248671A1 (en) Packet steering
CN103176780B (en) A kind of multi-network interface binding system and method
CN103200085A (en) Method and system for achieving transmission and receiving of VXLAN message line speed
CN109697122A (en) Task processing method, equipment and computer storage medium
US20200274827A1 (en) Network Interface Device
KR20130087026A (en) Lock-less buffer management scheme for telecommunication network applications
CN102609307A (en) Multi-core multi-thread dual-operating system network equipment and control method thereof
Freitas et al. A survey on accelerating technologies for fast network packet processing in Linux environments
CN110519180A (en) Network card virtualization queue scheduling method and system
CN109756490A (en) A kind of MDC implementation method and device
US20150309755A1 (en) Efficient complex network traffic management in a non-uniform memory system
CN107493574A (en) Wireless controller equipment, parallel authentication processing method, system, network device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Hou Chunhui

Inventor after: Zhang Songdie

Inventor after: Li Xuecheng

Inventor after: Kang Zhijie

Inventor before: Hou Chunhui

GR01 Patent grant