CN104639460A - High-speed network data packet parallel receiving method based on many-core processor - Google Patents

High-speed network data packet parallel receiving method based on many-core processor

Info

Publication number
CN104639460A
CN104639460A CN201510056076.0A
Authority
CN
China
Prior art keywords
core
network
network data
packet
data
Prior art date
Legal status
Pending
Application number
CN201510056076.0A
Other languages
Chinese (zh)
Inventor
唐红
戴俊
王大瑞
赵国锋
邓娅茹
刘静娴
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201510056076.0A priority Critical patent/CN104639460A/en
Publication of CN104639460A publication Critical patent/CN104639460A/en
Pending legal-status Critical Current


Abstract

The invention seeks to protect a parallel receiving method for high-speed network data packets based on a many-core processor. The method comprises the following steps. The cores of a multi-core or many-core embedded processor are divided into network-data receiving cores and network-data processing cores. According to the model's processing flow for network data, the receiving task is further divided among a reading core, a hash core, a distribution core and a notification core. The reading core processes the header information of each network packet received by the network interface card; the hash core hashes the header information that has been read, computing a hash value over the layer-3 and layer-4 fields of the packet header to select a hash slot; the distribution core selects a network-data processing core according to the selected hash slot; and the notification core completes the operation of sending the network packet to that processing core. The multi-core or many-core embedded processor has the advantage of providing a network-on-chip, through which an efficient inter-core communication mechanism is chosen and data transfer between the sub-tasks is completed.

Description

A parallel receiving method for high-speed network data packets based on a many-core processor
Technical field
The present invention relates to the field of multi-core and many-core embedded processors and network communication processing, and in particular to a parallel receiving method for high-speed network data packets based on a many-core processor.
Background technology
With the rapid development of communication technology and the advent of multi-core and many-core embedded processors, the traditional single-core processing model can no longer meet the demands of high-speed network measurement. Multi-core and many-core processors adopt a parallel processing model: by processing multiple tasks in parallel they can raise processing speed at a low clock frequency while reducing power consumption. Meanwhile, as bandwidth grows and the volume of network data explodes, ever higher requirements are placed on high-speed processing of network data. Because much of the data on the network has strong real-time requirements, how software systems can effectively exploit the advantages of such architectures to receive packets at high speed, ensuring smooth data transport and satisfying the real-time requirements that various applications place on data reception, has long been a research focus.
The invention patent with application number 201310365607.5 discloses a network packet receiving and processing method based on a multi-core or many-core embedded processor. That invention describes the reception of network data packets on such processors and proposes a packet processing method that combines a data-reception mechanism with data distribution.
Although the aforementioned patent describes a packet receiving and processing method based on a multi-core or many-core embedded processor, it in fact uses only a single core to receive and process packets; it does not address multi-core reception of packets on such processors, let alone parallelized reception of network packets. For multi-core and many-core embedded processors, a parallel design for packet reception is of greater practical significance for big-data applications.
Summary of the invention
In view of the above deficiencies of the prior art, the object of the present invention is to provide a parallelized packet receiving method that receives high-speed network data quickly and accurately and meets the demands of 10GE network measurement. The technical scheme of the present invention is as follows:
A parallel receiving method for high-speed network data packets based on a many-core processor comprises the following steps:
101. The N cores of the many-core processor are divided into two kinds: network-data receiving cores, which receive network packet headers, and network-data processing cores, which process packet contents. The receiving cores are further divided into a reading core, a hash core, a distribution core and a notification core. The reading core processes the header of each network packet received by the network interface card, including performing a layer-2 hash on the header information and extracting the source and destination ports of the TCP or UDP segment. The hash core hashes the header information that has been read, computing a hash value over the layer-3 and layer-4 fields of the packet header to select a hash slot. The distribution core selects a network-data processing core according to the chosen hash slot, and the notification core sends the network packet to that processing core. Communication between the receiving cores and the processing cores, as well as among the reading, hash, distribution and notification cores themselves, takes place over the network-on-chip.
102. When network data arrives, the adapter-layer interface of the network interface card stores the whole packet into a 16 KB first-in-first-out (FIFO) ring queue, called iPkt, for the packet content, and stores the packet header into a 4 KB ring queue, called iHdr. The header information is then delivered by direct memory access into the queue of the corresponding network-data processing core, which extracts the data and processes it.
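The layer-3/4 hashing and slot-based core selection of step 101 resemble receive-side scaling. The sketch below is an illustration under assumptions (CRC32 as the hash function, 256 slots, four processing cores with IDs 4 to 7), not the patent's exact algorithm.

```python
import struct
import zlib

NUM_SLOTS = 256                   # assumed number of hash slots
PROCESSING_CORES = [4, 5, 6, 7]   # assumed IDs of the processing cores

def hash_slot(src_ip: bytes, dst_ip: bytes, src_port: int, dst_port: int) -> int:
    """Hash the layer-3/layer-4 header fields into a hash slot."""
    key = src_ip + dst_ip + struct.pack("!HH", src_port, dst_port)
    return zlib.crc32(key) % NUM_SLOTS

def select_core(slot: int) -> int:
    """Mapping table from hash slot to processing core (simple modulo here)."""
    return PROCESSING_CORES[slot % len(PROCESSING_CORES)]
```

Because the slot depends only on a flow's addresses and ports, every packet of one flow reaches the same processing core, which keeps per-flow state local to a single core.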
Further, the step in 101 in which a network-data receiving core receives a network packet is specifically as follows. The reception information of the relevant network-data processing cores is configured at initialization, and each processing core registers with the receiving core; memory is allocated and a cache size is selected for the network data. The receiving core then waits for input; if there is no input, it keeps waiting. If there is input, it checks whether the input is a message or data. Messages, which include registration messages, buffer-return messages, event-log messages and exit messages, are received: on an exit message the receiving core stops working, while any other message is processed and its returned state updated. If the input is data, the packet header is parsed according to the header information of the incoming network data and received; the parsed information is then matched against the information configured at initialization. If that information already exists from initialization, the corresponding hash slot is queried directly, a processing core is selected from the slot using the distribution algorithm and the mapping table, and a data cache is selected according to the data size recorded in the header. If the information does not exist, a hash is computed over the layer-3 and layer-4 fields of the header to obtain its hash slot, and the previous step is repeated to select a suitable data cache and processing core. Finally, the receiving core notifies the processing core to receive the network data and process it accordingly.
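The receiving-core behavior just described is essentially a dispatch loop over message and data inputs. The sketch below models it in software; the message codes, the input encoding, and the use of Python's built-in hash as the layer-3/4 fallback are all assumptions for illustration.

```python
REGISTER, BUF_RETURN, LOG, EXIT = range(4)   # assumed message type codes

def receive_core_loop(inputs, registry, slot_to_core):
    """Sketch of the receiving-core loop.

    `inputs` yields ("msg", code) or ("data", header) items; `registry`
    maps already-seen flows to hash slots; `slot_to_core` is the
    slot-to-processing-core mapping table. All names are illustrative.
    """
    dispatched = []
    for kind, payload in inputs:
        if kind == "msg":
            if payload == EXIT:
                break              # exit message: the receiving core stops
            continue               # other messages: process/update state (elided)
        # data input: match the parsed header against configured info
        flow = payload["flow"]
        if flow not in registry:   # unseen flow: hash the layer-3/4 fields
            registry[flow] = hash(flow) % len(slot_to_core)
        core = slot_to_core[registry[flow]]
        dispatched.append((core, payload))   # notify the chosen processing core
    return dispatched
```

Registration messages would populate `registry` ahead of time, so packets of known flows skip the hash and go straight to the configured slot.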
Further, the multi-core or many-core embedded processor is a homogeneous or heterogeneous processor with at least 8 cores, and the transmit/receive port configuration includes at least two GE (gigabit Ethernet) ports, one of which is an RX receiving port and the other a TX transmitting port.
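The 16 KB iPkt and 4 KB iHdr queues of step 102 are fixed-capacity FIFO queues. Below is a minimal software stand-in; the list-based storage and the drop-on-full policy are assumptions, and the wraparound bookkeeping of a true hardware ring buffer is elided.

```python
class RingQueue:
    """Bounded FIFO queue, a stand-in for the iPkt/iHdr ring queues."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = []

    def push(self, data: bytes) -> bool:
        """Enqueue; returns False when full (caller must drop or retry)."""
        if self.used + len(data) > self.capacity:
            return False
        self.items.append(data)
        self.used += len(data)
        return True

    def pop(self) -> bytes:
        """Dequeue in FIFO order."""
        data = self.items.pop(0)
        self.used -= len(data)
        return data

ipkt = RingQueue(16 * 1024)   # whole packets
ihdr = RingQueue(4 * 1024)    # packet headers only
```

Keeping headers in their own small queue lets the receiving cores parse and hash headers without touching the (larger) packet payloads.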
The advantages and beneficial effects of the present invention are as follows:
(1) The parallel design scheme of the high-speed packet-reception model proposed by the present invention has good extensibility: it is applicable to most current multi-core and many-core embedded processors and satisfies the high-speed data-processing demands of most applications.
(2) The present invention uses multi-core parallelism to receive high-speed network packets, so it can receive high-speed network data quickly and accurately, meets the demands of 10GE network measurement, is suitable for general network measurement systems, and enables a network measurement system that measures high-speed network data at lower cost.
Brief description of the drawings
Fig. 1 is a general block diagram of the high-speed packet-reception model based on a multi-core or many-core embedded processor according to a preferred embodiment of the present invention;
Fig. 2 is a diagram of the packet-reception model of the preferred embodiment;
Fig. 3 is a flow chart of parallelized packet reception in the preferred embodiment;
Fig. 4 is a structural block diagram of parallelized packet reception in the preferred embodiment.
Detailed description of the embodiments
The invention is further elaborated below with a non-limiting embodiment in conjunction with the accompanying drawings. It should be understood that these descriptions are merely examples and are not intended to limit the scope of the invention. In addition, descriptions of well-known structures and techniques are omitted below to avoid unnecessarily obscuring the concepts of the invention. Fig. 1 shows the general block diagram of the invention: a parallel design method for a high-speed packet-reception model based on a multi-core or many-core embedded processor, comprising a network interface card (handling packet headers and packet contents), memory, network-data receiving cores and network-data processing cores. The network interface card, receiving cores and processing cores are implemented on a multi-core or many-core embedded processor; the receiving cores communicate with each other, with the processing cores, and the processing cores with each other, over the network-on-chip. Specifically:
The network-data receiving core comprises a reading core, a hash core, a distribution core and a notification core. The reading core processes the header information of packets received by the network interface card; the hash core hashes the header information that has been read, computing a hash value over the layer-3 and layer-4 fields of the header in order to select a hash slot; the distribution core selects a data-processing core according to the chosen hash slot; and the notification core sends the packet to the data-processing core.
The network-data processing cores comprise multiple cores of the multi-core or many-core embedded processor; these cores can be grouped according to the task whose data they process (for example, the cores that measure delay data form one group and the cores that measure throughput data form another).
The network-on-chip is an architecture built into multi-core and many-core embedded processors; it provides a high-speed duplex communication path for memory, cache and inter-core communication, thereby eliminating the bottleneck of multi-core communication.
Further, the multi-core or many-core embedded processor is a homogeneous or heterogeneous processor with at least 8 cores, and the transmit/receive port configuration includes at least two GE network ports, one of which is an RX port and the other a TX port.
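The task-based grouping of processing cores described above (delay-measurement cores in one group, throughput-measurement cores in another) amounts to a task-to-core-group mapping table. The sketch below is purely illustrative; the task names and core IDs are assumptions.

```python
# Assumed task-to-core-group mapping table
CORE_GROUPS = {
    "delay":      [4, 5],   # cores measuring delay data
    "throughput": [6, 7],   # cores measuring throughput data
}

def cores_for_task(task: str) -> list:
    """Return the processing-core group assigned to a measurement task."""
    return CORE_GROUPS[task]
```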
Fig. 2 shows the packet-reception model of the invention. High-speed network data arrives at the input interface from the network interface card; the input interface passes the packet header to a network-data receiving core, which parses it and, using the corresponding data-distribution algorithm and mapping table, determines the input-interface memory address to which the data should be mapped. The input interface then maps the packet directly to that memory address and notifies the receiving core when the mapping completes; the receiving core notifies the processing core that the network data is ready; and finally the processing core reads the data from the corresponding memory address.
Fig. 3 is the flow chart of parallelized packet reception of the invention. The reception process is elaborated below in conjunction with the structural block diagram of parallelized packet reception in Fig. 4.
When packet reception starts working, some relevant parameters must first be initialized, that is, configured. These include the following. A network-data processing core must be certain that the data it needs will arrive accurately, so it must first register with the network-data receiving core. The main purposes of registration are to let the receiving core know which data each processing core requires, so that network data can be delivered accurately to that core for processing, and to let the receiving core know which cores are currently idle and able to process data, which serves as a basis for the distribution calculation. In addition, memory is allocated reasonably according to the requirements of the application, and a suitable cache size is selected for the network data.

The receiving core then waits for input. If there is no input, it keeps waiting. If there is input, it checks whether the input is a message or data. Messages, which include the registration, buffer-return, event-log and exit messages mentioned above, are received: on an exit message the receiving core stops working, while any other message is processed and its returned state updated so that it can serve as input next time. If the input is data, the packet header is parsed according to the header information of the incoming network data and received; the parsed information is then matched against the information configured at initialization. If that information already exists from initialization, the corresponding hash slot is queried directly, a processing core is selected from the slot using the distribution algorithm and the mapping table, and a suitable data cache is selected according to the data size in the header. If the information does not exist, a hash is computed over the layer-3 and layer-4 fields of the header to obtain its hash slot, and the previous step is repeated to select a suitable data cache and processing core. Finally, the receiving core notifies the processing core to receive the network data and process it accordingly.

As shown in Fig. 4, Tile 1 reads the headers of the network data received from the network interface card; Tile 2 performs the hash computation over the layer-3 and layer-4 fields of the received headers and computes their hash slots; Tile 3 selects, via the distribution algorithm and the mapping table, a suitable worker tile on which to store the buffer; and Tile 4 notifies the processing core to collect and process the data. The four cores each complete their own task in pipeline fashion, achieving parallel processing. If several tasks run simultaneously, the processing cores can also exploit the parallelism of the receiving cores so that each piece of data is quickly and accurately assigned to a core for processing: in Fig. 4, Tile 1-1, Tile 1-2, ..., Tile 1-n, Tile 2-1, Tile 2-2, ..., Tile 2-n, Tile 3-1, Tile 3-2, ..., Tile 3-n and Tile 4-1, Tile 4-2, ..., Tile 4-n correspond to task 1, task 2, task 3, ..., task n, that is, the network-data processing cores used by the different tasks, and these processing cores are selected by the network-data receiving cores.
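The four-stage tile pipeline of Fig. 4 can be sketched as chained functions, one per tile. This models only the data flow through the stages; the header layout, toy hash, slot count and worker IDs are assumptions, and the overlapped on-chip execution of the stages is not modeled.

```python
def tile1_read(raw: bytes) -> dict:
    """Tile 1: read the header of a packet arriving from the NIC (first 8 bytes assumed)."""
    return {"header": raw[:8], "payload": raw[8:]}

def tile2_hash(pkt: dict, num_slots: int = 64) -> dict:
    """Tile 2: hash the layer-3/4 header fields into a hash slot (toy hash)."""
    pkt["slot"] = sum(pkt["header"]) % num_slots
    return pkt

def tile3_select(pkt: dict, workers=(4, 5, 6, 7)) -> dict:
    """Tile 3: use the mapping table to pick a worker tile for the slot."""
    pkt["worker"] = workers[pkt["slot"] % len(workers)]
    return pkt

def tile4_notify(pkt: dict, queues: dict) -> None:
    """Tile 4: notify the worker by placing the packet on its queue."""
    queues.setdefault(pkt["worker"], []).append(pkt)

queues = {}
for raw in (b"\x01\x02abcdef-payload", b"\x03\x04ghijkl-payload"):
    tile4_notify(tile3_select(tile2_hash(tile1_read(raw))), queues)
```

On real hardware each stage runs on its own tile and the stages overlap in time; replicating the chain n times yields the Tile x-1 through Tile x-n structure of Fig. 4, one chain per task.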
The embodiments above should be understood as merely illustrating, not limiting, the present invention. After reading this disclosure, those skilled in the art can make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (3)

1. A parallel receiving method for high-speed network data packets based on a many-core processor, characterized by comprising the following steps:
101. The N cores of the many-core processor are divided into two kinds: network-data receiving cores, which receive network packet headers, and network-data processing cores, which process packet contents. The receiving cores are further divided into a reading core, a hash core, a distribution core and a notification core. The reading core processes the header of each network packet received by the network interface card, including performing a layer-2 hash on the header information and extracting the source and destination ports of the TCP or UDP segment. The hash core hashes the header information that has been read, computing a hash value over the layer-3 and layer-4 fields of the packet header to select a hash slot. The distribution core selects a network-data processing core according to the chosen hash slot, and the notification core sends the network packet to that processing core. Communication between the receiving cores and the processing cores, as well as among the reading, hash, distribution and notification cores themselves, takes place over the network-on-chip.
102. When network data arrives, the adapter-layer interface of the network interface card stores the whole packet into a 16 KB first-in-first-out (FIFO) ring queue, called iPkt, for the packet content, and stores the packet header into a 4 KB ring queue, called iHdr. The header information is then delivered by direct memory access into the queue of the corresponding network-data processing core, which extracts the data and processes it.
2. The parallel receiving method for high-speed network data packets based on a many-core processor according to claim 1, characterized in that the step in 101 in which a network-data receiving core receives a network packet is specifically as follows: the reception information of the relevant network-data processing cores is configured at initialization, and each processing core registers with the receiving core; memory is allocated and a cache size is selected for the network data; the receiving core waits for input and keeps waiting if there is none; if there is input, it checks whether the input is a message or data; messages, which include registration messages, buffer-return messages, event-log messages and exit messages, are received; on an exit message the receiving core stops working, while any other message is processed and its returned state updated; if the input is data, the packet header is parsed according to the header information of the incoming network data and received, and the parsed information is matched against the information configured at initialization; if that information already exists from initialization, the corresponding hash slot is queried directly, a processing core is selected from the slot using the distribution algorithm and the mapping table, and a data cache is selected according to the data size in the header; if the information does not exist, a hash is computed over the layer-3 and layer-4 fields of the header to obtain its hash slot, and the previous step is repeated to select a suitable data cache and processing core; finally, the receiving core notifies the processing core to receive the network data and process it accordingly.
3. The parallel receiving method for high-speed network data packets based on a many-core processor according to claim 1, characterized in that the multi-core or many-core embedded processor is a homogeneous or heterogeneous processor with at least 8 cores, and the transmit/receive port configuration includes at least two GE (gigabit Ethernet) ports, one of which is an RX receiving port and the other a TX transmitting port.
CN201510056076.0A 2015-02-03 2015-02-03 High-speed network data packet parallel receiving method based on many-core processor Pending CN104639460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510056076.0A CN104639460A (en) 2015-02-03 2015-02-03 High-speed network data packet parallel receiving method based on many-core processor


Publications (1)

Publication Number Publication Date
CN104639460A true CN104639460A (en) 2015-05-20

Family

ID=53217790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510056076.0A Pending CN104639460A (en) 2015-02-03 2015-02-03 High-speed network data packet parallel receiving method based on many-core processor

Country Status (1)

Country Link
CN (1) CN104639460A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101390067A (en) * 2006-02-28 2009-03-18 英特尔公司 Improvement in the reliability of a multi-core processor
CN101964749A (en) * 2010-09-21 2011-02-02 北京网康科技有限公司 Message retransmission method and system based on multi-core architecture
CN103441952A (en) * 2013-08-20 2013-12-11 西安电子科技大学 Network data package processing method based on multi-core or many-core embedded processor


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹冰 (Cao Bing): "基于Tile64多核网络入侵检测系统的研究与设计" [Research and design of a Tile64-based multi-core network intrusion detection system], 《中国优秀硕士学位论文全文数据库 信息科技辑》 [China Master's Theses Full-text Database, Information Science and Technology] *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105072047B (en) * 2015-09-22 2018-12-07 浪潮(北京)电子信息产业有限公司 A kind of message transmissions and processing method
CN106953780B (en) * 2017-03-15 2020-04-07 重庆邮电大学 Many-core platform deep packet detection device and method supporting network product information query
CN106953780A (en) * 2017-03-15 2017-07-14 重庆邮电大学 A kind of many-core platform deep packet detection device and method for supporting networking products information inquiry
CN108494705A (en) * 2018-03-13 2018-09-04 山东超越数控电子股份有限公司 A kind of network message high_speed stamping die and method
CN110597482A (en) * 2019-08-30 2019-12-20 四川腾盾科技有限公司 Method for searching valid data packet in FIFO (first in first out) by serial port
CN110597482B (en) * 2019-08-30 2021-11-16 四川腾盾科技有限公司 Method for searching latest effective data packet in FIFO by serial port
CN111030844A (en) * 2019-11-14 2020-04-17 中盈优创资讯科技有限公司 Method and device for establishing flow processing framework
CN111030844B (en) * 2019-11-14 2023-03-14 中盈优创资讯科技有限公司 Method and device for establishing flow processing framework
CN114363245A (en) * 2020-09-30 2022-04-15 北京灵汐科技有限公司 Many-core network-on-chip data transmission method, device, equipment and medium
CN114363245B (en) * 2020-09-30 2024-04-26 北京灵汐科技有限公司 Multi-core network-on-chip data transmission method, device, equipment and medium
CN112565821A (en) * 2021-02-19 2021-03-26 紫光恒越技术有限公司 Data processing method and device, security gateway and storage device
CN112565821B (en) * 2021-02-19 2021-05-28 紫光恒越技术有限公司 Data processing method and device, security gateway and storage device
CN113965294A (en) * 2021-10-22 2022-01-21 北京灵汐科技有限公司 Data packet encoding method, data packet decoding method and device

Similar Documents

Publication Publication Date Title
CN104639460A (en) High-speed network data packet parallel receiving method based on many-core processor
CN103345461B FPGA-based multi-core processor network-on-chip with accelerator
Kliazovich et al. CA-DAG: Modeling communication-aware applications for scheduling in cloud computing
US9769084B2 (en) Optimizing placement of virtual machines
AU2011370439B2 (en) Method and apparatus for rapid data distribution
CN103412786B (en) High performance server architecture system and data processing method thereof
CN109076029A (en) Technology for network I/O access
CN102480430B (en) Method and device for realizing message order preservation
CN103049336A (en) Hash-based network card soft interrupt and load balancing method
CN102857505A (en) Data bus middleware of Internet of things
US11689470B2 (en) Allocation of processors for processing packets
JP2015511468A5 (en)
CN102546098A (en) Data transmission device, method and system
CN102970242A (en) Method for achieving load balancing
CN104935636A (en) Network channel acceleration method and system
CN104038418A (en) Routing method for hybrid topologic structure data center, path detection mechanism and message processing mechanism
CN105183431A (en) Method and apparatus for controlling CPU utilization ratio
Kulkarni et al. Scheduling opportunistic links in two-tiered reconfigurable datacenters
Verner et al. Scheduling periodic real-time communication in multi-GPU systems
CN101196928A (en) Contents searching method, system and engine distributing unit
CN103441952A (en) Network data package processing method based on multi-core or many-core embedded processor
US9128771B1 (en) System, method, and computer program product to distribute workload
WO2023249749A1 (en) Packet processing device to determine memory to store data in a server architecture and computing system including same
CN116132369A (en) Flow distribution method of multiple network ports in cloud gateway server and related equipment
CN107920035A (en) It is designed to the processor of certainty switching Ethernet

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150520