CN103428099A - Flow control method for universal multi-core network processor - Google Patents

Flow control method for universal multi-core network processor

Info

Publication number
CN103428099A
CN103428099A CN2013103649796A CN201310364979A
Authority
CN
China
Prior art keywords
thread
message
mailbox
credit
engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103649796A
Other languages
Chinese (zh)
Other versions
CN103428099B (en)
Inventor
陈一骄
胡勇庭
李韬
苏金树
吕高锋
孙志刚
崔向东
赵国鸿
毛席龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201310364979.6A priority Critical patent/CN103428099B/en
Publication of CN103428099A publication Critical patent/CN103428099A/en
Application granted granted Critical
Publication of CN103428099B publication Critical patent/CN103428099B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a flow control method for a universal multi-core network processor. In this method, before reading a message from memory for processing, a software thread must obtain credit allocated by the message I/O engine; each message the thread processes and sends consumes one unit of credit, and the thread cannot process messages when no credit remains. A thread control module maintains the thread's credit and decides whether the thread may process and send a message; if so, it reads the message from main memory and passes it to the thread processing module for processing. When the credit is about to run out, the thread control module obtains new credit from the message I/O engine. The method can reasonably adjust the sending rate of each thread so that the threads share the hardware processing resources fairly, without reducing their processing capability.

Description

A flow control method for a general-purpose multi-core network processor
Technical field
The present invention relates to the field of network communication technology, and in particular to a flow control method for a general-purpose multi-core network processor.
Background technology
Modern computer networks demand not only ever-higher bandwidth, but also increasingly intelligent processing of the messages they carry. Under these conditions, network processors have rapidly become a core class of network processing equipment. A network processor combines the high-speed processing capability of hardware with the programmability of software, and can satisfy a user's QoS requirements in bandwidth, delay, and other dimensions. General-purpose multi-core network processors, thanks to their simple programming model and simple fabrication process, occupy an important position in the network-processor market.
With the development of multi-core CPU technology, the software processing speed inside network processors has risen rapidly. As software speeds up, the rate at which software submits messages to hardware may exceed the rate at which the hardware can forward them; messages already processed by software then cannot be sent and must be dropped, wasting CPU time and system-bus bandwidth. Flow control is therefore needed inside the network processor to limit the rate at which software sends messages to hardware.
In the XLR network processor, all processing threads must process messages at the same speed: a fast thread, after finishing its current message, must wait until every thread has finished the current message before processing the next one. This achieves flow control by capping thread processing speed, but it prevents threads from reaching their best performance and reduces the processing capability of the network processor.
The most widely used flow-control mechanism in network communication is credit-based flow control. Before sending data, the sender must obtain credit information from the receiver; a credit usually corresponds to the receiver's remaining buffer space. Each data message sent decreases the sender's credit, and when the credit required by the current message exceeds the sender's remaining credit, no data can be sent. Credit-based flow control is simple to implement, guarantees lossless transmission, and places only modest demands on receiver buffering, which makes it comparatively well suited to the problem of software sending descriptors to hardware inside a network processor. However, in a network processor many threads share the hardware processing resources simultaneously, and the traditional credit-based mechanism cannot solve the problem of allocating those resources among multiple threads; some threads may wait indefinitely without being able to send.
Summary of the invention
Building on the traditional credit-based flow control mechanism, this patent proposes a flow control method for the interior of a network processor that keeps the rate at which software sends messages to hardware below the hardware processing rate, so that no packets are dropped and no CPU or PCIe bandwidth is wasted.
In the flow control method of the present invention, before a software thread reads a message from memory for processing, it must obtain the credit allocated to it by the network acceleration engine. Each message a thread processes and sends consumes one unit of credit; when a thread has no remaining credit, it cannot process messages. As shown in Fig. 3, a thread control module in software maintains the thread's credit and decides whether the thread may process and send a message; if so, it reads the message from main memory and hands it to the thread processing module for processing. When the credit is about to run out, the thread control module obtains new credit from the network acceleration engine. The method provides a mailbox group inside the network acceleration engine to record the credit allocated to the threads: each mailbox corresponds to one thread, its value corresponds to the number of messages the network acceleration engine can accept, and that value is delivered to the thread as its credit. Fig. 2 shows the structure of the network acceleration engine: the Ingress module is the message entrance, the Egress module is the message exit, the dispatch module delivers messages to the different processing threads, the DMA RX and DMA TX modules let the hardware access main memory directly, and PCIe is the system bus.
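As a concrete illustration of the mailbox group described above, the following Python sketch models one mailbox per thread, initialized by dividing the engine's descriptor capacity equally among the threads. All names, the 256-descriptor capacity, and the read-and-clear semantics are illustrative assumptions, not values or behavior specified by the patent:

```python
TOTAL_DESCRIPTORS = 256   # assumed capacity of the engine's descriptor buffer
NUM_THREADS = 8           # assumed thread count

class MailboxGroup:
    """One mailbox per thread; the mailbox value is the credit granted to that thread."""
    def __init__(self, total_descriptors, num_threads):
        # Initialization divides the descriptor space equally (step a1).
        self.mailboxes = [total_descriptors // num_threads] * num_threads

    def read_and_clear(self, thread_id):
        # A thread reads its mailbox to collect newly granted credit (step a2);
        # clearing on read is an assumption of this sketch.
        credit = self.mailboxes[thread_id]
        self.mailboxes[thread_id] = 0
        return credit

group = MailboxGroup(TOTAL_DESCRIPTORS, NUM_THREADS)
print(group.read_and_clear(0))  # each thread starts with 256 // 8 = 32 credits
```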
The flow control method for the interior of the network processor in this patent mainly comprises the following steps:
a. Initialize the credit of each sending thread.
b. The network acceleration engine dispatches received network messages to different software threads for processing.
c. A software processing thread processes a message and sends its descriptor to the network acceleration engine.
d. The message I/O engine fetches the message from main memory according to the descriptor and sends it.
e. The network acceleration engine updates the credit of the thread.
Step a comprises the following sub-steps:
a1: Initialize the mailbox register group in the network acceleration engine; the value of each mailbox register equals the number of message descriptors the network acceleration engine can accept divided by the thread count.
a2: The network acceleration engine notifies each thread of the value in its mailbox register.
In step b, the dispatch module may dispatch messages according to thread processing speed, assigning more messages to faster threads and thereby achieving fairness of dispatch; it may also dispatch messages to different threads according to message type.
Step c comprises the following sub-steps:
c1: If the credit of the thread is greater than 0, the thread fetches the message from main memory, processes it, and sends the message descriptor; the credit of the thread is decremented by 1.
c2: If the credit of the thread equals 0, the thread waits for a credit update.
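The c1/c2 loop can be sketched as follows. Here `refill` stands in for the engine's credit update awaited in c2 and is assumed to block until new credit is available; all names are illustrative:

```python
def thread_send_loop(credit, messages, refill):
    """Sketch of steps c1/c2: a message is processed only while credit > 0,
    each descriptor sent consuming one credit; at zero credit the thread
    waits for a refill from the engine."""
    sent = []
    for msg in messages:
        if credit == 0:
            credit += refill()   # step c2: wait for the engine's credit update
        # step c1: fetch from main memory, process, send the descriptor
        sent.append(msg)
        credit -= 1
    return sent, credit

# e.g. starting with 2 credits and a refill that grants 4 more on demand
sent, left = thread_send_loop(2, ["m1", "m2", "m3"], lambda: 4)
print(sent, left)   # ['m1', 'm2', 'm3'] 3
```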
Step d comprises the following sub-steps:
d1: The network acceleration engine receives and processes the message descriptor and issues a DMA read request to memory.
d2: Main memory delivers the message to the DMA TX module, which sends it.
d3: The network acceleration engine updates the value in the mailbox.
In step d3, the mailbox update mechanism is structured as shown in Fig. 5: each thread has a corresponding maximum storage space within the network acceleration engine, a mailbox scheduling quota, and a mailbox value. A mailbox scheduling quota distribution table is also maintained, whose structure is shown in Fig. 6; it records how the scheduling quota is updated when a maximum storage space is changed. The first column of the table records the thread number of an arriving message, the second records which thread should be granted mailbox scheduling quota when a message for that thread arrives, and the third records the amount granted. Step d3 comprises the following sub-steps:
d31: On receiving a message descriptor, the network acceleration engine performs the distribution of the mailbox scheduling quota.
d32: On taking a message descriptor out of the buffer, the network acceleration engine updates the mailbox value and consumes the mailbox scheduling quota.
Step d31 comprises the following sub-steps:
d311: Initialize each thread's maximum storage space, mailbox scheduling quota, mailbox scheduling quota distribution table, and mailbox value.
d312: For every message descriptor received, the network acceleration engine looks up the mailbox scheduling quota distribution table by the descriptor's thread number and adds 1 to the mailbox scheduling quota of the corresponding thread. If the mailbox scheduling quota of that thread is less than its maximum storage space, continue receiving the next message descriptor; if it equals the maximum storage space, jump to d313.
d313: When the mailbox scheduling quota of some thread x equals its maximum storage space, its maximum storage space is enlarged: half the storage space of the thread y with the largest remaining storage space is given to thread x, and thread number x and the amount of storage given to thread x are recorded in the table entry corresponding to thread y.
Step d32 comprises the following:
The network acceleration engine releases one message descriptor and polls all threads in round-robin order: if the scheduling quota of a thread is not 0, its mailbox register is incremented by 1 and the next poll starts from the following thread; if its scheduling quota equals 0, the next thread is polled.
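A minimal sketch of the d32 round-robin release, under the assumption (suggested by the sub-step's title, "consumption of the mailbox scheduling quota") that granting a mailbox slot also consumes one unit of the thread's scheduling quota; function and variable names are illustrative:

```python
def release_descriptor(mailboxes, sched_quota, start):
    """Sketch of step d32: when one descriptor slot is freed, poll threads
    round-robin from `start`; the first thread with a nonzero scheduling
    quota gets +1 in its mailbox and -1 quota. Returns the next poll start."""
    n = len(mailboxes)
    for i in range(n):
        t = (start + i) % n
        if sched_quota[t] > 0:
            sched_quota[t] -= 1
            mailboxes[t] += 1
            return (t + 1) % n   # next poll starts from the following thread
    return start                 # no thread had quota; nothing granted
```

For example, with quotas `[0, 2, 1]` and an empty mailbox group, the first released descriptor skips thread 0 and credits thread 1; polling then resumes at thread 2.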
In step e, when the network acceleration engine updates the credit of a thread, a suitable time can be chosen so that new credit is obtained just as the thread's credit is about to run out, and the thread's processing is never interrupted. Subtracting the number of message descriptors received by the network acceleration engine from the number of messages dispatched to each thread gives the number of messages still buffered for that thread; when the number of messages buffered for the thread is no greater than the maximum number of messages the thread can process within the time of one DMA write operation, the network acceleration engine needs to update the credit of the thread.
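The refresh condition of step e amounts to a simple comparison. In this hedged sketch, `rate_pps * dma_write_time_s` models the number of messages the thread can process during one DMA write (the parameter names and sample numbers are illustrative):

```python
def needs_credit_update(dispatched, sent, rate_pps, dma_write_time_s):
    """Sketch of the step-e condition: messages still buffered for a thread
    = dispatched - sent descriptors; refresh its credit when that backlog
    drops to or below what the thread can process during one DMA write."""
    buffered = dispatched - sent
    processable_during_dma = rate_pps * dma_write_time_s
    return buffered <= processable_during_dma

# 20 messages buffered, 1 Mpps thread, 50 µs DMA write -> 50 processable -> refresh now
print(needs_credit_update(120, 100, 1_000_000, 50e-6))   # True
```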
This patent provides an adaptive flow control method. By placing counters in the dispatch module, the mailboxes, and the DMA TX module of the network acceleration engine, the engine can learn how many messages reside in software and allocate credit to the software accordingly, thereby adjusting how much of the engine's storage space each thread occupies and letting thread performance reach its maximum. Compared with the original network-processor flow-control mechanism and with traditional credit-based flow control, this mechanism can reasonably adjust the sending rate of each thread so that the threads share the hardware processing resources fairly, without reducing their processing capability.
Brief description of the drawings
Fig. 1 is a structure diagram of a general-purpose multi-core network processor;
Fig. 2 is a modular structure diagram of a general-purpose multi-core network processor;
Fig. 3 is the message handling flow of the network processor in the present invention;
Fig. 4 is a functional diagram of the mailbox update system;
Fig. 5 is a structure diagram of the mailbox update system;
Fig. 6 is a structure diagram of the mailbox scheduling quota distribution table;
Fig. 7 is a flow chart of mailbox scheduling quota distribution in the network processor;
Fig. 8 is a structural model of flow control in the network processor of the present invention.
Embodiment
The present invention addresses the problem that, in a general-purpose multi-core network processor, an overly fast general-purpose multi-core CPU causes the packet buffer of the network acceleration engine to overflow. This patent adopts a credit-based flow control mechanism that guarantees lossless communication between the general-purpose multi-core CPU and the network acceleration engine, and applies differentiated service to credit updates: faster threads obtain more credit and thus process more messages, improving system performance.
As shown in Fig. 1, a general-purpose multi-core network processor can be divided into two major parts: the general-purpose multi-core CPU and the network acceleration engine. Most network processors follow this flow: before the network acceleration engine starts receiving messages, software allocates buffers for the messages, configures the descriptor information, and initializes the network acceleration engine with it. When a message is received, a message descriptor is allocated for it and the message body is DMA-transferred into system main memory. The CPU takes the message out of main memory and processes it; when processing finishes, the message descriptor is sent to the network acceleration engine, which, after processing the descriptor, takes the message out of main memory at the address carried in the descriptor, sends it, and reclaims the message descriptor.
Fig. 2 is a modular structure schematic of a general-purpose multi-core network processor. After a message enters the network processor through the ingress of the network acceleration engine, the dispatch module at the ingress side assigns it to a thread and stores it by DMA at the designated location in main memory. A thread fetches messages from main memory and processes them when free; after finishing a message, it sends the descriptor of the finished message to the egress side of the network acceleration engine for output scheduling and other processing. The network acceleration engine then constructs a message read request for the finished message, fetches the message body from main memory, and finally outputs it through the egress side.
In this patent, the handling process of flow-control mechanism as shown in Figure 3, mainly comprises following steps:
Step 301: Initialize the credit of each thread. At initialization, the sum of all thread mailboxes equals the total number of descriptors that the message-descriptor buffer in the network acceleration engine can hold, and the storage space is divided equally among the threads. After the mailbox values are initialized, each thread reads its corresponding mailbox to obtain its initial credit.
Step 302: The network acceleration engine dispatches received network messages to different software threads for processing. After a message enters through the ingress side, the network acceleration engine parses it and, according to its characteristics, assigns it to a designated thread for processing. Once a thread has been assigned, the engine DMA-transfers the message to that thread's designated main-memory location.
Step 303: A software processing thread processes the message and sends the descriptor to the network acceleration engine. For every message processed, the thread's own credit is decremented by 1; when the credit equals 0, the thread can no longer fetch messages from main memory for processing.
Step 304: The message I/O engine fetches the message from main memory according to the descriptor and sends it, updating the mailbox at the same time.
After a descriptor enters the network acceleration engine, it undergoes output scheduling and other processing; the engine then constructs a message read request from the descriptor, fetches the message from the designated main-memory address, and sends it. After the message has been sent, the credit in the mailbox is updated.
Step 305: The network acceleration engine updates the credit of the thread. The dispatch module records the total number of messages assigned to each thread, and the DMA TX module records the total number of descriptors sent by each thread; the difference between the two is the number of messages not yet processed by the thread. Multiplying the thread's processing rate (in PPS) by the time of one DMA write operation gives the number of messages the thread can process while the credit is being updated. When the former is less than or equal to the latter, the network acceleration engine proactively notifies the corresponding thread of the value of its mailbox.
The mailbox update, shown in Fig. 4, is divided into two steps.
Step 401: On receiving a message descriptor, the network acceleration engine performs the distribution of the mailbox scheduling quota.
The structure of the mailbox update system is shown in Fig. 5. Each thread has a corresponding maximum storage space within the network acceleration engine, a mailbox scheduling quota, and a mailbox value. The system uses a WRR (weighted round-robin) scheduling algorithm and dynamically changes the WRR quotas according to thread processing speed, ensuring that faster threads obtain more of the network acceleration engine's buffer space. To adjust the mailbox scheduling quotas dynamically, the system maintains a mailbox scheduling quota distribution table, shown in Fig. 6, which records how the scheduling quota is updated when a maximum storage space is changed. The first column of the table records the thread number of an arriving message, the second records which thread should be granted mailbox scheduling quota when a message for that thread arrives, and the third records the amount granted; for example, if thread 1 has temporarily transferred 2 units of mailbox scheduling quota to thread 3, the table entry reads 1, 3, and 2. The concrete flow of the mailbox update system, shown in Fig. 7, comprises the following steps:
Step 701: At system initialization, initialize each thread's maximum storage space, mailbox scheduling quota, mailbox scheduling quota distribution table, and mailbox value. The maximum storage space of a thread is initialized to the total storage space of the network acceleration engine divided by the thread count, the mailbox scheduling quotas are initialized to all zeros, and the mailbox scheduling quota distribution table is initialized to empty.
Step 702: For every message descriptor received, the network acceleration engine looks up the mailbox scheduling quota distribution table by the descriptor's thread number and adds 1 to the mailbox scheduling quota of the corresponding thread. If the mailbox scheduling quota of that thread is less than its maximum storage space, continue receiving the next message descriptor; if it equals the maximum storage space, the engine storage space occupied by the threads is reallocated, jumping to step 703.
Step 703: When the mailbox scheduling quota of some thread x equals its maximum storage space, its maximum storage space is enlarged: half the storage space of the thread y with the largest remaining storage space is given to thread x, and thread number x and the amount of storage given to thread x are recorded in the table entry corresponding to thread y.
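A sketch of the step-703 reallocation under stated assumptions: the donor y is taken to be the thread with the largest maximum storage space other than x, and the three-column table entry is modeled as a (donor, borrower, amount) tuple, mirroring the 1, 3, 2 example given for Fig. 6; the exact donor-selection rule and table layout are interpretations, not fixed by the text:

```python
def reallocate_space(max_space, alloc_table, x):
    """Sketch of step 703: when thread x's scheduling quota reaches its
    maximum space, take half the space of the thread with the largest
    remaining space and record the loan in that donor's table entry."""
    donor = max((t for t in range(len(max_space)) if t != x),
                key=lambda t: max_space[t])
    amount = max_space[donor] // 2
    max_space[donor] -= amount
    max_space[x] += amount
    alloc_table[donor] = (donor, x, amount)   # (arriving thread, grantee, amount)
    return donor, amount
```

For example, with equal spaces of 32 except thread 2 at 64, a saturated thread 0 borrows 32 from thread 2, and thread 2's table entry becomes (2, 0, 32).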
Step 402: On sending a message descriptor out of the buffer, the network acceleration engine updates the mailbox value and consumes the mailbox scheduling quota.
The network acceleration engine releases one message descriptor and polls all threads in round-robin order: if the scheduling quota of a thread is not 0, its mailbox register is incremented by 1 and the next poll starts from the following thread; if its scheduling quota equals 0, the next thread is polled.
As shown in Fig. 8, in the concrete implementation of network-processor flow control in the present invention, each thread maintains two registers, credit limit and credit used. Credit limit represents the credit the network acceleration engine has allocated to the thread, and credit used represents the credit the thread has consumed; the credit used register is incremented by 1 each time the thread sends a message descriptor, and the difference between the two is the number of message descriptors the thread can still send. Each time the network acceleration engine stores a credit update from a mailbox into the designated main-memory location, the corresponding thread adds it to its own credit limit register. When the credit limit register equals the credit used register, the thread stops sending message descriptors. When a message has been sent from the network acceleration engine, its credit is reclaimed and added to the mailbox corresponding to the thread. When a thread's credit is about to be exhausted, the thread reads the credit information from its mailbox and adds it to its own credit limit register.
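The two-register scheme of Fig. 8 can be sketched as follows; class and method names are illustrative, not from the patent:

```python
class ThreadCredit:
    """Sketch of the per-thread registers described above: credit_limit
    accumulates grants from the engine's mailbox, credit_used counts
    descriptors sent, and their difference is the remaining credit."""
    def __init__(self, initial_grant):
        self.credit_limit = initial_grant
        self.credit_used = 0

    def remaining(self):
        return self.credit_limit - self.credit_used

    def send_descriptor(self):
        # Sending stops when credit_limit == credit_used.
        if self.remaining() == 0:
            return False
        self.credit_used += 1
        return True

    def add_grant(self, mailbox_value):
        # The thread reads its mailbox and accumulates the value into credit_limit.
        self.credit_limit += mailbox_value

tc = ThreadCredit(2)
tc.send_descriptor(); tc.send_descriptor()
print(tc.send_descriptor())   # False: credit exhausted until the mailbox is read
tc.add_grant(3)
print(tc.send_descriptor())   # True
```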

Claims (8)

1. A flow control method for a general-purpose multi-core network processor, characterized in that: before a software thread reads a message from memory for processing, it obtains the credit allocated to it by the network acceleration engine; each message the thread processes and sends consumes one unit of credit; when the thread has no remaining credit, it cannot process messages; and a mailbox group is provided in the network acceleration engine to record the credit allocated to the threads;
The method comprises the following steps:
a. initializing the credit of each sending thread;
b. the message I/O engine dispatching received network messages to different software threads for processing; the I/O engine comprising an Ingress module, an Egress module, a dispatch module, DMA RX and DMA TX modules, and a PCIe bus; wherein the Ingress module is the message entrance, the Egress module is the message exit, the dispatch module delivers messages to the different processing threads, the DMA RX and DMA TX modules let the hardware access main memory directly, and PCIe is the system bus;
c. a software processing thread processing a message and sending its descriptor to the message I/O engine;
d. the message I/O engine fetching the message from main memory according to the descriptor and sending it;
e. the message I/O engine updating the credit of the thread;
Said step a comprises the following sub-steps:
a1: initializing the mailbox register group in the message I/O engine, the value of each mailbox register equaling the number of message descriptors the message I/O engine can accept divided by the thread count;
a2: the message I/O engine notifying each thread of the value in its mailbox register;
In said step b, the dispatch module may dispatch messages according to thread processing speed, assigning more messages to faster threads and thereby achieving fairness of dispatch; it may also dispatch messages to different threads according to message type;
Said step c comprises the following sub-steps:
c1: if the credit of the thread is greater than 0, the thread fetching the message from main memory, processing it, and sending the message descriptor, the credit of the thread being decremented by 1;
c2: if the credit of the thread equals 0, waiting for a credit update;
Said step d comprises the following sub-steps:
d1: the message I/O engine receiving and processing the message descriptor and issuing a DMA read request to memory;
d2: main memory delivering the message to the DMA TX module, which sends it;
d3: the message I/O engine updating the value in the mailbox;
In said step e, when the message I/O engine updates the credit of a thread, a suitable time can be chosen so that new credit is obtained just as the thread's credit is about to run out and the thread's processing is never interrupted; subtracting the number of message descriptors received by the message I/O engine from the number of messages dispatched to each thread gives the number of messages still buffered for the thread; when the number of messages buffered for the thread is no greater than the maximum number of messages the thread can process within the time of one DMA write operation, the message I/O engine needs to update the credit of the thread.
2. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that in step d3 the mailbox update is specifically as follows: each thread has a corresponding maximum storage space within the message I/O engine, a mailbox scheduling quota, and a mailbox value; a mailbox scheduling quota distribution table is also maintained, which records how the scheduling quota is updated when a maximum storage space is changed; the first column of the table records the thread number of an arriving message, the second records which thread should be granted mailbox scheduling quota when a message for that thread arrives, and the third records the amount granted.
3. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that said step d3 comprises the following sub-steps:
d31: on receiving a message descriptor, the message I/O engine performing the distribution of the mailbox scheduling quota;
d32: on taking a message descriptor out of the buffer, the message I/O engine updating the mailbox value and consuming the mailbox scheduling quota;
Step d31 comprises the following sub-steps:
d311: initializing each thread's maximum storage space, mailbox scheduling quota, mailbox scheduling quota distribution table, and mailbox value;
d312: for every message descriptor received, the message I/O engine looking up the mailbox scheduling quota distribution table by the descriptor's thread number and adding 1 to the mailbox scheduling quota of the corresponding thread; if the mailbox scheduling quota of that thread is less than its maximum storage space, continuing to receive the next message descriptor; if it equals the maximum storage space, jumping to d313;
d313: when the mailbox scheduling quota of some thread x equals its maximum storage space, enlarging its maximum storage space: giving thread x half the storage space of the thread y with the largest remaining storage space, and recording thread number x and the amount of storage given to thread x in the table entry corresponding to thread y;
Step d32 comprises the following:
the message I/O engine releasing one message descriptor and polling all threads in round-robin order; if the scheduling quota of a thread is not 0, incrementing its mailbox register by 1, the next poll starting from the following thread; if its scheduling quota equals 0, polling the next thread.
4. The flow control method for a general-purpose multi-core network processor according to claim 3, characterized in that, in the initialization of each thread's maximum storage space, mailbox scheduling quota, mailbox scheduling quota distribution table, and mailbox value in step d311: the maximum storage space of a thread is initialized to the total storage space of the message I/O engine divided by the thread count, the mailbox scheduling quotas are initialized to all zeros, and the mailbox scheduling quota distribution table is initialized to empty.
5. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that the time at which the message I/O engine updates the credit of a thread in step e is determined as follows: the dispatch module records the total number of messages assigned to each thread, and the DMA TX module records the total number of descriptors sent by each thread, the difference between the two being the number of messages not yet processed by the thread; multiplying the processing rate of the thread, in PPS, by the time of one DMA write operation gives the number of messages the thread can process while the credit is being updated; when the former is less than or equal to the latter, the message I/O engine proactively notifies the corresponding thread of the value of its mailbox.
6. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that each thread maintains two registers, credit limit and credit used; credit limit represents the credit the message I/O engine has allocated to the thread, and credit used represents the credit the thread has consumed; the credit used register is incremented by 1 each time the thread sends a message descriptor, and the difference between the two is the number of message descriptors the thread can still send; each time the message I/O engine stores a credit update from a mailbox into the designated main memory, the corresponding thread adds it to its own credit limit register; when the credit limit register equals the credit used register, the thread stops sending message descriptors.
7. The flow control method for a universal multi-core network processor according to claim 6, wherein the difference between credit limit and credit used is the thread's remaining credit, and the flow by which the network processor extracts, processes and sends message descriptors from main memory comprises the following steps:
S1: if credit limit and credit used are unequal, the thread extracts a message from main memory and processes it;
S2: if credit limit equals credit used, the thread waits for the credit limit register to be updated;
S3: after processing is finished, the message descriptor is sent from main memory and credit used is incremented by 1;
S4: the message descriptor enters the buffer of the message I/O engine;
S5: after a message descriptor leaves the buffer of the message I/O engine, the mailbox is updated;
S6: the message I/O engine sends the mailbox information to the thread;
S7: the thread adds the mailbox information to credit limit, updating the credit limit register.
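The extract/process/send loop above can be simulated end to end. The sketch below is an idealized single-thread model under our own assumptions (each drained descriptor immediately returns one unit of credit via the mailbox); it is not the patent's implementation:

```python
from collections import deque

# Illustrative simulation of the descriptor send flow in claim 7.
# All names, and the one-credit-per-drained-descriptor assumption, are ours.

def process_loop(messages, initial_credit):
    """Each descriptor sent consumes one credit; when credit runs out, a
    descriptor is drained from the engine buffer, which triggers a mailbox
    update that replenishes the thread's credit limit."""
    credit_limit, credit_used = initial_credit, 0
    engine_buf = deque()        # buffer of the message I/O engine
    pending = deque(messages)   # messages waiting in main memory
    sent = 0
    while pending:
        if credit_limit == credit_used:
            # out of credit: wait for the engine to output a descriptor
            engine_buf.popleft()       # descriptor leaves the engine buffer
            credit_limit += 1          # mailbox update adds credit back
            continue
        msg = pending.popleft()        # extract a message from main memory
        descriptor = ("desc", msg)     # process it and build a descriptor
        credit_used += 1               # one unit of credit consumed
        engine_buf.append(descriptor)  # descriptor enters the engine buffer
        sent += 1
    return sent
```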
8. The flow control method for a universal multi-core network processor according to claim 7, wherein the mailbox update is performed with a weighted round-robin ("WRR") algorithm: each time the message I/O engine releases a message descriptor, a new storage location becomes available, and the mailbox update reallocates this new location to a thread. Before allocation, a mailbox scheduling quota is established for each mailbox; the mailbox scheduling quota is updated whenever a message descriptor enters the message I/O engine, and the mailbox scheduling quota allocation table records the mailbox scheduling quota that should be assigned to each message descriptor entering the message I/O engine.
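A generic weighted round-robin allocator conveys the idea of distributing freed storage locations among thread mailboxes in proportion to their scheduling quotas. This sketch uses a deficit-counter style of WRR under our own assumptions (integer weights standing in for the mailbox scheduling quotas); it is not the patent's exact scheme:

```python
from itertools import cycle

# Hedged sketch of the WRR mailbox update in claim 8; the weight values
# play the role of the mailbox scheduling quotas, and all names are ours.

def wrr_allocate(weights, num_slots):
    """Distribute num_slots freed storage locations across thread mailboxes
    in proportion to each mailbox's weight (scheduling quota)."""
    grants = {t: 0 for t in weights}
    deficits = {t: 0 for t in weights}
    remaining = num_slots
    for t in cycle(weights):           # visit mailboxes round-robin
        if remaining == 0:
            break
        deficits[t] += weights[t]      # top up this mailbox's quota each round
        while deficits[t] >= 1 and remaining > 0:
            deficits[t] -= 1           # spend one unit of quota per slot granted
            grants[t] += 1
            remaining -= 1
    return grants
```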
CN201310364979.6A 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control Active CN103428099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310364979.6A CN103428099B (en) 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control


Publications (2)

Publication Number Publication Date
CN103428099A true CN103428099A (en) 2013-12-04
CN103428099B CN103428099B (en) 2016-02-24

Family

ID=49652287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310364979.6A Active CN103428099B (en) 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control

Country Status (1)

Country Link
CN (1) CN103428099B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767606A (en) * 2015-03-19 2015-07-08 华为技术有限公司 Data synchronization device and method
CN105959161A (en) * 2016-07-08 2016-09-21 中国人民解放军国防科学技术大学 High-speed data packet construction and distribution control method and device
CN107222358A (en) * 2016-03-21 2017-09-29 深圳市中兴微电子技术有限公司 Wrap flux monitoring method per second and device
CN109714273A (en) * 2018-12-25 2019-05-03 武汉思普崚技术有限公司 A kind of message processing method and device of multi-core network device
CN109814925A (en) * 2018-12-24 2019-05-28 合肥君正科技有限公司 A kind of method and device of the general self-configuring of hardware module
CN111211931A (en) * 2020-02-20 2020-05-29 深圳市风云实业有限公司 Message forwarding system based on reconfigurable technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706742A (en) * 2009-11-20 2010-05-12 北京航空航天大学 Method for dispatching I/O of asymmetry virtual machine based on multi-core dynamic partitioning
CN101706743A (en) * 2009-12-07 2010-05-12 北京航空航天大学 Dispatching method of virtual machine under multi-core environment
CN102253857A (en) * 2011-06-24 2011-11-23 华中科技大学 Xen virtual machine scheduling control method in multi-core environment


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767606A (en) * 2015-03-19 2015-07-08 华为技术有限公司 Data synchronization device and method
CN104767606B (en) * 2015-03-19 2018-10-19 华为技术有限公司 Data synchronization unit and method
CN107222358A (en) * 2016-03-21 2017-09-29 深圳市中兴微电子技术有限公司 Wrap flux monitoring method per second and device
CN105959161A (en) * 2016-07-08 2016-09-21 中国人民解放军国防科学技术大学 High-speed data packet construction and distribution control method and device
CN105959161B (en) * 2016-07-08 2019-04-26 中国人民解放军国防科学技术大学 A kind of high speed packet construction and distribution control method and equipment
CN109814925A (en) * 2018-12-24 2019-05-28 合肥君正科技有限公司 A kind of method and device of the general self-configuring of hardware module
CN109714273A (en) * 2018-12-25 2019-05-03 武汉思普崚技术有限公司 A kind of message processing method and device of multi-core network device
CN111211931A (en) * 2020-02-20 2020-05-29 深圳市风云实业有限公司 Message forwarding system based on reconfigurable technology
CN111211931B (en) * 2020-02-20 2022-06-10 深圳市风云实业有限公司 Message forwarding system based on reconfigurable technology

Also Published As

Publication number Publication date
CN103428099B (en) 2016-02-24

Similar Documents

Publication Publication Date Title
CN103428099A (en) Flow control method for universal multi-core network processor
US11805065B2 (en) Scalable traffic management using one or more processor cores for multiple levels of quality of service
US6956818B1 (en) Method and apparatus for dynamic class-based packet scheduling
EP1774714B1 (en) Hierarchal scheduler with multiple scheduling lanes
CN103605576B (en) Multithreading-based MapReduce execution system
US7768907B2 (en) System and method for improved Ethernet load balancing
CN106533982B (en) The dynamic queue's dispatching device and method borrowed based on bandwidth
US9152482B2 (en) Multi-core processor system
CN103946803A (en) Processor with efficient work queuing
CN104579865A (en) Data communications network for an aircraft
KR101859188B1 (en) Apparatus and method for partition scheduling for manycore system
US7373467B2 (en) Storage device flow control
EP2526478B1 (en) A packet buffer comprising a data section an a data description section
WO2010135926A1 (en) Method and device for scheduling queues based on chained list
CN102945185B (en) Task scheduling method and device
CN110134430A (en) A kind of data packing method, device, storage medium and server
CN102609307A (en) Multi-core multi-thread dual-operating system network equipment and control method thereof
CN104572498A (en) Cache management method for message and device
CN102098217B (en) Probability-based multipriority queue scheduling method
EP2383658B1 (en) Queue depth management for communication between host and peripheral device
CN110519180A (en) Network card virtualization queue scheduling method and system
CN112783644B (en) Distributed inclined flow processing method and system based on high-frequency key value counting
US8943236B1 (en) Packet scheduling using a programmable weighted fair queuing scheduler that employs deficit round robin
CN111756586B (en) Fair bandwidth allocation method based on priority queue in data center network, switch and readable storage medium
US7912068B2 (en) Low-latency scheduling in large switches

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant