CN103428099B - Method of flow control for a general-purpose multi-core network processor - Google Patents

Method of flow control for a general-purpose multi-core network processor

Info

Publication number
CN103428099B
CN103428099B (application CN201310364979.6A)
Authority
CN
China
Prior art keywords
thread
message
mailbox
engine
credit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310364979.6A
Other languages
Chinese (zh)
Other versions
CN103428099A (en)
Inventor
陈一骄
胡勇庭
李韬
苏金树
吕高锋
孙志刚
崔向东
赵国鸿
毛席龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201310364979.6A
Publication of CN103428099A
Application granted
Publication of CN103428099B
Legal status: Active
Anticipated expiration

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention relates to a flow control method for a general-purpose multi-core network processor. In this method, before a software thread reads a packet from main memory for processing, it must obtain the credit allocated to it by the packet I/O engine; each packet the thread processes and sends consumes one unit of credit, and a thread with no remaining credit cannot process packets. A thread control module maintains each thread's credit information and decides whether the thread may process and send a packet; if so, it reads the packet from main memory and passes it to the thread processing module. When a thread's credit is used up, the thread control module obtains new credit from the packet I/O engine. This method adjusts each thread's transmission rate appropriately, so that all threads share the hardware processing resources fairly without reducing any thread's processing capability.

Description

Method of flow control for a general-purpose multi-core network processor
Technical field
The present invention relates to the field of network communication technology, and in particular to a flow control method for a general-purpose multi-core network processor.
Background technology
Today's computer networks demand ever higher bandwidth and, at the same time, increasingly intelligent in-network packet processing. Against this background, the network processor has rapidly become a core piece of network equipment. Network processors combine the high-speed processing of hardware with the programmability of software and can satisfy users' QoS requirements for bandwidth, delay, and more. General-purpose multi-core network processors hold an important position in this market thanks to outstanding advantages such as simple programming and a simple design and manufacturing process.
With the development of multi-core CPU technology, software processing speed in network processors has risen quickly. As it rises, the rate at which software submits packets to hardware may exceed the rate at which hardware can process and forward them; packets processed by software then cannot be sent and must be dropped, wasting CPU time and system-bus bandwidth. Flow control inside the network processor is therefore needed to govern the rate at which software sends packets to hardware.
In the XLR network processor, all processing threads must process packets at the same speed: a fast thread, after finishing its current packet, must wait until every thread has finished before all threads move on to the next packet together. This limits thread processing speed and thereby achieves flow control, but it prevents threads from reaching their best performance and hurts the processing capability of the network processor.
The most widely used flow-control mechanism in network communication is credit-based flow control: before transmitting data, the sender must obtain credit information from the receiver, where the credit usually corresponds to the receiver's remaining buffer space. Each data packet the sender transmits reduces its credit, and when the credit required by the current packet exceeds the sender's remaining credit, it cannot send. Credit-based flow control is simple to implement, guarantees lossless transmission, and requires little receiver buffering, which makes it well suited to the problem of software sending descriptors to hardware in a network processor. In a network processor, however, multiple threads share the hardware processing resources, and traditional credit-based flow control cannot solve the problem of allocating those resources among threads; some threads may end up waiting a long time, unable to send.
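The classic credit-based mechanism described above can be sketched in a few lines of Python. This is an illustrative model, not code from the patent: the sender spends one unit of credit per message, blocks at zero, and regains credit as the receiver drains its buffer.

```python
# Minimal sketch of credit-based flow control between one sender and one
# receiver. All names are illustrative.

from collections import deque

class CreditedSender:
    """Sender may transmit only while it holds credits."""
    def __init__(self, initial_credits):
        self.credits = initial_credits
        self.sent = 0

    def try_send(self, rx_queue):
        if self.credits == 0:
            return False          # blocked: no credit left
        rx_queue.append("pkt")
        self.credits -= 1         # one unit of credit per message
        self.sent += 1
        return True

class Receiver:
    """Returns one credit per message it drains from its buffer."""
    def __init__(self):
        self.queue = deque()

    def drain_one(self, sender):
        if self.queue:
            self.queue.popleft()
            sender.credits += 1   # credit flows back to the sender

rx = Receiver()
tx = CreditedSender(initial_credits=2)
assert tx.try_send(rx.queue) and tx.try_send(rx.queue)
assert not tx.try_send(rx.queue)   # third send blocked: credits exhausted
rx.drain_one(tx)                   # receiver frees a buffer slot
assert tx.try_send(rx.queue)       # sender may transmit again
```

The credit never exceeds the receiver's free buffer space, which is what makes the transfer lossless.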
Summary of the invention
On the basis of the original credit-based flow-control mechanism, this patent proposes a flow control method internal to the network processor that keeps the rate at which software sends packets to hardware below the hardware processing rate, avoids packet loss, and avoids wasting CPU and PCIe bandwidth resources.
In the flow control method of the present invention, before a software thread reads a packet from main memory for processing, it must obtain the credit allocated to it by the network acceleration engine; each packet processed and sent consumes one unit of credit, and a thread with no remaining credit cannot process packets. As shown in Figure 3, a thread control module maintains each thread's credit information and decides whether the thread may process and send a packet; if so, it reads the packet from main memory and passes it to the thread processing module. When a thread's credit is used up, the thread control module obtains new credit from the network acceleration engine. The method places a mailbox group in the network acceleration engine to record the credit allocated to threads: each mailbox corresponds to one thread, its value corresponds to the number of packets the network acceleration engine can accept from that thread, and the value in the mailbox is delivered to the thread as the thread's credit. Figure 2 shows the structure of the network acceleration engine: the Ingress module is the packet entrance, the Egress module is the packet exit, the dispatch module sends packets to the different processing threads, the DMARX and DMATX modules allow the hardware to access main memory directly, and PCIe is the system bus.
The flow control method of this patent mainly comprises the following processing steps:
a. Initialize the credit of each sending thread.
b. The network acceleration engine dispatches received network packets to different software threads for processing.
c. A software processing thread processes a packet and sends its descriptor to the network acceleration engine.
d. The packet I/O takes the packet out of main memory according to the descriptor and sends it.
e. The network acceleration engine updates the threads' credit.
Step a comprises the following sub-steps:
a1: Initialize the mailbox register bank in the network acceleration engine; the value of each mailbox register equals the number of packet descriptors the network acceleration engine can accept divided by the number of threads.
a2: Notify each thread of the value of its mailbox register in the network acceleration engine.
In step b, the dispatch module may dispatch packets selectively according to thread processing speed, so that faster threads are assigned more packets, achieving fairness of thread dispatch; packets may also be dispatched to different threads according to packet type.
Step c comprises the following sub-steps:
c1: If a thread's credit is greater than 0, the thread takes a packet from main memory, processes it, and sends the packet descriptor; the thread's credit is decremented by 1.
c2: If the thread's credit equals 0, it waits for a credit update.
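The per-thread check in sub-steps c1/c2 can be sketched as follows. The names (`thread_step`, the queues) are illustrative placeholders for the thread's actual work, not identifiers from the patent.

```python
# One iteration of a software processing thread under sub-steps c1/c2.

def thread_step(credit, main_memory, engine_queue):
    """Process one packet if credit allows; return the updated credit."""
    if credit > 0:                          # c1: credit available
        pkt = main_memory.pop(0)            # read a packet from main memory
        engine_queue.append(f"desc:{pkt}")  # send its descriptor to the engine
        return credit - 1                   # one unit of credit consumed
    return credit                           # c2: credit == 0, wait for update

mem = ["p0", "p1"]
out = []
c = 1
c = thread_step(c, mem, out)
assert c == 0 and out == ["desc:p0"]
c = thread_step(c, mem, out)                # blocked: credit exhausted
assert c == 0 and mem == ["p1"]
```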
Step d comprises the following sub-steps:
d1: The network acceleration engine receives the packet descriptor, processes it, and sends a DMA read request to main memory.
d2: Main memory delivers the packet to the DMATX module, which sends it.
d3: The network acceleration engine updates the value in the mailbox.
In step d3, the structure of the mailbox update system is as shown in Figure 6. Each thread has a corresponding maximum storage space in the network acceleration engine, a mailbox scheduling quota, and a mailbox value. A mailbox scheduling-quota allocation table is also maintained, whose structure is shown in Figure 7; it records how the scheduling quota is updated when a maximum storage space changes. The first field of a table entry records the thread number of an arriving packet, the second field records which thread should receive mailbox scheduling quota after a packet of that thread arrives, and the third field is the amount to allocate. Step d3 comprises the following sub-steps:
d31: On receiving a packet descriptor, the network acceleration engine completes the allocation of mailbox scheduling quota.
d32: The network acceleration engine takes a packet descriptor out of its buffer and completes the update of the mailbox value and the consumption of scheduling quota.
Step d31 comprises the following sub-steps:
d311: For each thread, initialize the maximum storage space, mailbox scheduling quota, scheduling-quota allocation table, and mailbox value.
d312: Each time the network acceleration engine receives a packet descriptor, it looks up the scheduling-quota allocation table by the descriptor's thread number and adds 1 to the scheduling quota of the corresponding thread. If that thread's scheduling quota is less than its maximum storage space, the engine continues to receive the next descriptor; if the quota equals the maximum storage space, jump to d313.
d313: When the scheduling quota of some thread x equals its maximum storage space, enlarge x's maximum storage space: give x half the space of the thread y with the largest remaining space, and in the allocation-table entry corresponding to thread y record the thread number x and the amount of space given to x.
Step d32 comprises the following:
When the network acceleration engine releases a packet descriptor, it polls all threads in round-robin order. If a thread's scheduling quota is not 0, its mailbox register is incremented by 1, consuming one unit of quota, and the next poll starts from the following thread; if its quota equals 0, the poll moves on to the next thread.
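The round-robin grant in step d32 can be sketched as below: when one descriptor is freed, the first polled thread with nonzero scheduling quota gets the slot, its mailbox grows by one, and the next poll starts just past it. Function and variable names are illustrative.

```python
# Step d32 sketch: grant one freed descriptor slot by round-robin polling.

def release_descriptor(mailbox, quota, start):
    """Grant one freed slot; return the index the next poll starts from."""
    n = len(mailbox)
    for i in range(n):
        t = (start + i) % n
        if quota[t] > 0:            # nonzero quota: this thread gets the slot
            quota[t] -= 1           # one unit of scheduling quota consumed
            mailbox[t] += 1         # one more credit visible to thread t
            return (t + 1) % n      # next poll starts at the following thread
    return start                    # no thread had quota; nothing granted

mbox, q = [0, 0, 0], [0, 2, 1]
nxt = release_descriptor(mbox, q, start=0)
assert mbox == [0, 1, 0] and q == [0, 1, 1] and nxt == 2
nxt = release_descriptor(mbox, q, start=nxt)
assert mbox == [0, 1, 1] and q == [0, 1, 0] and nxt == 0
```

Starting the next poll after the last grantee is what keeps the grant order fair among threads with quota.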
In step e, when the network acceleration engine updates a thread's credit, a suitable time can be chosen so that new credit arrives just as the thread's credit is about to run out, and the thread's processing is never interrupted. Subtracting the number of packet descriptors the network acceleration engine has received from a thread from the number of packets dispatched to that thread gives the number of packets still buffered at the thread; when this number is smaller than the maximum number of packets the thread can process within the time of one DMA write operation, the network acceleration engine needs to update the thread's credit.
This patent provides an adaptive flow control method. Counters placed in the network acceleration engine's dispatch module, mailbox, and DMATX module let the engine learn the number of packets held in software and allocate credit accordingly, adjusting the share of engine storage space occupied by different threads so that thread performance is maximized. Compared with the original network-processor flow-control mechanism and with traditional credit-based flow control, this mechanism adjusts each thread's transmission rate appropriately, lets all threads share the hardware processing resources fairly, and does not reduce thread processing capability.
Brief description of the drawings
Fig. 1 is a schematic diagram of the structure of the general-purpose multi-core network processor;
Fig. 2 is a structure chart of the general-purpose multi-core network processor;
Fig. 3 shows the packet processing flow of the network processor in the present invention;
Fig. 4 is a functional diagram of the mailbox update system;
Fig. 5 is a structure chart of the mailbox update system;
Fig. 6 is a structure chart of the mailbox scheduling-quota allocation table;
Fig. 7 is a flow chart of the allocation of mailbox scheduling quota in the network processor;
Fig. 8 is a structural model of the network-processor flow control in the present invention.
Embodiment
This patent aims to solve, in a general-purpose multi-core network processor, the problem of the general multi-core CPU processing packets so fast that the packet buffer of the network acceleration engine overflows. It adopts a flow-control mechanism based on "credit" that guarantees lossless communication between the general multi-core CPU and the network acceleration engine, and it applies differentiated service to credit updates: faster threads obtain more credit and thus process more packets, ensuring improved system performance.
The structure of the general-purpose multi-core network processor, shown in Figure 1, divides into two main parts: the general multi-core CPU and the network acceleration engine. The processing flow of most current network processors is as follows. Before the network acceleration engine starts receiving packets, software allocates buffers for packets, configures descriptor information, and initializes the descriptors to the network acceleration engine. When a packet is received, it is assigned a packet descriptor and its body is DMAed into system main memory. The CPU takes the packet out of main memory and processes it; when processing ends, the packet descriptor is sent to the network acceleration engine, which processes the descriptor, fetches the packet from main memory at the address carried in the descriptor, sends it, and reclaims the descriptor.
Figure 2 is a schematic diagram of the modular structure of the general-purpose multi-core network processor. After a packet enters the network processor through the ingress side of the network acceleration engine, the dispatch module at the ingress side assigns it to a thread and stores it at the assigned main-memory address by DMA. A thread extracts packets from main memory and processes them when it is free; after processing a packet, it sends the descriptor of the processed packet to the egress side of the network acceleration engine for output scheduling and other processing. When the network acceleration engine finishes processing, it constructs a packet read request, extracts the packet body from main memory, and outputs it through the egress side.
In this patent, the processing flow of the flow-control mechanism, shown in Figure 3, mainly comprises the following steps:
Step 301: Initialize the credit of each thread. At initialization, the sum of all thread mailboxes equals the total number of descriptors that the packet-descriptor buffer in the network acceleration engine can hold, with the storage space divided equally among threads. After the mailbox values are initialized, each thread reads its corresponding mailbox to obtain its initial credit.
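The initialization in step 301 amounts to one division: the engine's descriptor buffer is split evenly across threads. A sketch with illustrative names:

```python
# Step 301 sketch: mailbox values partition the descriptor buffer evenly.

def init_mailboxes(total_descriptors, n_threads):
    """Each thread's initial credit is an equal share of the buffer."""
    share = total_descriptors // n_threads
    return [share] * n_threads

mboxes = init_mailboxes(total_descriptors=256, n_threads=8)
assert mboxes == [32] * 8
assert sum(mboxes) <= 256          # shares never exceed the buffer
```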
Step 302: The network acceleration engine dispatches received network packets to different software threads for processing. After a packet enters the network acceleration engine through the ingress side, the engine parses it and assigns it to a designated thread according to its characteristics. Once the packet is assigned to a thread, the network acceleration engine DMAs it into the main-memory location designated for that thread.
Step 303: A software processing thread processes a packet and sends its descriptor to the network acceleration engine. Each time the software processes a packet, its own credit is decremented by 1; when its credit equals 0, it cannot extract packets from main memory for processing.
Step 304: The packet I/O takes the packet out of main memory according to the descriptor and sends it, updating the mailbox at the same time.
After a descriptor enters the network acceleration engine, it goes through output scheduling and other processing; the engine then constructs a packet read request from the descriptor, fetches the packet from the assigned main-memory address, and sends it. After the packet is sent, the credit in the mailbox is updated.
Step 305: The network acceleration engine updates thread credit. The dispatch module records the total number of packets dispatched to each thread, and the DMATX module records the total number of descriptors sent from each thread; the difference between the two is the number of packets the thread has not yet processed. Multiplying the fastest rate of thread processing (in units of PPS) by the time of one DMA write operation gives the number of packets the thread can process while a credit update is in flight. When the former is less than or equal to the latter, the network acceleration engine actively notifies the corresponding thread of the value of its mailbox.
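Step 305 decides when to push a credit update: refresh as soon as the thread's backlog could drain within one DMA-write round trip, so the thread never stalls. The sketch below assumes the threshold is the product of the fastest processing rate and the DMA write time (the garbled source wording makes the exact arithmetic an interpretation); all names are illustrative.

```python
# Step 305 sketch: when should the engine push a credit update?

def needs_credit_update(assigned, sent, max_rate_pps, dma_write_s):
    """True when the unprocessed backlog is at most what the thread can
    consume while the DMA write delivering new credit is in flight."""
    backlog = assigned - sent                  # packets still buffered
    drainable = max_rate_pps * dma_write_s     # processable during the DMA
    return backlog <= drainable

assert needs_credit_update(assigned=100, sent=95, max_rate_pps=1_000_000,
                           dma_write_s=10e-6)      # backlog 5 <= 10: update
assert not needs_credit_update(assigned=100, sent=50,
                               max_rate_pps=1_000_000, dma_write_s=10e-6)
```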
The mailbox update, shown in Figure 4, divides into two steps.
Step 401: On receiving a packet descriptor, the network acceleration engine completes the allocation of mailbox scheduling quota.
The structure of the mailbox update system is shown in Figure 5. Each thread has a corresponding maximum storage space in the network acceleration engine, a mailbox scheduling quota, and a mailbox value. The system adopts a weighted round-robin (WRR) scheduling algorithm and dynamically adjusts the WRR quotas according to the speed at which each thread processes packets, ensuring that fast threads obtain more of the network acceleration engine's cache space. To adjust the mailbox scheduling quotas dynamically, the system maintains a mailbox scheduling-quota allocation table, whose structure is shown in Figure 6; it records how the quotas are updated when a maximum storage space changes. The first field of an entry records the thread number of an arriving packet, the second field records which thread should receive mailbox scheduling quota after a packet of that thread arrives, and the third field is the amount to allocate; for example, if thread 1 has temporarily transferred 2 units of mailbox scheduling quota to thread 3, the entry holds 1, 3, and 2. The concrete flow of the mailbox update system, shown in Figure 7, comprises the following steps:
Step 701: At system initialization, initialize each thread's maximum storage space, mailbox scheduling quota, scheduling-quota allocation table, and mailbox value. A thread's maximum storage space is initialized to the total storage space of the network acceleration engine divided by the number of threads, the mailbox scheduling quotas are initialized to all zeros, and the scheduling-quota allocation table is initialized to empty.
Step 702: Each time the network acceleration engine receives a packet descriptor, it looks up the scheduling-quota allocation table by the descriptor's thread number and adds 1 to the scheduling quota of the corresponding thread. If that thread's scheduling quota is less than its maximum storage space, the engine continues to receive the next descriptor; if the quota equals the maximum storage space, the engine storage space occupied by the threads is reallocated: jump to step 703.
Step 703: When the scheduling quota of some thread x equals its maximum storage space, enlarge x's maximum storage space: give x half the space of the thread y with the largest remaining space, and in the allocation-table entry corresponding to thread y record the thread number x and the amount of space given to x.
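The reallocation in step 703 can be sketched as below. The source is ambiguous about whether "half the space of thread y" means half of y's maximum space or half of its residual space; this sketch assumes half of y's maximum space, and list indices stand in for thread numbers.

```python
# Step 703 sketch: grow thread x's maximum space at the expense of the
# thread with the largest remaining (unused) space.

def reallocate(max_space, quota, x, alloc_table):
    """Transfer half of the roomiest thread's space to thread x."""
    remaining = [m - q for m, q in zip(max_space, quota)]
    remaining[x] = -1                      # x cannot donate to itself
    y = remaining.index(max(remaining))    # thread with most room left
    give = max_space[y] // 2               # half of y's space moves to x
    max_space[y] -= give
    max_space[x] += give
    alloc_table[y] = (x, give)             # y's table entry: give `give` to x
    return y, give

space, quota, table = [8, 8, 8], [8, 2, 5], {}
y, give = reallocate(space, quota, x=0, alloc_table=table)
assert (y, give) == (1, 4)
assert space == [12, 4, 8] and table == {1: (0, 4)}
```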
Step 402: The network acceleration engine takes a packet descriptor out of its buffer and completes the update of the mailbox value and the consumption of scheduling quota.
When the network acceleration engine releases a packet descriptor, it polls all threads in round-robin order. If a thread's scheduling quota is not 0, its mailbox register is incremented by 1, consuming one unit of quota, and the next poll starts from the following thread; if its quota equals 0, the poll moves on to the next thread.
The concrete implementation structure of the network-processor flow control in the present invention is shown in Figure 8. Each thread maintains two registers, creditlimit and creditused. Creditlimit records the credit the network acceleration engine has allocated to the thread, and creditused records the credit the thread has consumed; each time the thread sends a packet descriptor, its creditused register is incremented by 1, so the difference between the two is the number of packet descriptors the thread may still send. After the network acceleration engine stores the credit update information from a mailbox into the designated location in main memory, the corresponding thread adds the credit value to its own creditlimit register. When the creditlimit register equals the creditused register, the thread stops sending packet descriptors. After a packet is sent from the network acceleration engine, its credit is reclaimed and added to the mailbox corresponding to the thread. When a thread's credit is about to run out, it reads new credit information from the mailbox and adds it to its own creditlimit register.
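The Figure-8 register pair can be modeled directly: creditlimit only grows (by mailbox updates), creditused only grows (by sends), and the thread may send only while they differ. An illustrative sketch:

```python
# Sketch of the creditlimit/creditused register pair from Figure 8.

class ThreadCredits:
    def __init__(self):
        self.creditlimit = 0   # total credit granted by the engine
        self.creditused = 0    # total descriptors this thread has sent

    def absorb_mailbox(self, mailbox_value):
        self.creditlimit += mailbox_value   # accumulate granted credit

    def try_send_descriptor(self):
        if self.creditlimit == self.creditused:
            return False                    # equal registers: must wait
        self.creditused += 1                # one descriptor sent
        return True

t = ThreadCredits()
assert not t.try_send_descriptor()   # no credit granted yet
t.absorb_mailbox(2)
assert t.try_send_descriptor() and t.try_send_descriptor()
assert not t.try_send_descriptor()   # limit == used again
t.absorb_mailbox(1)
assert t.try_send_descriptor()
```

Using two monotonically increasing counters instead of one decrementing counter means a mailbox update and a send never race on the same register.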

Claims (8)

1. A flow control method for a general-purpose multi-core network processor, characterized in that before a software thread reads a packet from main memory for processing, it obtains the credit allocated to it by the network acceleration engine; each packet the thread processes and sends consumes one unit of credit, and a thread with no remaining credit cannot process packets; a mailbox register bank is set in the network acceleration engine to record the credit allocated to the threads;
The method comprises the following steps:
a. initializing the credit of each sending thread;
b. the packet I/O engine dispatching received network packets to different software threads for processing, the I/O engine comprising an Ingress module, an Egress module, a dispatch module, a DMARX module, a DMATX module, and a PCIe bus, wherein the Ingress module is the packet entrance, the Egress module is the packet exit, the dispatch module sends packets to the different processing threads, the DMARX and DMATX modules allow the hardware to access main memory directly, and PCIe is the system bus;
c. a software processing thread processing a packet and sending its descriptor to the packet I/O engine;
d. the packet I/O engine taking the packet out of main memory according to the descriptor and sending it;
e. the packet I/O engine updating the credit of the threads;
step a comprises the following sub-steps:
a1: initializing the mailbox register bank in the packet I/O engine, the value of each mailbox register equalling the number of packet descriptors the packet I/O engine can accept divided by the number of threads;
a2: notifying each thread of the value of its mailbox register in the packet I/O engine;
in step b, the dispatch module may dispatch packets selectively according to thread processing speed, faster threads being assigned more packets, thereby achieving fairness of thread dispatch; packets may also be dispatched to different threads according to packet type;
step c comprises the following sub-steps:
c1: if a thread's credit is greater than 0, the thread takes a packet from main memory, processes it, and sends the packet descriptor, the thread's credit being decremented by 1;
c2: if the thread's credit equals 0, the thread waits for a credit update;
step d comprises the following sub-steps:
d1: the packet I/O engine receives the packet descriptor, processes it, and sends a DMA read request to main memory;
d2: main memory delivers the packet to the DMATX module, which sends it;
d3: the packet I/O engine updates the value in the mailbox register bank;
in step e, when the packet I/O engine updates a thread's credit, a suitable time can be set so that new credit arrives just as the thread's credit is about to run out and the thread's processing is never interrupted; subtracting the number of packet descriptors the packet I/O engine has received from a thread from the number of packets dispatched to that thread gives the number of packets still buffered at the thread, and when this number is smaller than the maximum number of packets the thread can process within the time of one DMA write operation, the packet I/O engine needs to update the thread's credit.
2. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that, in step d3, the update of the mailbox register bank is specifically as follows: each thread has a corresponding maximum storage space in the packet I/O engine, a mailbox scheduling quota, and a value in the mailbox register bank; a mailbox scheduling-quota allocation table is also maintained, which records how the scheduling quota is updated when a maximum storage space changes, the first field of a table entry recording the thread number of an arriving packet, the second field recording which thread should receive mailbox scheduling quota after a packet of that thread number arrives, and the third field being the amount to allocate.
3. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that step d3 comprises the following sub-steps:
d31: on receiving a packet descriptor, the packet I/O engine completes the allocation of mailbox scheduling quota;
d32: the packet I/O engine takes a packet descriptor out of its buffer and completes the update of the value in the mailbox register bank and the consumption of mailbox scheduling quota;
step d31 comprises the following sub-steps:
d311: initializing for each thread the maximum storage space, mailbox scheduling quota, scheduling-quota allocation table, and value in the mailbox register bank;
d312: each time the packet I/O engine receives a packet descriptor, it looks up the scheduling-quota allocation table by the descriptor's thread number and adds 1 to the scheduling quota of the corresponding thread; if that thread's scheduling quota is less than its maximum storage space, the engine continues to receive the next descriptor; if the quota equals the maximum storage space, jump to d313;
d313: when the scheduling quota of some thread x equals its maximum storage space, enlarge x's maximum storage space: give x half the space of the thread y with the largest remaining space, and in the allocation-table entry corresponding to thread y record the thread number x and the amount of space given to x;
step d32 comprises the following:
when the packet I/O engine releases a packet descriptor, it polls all threads in round-robin order; if a thread's scheduling quota is not 0, its mailbox register is incremented by 1, consuming one unit of quota, and the next poll starts from the following thread; if its quota equals 0, the poll moves on to the next thread.
4. The flow control method for a general-purpose multi-core network processor according to claim 3, characterized in that initializing for each thread the maximum storage space, mailbox scheduling quota, scheduling-quota allocation table, and value in the mailbox register bank in step d311 means: a thread's maximum storage space is initialized to the total storage space of the packet I/O engine divided by the number of threads, the mailbox scheduling quotas are initialized to all zeros, and the scheduling-quota allocation table is initialized to empty.
5. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that, in step e, the time at which the packet I/O engine updates a thread's credit is determined as follows: the dispatch module records the total number of packets dispatched to each thread and the DMATX module records the total number of descriptors sent from each thread, the difference between the two being the number of packets the thread has not yet processed; multiplying the fastest rate of thread processing, in units of PPS, by the time of one DMA write operation gives the number of packets the thread can process while a credit update is in flight; when the number of unprocessed packets is less than or equal to that number, the packet I/O engine actively notifies the corresponding thread of the value in its mailbox register bank.
6. The flow control method for a general-purpose multi-core network processor according to claim 1, characterized in that each thread maintains two registers, creditlimit and creditused; creditlimit records the credit the packet I/O engine has allocated to the thread and creditused records the credit the thread has consumed; each time the thread sends a packet descriptor, its creditused register is incremented by 1, so the difference between the two is the number of packet descriptors the thread may still send; after the packet I/O engine stores the credit update information from a mailbox into the designated location in main memory, the corresponding thread adds the credit value to its own creditlimit register; when the creditlimit register equals the creditused register, the thread stops sending packet descriptors.
7. The flow control method for a general-purpose multi-core network processor according to claim 6, characterized in that the difference between creditlimit and creditused is the thread's remaining credit, and the flow by which the network processor extracts packets for processing and sends packet descriptors from main memory comprises the following steps:
S1: if creditlimit and creditused is unequal, then thread extracts a message and processes from main memory;
S2: if creditlimit and credituse is equal, then wait for the renewal of creditlimit register;
S2: processed rear main memory and sent by message descriptor, creditused subtracts 1;
S3: message descriptor enters in the buffer memory of message I/O engine;
After a message descriptor in the buffer memory of S4: message I/O engine exports, mailbox is upgraded;
Mailbox information is sent to thread by S5: message I/O engine;
S6: mailbox information and creditlimit are done to add up and upgraded creditlimit by thread.
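The per-thread flow of claim 7, driven by the two credit registers of claim 6, can be sketched as a simple loop. All names here are invented for illustration, and the message extraction and processing steps are stubbed out, since the patent does not specify them.

```c
#include <stdint.h>

struct thread_state {
    uint32_t creditlimit;
    uint32_t creditused;
    uint32_t sent;       /* descriptors handed to the I/O engine */
};

/* One iteration of the claim-7 flow.  Returns 1 if a descriptor was
 * extracted, processed, and sent; returns 0 if the thread must wait
 * for a creditlimit update because all credit is consumed. */
static int try_send_one(struct thread_state *ts)
{
    if (ts->creditlimit == ts->creditused)
        return 0;        /* wait for the mailbox update */
    /* extract a message from main memory and process it (stubbed) */
    ts->creditused++;    /* sending consumes one unit of credit */
    ts->sent++;          /* descriptor enters the engine buffer */
    return 1;
}

/* After the engine outputs a descriptor it updates the mailbox and
 * sends the mailbox value to the thread, which accumulates it into
 * creditlimit, re-enabling sending. */
static void apply_mailbox_update(struct thread_state *ts, uint32_t mailbox)
{
    ts->creditlimit += mailbox;
}
```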
8. The flow control method for a universal multi-core network processor according to claim 7, characterized in that the mailbox update performed after a message descriptor is output from the buffer of the message I/O engine uses a weighted round-robin ("WRR") algorithm: each time the message I/O engine releases a message descriptor, a new free storage location becomes available and is reassigned to a thread for mailbox updating. Before allocation, a dispatch quota is established for each mailbox; this quota is updated whenever a message descriptor enters the message I/O engine, and the dispatch quota allocation table records the mailbox, and its dispatch quota, to which each message descriptor entering the message I/O engine should be assigned.
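The mailbox dispatch quotas of claim 8 can be sketched as below. The quota table, the round-robin position, and all names are invented for illustration; the patent does not give concrete data structures.

```c
#include <stdint.h>

#define NTHREADS 4

struct mailbox_sched {
    uint32_t quota[NTHREADS];  /* per-mailbox dispatch quota */
    uint32_t next;             /* round-robin scan position  */
};

/* A descriptor entering the message I/O engine raises the dispatch
 * quota of the mailbox it was assigned to in the allocation table. */
static void on_descriptor_enter(struct mailbox_sched *ms, uint32_t thread)
{
    ms->quota[thread]++;
}

/* When the engine releases a descriptor, the freed storage location
 * is reassigned, round-robin, to the next mailbox with non-zero
 * quota; that mailbox receives the credit update.  Returns the chosen
 * thread, or -1 if no mailbox currently holds quota. */
static int on_descriptor_release(struct mailbox_sched *ms)
{
    for (uint32_t i = 0; i < NTHREADS; i++) {
        uint32_t t = (ms->next + i) % NTHREADS;
        if (ms->quota[t] > 0) {
            ms->quota[t]--;
            ms->next = (t + 1) % NTHREADS;
            return (int)t;
        }
    }
    return -1;
}
```

Because quotas grow in proportion to the descriptors each thread pushes into the engine, credit updates flow back to threads in proportion to their traffic, which matches the fairness goal stated in the abstract.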
CN201310364979.6A 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control Active CN103428099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310364979.6A CN103428099B (en) 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control

Publications (2)

Publication Number Publication Date
CN103428099A CN103428099A (en) 2013-12-04
CN103428099B true CN103428099B (en) 2016-02-24

Family

ID=49652287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310364979.6A Active CN103428099B (en) 2013-08-21 2013-08-21 A kind of method of universal multi-core network processor flow control

Country Status (1)

Country Link
CN (1) CN103428099B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104767606B (en) * 2015-03-19 2018-10-19 华为技术有限公司 Data synchronization unit and method
CN107222358A (en) * 2016-03-21 2017-09-29 深圳市中兴微电子技术有限公司 Wrap flux monitoring method per second and device
CN105959161B (en) * 2016-07-08 2019-04-26 中国人民解放军国防科学技术大学 A kind of high speed packet construction and distribution control method and equipment
CN109814925A (en) * 2018-12-24 2019-05-28 合肥君正科技有限公司 A kind of method and device of the general self-configuring of hardware module
CN109714273A (en) * 2018-12-25 2019-05-03 武汉思普崚技术有限公司 A kind of message processing method and device of multi-core network device
CN111211931B (en) * 2020-02-20 2022-06-10 深圳市风云实业有限公司 Message forwarding system based on reconfigurable technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706742A (en) * 2009-11-20 2010-05-12 北京航空航天大学 Method for dispatching I/O of asymmetry virtual machine based on multi-core dynamic partitioning
CN101706743A (en) * 2009-12-07 2010-05-12 北京航空航天大学 Dispatching method of virtual machine under multi-core environment
CN102253857A (en) * 2011-06-24 2011-11-23 华中科技大学 Xen virtual machine scheduling control method in multi-core environment


Similar Documents

Publication Publication Date Title
CN103428099B (en) A kind of method of universal multi-core network processor flow control
US20220342715A1 (en) Configurable logic platform with reconfigurable processing circuitry
US11805065B2 (en) Scalable traffic management using one or more processor cores for multiple levels of quality of service
CN104539440B (en) Traffic management with in-let dimple
CN106533982B (en) The dynamic queue's dispatching device and method borrowed based on bandwidth
CN103605576B (en) Multithreading-based MapReduce execution system
CN103946803A (en) Processor with efficient work queuing
CN105511954A (en) Method and device for message processing
CN102170396A (en) QoS control method of cloud storage system based on differentiated service
CN109697122A (en) Task processing method, equipment and computer storage medium
CN103841052A (en) Bandwidth resource distribution system and method
CN102981973B (en) Perform the method for request within the storage system
CN102945185B (en) Task scheduling method and device
CN108984280A (en) A kind of management method and device, computer readable storage medium of chip external memory
CN101840328A (en) Data processing method, system and related equipment
CN104881322A (en) Method and device for dispatching cluster resource based on packing model
CN110134430A (en) A kind of data packing method, device, storage medium and server
CN107330680A (en) Red packet control method, device, computer equipment and computer-readable recording medium
US20050257012A1 (en) Storage device flow control
CN111181873A (en) Data transmission method, data transmission device, storage medium and electronic equipment
CN104125168A (en) A scheduling method and system for shared resources
CN102609307A (en) Multi-core multi-thread dual-operating system network equipment and control method thereof
US11347567B2 (en) Methods and apparatus for multiplexing data flows via a single data structure
CN104717160A (en) Interchanger and scheduling algorithm
CN104572498A (en) Cache management method for message and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant