CN105610729A - Buffer allocation method, buffer allocation device and network processor - Google Patents

Buffer allocation method, buffer allocation device and network processor

Info

Publication number
CN105610729A
CN105610729A (application CN201410663761.5A)
Authority
CN
China
Prior art keywords
port
buffer
fixing
network processing
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410663761.5A
Other languages
Chinese (zh)
Inventor
姜海明
孔玲丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201410663761.5A priority Critical patent/CN105610729A/en
Priority to PCT/CN2015/077698 priority patent/WO2016078341A1/en
Publication of CN105610729A publication Critical patent/CN105610729A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9005 Buffering arrangements using dynamic buffer space allocation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/9063 Intermediate storage in different physical parts of a node or terminal
    • H04L49/9068 Intermediate storage in different physical parts of a node or terminal in the network interface card

Abstract

Embodiments of the invention provide a buffer allocation method, a buffer allocation device and a network processor, and relate to the field of communications. The buffer allocation method comprises the following steps: dividing the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region; allocating a fixed buffer resource to each port of the network processor in the fixed reserved buffer region; and, if the actual traffic of a port exceeds the fixed buffer resource allocated to it, allocating a dynamic buffer resource to the port from the dynamic shared buffer region. With the scheme of the invention, the buffer resources of the network processor can be utilized more effectively.

Description

Cache allocation method, cache allocation device, and network processor
Technical field
The present invention relates to the field of communications, and in particular to a cache allocation method and device applied to the ports of a network processor, and to a network processor.
Background art
Networks are now developing at a remarkable pace; the growth of network traffic and the emergence of new services require network equipment with line-rate performance and flexible processing capability. Current network chips fall into two broad categories: ASICs (application-specific integrated circuits) and NPs (network processors). With its high-speed processing and flexible programmability, the network processor has become an effective solution for data processing in today's networks.
As shown in Figure 1, a network processor generally contains two units: a packet buffer unit and a packet processing engine. A message entering the network processor first enters the queue corresponding to its ingress port in the packet buffer unit; the packet header then enters the packet processing engine, where microcode processes it. After the message has been modified, it re-enters the packet buffer unit, is linked back to the original packet in the corresponding queue, and is sent out from a port.
As shown in Figure 2, the packet buffer unit contains multiple queues, each corresponding to an ingress port. The packet buffer unit comprises a block of cache (buffer) whose memory is shared by the queues of the different ports. The existing queue-cache allocation scheme usually adopts fixed allocation, assigning cache according to port rate. For example, suppose a chip has 2 10G ports and 10 1GE ports (Gigabit Ethernet interfaces) and the total cache size is 60K: each 10G port is allocated 20K of memory and each 1GE port 2K.
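For illustration only, the fixed scheme just described can be sketched in a few lines of Python. This is a hypothetical model, not part of the patent; the port names and the `fixed_allocation` helper are made up. Each port receives a static share proportional to its rate:

```python
TOTAL_CACHE_K = 60  # total packet-buffer size from the example, in K


def fixed_allocation(ports):
    """Traditional scheme: give each port a static cache share sized by
    its rate; an idle share cannot be borrowed by a busy port."""
    total_rate = sum(rate for _, rate in ports)
    return {name: TOTAL_CACHE_K * rate // total_rate for name, rate in ports}


# 2 x 10G ports and 10 x 1GE ports, as in the example above.
ports = [("10G-%d" % i, 10) for i in range(2)] + \
        [("1GE-%d" % i, 1) for i in range(10)]
shares = fixed_allocation(ports)
```

With these rates each 10G port gets 20K and each 1GE port 2K, matching the figures above; the drawback discussed next is that a port that exhausts its 20K cannot use a neighbour's idle 2K.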
This traditional cache allocation scheme has a drawback: because port traffic is bursty and random, one port's queue buffer may be exhausted while other queues' buffers sit unused, so the precious queue-buffer resource is not fully utilized.
Summary of the invention
The technical problem to be solved by the present invention is to provide a cache allocation method, a cache allocation device, and a network processor that can use the cache resources of a network processor more effectively.
To solve the above technical problem, the technical solution provided by embodiments of the present invention is as follows:
A cache allocation method, comprising:
dividing the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor; and
if the actual traffic of a port exceeds the fixed cache resource allocated to that port, allocating a dynamic cache resource to the port from the dynamic shared buffer region.
Here, the step of allocating a fixed cache resource to each port of the network processor in the fixed reserved buffer region comprises:
in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor according to the port's data transmission rate.
Here, the dynamic shared buffer region comprises multiple dynamic cache resource pools, or dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types are divided according to the ports' data transmission rates.
Here, the step of allocating a dynamic cache resource to the port from the dynamic shared buffer region comprises:
allocating a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region, or from the dynamic cache pool corresponding to the port's type.
Embodiments of the present invention also provide a cache allocation device, comprising:
a dividing module, configured to divide the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
a first allocation module, configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor; and
a second allocation module, configured to allocate a dynamic cache resource to a port from the dynamic shared buffer region when the port's actual traffic exceeds the fixed cache resource allocated to it.
Here, the first allocation module is specifically configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor according to the port's data transmission rate.
Here, the dynamic shared buffer region comprises multiple dynamic cache resource pools, or dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types are divided according to the ports' data transmission rates.
Here, the second allocation module is specifically configured to allocate a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region, or from the dynamic cache pool corresponding to the port's type.
Embodiments of the present invention also provide a network processor, comprising multiple ports, a buffer unit, and a cache allocation device as described above.
Here, the above network processor further comprises a processing engine connected to the buffer unit; the processing engine receives data packets from the buffer unit, processes them, and returns the processed packets to the ports.
The above solution of the present invention has at least the following advantages:
By dividing the buffer unit of the network processor into a fixed reserved buffer region and a dynamic shared buffer region, allocating a fixed cache resource to each port of the network processor in the fixed reserved buffer region, and allocating a dynamic cache resource from the dynamic shared buffer region to any port whose actual traffic exceeds its fixed cache resource, the scheme ensures that the cache resources of the network processor are used more effectively when a port's traffic surges.
Brief description of the drawings
Fig. 1 is a simplified schematic diagram of the internal structure of a network processor;
Fig. 2 is an example of the existing queue-cache allocation scheme of a network processor's buffer unit;
Fig. 3 is a flowchart of the cache allocation method of the present invention;
Fig. 4 is an example of the port-cache allocation scheme of the buffer unit of the network processor of the present invention;
Fig. 5 is a schematic diagram of cache allocation according to an embodiment of the present invention;
Fig. 6 is a cache allocation flowchart of an embodiment of the present invention.
Detailed description of the invention
To make the technical problem to be solved, the technical solutions, and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The core idea of the present invention is to divide the buffer unit of a network processor into two parts: a fixed reserved buffer region and a dynamic shared buffer region. Specifically, the sizes of the two regions can be tuned to optimal values through debugging. The fixed reserved buffer region is the queues' private allocation area; similar to the traditional scheme, each queue's share is allocated according to port rate. The dynamic shared buffer region is divided into resource pools that are shared by the data packet queues of multiple ports, and each port queue must designate its dynamic cache resource pool. When traffic enters the network processor through a port, it first requests cache resources from the fixed reserved buffer region of the port's corresponding queue; if the fixed reserved buffer region is exhausted, it then requests cache resources from its designated dynamic cache resource pool.
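As a concrete illustration of this core idea, the following Python sketch partitions a cache into the two regions and the per-pool shares. The function and all names are illustrative assumptions, not from the patent, and the fixed/dynamic ratio is a parameter because the patent leaves it to tuning:

```python
def partition_cache(total_k, fixed_ratio, port_rates, pool_of_port):
    """Split total_k of cache into a fixed reserved region (one private
    share per port, proportional to its rate) and a dynamic shared region
    (one pool per pool name, sized by the summed rate of its ports)."""
    fixed_total = int(total_k * fixed_ratio)   # ratio tuned by debugging
    dynamic_total = total_k - fixed_total
    rate_sum = sum(port_rates.values())
    # Private per-port shares inside the fixed reserved region.
    fixed = {p: fixed_total * r // rate_sum for p, r in port_rates.items()}
    # Sum the rates of the ports designating each pool, then size the pools.
    pool_rate = {}
    for p, r in port_rates.items():
        pool_rate[pool_of_port[p]] = pool_rate.get(pool_of_port[p], 0) + r
    pools = {n: dynamic_total * r // rate_sum for n, r in pool_rate.items()}
    return fixed, pools
```

Applied to the 60K example with a 50/50 split (2 ports at rate 10, 10 ports at rate 1), this yields 10K fixed shares for the 10G ports, a 20K pool for the 10G type and a 10K pool for the GE type, matching the figures used below.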
As shown in Figure 3, an embodiment of the present invention provides a cache allocation method, comprising:
Step 31: dividing the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
Step 32: in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor;
Step 33: if the actual traffic of a port exceeds the fixed cache resource allocated to that port, allocating a dynamic cache resource to the port from the dynamic shared buffer region.
By dividing the buffer unit of the network processor into a fixed reserved buffer region and a dynamic shared buffer region, allocating a fixed cache resource to each port in the fixed reserved buffer region, and allocating a dynamic cache resource from the dynamic shared buffer region to any port whose actual traffic exceeds its fixed cache resource, the above scheme ensures that the cache resources of the network processor are used more effectively when a port's traffic surges.
In a specific embodiment of the present invention, step 32 may specifically be: in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor according to the port's data transmission rate.
In one form, the dynamic shared buffer region comprises multiple dynamic cache resource pools, and the step of allocating a dynamic cache resource to the port from the dynamic shared buffer region may specifically be: allocating a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region.
In another form, the dynamic shared buffer region comprises dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types are divided according to the ports' data transmission rates; for example, ports whose data transmission rate is below a first value correspond to one dynamic cache resource pool, and ports whose data transmission rate is greater than or equal to the first value correspond to another.
Specifically, as shown in Figure 4, the buffer unit of the network processor is divided into two parts: a fixed reserved buffer region and a dynamic shared buffer region. A port first requests cache from the fixed reserved buffer region and, if that is exhausted, requests cache from the dynamic shared buffer region. Taking the earlier system with 2 10G ports and 10 1GE ports as an example, the cache can be allocated as in Figure 4. The 60K cache is first split into two parts: a 30K fixed reserved buffer region and a 30K dynamic shared buffer region. The fixed reserved buffer region is divided according to port transmission rate: each of the two 10G ports gets 10K and each 1GE port gets 1K. The 30K dynamic shared buffer region is divided into two dynamic cache resource pools: a 20K pool used by the 10G ports and a 10K pool used by the GE ports. Finally, the port-to-pool mapping is bound: the two 10G ports map to the 20K dynamic cache resource pool, and the ten GE ports map to the 10K dynamic cache resource pool.
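The pool layout of this example can be written out as plain data (a sketch only; the pool names and dict layout are illustrative, not from the patent):

```python
# Dynamic shared region of the example: 30K split into two pools, one per
# port type; each port then designates the pool matching its type.
dynamic_pools = {"10G-pool": 20, "GE-pool": 10}   # sizes in K
ports = ["10G-0", "10G-1"] + ["1GE-%d" % i for i in range(10)]
port_to_pool = {p: "10G-pool" if p.startswith("10G") else "GE-pool"
                for p in ports}
```

The binding step associates every one of the twelve ports with exactly one pool, and the two pools together account for the whole 30K dynamic region.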
Port traffic first allocates memory from the port's exclusive fixed reserved buffer region, as with queue queue1 corresponding to port 1 in Figure 5; if the port's fixed reserved buffer region is full, memory is requested from the dynamic cache resource pool associated with the port.
The concrete implementation flow, shown in Figure 6, is as follows:
61: start;
62: divide the buffer unit into two parts, a fixed reserved buffer region and a dynamic shared buffer region; the allocation ratio between the two regions can be tuned to an optimal value through debugging;
63: divide the fixed reserved buffer region among the ports, which can be done according to port traffic (i.e., each port's maximum transmission rate);
64: divide the dynamic shared buffer region into resource pools; the number and size of the pools can be specified flexibly, or the division can follow traffic attributes, for example one resource pool for several 10G ports and another for several GE ports;
65: designate the resource pool corresponding to each port, associating each port with its pool;
66: end.
On the forwarding plane, traffic entering through a port first allocates memory from the fixed reserved buffer region of the corresponding port queue; if the fixed reserved buffer region has sufficient space, everything is served from it; otherwise memory is requested from the queue's associated resource pool in the dynamic shared buffer region. Compared with the prior art, the method of the present invention uses the cache resources of the network processor more effectively.
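This forwarding-plane decision can be condensed into a single hypothetical function (a minimal sketch under the assumptions above; the patent does not prescribe an implementation, and a real network processor would track this per queue in hardware):

```python
def place_packet(port, size, fixed_free, pool_free, port_to_pool):
    """Serve from the port's fixed reserved share while it has room;
    overflow falls back to the port's associated dynamic pool; a packet
    is refused only when both are exhausted."""
    if fixed_free[port] >= size:
        fixed_free[port] -= size
        return "fixed"
    pool = port_to_pool[port]
    if pool_free[pool] >= size:
        pool_free[pool] -= size
        return "dynamic"
    return "drop"
```

A burst on one port therefore overflows into the shared pool instead of being dropped immediately, while quiet ports keep their private fixed shares untouched.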
In addition, corresponding to the above method embodiments, embodiments of the present invention also provide a cache allocation device, comprising:
a dividing module, configured to divide the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
a first allocation module, configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor; and
a second allocation module, configured to allocate a dynamic cache resource to a port from the dynamic shared buffer region when the port's actual traffic exceeds the fixed cache resource allocated to it.
Here, the first allocation module is specifically configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor according to the port's data transmission rate.
Here, the dynamic shared buffer region comprises multiple dynamic cache resource pools, or dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, where the port types are divided according to the ports' data transmission rates.
Here, the second allocation module is specifically configured to allocate a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region, or from the dynamic cache pool corresponding to the port's type.
Embodiments of the present invention also provide a network processor, comprising multiple ports, a buffer unit, and a cache allocation device as described above.
Here, the above network processor further comprises a processing engine connected to the buffer unit; the processing engine receives data packets from the buffer unit, processes them, and returns the processed packets to the ports.
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art may make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A cache allocation method, characterized by comprising:
dividing the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor; and
if the actual traffic of a port exceeds the fixed cache resource allocated to that port, allocating a dynamic cache resource to the port from the dynamic shared buffer region.
2. The cache allocation method according to claim 1, characterized in that the step of allocating a fixed cache resource to each port of the network processor in the fixed reserved buffer region comprises:
in the fixed reserved buffer region, allocating a fixed cache resource to each port of the network processor according to the port's data transmission rate.
3. The cache allocation method according to claim 1, characterized in that the dynamic shared buffer region comprises multiple dynamic cache resource pools, or dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, wherein the port types are divided according to the ports' data transmission rates.
4. The cache allocation method according to claim 3, characterized in that the step of allocating a dynamic cache resource to the port from the dynamic shared buffer region comprises:
allocating a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region, or from the dynamic cache pool corresponding to the port's type.
5. A cache allocation device, characterized by comprising:
a dividing module, configured to divide the buffer unit of a network processor into a fixed reserved buffer region and a dynamic shared buffer region;
a first allocation module, configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor; and
a second allocation module, configured to allocate a dynamic cache resource to a port from the dynamic shared buffer region when the port's actual traffic exceeds the fixed cache resource allocated to it.
6. The cache allocation device according to claim 5, characterized in that the first allocation module is specifically configured to allocate, in the fixed reserved buffer region, a fixed cache resource to each port of the network processor according to the port's data transmission rate.
7. The cache allocation device according to claim 5, characterized in that the dynamic shared buffer region comprises multiple dynamic cache resource pools, or dynamic cache resource pools in one-to-one correspondence with the port types of the network processor, wherein the port types are divided according to the ports' data transmission rates.
8. The cache allocation device according to claim 7, characterized in that the second allocation module is specifically configured to allocate a dynamic cache resource to the port from one of the multiple dynamic cache resource pools in the dynamic shared buffer region, or from the dynamic cache pool corresponding to the port's type.
9. A network processor comprising multiple ports and a buffer unit, characterized by further comprising a cache allocation device according to any one of claims 5 to 8.
10. The network processor according to claim 9, characterized by further comprising a processing engine connected to the buffer unit, wherein the processing engine receives data packets from the buffer unit, processes them, and returns the processed packets to the ports.
CN201410663761.5A 2014-11-19 2014-11-19 Buffer allocation method, buffer allocation device and network processor Pending CN105610729A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410663761.5A CN105610729A (en) 2014-11-19 2014-11-19 Buffer allocation method, buffer allocation device and network processor
PCT/CN2015/077698 WO2016078341A1 (en) 2014-11-19 2015-04-28 Buffer allocation method and device, and network processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410663761.5A CN105610729A (en) 2014-11-19 2014-11-19 Buffer allocation method, buffer allocation device and network processor

Publications (1)

Publication Number Publication Date
CN105610729A true CN105610729A (en) 2016-05-25

Family

ID=55990271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410663761.5A Pending CN105610729A (en) 2014-11-19 2014-11-19 Buffer allocation method, buffer allocation device and network processor

Country Status (2)

Country Link
CN (1) CN105610729A (en)
WO (1) WO2016078341A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112187665B (en) * 2020-09-28 2023-04-07 杭州迪普科技股份有限公司 Message processing method and device
US11558316B2 (en) * 2021-02-15 2023-01-17 Mellanox Technologies, Ltd. Zero-copy buffering of traffic of long-haul links
CN113836048A (en) * 2021-09-17 2021-12-24 许昌许继软件技术有限公司 Data exchange method and device based on FPGA memory dynamic allocation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1780252A * 2004-11-18 2006-05-31 华为技术有限公司 Buffer resource management for a packet switch
CN1798094A (en) * 2004-12-23 2006-07-05 华为技术有限公司 Method of using buffer area
CN1881937A * 2005-05-02 2006-12-20 美国博通公司 Method and device for dynamic allocation of storage space for multiple queues
CN101364948A (en) * 2008-09-08 2009-02-11 中兴通讯股份有限公司 Method for dynamically allocating cache
CN102185725A (en) * 2011-05-31 2011-09-14 北京星网锐捷网络技术有限公司 Cache management method and device as well as network switching equipment
WO2012079382A1 (en) * 2010-12-15 2012-06-21 中兴通讯股份有限公司 Output port buffer adjusting method and switch

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107870871B (en) * 2016-09-23 2021-08-20 华为技术有限公司 Method and device for allocating cache
CN107870871A (en) * 2016-09-23 2018-04-03 华为技术有限公司 Method and apparatus for allocating cache
CN107277796A (en) * 2017-05-27 2017-10-20 努比亚技术有限公司 Mobile terminal and its data transmission method
CN107222429A (en) * 2017-05-27 2017-09-29 努比亚技术有限公司 Data transmission system and method
CN108768898A (en) * 2018-04-03 2018-11-06 郑州云海信息技术有限公司 Method and device for transmitting messages over a network-on-chip
WO2020001608A1 (en) * 2018-06-30 2020-01-02 华为技术有限公司 Cache allocation method and device
US11658924B2 (en) 2018-06-30 2023-05-23 Huawei Technologies Co., Ltd. Buffer allocation method, and device
CN109495401A (en) * 2018-12-13 2019-03-19 迈普通信技术股份有限公司 Cache management method and device
CN109495401B (en) * 2018-12-13 2022-06-24 迈普通信技术股份有限公司 Cache management method and device
WO2023097575A1 (en) * 2021-12-01 2023-06-08 Huawei Technologies Co.,Ltd. Devices and methods for wireless communication in a wireless network
CN115051958A (en) * 2022-04-14 2022-09-13 重庆奥普泰通信技术有限公司 Cache allocation method, device and equipment
CN116340202A (en) * 2023-03-28 2023-06-27 中科驭数(北京)科技有限公司 Data transmission method, device, equipment and computer readable storage medium
CN116340202B (en) * 2023-03-28 2024-03-01 中科驭数(北京)科技有限公司 Data transmission method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
WO2016078341A1 (en) 2016-05-26

Similar Documents

Publication Publication Date Title
CN105610729A (en) Buffer allocation method, buffer allocation device and network processor
CN103580842A (en) Method and system for conducting parallel transmission through multiple types of wireless links
EP3073688B1 (en) Data transmission method, core forwarding device and end point forwarding device
TW200711390A (en) Total dynamic sharing of a transaction queue
CN103179049B (en) System and method for hierarchical adaptive dynamic egress-port and queue buffer management
CN105763478A (en) Token bucket algorithm-based satellite data ground transmission network flow control system
CN105227497B (en) Central defense arbitration system embedded in a time-triggered Ethernet switch
CN102497322A (en) High-speed packet filtering device and method realized based on shunting network card and multi-core CPU (Central Processing Unit)
CN107015942B (en) Method and device for multi-core CPU (Central processing Unit) packet sending
WO2016041375A1 (en) Method and device for transmitting message packet between cpu and chip
CN103532876A (en) Processing method and system of data stream
CN103347221B (en) Threshold-negotiation energy-saving method for EPON (Ethernet Passive Optical Network)
CN102857505A (en) Data bus middleware of Internet of things
CN102957626B (en) Message forwarding method and device
CN104461727A (en) Memory module access method and device
CN103607343B (en) Hybrid switching structure suitable for a spaceborne processing transponder
WO2010062916A1 (en) Network-on-chip system, method, and computer program product for transmitting messages utilizing a centralized on-chip shared memory switch
CN108259221A (en) Flow control methods, device, system, storage medium and processor
CN106487711A (en) Dynamic cache allocation method and system
CN111131081B (en) Method and device for supporting high-performance one-way transmission of multiple processes
CN106464835A (en) Uplink bandwidth allocation method, device and system
CN104519150B (en) Network address conversion port distribution method and system
CN101616365B (en) System and method for short message retry based on parallel queues
JP5876954B1 (en) Terminal station apparatus and terminal station receiving method
US20150149639A1 (en) Bandwidth allocation in a networked environment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160525
