CN105512090B - Method for organizing data buffering in an FPGA-based network node - Google Patents
Method for organizing data buffering in an FPGA-based network node
- Publication number
- CN105512090B CN105512090B CN201510887894.5A CN201510887894A CN105512090B CN 105512090 B CN105512090 B CN 105512090B CN 201510887894 A CN201510887894 A CN 201510887894A CN 105512090 B CN105512090 B CN 105512090B
- Authority
- CN
- China
- Prior art keywords
- buffering
- data buffer
- address
- data
- equal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
- G06F15/781—On-chip cache; Off-chip memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7807—System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package
- G06F15/7825—Globally asynchronous, locally synchronous, e.g. network on chip
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computing Systems (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Logic Circuits (AREA)
Abstract
The present invention proposes a method for organizing data buffering in an FPGA-based network node. Each individual data buffer is divided into two parts: one part fully utilizes BRAM and wastes no storage resource, and the other parts are combined so that the waste of storage resources is reduced to a minimum. Compared with assigning a separate BRAM to each buffer, the method preserves ease of use while reducing storage waste and power consumption, and the effect is especially clear when the number of buffers is large. By combining storage resources to match data buffers of arbitrary size, the method reduces storage waste and device power consumption.
Description
Technical field
The present invention relates to a method for organizing data buffering in a network node, and more particularly to a method for organizing data buffering in an FPGA-based network node.
Background technology
In network communication, data transmission speed and data processing speed do not match, so a network node must always buffer transmitted and received data between processing and transmission. For ease of use, the size of an individual data buffer is normally fixed at the smallest integer power of 2 not less than the length of a single packet. If the length of one packet (or several packets) matches the size of a single storage resource in the FPGA (called a BRAM), then very little storage is wasted and the convenience is retained. If it does not match, then because the BRAM size is fixed and at least one whole BRAM is consumed each time, BRAM is wasted; when the gap is large, much storage is wasted and device power consumption increases.
Summary of the invention
The object of the present invention is to provide a method for organizing data buffering in an FPGA-based network node that matches data buffers of arbitrary size by combining storage resources, thereby reducing the waste of storage resources and reducing device power consumption.
The technical scheme of the present invention is as follows:
A method for organizing data buffering in an FPGA-based network node is characterized in that it comprises the following steps:
1) Divide each data buffer unit into two parts, a first buffer and a second buffer; the size of the first buffer is a1, where a1 is an integer power of 2 that divides B exactly and is not greater than A.
Here B is the size of a single BRAM and A is the required size of a data buffer unit.
2) Determine the number b of first buffers that a single BRAM can hold.
3) Determine the number M of BRAM1s used to implement the first buffers.
4) Determine the number N of BRAM2s used to implement the second buffers.
5) Determine the size a2 of the second buffer: a2 equals N multiplied by B and then divided by K.
6) The decoding unit parses the external access address into a data buffer unit serial number and a data buffer unit address; the serial number indicates which data buffer unit this access hits, and the address indicates which location within the hit data buffer unit is accessed.
7) The decoding unit judges from the data buffer unit address whether the first buffer or the second buffer is accessed: if the address is less than a1, the first buffer is accessed, otherwise the second buffer is accessed.
8) If the first buffer is accessed, the decoding unit divides the data buffer unit serial number by b, rounds the result down to determine the serial number of the hit BRAM1, and generates the corresponding read-enable or write-enable signal for that BRAM1.
It then determines the new data buffer unit serial number and new data buffer unit address: the new serial number equals the data buffer unit serial number minus b multiplied by the hit BRAM1 serial number, and the new address equals the low X bits of the data buffer unit address, where X is the base-2 logarithm of a1.
If the second buffer is accessed, the decoding unit divides the data buffer unit serial number by c, rounds the result down to determine the serial number of the hit BRAM2, and generates the corresponding read-enable or write-enable signal for that BRAM2; here c is the number of second buffers contained in a single BRAM2, and c equals B divided by a2.
It then determines the new data buffer unit serial number and new data buffer unit address: the new serial number equals the data buffer unit serial number minus c multiplied by the hit BRAM2 serial number, and the new address equals the low Y bits of the data buffer unit address minus a1, where Y is the base-2 logarithm of a2.
9) Compose the internal access address from the determined new data buffer unit serial number and new data buffer unit address, and send this internal access address to the hit BRAM1 or BRAM2.
10) Connect the external data signals to the data signals of the hit BRAM1 or BRAM2.
The b in step 2) equals B divided by a1.
The M in step 3) equals K divided by b, where K is the number of required data buffer units.
The N in step 4) equals ((A - a1) multiplied by K) divided by B, rounded up.
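The sizing rules of steps 1) to 5) can be sketched in Python. This is an illustrative reconstruction of the formulas above, not part of the patent itself; the function name, the ceiling division when K is not an exact multiple of b, and the choice of the largest qualifying power of 2 for a1 are my own reading of the text.

```python
import math

def buffer_params(A, B, K):
    """Compute (a1, b, M, N, a2) for K required buffer units of size A
    built from BRAMs of size B, following steps 1)-5)."""
    # Step 1: a1 is the largest power of 2 that divides B exactly
    # and is not greater than A
    a1 = 1
    while a1 * 2 <= A and B % (a1 * 2) == 0:
        a1 *= 2
    b = B // a1                       # step 2: first buffers per BRAM
    M = math.ceil(K / b)              # step 3: BRAM1 count (K / b in the patent)
    N = math.ceil((A - a1) * K / B)   # step 4: BRAM2 count, rounded up
    a2 = N * B // K                   # step 5: size of each second buffer
    return a1, b, M, N, a2
```

With the example used later in the description (A = 2248KB, B = 4096KB, K = 16) this yields a1 = 2048, b = 2, M = 8, N = 1 and a2 = 256.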
The beneficial effects of the invention are:
1) On-chip memory utilization is improved. BRAM can be allocated flexibly according to the actual application environment and conditions, effectively reducing the waste of storage resources and increasing the operating rate.
2) Device power consumption is reduced. Because the method increases BRAM utilization, relatively few BRAMs are needed to realize the same function, so device power consumption is minimized.
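To make the saving concrete with the sizing example from the detailed description (16 buffers of 2248KB, 4096KB BRAMs); the comparison below and the resulting fraction are my own arithmetic, not figures stated in the patent:

```python
import math

A, B, K = 2248, 4096, 16      # required buffer size, BRAM size, buffer count (KB)

# Conventional scheme: each buffer is rounded up to a whole number of BRAMs
naive = K * math.ceil(A / B)          # one BRAM per buffer -> 16 BRAMs

# Proposed scheme: M BRAM1s for the first buffers plus N BRAM2s for the rest
M, N = 8, 1                           # values from the worked example
proposed = M + N                      # 9 BRAMs

saving = 1 - proposed / naive         # fraction of BRAMs saved
```

Under these assumptions the method uses 9 BRAMs instead of 16, a saving of roughly 44%.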
Description of the drawings
Fig. 1 is a schematic diagram of the technical scheme of the present invention.
Detailed description of the embodiments
This patent proposes a method for organizing data buffering in an FPGA-based network node. Each individual data buffer is divided into two parts: one part fully utilizes BRAM and wastes no storage resource, while the other parts are combined so that the waste of storage resources is reduced to a minimum. Compared with assigning a dedicated BRAM to each buffer, this method retains ease of use while reducing storage waste and power consumption, and the effect is especially obvious when the number of buffers is large.
As shown in Figure 1, the present invention comprises M BRAM1s, N BRAM2s and a decoding unit. The M BRAM1s implement all of the first buffers, the N BRAM2s implement all of the second buffers, and the first buffer is larger than the second buffer.
The present invention is described in further detail below with reference to the accompanying drawings and a specific example.
As shown in Figure 1, the method for organizing data buffering in an FPGA-based network node involves:
1. BRAM1
The M BRAM1s implement the first buffers. When 16 buffers of 2248KB each must be constructed and the size of a single BRAM is 4096KB, M is determined as follows:
a) each data buffer unit is divided into two parts, a first buffer and a second buffer; the size a1 of the first buffer equals 2048KB;
b) the number b of first buffers that a single BRAM can hold equals 2;
c) the number M of BRAM1s implementing the first buffers equals 8.
2. BRAM2
The N BRAM2s implement the second buffers. Using the results obtained when determining M, N is determined as follows:
a) the number N of BRAM2s implementing the second buffers equals 1;
b) for ease of management, the size a2 of the second buffer equals 256KB.
3. Decoding unit
The decoding unit selects the hit BRAM according to the external access address, generates the corresponding internal access address, and completes the data access. When the external access address is 0x2464:
a) the external access address is decomposed into two parts, a data buffer unit serial number and a data buffer unit address; the serial number indicates that this access hits the 4th data buffer unit, and the address indicates that the hit location within that unit is 0x100;
b) whether the first or the second buffer is accessed is judged from the data buffer unit address; because the address is less than a1, the first buffer is accessed;
c) the serial number of the hit BRAM1, here 2, is calculated from the data buffer unit serial number and the number of first buffers contained in a single BRAM1, and the corresponding read-enable or write-enable signal is generated for that BRAM1;
d) since the first buffer is hit, the new data buffer unit serial number equals 0 and the new data buffer unit address equals 0x100;
e) the internal access address composed of the new data buffer unit serial number and the new data buffer unit address is sent to the hit BRAM1;
f) the external data signals are connected to the data signals of the hit BRAM1.
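The decode of steps a) to f) can be sketched as a small Python function, assuming the parameters of this example (a1 = 2048, a2 = 256, b = 2 first buffers per BRAM1, and c = B / a2 = 16 second buffers per BRAM2); the function name and the tuple return convention are illustrative only.

```python
def decode(unit_serial, unit_addr, a1, a2, b, c):
    """Map a (data buffer unit serial number, unit address) pair to the
    hit BRAM and the internal access address."""
    if unit_addr < a1:                          # first buffer is hit
        bram_serial = unit_serial // b          # hit BRAM1 serial, rounded down
        new_serial = unit_serial - b * bram_serial
        new_addr = unit_addr & (a1 - 1)         # low log2(a1) bits
        return ("BRAM1", bram_serial, new_serial, new_addr)
    else:                                       # second buffer is hit
        bram_serial = unit_serial // c          # hit BRAM2 serial, rounded down
        new_serial = unit_serial - c * bram_serial
        new_addr = (unit_addr - a1) & (a2 - 1)  # low log2(a2) bits
        return ("BRAM2", bram_serial, new_serial, new_addr)
```

For the external access of this example (unit serial number 4, unit address 0x100) the sketch selects BRAM1 number 2 with new serial number 0 and new address 0x100, matching steps c) and d).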
Claims (1)
1. A method for organizing data buffering in an FPGA-based network node, characterized in that it comprises the following steps:
1) dividing each data buffer unit into two parts, a first buffer and a second buffer, the size of the first buffer being a1, a1 being an integer power of 2 that divides B exactly and is not greater than A;
wherein B is the size of a single BRAM and A is the required size of a data buffer unit;
2) determining the number b of first buffers that a single BRAM can hold, b being equal to B divided by a1;
3) determining the number M of BRAM1s used to implement the first buffers, M being equal to K divided by b, K being the number of required data buffer units;
4) determining the number N of BRAM2s used to implement the second buffers, N being equal to ((A - a1) multiplied by K) divided by B, rounded up;
5) determining the size a2 of the second buffer, a2 being equal to N multiplied by B and then divided by K, K being the number of required data buffer units;
6) parsing, by a decoding unit, the external access address into a data buffer unit serial number and a data buffer unit address, the serial number indicating which data buffer unit this access hits and the address indicating which location within the hit data buffer unit is accessed;
7) judging, by the decoding unit, from the data buffer unit address whether the first buffer or the second buffer is accessed: if the address is less than a1, the first buffer is accessed, otherwise the second buffer is accessed;
8) if the first buffer is accessed, dividing, by the decoding unit, the data buffer unit serial number by b and rounding the result down to determine the serial number of the hit BRAM1, and generating the corresponding read-enable or write-enable signal for that BRAM1;
determining the new data buffer unit serial number and the new data buffer unit address: the new serial number is equal to the data buffer unit serial number minus the product of b and the hit BRAM1 serial number; the new address is equal to the low X bits of the data buffer unit address, X being the base-2 logarithm of a1;
if the second buffer is accessed, dividing, by the decoding unit, the data buffer unit serial number by c and rounding the result down to determine the serial number of the hit BRAM2, and generating the corresponding read-enable or write-enable signal for that BRAM2, c being the number of second buffers contained in a single BRAM2, c being equal to B divided by a2;
determining the new data buffer unit serial number and the new data buffer unit address: the new serial number is equal to the data buffer unit serial number minus the product of c and the hit BRAM2 serial number; the new address is equal to the low Y bits of the data buffer unit address minus a1, Y being the base-2 logarithm of a2;
9) composing the internal access address from the determined new data buffer unit serial number and new data buffer unit address, and sending this internal access address to the hit BRAM1 or BRAM2;
10) connecting the external data signals to the data signals of the hit BRAM1 or BRAM2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510887894.5A CN105512090B (en) | 2015-12-07 | 2015-12-07 | The method for organizing of data buffering in a kind of network node based on FPGA |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105512090A CN105512090A (en) | 2016-04-20 |
CN105512090B true CN105512090B (en) | 2018-09-21 |
Family
ID=55720085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510887894.5A Active CN105512090B (en) | 2015-12-07 | 2015-12-07 | The method for organizing of data buffering in a kind of network node based on FPGA |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105512090B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110806997B (en) * | 2019-10-16 | 2021-03-26 | 广东高云半导体科技股份有限公司 | System on chip and memory |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63239545A (en) * | 1987-03-27 | 1988-10-05 | Toshiba Corp | Memory error detecting circuit |
KR100227278B1 (en) * | 1996-11-15 | 1999-11-01 | 윤종용 | Cache control unit |
CN101021833A (en) * | 2007-03-19 | 2007-08-22 | 中国人民解放军国防科学技术大学 | Stream damper based on double-damping structure |
CN101719104A (en) * | 2009-11-24 | 2010-06-02 | 中兴通讯股份有限公司 | Control system and control method of synchronous dynamic memory |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014229944A (en) * | 2013-05-17 | 2014-12-08 | 富士通株式会社 | Signal processing device, control method and communication device |
- 2015-12-07: CN application CN201510887894.5A filed; patent CN105512090B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN105512090A (en) | 2016-04-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11575609B2 (en) | Techniques for congestion management in a network | |
US8250394B2 (en) | Varying the number of generated clock signals and selecting a clock signal in response to a change in memory fill level | |
CN104298680B (en) | Data statistical approach and data statistics device | |
US20140068134A1 (en) | Data transmission apparatus, system, and method | |
CN110969198A (en) | Distributed training method, device, equipment and storage medium for deep learning model | |
CN111177025B (en) | Data storage method and device and terminal equipment | |
US10133549B1 (en) | Systems and methods for implementing a synchronous FIFO with registered outputs | |
CN103888377A (en) | Message cache method and device | |
CN103812895A (en) | Scheduling method, management nodes and cloud computing cluster | |
CN105743808A (en) | Method and device of adapting QoS | |
CN109660468A (en) | A kind of port congestion management method, device and equipment | |
CN107800644A (en) | Dynamically configurable pipelined token bucket speed limiting method and device | |
CN115102908A (en) | Method for generating network message based on bandwidth control and related device | |
CN109587072A (en) | Distributed system overall situation speed limiting system and method | |
CN104765701A (en) | Data access method and device | |
EP3461085B1 (en) | Method and device for queue management | |
CN105512090B (en) | The method for organizing of data buffering in a kind of network node based on FPGA | |
CN103514140B (en) | For realizing the reconfigurable controller of configuration information multi-emitting in reconfigurable system | |
CN109189336A (en) | A kind of storage system thread method of adjustment, system and electronic equipment and storage medium | |
CN109412999A (en) | A kind of molding mapping method of probability and device | |
CN101566933B (en) | Method and device for configurating cache and electronic equipment and data read-write equipment | |
CN106657097B (en) | A kind of data transmission method for uplink and device | |
CN104391564A (en) | Power consumption control method and device | |
CN105518617B (en) | Data cached processing method and processing device | |
CN104678815A (en) | Interface structure and configuration method of FPGA (field programmable gate array) chip |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||