WO2018018874A1 - Data cache processing method and data processing system for 4R4W fully shared messages - Google Patents


Info

Publication number
WO2018018874A1
WO2018018874A1 (PCT application PCT/CN2017/073642)
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
read
written
bank
Prior art date
Application number
PCT/CN2017/073642
Other languages
English (en)
French (fr)
Inventor
许俊
夏杰
郑晓阳
Original Assignee
盛科网络(苏州)有限公司
Priority date
Filing date
Publication date
Application filed by 盛科网络(苏州)有限公司 filed Critical 盛科网络(苏州)有限公司
Priority to US16/319,447 (published as US20190332313A1)
Publication of WO2018018874A1

Classifications

    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • H04L49/103: Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
    • G06F1/3275: Power saving in memory, e.g. RAM, cache
    • G06F1/06: Clock generators producing several clock signals
    • G06F3/0604: Improving or facilitating administration, e.g. storage management
    • G06F3/0673: Single storage device
    • H04L49/9036: Common buffer combined with individual queues
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to the field of network communication technologies, and in particular, to a data buffer processing method and a data processing system for a 4R4W fully shared message.
  • vendors typically provide only single-port (1RW), one-read-one-write (1R1W), and dual-port (2RW) memories; thus, designers can only build multi-port memories on top of these basic memory units.
  • A message buffer is a special type of multi-port memory whose writes are controllable, that is, sequential, while its reads are random.
  • the usual method is to divide the entire chip into multiple independent message forwarding and processing units for parallel processing.
  • such a message forwarding and processing unit is called a slice; for example, the chip may be divided into 4 slices for parallel processing.
  • the bandwidth of the data that each slice needs to process is reduced, and the core frequency requirement is also reduced to 1/4 of the original core frequency.
  • there are two approaches to increasing the number of SRAM ports: custom design, which modifies the storage cell itself, and algorithmic design.
  • Custom design cycles are generally long: SPICE simulation is required, and a memory compiler must be provided to generate SRAMs of different sizes and types. For a supplier, it usually takes 6 to 9 months to deliver a new SRAM type. Moreover, such a custom design is tightly coupled to a specific process (such as GlobalFoundries 14 nm/28 nm or TSMC 28 nm/16 nm); once the process changes, the custom-designed SRAM library must be redesigned.
  • algorithmic design, by contrast, builds on the off-the-shelf SRAM types provided by the manufacturer and realizes multi-port memory in logic. Its biggest advantages are avoiding custom design, shortening development time, and being independent of any vendor library, so the design can easily be ported between manufacturers.
  • a 4R4W storage architecture supporting four slice accesses is designed by means of algorithm design.
  • a large-capacity 2R2W SRAM is designed using 1R1W SRAM2D, which requires a total of 4 logical blocks.
  • the area of the 18-Mbyte 4R4W SRAM is 213.248 square millimeters;
  • the power consumption is 55.296 watts;
  • the overhead of decap insertion, DFT, and place-and-route is not considered here.
  • the 4R4W SRAM designed by this algorithm design has a large footprint and total power consumption.
  • S0, S1, S2, and S3 represent 4 slices, and each slice includes, for example, six 100GE ports.
  • messages input from slice0 or slice1 and destined for slice0 or slice1 are stored in X0Y0, and messages input from slice0 or slice1
  • and destined for slice2 or slice3 are stored in X1Y0;
  • the message input from slice2 or slice3 to slice0 or slice1 is stored in X0Y1
  • messages input from slice2 or slice3 and destined for slice2 or slice3 are stored in X1Y1; for multicast messages,
  • a multicast message from Slice0 or Slice1 is stored simultaneously in X0Y0 and X1Y0.
  • slice0 or slice1 reads messages from X0Y0 or X0Y1, and slice2 or slice3 reads messages from X1Y0 or X1Y1.
  • in the prior-art algorithmic design, each XxYy unit logically requires four SRAMs of 16384 depth and 2304 width, and each such logical SRAM can be cut into eight physical SRAM2Ds of 16384 depth and 288 width; in 14 nm integrated-circuit technology, the area and power consumption of such an 18-Mbyte message buffer in this second algorithmic design are only 1/4 of those of the first algorithmic design.
  • however, this algorithmic design cannot make the four 2R2W SRAM logic blocks fully shared among all four slices: the maximum message buffer that any one slice's input ports can occupy is only 9 Mbytes.
  • Such a message cache is not a shared cache in the true sense.
  • an object of the present invention is to provide a data buffer processing method and processing system for a 4R4W fully shared message.
  • a data cache processing method for a 4R4W fully shared message further includes: assembling two 2R1W memories into one bank storage unit in parallel;
  • when the size of the data is less than or equal to the bit width of the 2R1W memory, the data items are written into different banks, and at the same time each written data item is copied into both 2R1W memories of its bank;
  • the method further includes:
  • the matching read port in the 4R4W memory is selected to read the data out directly; or
  • the second clock cycle is awaited, and when the second clock cycle arrives, the matching read port in the 4R4W memory is selected to read the data out directly.
  • the method further includes:
  • the write position of the data is selected according to the remaining free resources of each bank.
  • the method specifically includes:
  • a pool of free cache resources is created for each bank, and the pool of free cache resources is used to store the remaining free pointers of the current corresponding bank.
  • the depths of the free cache resource pools are compared.
  • the data is randomly written into the bank corresponding to one of the free cache resource pools having the largest depth.
  • the method further includes:
  • 2m+1 SRAM2P memory blocks having the same depth and width are used to construct the hardware framework of the 2R1W memory, where m is a positive integer;
  • each SRAM2P memory has M pointer addresses; one of the SRAM2P memories serves as the auxiliary memory, and the rest serve as main memories;
  • according to the current pointer position of the data, the associated data in the main memories and the auxiliary memory are XORed to complete the writing and reading of the data.
  • an embodiment of the present invention provides a data cache processing system for a 4R4W fully shared message, the system comprising: a data construction module, and a data processing module;
  • the data construction module is specifically configured to: assemble two 2R1W memories into one bank storage unit in parallel;
  • the data processing module is specifically configured to: when data is written to the 4R4W memory through four write ports in one clock cycle,
  • when the size of the data is less than or equal to the bit width of the 2R1W memory, write the data items into different banks, and at the same time copy each written data item into both 2R1W memories of its bank;
  • the data processing module is further configured to:
  • select the matching read port in the 4R4W memory to read the data out directly; or
  • await the second clock cycle, and when the second clock cycle arrives, select the matching read port in the 4R4W memory to read the data out directly.
  • the data processing module is further configured to:
  • the write position of the data is selected according to the remaining free resources of each bank.
  • the data processing module is further configured to:
  • a pool of free cache resources is created for each bank, and the pool of free cache resources is used to store the remaining free pointers of the current corresponding bank.
  • the depths of the free cache resource pools are compared.
  • the data is randomly written into the bank corresponding to one of the free cache resource pools having the largest depth.
  • the data construction module is further configured to: according to the depth and width of the 2R1W memory, select 2m+1 SRAM2P memory blocks having the same depth and width to construct the hardware framework of the 2R1W memory, where m is a positive integer;
  • each SRAM2P memory has M pointer addresses; one of the SRAM2P memories serves as the auxiliary memory, and the rest serve as main memories;
  • the data processing module is further configured to: according to the current pointer position of the data, XOR the associated data in the main memories and the auxiliary memory to complete the writing and reading of the data.
  • the data buffer processing method and processing system for a 4R4W fully shared message of the present invention build SRAMs with more ports algorithmically on top of existing SRAM types, supporting multi-port SRAM to the maximum extent at minimum cost. The implementation avoids complex control logic and additional multi-port SRAM or register-array resources; by exploiting the special characteristics of a message buffer, through spatial segmentation and time division, a 4R4W message buffer can be realized with only simple XOR operations.
  • in the 4R4W memory of the present invention, all storage resources are visible to all 4 slices and to any input/output port, and are completely shared by every port;
  • the invention thus has lower power consumption, faster processing speed, saves more resources and area, and is simple to implement, saving manpower and material cost.
  • FIG. 1 is a schematic diagram of a message buffer logic unit of a 2R2W memory based on an algorithm design of a 1R1W memory in the prior art
  • FIG. 2 is a schematic diagram of a message buffer logic unit of a 4R4W memory implemented in a custom design based on a 2R2W memory algorithm in the prior art;
  • FIG. 3 is a schematic diagram of a message buffering architecture of a 4R4W memory based on 2R2W memory using another algorithm design in the prior art;
  • FIG. 4 is a schematic diagram of the message buffer logic unit of one of the XxYy blocks in FIG. 3;
  • FIG. 5 is a schematic flowchart of a data buffer processing method for a 4R4W fully shared message according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram showing the structure of a digital circuit of a 2R1W memory formed by a custom design in the first embodiment of the present invention
  • FIG. 7 is a schematic diagram of a 2R1W memory read/write time-sharing operation formed by a custom design according to a second embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a message buffer logic unit of a 2R1W memory formed by an algorithm design in a third embodiment of the present invention.
  • 9a is a schematic diagram of a message buffer logic unit of a 2R1W memory formed by an algorithm design in a fourth embodiment of the present invention.
  • FIG. 9b is a schematic structural diagram of a memory block number mapping table corresponding to FIG. 9a;
  • FIG. 10 is a schematic flowchart of a data processing method of a 2R1W memory provided in a fifth embodiment of the present invention.
  • FIG. 11 is a schematic diagram of a message buffer logic unit of a 2R1W memory provided in a fifth embodiment of the present invention.
  • FIG. 12 is a schematic diagram of a message buffering architecture of four banks in an embodiment of the present invention.
  • FIG. 13 is a schematic diagram of a message buffering architecture of a 4R4W memory according to an embodiment of the present invention.
  • FIG. 14 is a schematic block diagram of a data cache processing system for a 4R4W fully shared message according to an embodiment of the present invention.
  • a data cache processing method for a 4R4W fully shared message includes:
  • when the size of the data is less than or equal to the bit width of the 2R1W memory, the data items are written into different banks, and at the same time each written data item is copied into both 2R1W memories of its bank;
  • the matching read port in the 4R4W memory is selected to read the data out directly; or
  • the second clock cycle is awaited, and when the second clock cycle arrives, the matching read port in the 4R4W memory is selected to read the data out directly.
  • the 4R4W memory, that is, a memory that supports 4 reads and 4 writes at the same time.
  • one word line is divided into left and right halves, so that either two read ports or one write port can operate simultaneously; data read through the left MOS transistor and data read through the right MOS transistor can thus be obtained at the same time.
  • The data read through the right MOS transistor must be inverted before use, and so as not to slow down data reading,
  • the sense amplifier on the read path must be a pseudo-differential amplifier.
  • The 6T SRAM cell area is unchanged; the only cost is doubling the word line, so the overall storage density is essentially unchanged.
  • Custom design can thus increase the SRAM's ports: cutting one word line into two word lines raises the number of read ports to two. Alternatively, a time-sharing technique can be used, performing the read operation on the rising edge of the clock and completing the write operation on the falling edge; this expands a basic 1-read-or-1-write SRAM into a 1-read-and-1-write SRAM type, i.e. one read and one write can be performed simultaneously, with the storage density essentially unchanged.
  • FIG. 8 is a schematic diagram of the read/write operation process of a 2R1W memory formed by an algorithm in a third embodiment of the present invention;
  • a 2R1W SRAM is constructed based on SRAM2P, an SRAM type that supports 1 read port and 1 read/write port; that is, an SRAM2P can simultaneously perform either 2 read operations, or 1 read and 1 write operation.
  • a 2R1W SRAM is constructed based on SRAM2P by duplicating one SRAM: in this example, the right SRAM2P_1 is a copy of the left SRAM2P_0, and in operation the two SRAM2Ps are used as one-read-one-write memories. When writing, the data is written to the left and right SRAM2P simultaneously; when reading, data A is always read from SRAM2P_0 and data B from SRAM2P_1, so that one write operation and two read operations can proceed concurrently.
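The duplication approach above can be modelled behaviorally in a short sketch (class and method names are mine; the real design operates on physical SRAM2P ports, not software):

```python
# Behavioral model of a 2R1W memory built from two identical SRAM2P copies.
# Writes update both copies; read port A is hard-wired to the left copy and
# read port B to the right copy, so 1 write + 2 reads fit in one cycle.
class Replicated2R1W:
    def __init__(self, depth):
        self.sram2p_0 = [0] * depth  # left copy, serves read port A
        self.sram2p_1 = [0] * depth  # right copy, serves read port B

    def write(self, addr, value):
        self.sram2p_0[addr] = value  # one logical write lands in both copies
        self.sram2p_1[addr] = value

    def read(self, addr_a, addr_b):
        # Each port has its own physical array, so the reads never conflict.
        return self.sram2p_0[addr_a], self.sram2p_1[addr_b]
```

The cost of this scheme is doubling the storage, which is why the mapping-table and XOR schemes below are introduced.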
  • FIGS. 9a and 9b show, in a fourth embodiment, a schematic diagram of the read/write operation process of a 2R1W memory formed by another algorithm;
  • a logically monolithic 16384-deep SRAM is divided into four logical 4096-deep SRAM2Ps, numbered 0, 1, 2 and 3, and an additional 4096-deep SRAM, numbered 4, is added to resolve read/write conflicts. For read data A and read data B, the two read operations can always proceed concurrently: when their addresses fall in different SRAM2Ps there is no conflict, because any single SRAM2P can be configured as a 1R1W type; when both read addresses fall in the same SRAM2P, for example SRAM2P_0, that block's ports are fully occupied by the 2 reads, since one SRAM2P provides only 2 ports at a time. If a write operation happens to target SRAM2P_0 at that moment, the data is instead written into the spare block SRAM2P_4.
  • a memory block mapping table is required to record which memory block holds the valid data;
  • the depth of the mapping table equals the depth of one memory block, i.e. 4096 entries; after initialization, each entry stores the memory block numbers in order, from 0 to 4.
  • since SRAM2P_0 has a read/write conflict when the data is being written, the data is actually written to SRAM2P_4;
  • the corresponding entry in the memory block map is updated accordingly: the original content {0, 1, 2, 3, 4} becomes {4, 1, 2, 3, 0}; block numbers 0 and 4 are swapped, indicating that the data was actually written to SRAM2P_4 while SRAM2P_0 becomes the backup entry.
  • for any access, the memory block number mapping table is read first;
  • the write path requires the mapping table to provide 1 read port and 1 write port;
  • the two read paths require the mapping table to provide 2 more read ports, so in total the mapping table must provide 3 read ports and 1 write port, and these 4 accesses must be performed simultaneously.
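A behavioral sketch of this mapping-table scheme, under my own naming (four 4096-deep main blocks plus one spare, with a per-offset block-number map):

```python
# Model of a 2R1W memory using 4 main blocks + 1 spare and a mapping table.
# When both read ports occupy a block that a write also targets, the write is
# redirected to the current spare and the two table entries are swapped.
DEPTH, BLOCKS = 4096, 5

banks = [[0] * DEPTH for _ in range(BLOCKS)]
# map_table[offset][logical_block] -> physical block currently holding the data
map_table = [list(range(BLOCKS)) for _ in range(DEPTH)]

def write(logical_block, offset, value, busy_block=None):
    phys = map_table[offset][logical_block]
    if phys == busy_block:
        # Conflict: both read ports are using `phys`, so write to the spare
        # (tracked in the last table slot) and swap the two entries.
        spare = map_table[offset][BLOCKS - 1]
        banks[spare][offset] = value
        map_table[offset][logical_block] = spare
        map_table[offset][BLOCKS - 1] = phys
    else:
        banks[phys][offset] = value

def read(logical_block, offset):
    return banks[map_table[offset][logical_block]][offset]
```

With the initial entry {0, 1, 2, 3, 4}, a conflicted write to block 0 at offset 5 leaves the entry as {4, 1, 2, 3, 0}, matching the example in the text.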
  • a method for constructing a 2R1W memory includes:
  • the plurality of SRAM2P memories are, in order, SRAM2P(0), SRAM2P(1), ..., SRAM2P(2m), and each SRAM2P memory has M pointer addresses; one of the SRAM2P memories serves as the auxiliary memory, and the rest serve as main memories;
  • the capacity of each SRAM2P memory is (depth of the 2R1W memory × width of the 2R1W memory) / 2m.
  • the plurality of SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3), SRAM2P(4), wherein SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3) are the main memories, and SRAM2P(4) is the auxiliary memory.
  • the depth and width of each SRAM2P memory are 4096 and 128 respectively.
  • each SRAM2P memory has 4096 pointer addresses; if the addresses of each SRAM2P memory are identified independently, each ranges from 0 to 4095, and if all the main-memory addresses are arranged in sequence, the pointer addresses run from 0 to 16383.
  • SRAM2P(4) is used to resolve port conflicts, and in this embodiment, there is no need to add a memory block number mapping table to meet the demand.
  • the method further includes:
  • the data in the main memory and the auxiliary memory are associated with each other according to the current pointer position of the data, and an exclusive OR operation is performed to complete the writing and reading of the data. .
  • the data writing process is as follows:
  • the write address of the current data is W(x, y), where x denotes the position of the SRAM2P memory holding the written data, 0 ≤ x ≤ 2m, and y denotes the specific pointer address within that SRAM2P memory, 0 ≤ y < M;
  • the data at the same pointer address in the remaining main memories are read and XORed together with the current write data, and the XOR result is written into the same pointer address of the auxiliary memory.
  • for example, a 128-bit all-ones value is written to pointer address 5 in SRAM2P(0); that is, the write address of the current data is W(0, 5). In the writing process, besides writing the 128-bit data directly to pointer address 5 of SRAM2P(0), the data at the same pointer address in the remaining main memories must be read at the same time.
  • the data reading process is as follows:
  • the read addresses of the two read data are obtained as R1(x1, y1) and R2(x2, y2), where x1 and x2 denote the positions of the SRAM2P memories holding the read data, 0 ≤ x1 ≤ 2m, 0 ≤ x2 ≤ 2m, and y1 and y2 denote the specific pointer addresses within those SRAM2P memories, 0 ≤ y1 < M, 0 ≤ y2 < M;
  • for one of the reads, R1(x1, y1), the currently stored data is read directly from the designated read address;
  • for the other read, the data stored at the same pointer address in the remaining main memories and in the auxiliary memory are fetched and XORed, and the XOR result is output as the stored data of that read address.
  • for example, there are two reads whose pointer addresses are pointer address 2 and pointer address 5, both in SRAM2P(0); that is, the current read addresses are R(0, 2) and R(0, 5);
  • the present invention solves the problem of simultaneously reading data by two read ports by using an exclusive OR operation.
  • the data output by the above process is completely identical to the data stored at pointer address 5 in SRAM2P(0); thus, by XORing the associated data in the main memories and the auxiliary memory according to the current pointer position of the data, the writing and reading of the data are completed.
  • when the read addresses of the two current reads are in different SRAM2P memories, the data at the corresponding pointer addresses in those SRAM2P memories are fetched and output independently.
  • for example, there are two reads whose pointer addresses are pointer address 5 in SRAM2P(0) and pointer address 10 in SRAM2P(1); that is, the current read addresses are R(0, 5) and R(1, 10).
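The XOR scheme can likewise be sketched behaviorally, assuming the m = 2 example from the text (four 4096-deep main memories plus one auxiliary; all names are mine):

```python
from functools import reduce
from operator import xor

# Model of the XOR-based 2R1W memory: the auxiliary row always holds the XOR
# of all main rows, so a second read that collides with the first on the same
# main bank can be reconstructed instead of read directly.
M_ROWS, MAIN_BANKS = 4096, 4  # 2m = 4 main SRAM2P blocks, plus one auxiliary

main = [[0] * M_ROWS for _ in range(MAIN_BANKS)]
aux = [0] * M_ROWS

def write(x, y, value):
    main[x][y] = value
    # Keep the invariant: aux[y] equals the XOR of all main banks at row y.
    aux[y] = reduce(xor, (main[b][y] for b in range(MAIN_BANKS)))

def read_pair(r1, r2):
    (x1, y1), (x2, y2) = r1, r2
    a = main[x1][y1]  # the first read always goes straight to its bank
    if x1 == x2 and y1 != y2:
        # Bank conflict: rebuild the second word from the other banks + aux.
        b = reduce(xor, (main[bk][y2] for bk in range(MAIN_BANKS) if bk != x2), 0) ^ aux[y2]
    else:
        b = main[x2][y2]
    return a, b
```

A word reconstructed through the XOR path is bit-identical to the stored word, which is what the example with R(0, 2) and R(0, 5) relies on.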
  • each SRAM2P can be further divided logically, for example into 4m SRAM2Ps of the same depth, so that the above 2R1W-type SRAM can be constructed by adding only 1/4m of the memory area; correspondingly, the physical number of SRAM blocks also nearly doubles, which incurs considerable area overhead in actual place-and-route. Of course, the present invention is not limited to the above specific embodiments; other schemes that use XOR operations to expand memory ports are also included in the scope of protection of the present invention and are not described in detail here.
  • two 2R1W-type SRAMs, each of 16384 depth and 1152 width, are assembled in parallel into one bank, giving the bank a capacity of 4.5 Mbytes; four such banks make up an 18-Mbyte 4R4W memory.
  • when data is written into the 4R4W memory, simultaneous writes from all 4 slices must be supported. Assume the data bus width of each slice is 1152 bits and each slice supports six 100GE ports at line rate; in the worst case on the data channel, for message data of length less than or equal to 144 bytes the core clock frequency must run at 892.9 MHz, and for messages larger than 144 bytes it must run at 909.1 MHz.
  • so the bandwidth requirement can be satisfied; thus, for messages of at most 144 bytes, spatial segmentation is used and the four slices write their data into different banks.
  • For messages larger than 144 bytes, the data of each slice needs to occupy an entire bank; each slice then requires only two clock cycles, and a ping-pong operation meets the demand:
  • in the first clock cycle, two of the data items are written into two banks, and when the second cycle arrives,
  • the other two data items are written into the other two banks. The two 2R1W memories in each bank store, respectively, the high and low bits of any data item larger than 144 bytes, which is not described in detail here. In this way, there is no conflict in writing data.
  • the reading process is similar to the writing process. If the bit width of the data read in one clock cycle is less than or equal to 144 bytes, then in the worst case the read data are stored in the same bank; since each bank of the present invention is formed from two 2R1W memories, each of which can support two read requests at the same time, and since written data is copied into both the left and right 2R1W memories of the same bank, the read requests can still be satisfied.
  • if the bit width of the data read in one clock cycle is greater than 144 bytes, then in the worst case the read data are stored in the same bank; similar to the writing process, a ping-pong operation over two clock cycles suffices, that is, two data items are read from the two 2R1W memories of the bank in the first clock cycle and the remaining two in the second clock cycle. Thus the read requests can also be satisfied; this is not described in detail here.
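One bank can be sketched behaviorally as follows (my own modelling, assuming the 1152-bit 2R1W word width from the example): narrow data is duplicated into both 2R1W halves, while wide data is split into high and low halves.

```python
# One bank = two 2R1W memories side by side. Words that fit one 2R1W width
# are duplicated (so up to four reads per cycle can be served from one bank);
# wider words are split into high bits (left) and low bits (right).
WORD_BITS = 1152  # assumed 2R1W bit width (144 bytes)

class Bank:
    def __init__(self, depth):
        self.left = [0] * depth   # first 2R1W memory
        self.right = [0] * depth  # second 2R1W memory

    def write_narrow(self, addr, word):
        self.left[addr] = word    # duplicate: either half can serve the read
        self.right[addr] = word

    def write_wide(self, addr, word):
        self.left[addr] = word >> WORD_BITS               # high bits
        self.right[addr] = word & ((1 << WORD_BITS) - 1)  # low bits

    def read_wide(self, addr):
        return (self.left[addr] << WORD_BITS) | self.right[addr]
```

In hardware the wide case additionally spreads the four accesses over two cycles (the ping-pong operation), which a cycle-free software model does not need to show.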
  • the method further includes: when data is written into the 4R4W memory, selecting the write location of the data according to the remaining free resources of each bank. Specifically, a free cache resource pool is configured for each bank to store the remaining free pointers of that bank; when a write request reaches the 4R4W memory, the depths of the free cache resource pools are compared,
  • and the data is randomly written into a bank whose free cache resource pool has the greatest depth.
  • alternatively, a certain rule may be set,
  • for example writing data to the banks sequentially, in bank order; this is not elaborated here.
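The deepest-pool selection rule might be sketched as follows (a behavioral model; the pool sizes and the random tie-break among equally deep pools are assumptions consistent with the text):

```python
import random

# Each bank owns a pool of free pointers; a write goes to a bank whose pool
# is currently deepest, i.e. the bank with the most remaining free space.
free_pools = {bank: list(range(16384)) for bank in range(4)}

def select_bank_and_pointer():
    deepest = max(len(pool) for pool in free_pools.values())
    candidates = [b for b, pool in free_pools.items() if len(pool) == deepest]
    bank = random.choice(candidates)      # random among equally deep pools
    return bank, free_pools[bank].pop()   # consume one free pointer

def release(bank, pointer):
    free_pools[bank].append(pointer)      # pointer freed when data is read out
```

Balancing writes toward the emptiest banks keeps the four banks filling at similar rates, which is what makes the buffer behave as fully shared.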
  • S0, S1, S2 and S3 represent the 4 slices, each including, for example, six 100GE ports; packets from slice0, slice1, slice2 and slice3, whatever their destination slice, are all stored in X0Y0.
  • Further, when reading a message, slice0, slice1, slice2 and slice3 read the corresponding data directly from X0Y0; in this way, cache sharing is achieved between ports of different destination slices.
  • the specific process of writing and reading the message can be referred to the specific description of FIG.
  • a data cache processing system for a 4R4W fully shared message according to an embodiment of the present invention is provided.
  • the system includes: a data construction module 100, a data processing module 200;
  • the data construction module 100 is specifically configured to: assemble two 2R1W memories into one bank storage unit in parallel;
  • the data processing module 200 is specifically configured to: when data is written to the 4R4W memory through four write ports in one clock cycle,
  • when the size of the data is less than or equal to the bit width of the 2R1W memory, write the data items into different banks, and at the same time copy each written data item into both 2R1W memories of its bank;
  • the data processing module 200 is further configured to: when data is read from the 4R4W memory in one clock cycle,
  • select the matching read port in the 4R4W memory to read the data out directly; or
  • await the second clock cycle, and when the second clock cycle arrives, select the matching read port in the 4R4W memory to read the data out directly.
  • the data construction module 100 establishes the 2R1W memory in five ways.
  • the data construction module 100 divides one word line into left and right halves, so that either two read ports or one write port can operate simultaneously;
  • data read from the left MOS transistor and data read from the right MOS transistor can thus be obtained at the same time.
  • The data read from the right MOS transistor must be inverted before use, and so as not to slow down data reading,
  • the sense amplifier on the read path must be a pseudo-differential amplifier.
  • the 6T SRAM area is unchanged, the only cost is to double the word line, thus ensuring that the overall storage density is basically unchanged.
  • in the second way, the data construction module 100 adds ports to the SRAM by custom design, cutting one word line into two word lines and increasing the number of read ports to two; alternatively, a time-division technique can be used, in which the read operation is performed on the rising edge of the clock and the write operation completes on the falling edge;
  • this likewise expands a basic 1-read-or-1-write SRAM into a 1-read-and-1-write SRAM type, i.e. one read and one write can be performed simultaneously, with the storage density essentially unchanged.
  • in the third way, a 2R1W SRAM is constructed on the basis of SRAM2P, an SRAM type that supports one read port plus one read/write port; that is, an SRAM2P can perform two read operations simultaneously, or one read and one write operation.
  • the data construction module 100 constructs the 2R1W SRAM from SRAM2P by keeping a duplicate copy; in this example, SRAM2P_1 on the right is a copy of SRAM2P_0 on the left, and in operation the two SRAM2Ps are used as 1-read-1-write memories;
  • when writing, the data is written into the left and right SRAM2P simultaneously; when reading, data A is always read from SRAM2P_0 and data B is always read from SRAM2P_1, so that one write operation and two read operations can proceed concurrently.
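The duplication scheme of the third way can be sketched behaviorally; the class and method names below are illustrative, not taken from the patent.

```python
class SRAM2P:
    """Behavioral stand-in for a two-port SRAM (1 read + 1 read/write)."""
    def __init__(self, depth):
        self.cells = [0] * depth
    def read(self, addr):
        return self.cells[addr]
    def write(self, addr, val):
        self.cells[addr] = val

class Mirrored2R1W:
    """2R1W built from two SRAM2P copies: every write goes to both copies,
    read port A always uses copy 0 and read port B always uses copy 1."""
    def __init__(self, depth):
        self.copy0 = SRAM2P(depth)
        self.copy1 = SRAM2P(depth)
    def write(self, addr, val):
        self.copy0.write(addr, val)   # mirror the write
        self.copy1.write(addr, val)
    def read_a(self, addr):
        return self.copy0.read(addr)  # port A pinned to copy 0
    def read_b(self, addr):
        return self.copy1.read(addr)  # port B pinned to copy 1

m = Mirrored2R1W(4096)
m.write(7, 0xAB)
assert m.read_a(7) == m.read_b(7) == 0xAB
```

Each copy only ever sees one read and one write per cycle, which an SRAM2P supports, at the cost of doubling the storage.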
  • in the fourth way, the data construction module 100 divides a logically monolithic 16384-depth SRAM into four logical 4096-depth SRAM2Ps, numbered 0 through 3, and adds one extra 4096-depth SRAM2P, numbered 4, to resolve read/write conflicts: when the addresses of the two read operations fall within the same SRAM2P, its two ports are fully occupied by the reads, so a simultaneous write to that block is redirected into SRAM2P_4;
  • a memory block mapping table is required to record which memory block stores the valid data, as shown in FIG. 9b; the depth of the mapping table equals the depth of one memory block, i.e. 4096, and after initialization each entry stores the memory block numbers in order, from 0 to 4;
  • in the example of FIG. 9a, a read/write conflict occurs while writing into SRAM2P_0, so the data is actually written into SRAM2P_4; the read operation also reads the corresponding entry of the mapping table, whose original content {0, 1, 2, 3, 4} is modified to {4, 1, 2, 3, 0} (the block numbers 0 and 4 are swapped), indicating that the data actually resides in SRAM2P_4 while SRAM2P_0 becomes a backup entry.
  • when reading data, the memory block number mapping table is first read at the corresponding address to determine which memory block holds the valid data; for example, to read address 5123, the content stored at mapping-table address 1027 (5123 - 4096 = 1027) is read first, and the block number found there indicates which memory block's address 1027 to read.
  • for a write operation, the memory block number mapping table must provide 1 read port and 1 write port; for the two read operations, it must provide 2 read ports; in total, the mapping table must therefore provide 3 read ports and 1 write port, and these 4 accesses must be performed simultaneously.
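A minimal behavioral sketch of the block-number remapping follows; the `busy_block` argument is an assumption standing in for the hardware's detection that both ports of a block are occupied by the two reads, and all names are illustrative.

```python
DEPTH = 4096   # depth of each memory block
banks = [[0] * DEPTH for _ in range(5)]          # blocks 0..3 plus spare 4
# mapping table, same depth as one block; each entry initially {0,1,2,3,4}
block_map = [list(range(5)) for _ in range(DEPTH)]

def write(logical_addr, value, busy_block=None):
    # busy_block: block whose two ports are taken by concurrent reads
    blk, row = divmod(logical_addr, DEPTH)
    if block_map[row][blk] == busy_block:
        # redirect the write into the current spare and swap map entries
        block_map[row][blk], block_map[row][-1] = \
            block_map[row][-1], block_map[row][blk]
    banks[block_map[row][blk]][row] = value

def read(logical_addr):
    # consult the mapping table first to find which block holds valid data
    blk, row = divmod(logical_addr, DEPTH)
    return banks[block_map[row][blk]][row]

write(5, 99, busy_block=0)        # conflict on block 0: data lands in block 4
assert block_map[5] == [4, 1, 2, 3, 0]
assert read(5) == 99
```

The `divmod` split also reproduces the address-5123 example: 5123 maps to block 1, row 1027.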
  • in the fifth way, the data construction module 100 selects, according to the depth and width of the 2R1W memory, 2m+1 blocks of SRAM2P memory with identical depth and width to construct the hardware framework of the 2R1W memory, where m is a positive integer;
  • the SRAM2P memories are, in order, SRAM2P(0), SRAM2P(1), ..., SRAM2P(2m), and each SRAM2P memory has M pointer addresses; one of the SRAM2P memories serves as the auxiliary memory and the rest are main memories;
  • for each SRAM2P memory, the product of its depth and width = (the product of the depth and width of the 2R1W memory) / 2m.
  • the plurality of SRAM2P memories are sequentially SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3), SRAM2P(4), wherein SRAM2P(0), SRAM2P(1), SRAM2P(2), SRAM2P(3) are the main memories, and SRAM2P(4) is the auxiliary memory.
  • the depth and width of each SRAM2P memory are 4096 and 128 respectively, so each SRAM2P memory has 4096 pointer addresses; if the addresses of each SRAM2P memory are identified independently, each ranges from 0 to 4095; if the addresses of all the main memories are arranged in sequence, the full pointer address range is 0 to 16383.
  • SRAM2P(4) is used to resolve port conflicts, and in this embodiment, there is no need to add a memory block number mapping table to meet the demand.
  • when data is written into and/or read from the 2R1W memory, the data processing module 200 is specifically configured to: according to the current pointer position of the data, associate the data in the main memories and the auxiliary memory, perform an XOR operation on them, and thereby complete the writing and reading of the data.
  • the data writing process is as follows:
  • the write address of the current data is obtained as W(x, y), where x denotes the arrangement position of the SRAM2P memory holding the written data, 0 ≤ x < 2m, and y denotes the specific pointer address within that SRAM2P memory, 0 ≤ y ≤ M;
  • the data at the same pointer address in the remaining main memories is obtained and XORed together with the current write data, and the XOR result is written into the same pointer address of the auxiliary memory.
  • the data processing module 200 reads out the data as follows:
  • when the read addresses of the two simultaneous reads fall within the same SRAM2P memory, the data processing module 200 is specifically configured to: obtain the two read addresses as R1(x1, y1) and R2(x2, y2), where x1 and x2 each denote the arrangement position of the SRAM2P memory holding the read data, 0 ≤ x1 < 2m, 0 ≤ x2 < 2m, and y1 and y2 each denote the specific pointer address within that SRAM2P memory, 0 ≤ y1 ≤ M, 0 ≤ y2 ≤ M;
  • the data processing module 200 is specifically configured to: select the read data stored at either one of the read addresses, say R1(x1, y1), and read the currently stored data directly from that designated read address;
  • the data processing module 200 is specifically configured to: obtain the data stored at the same pointer address in the remaining main memories and in the auxiliary memory, XOR them together, and output the XOR result as the stored data of the other read address.
  • when the read addresses of the two reads fall within different SRAM2P memories, the data processing module 200 directly obtains the data at the corresponding pointer addresses in the different SRAM2P memories and outputs them independently.
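The XOR scheme of the fifth way can be sketched behaviorally for m = 2 (four main memories plus one auxiliary); `xor_write` and `xor_read2` are illustrative names, and for simplicity the parity refresh below re-reads every main, where the hardware would XOR the other mains' data with the freshly written word.

```python
M, NUM_MAIN = 4096, 4      # depth per block; 2m main memories with m = 2
main = [[0] * M for _ in range(NUM_MAIN)]
aux = [0] * M              # auxiliary memory: XOR parity of the mains

def xor_write(x, y, data):
    """Write data into main memory x at pointer y and refresh
    the parity at the same pointer address of the auxiliary memory."""
    main[x][y] = data
    parity = 0
    for blk in main:
        parity ^= blk[y]
    aux[y] = parity

def xor_read2(r1, r2):
    """Serve two reads, even when both addresses hit the same main memory."""
    (x1, y1), (x2, y2) = r1, r2
    d1 = main[x1][y1]                    # first read is always direct
    if x1 != x2:
        return d1, main[x2][y2]          # different blocks: both direct
    d2 = aux[y2]                         # same block: rebuild from parity
    for x in range(NUM_MAIN):
        if x != x2:
            d2 ^= main[x][y2]
    return d1, d2

xor_write(0, 5, 0xF0)
xor_write(1, 5, 0x0F)
xor_write(0, 2, 0x55)
assert xor_read2((0, 2), (0, 5)) == (0x55, 0xF0)   # both reads hit block 0
```

The second read never touches the conflicted block: XORing the other mains with the parity reconstructs its contents.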
  • it should be noted that if each SRAM2P is further divided logically, for example into 4m SRAM2Ps of the same depth, the above 2R1W SRAM can be constructed by adding only an extra 1/4m of memory area; correspondingly, the physical number of SRAM blocks also increases by nearly 2 times, which occupies considerable area overhead in actual place-and-route; of course, the present invention is not limited to the above specific embodiments, and other solutions that use XOR operations to expand memory ports are also included in the scope of protection of the present invention and will not be described in detail here.
  • the data processing module 200 is further configured to: when data is written into the 4R4W memory, select the write location of the data according to the remaining free resources of each bank; specifically, a free cache resource pool is established for each bank, each pool storing the remaining free pointers of its corresponding bank; when a request to write data into the 4R4W memory is issued, the depths of the free cache resource pools are compared;
  • if a single pool has the greatest depth, the data is written directly into the bank corresponding to that pool; if two or more pools share the greatest depth, the data is written at random into the bank corresponding to one of them.
  • of course, in other embodiments a rule may instead be set: when two or more pools share the greatest depth, the data is written into the corresponding banks in sequence according to the arrangement order of the banks; details are omitted here.
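The deepest-pool selection with random tie-breaking can be sketched as follows; the function name and list-based pools are illustrative assumptions.

```python
import random

def pick_bank(free_pools):
    """free_pools: one list of free pointers per bank.
    Choose the bank whose pool is deepest; break ties randomly."""
    max_depth = max(len(pool) for pool in free_pools)
    candidates = [i for i, pool in enumerate(free_pools)
                  if len(pool) == max_depth]
    return random.choice(candidates)

pools = [[1, 2, 3], [4, 5], [6, 7, 8], [9]]
assert pick_bank(pools) in (0, 2)   # banks 0 and 2 tie at depth 3
```

Writing to the emptiest bank keeps the four banks' occupancy balanced, which is what lets all storage be shared among the ports.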
  • in the example of FIG. 13, the specific structures of X0Y0 and X1Y1 are the same as shown in FIG. 12; during writing and reading, data is stored according to its corresponding forwarding port, for example, data from S0 and S1 can only be written into X0Y0, while data from S2 and S3 can only be written into X1Y1; the write process is not described again in detail.
  • in summary, the data buffer processing method and processing system for 4R4W fully shared packets of the present invention build SRAMs with more ports algorithmically on top of existing SRAM types, supporting multi-port SRAM to the greatest extent at minimal cost;
  • the implementation avoids complex control logic and additional multi-port SRAM or register-array resources; by exploiting the special properties of a packet buffer, through spatial division and time division, only a simple XOR operation is needed to realize a 4R4W packet buffer;
  • moreover, in the 4R4W memory of the present invention, all storage resources are visible to the 4 slices, that is, to any input/output port, and all storage resources are fully shared among all ports; the present invention has lower power consumption and faster processing speed, saves more resources or area, is simple to implement, and saves labor and material costs.
  • the device embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules, i.e. they may be located in one place or distributed over multiple network modules; some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without creative effort.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Static Random-Access Memory (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention provides a data buffer processing method and processing system for 4R4W fully shared packets. The method includes: assembling two 2R1W memories in parallel into one bank storage unit; forming the hardware framework of a 4R4W memory directly on the basis of four such bank storage units; in one clock cycle, when data is written into the 4R4W memory through the four write ports, if the size of the data is less than or equal to the bit width of the 2R1W memory, writing the data into different banks respectively and at the same time duplicating each written datum into the two 2R1W memories of each bank; if the size of the data is greater than the bit width of the 2R1W memory, awaiting the second clock cycle and, when it arrives, writing the data into different banks respectively while writing the high and low halves of each written datum into the two 2R1W memories of each bank storage unit. The present invention achieves lower power consumption and faster processing speed, saves more resources or area, and is simple to implement.

Description

4R4W全共享报文的数据缓存处理方法及数据处理系统
本申请要求了申请日为2016年07月28日,申请号为201610605130.7,发明名称为“4R4W全共享报文的数据缓存处理方法及数据处理系统”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及网络通信技术领域,尤其涉及一种4R4W全共享报文的数据缓存处理方法及数据处理系统。
背景技术
在设计以太网交换芯片时,通常需要使用大容量的多端口存储器,例如2读1写(同时支持2个读端口和1个写端口)存储器、1读2写存储器、2读2写存储器或者更多端口的存储器。
通常情况下,供应商一般只提供1个读或者写存储器、1读1写存储器和2个读或者写存储器,如此,设计者仅能基于上述基本存储器单元构建多个端口的存储器。
报文缓存是一类特殊的多端口存储器,其写入是可控的,亦即,顺序写入,但是读出却是随机的。用户的其中一种需求中,单向交换容量为2.4Tbps的以太网交换芯片,为了做到线速写入和读出,每个最小报文(64字节)花费的时间只有280ps,需要核心频率高达3.571GHz,该种需求目前在现有的半导体工艺上无法实现。为了实现上述目标,通常的做法是,把整个芯片分割成多个独立的报文转发和处理单元并行进行处理,报文转发和处理单元的英文名称为Slice,例如分割成4个Slice并行处理,每个Slice需要处理的数据带宽就降低,对核心频率的要求也会降低到原核心频率的1/4。相应的,实现该方案过程中,对于报文缓存需要同时提供8个端口供4个Slice访问,其中4个是读端口,4个是写端口。
一般的,在SRAM的端口类型为1个读或者写,2个读或者写,以及1写或者2读的基础上,通过定制设计,例如:修改存储单元的办法,以及算法设计来增加SRAM的端口数量。
定制设计的周期一般比较长,需要做spice仿真,还要提供存储器编译器,以生成不同大小和类型的SRAM,对于供应商来说,一般需要6~9个月的时间,才能提供一个新型的SRAM的类型,而且这样的定制设计是与具体的工艺(例如GlobalFoundries 14nm,28nm还是TSMC的28nm,16nm)强相关的,工艺一旦改变,定制设计的SRAM库需要重新设计。
算法设计是基于厂家提供的现成的SRAM类型,通过算法来实现多端口存储器,最大的好处是避免定制设计,缩短时间,同时设计与厂家库无关,可以很容易的在不同的厂家库之间移植。
如图1所示,一种通过算法设计的方式,设计一个支持4个slice访问的4R4W的存储架构,该实施方式中,采用1R1W的SRAM2D设计大容量的2R2W的SRAM,逻辑上总共需要4块65536深度2304宽度大小的SRAM2D,由于单个物理SRAM2D的容量无法满足上述需求,需要把1块65536深度2304宽度的逻辑SRAM切割成多块物理SRAM,例如:可以切割成32块16384深度288宽度的物理块,这样总共需要32x4=128块物理块;以上述2R2W SRAM为基本单元,搭建18M字节大小的4R4W SRAM。
结合图2所示,逻辑上总共需要4块65536深度2304宽度大小的2R2W的SRAM,即:需要SRAM2D(16384深度288宽度)的物理块的个数为512块;根据现有数据可知:14nm工艺条件下,一块16384深度288宽度大小SRAM2D物理块的大小是0.4165平方厘米,功耗是0.108Watts(核心电压=0.9V,结温=125摄氏度,工艺条件是最快);上述采用厂家库提供的基本单元SRAM复制多份拷贝,构建更多端口SRAM的方法,虽然设计原理上显而易见,但是面积开销非常大,以上述方案为例,单单18M字节4R4W SRAM的面积就占用了213.248平方厘米,总的功耗为55.296Watts,这里还没有考虑到插入Decap和DFT以及布局布线的开销,通过此种算法设计方式设计出的4R4W SRAM,其占用面积以及总功耗均十分庞大;
如图3所示,现有技术中另外一种算法设计方式,以2R2W的SRAM为基本单元,通过空间上的分割实现4R4W SRAM的报文缓存,每个X?Y?是一个2R2W的SRAM逻辑块,大小是4.5M字节,总共有4块这样的SRAM逻辑块,构 成4R4W SRAM,大小是18M字节(4.5Mx4=18M);
其中,S0、S1、S2、S3代表4个slice,每个slice举例来说包含有6个100GE端口,从slice0或者slice1输入去往slice0或者slice1的报文存入X0Y0,从slice0或者slice1输入去往slice2或者slice3的报文存入X1Y0,从slice2或者slice3输入去往slice0或者slice1的报文存入X0Y1,从slice2或者slice3输入去往slice2或者slice3的报文存入X1Y1;对于组播报文,从Slice0或者Slice1来的组播报文同时存入X0Y0和X1Y0中;进一步的,读取报文的时候,slice0或者slice1将从X0Y0或者X0Y1中读取报文,slice2或者slice3将从X1Y0或者X1Y1中读取报文。
结合图4所示,现有技术中算法设计的每一个X1Y1的架构图,一个X?Y?逻辑上需要4块16384深度2304宽度的SRAM,每一个逻辑上16384深度和2304宽度的SRAM可以切割成8块16384深度和288宽度的物理SRAM2D;14nm集成电路工艺下,这样一个18M字节的报文缓存总共需要4x4x8=128块16384深度和288宽度的物理SRAM2D,总的面积为51.312平方厘米,总的功耗是13.824Watts(核心电压=0.9V,结温=125摄氏度,工艺条件是最快)
上述第二种算法设计的面积和功耗开销只有第一种算法设计的1/4,然而,该算法设计无法实现4个2R2W的SRAM逻辑块在所有的4个slice之间共享,每个Slice输入端口能够占用的最大报文缓存只有9M字节,这样的报文缓存不是真正意义上的共享缓存。
发明内容
为解决上述技术问题,本发明的目的在于提供一种4R4W全共享报文的数据缓存处理方法及处理系统。
为实现上述发明目的之一,本发明一实施方式提供的4R4W全共享报文的数据缓存处理方法,所述方法还包括:将2个2R1W存储器并行拼装为一个Bank存储单元;
直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
一个时钟周期下,当数据通过4个写端口写入到4R4W存储器时,
若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
作为本发明一实施方式的进一步改进,所述方法还包括:
一个时钟周期下,当数据从4R4W存储器读出时,
若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
作为本发明一实施方式的进一步改进,所述方法还包括:
当数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。
作为本发明一实施方式的进一步改进,所述方法具体包括:
为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
作为本发明一实施方式的进一步改进,所述方法还包括:
根据2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为 主存储器;
当数据写入2R1W存储器和/或从所述2R1W存储器读出时,根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
为了实现上述发明目的之一,本发明一实施方式提供一种4R4W全共享报文的数据缓存处理系统,所述系统包括:数据构建模块,数据处理模块;
所述数据构建模块具体用于:将2个2R1W存储器并行拼装为一个Bank存储单元;
直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
所述数据处理模块具体用于:当确定一个时钟周期下,数据通过4个写端口写入到4R4W存储器时,
若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
作为本发明一实施方式的进一步改进,所述数据处理模块还用于:
当确定一个时钟周期下,数据从4R4W存储器读出时,
若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
作为本发明一实施方式的进一步改进,所述数据处理模块还用于:
当确认数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。
作为本发明一实施方式的进一步改进,所述数据处理模块还用于:
为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
作为本发明一实施方式的进一步改进,所述数据构建模块还用于:根据2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为主存储器;
当数据写入2R1W存储器和/或从所述2R1W存储器读出时,所述数据处理模块还用于:根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
与现有技术相比,本发明的4R4W全共享报文的数据缓存处理方法及处理系统,基于现有的SRAM类型,通过算法的方式搭建更多端口的SRAM,仅仅用最小的代价便可以最大限度的支持多端口SRAM;其实现过程中,避免采用复杂的控制逻辑和额外的多端口SRAM或者寄存器阵列资源,利用报文缓存的特殊性,通过空间分割和时间分割,仅需要简单的异或运算就可实现4R4W的报文缓存,同时,本发明的4R4W存储器,其所有的存储资源对于4个Slice或者说对于任意一个输入/输出端口而言都是可见的,所有的存储资源对于任意端口之间是完全共享的,本发明具有更低的功耗,更快的处理速度,以及节省更多的资源或面积,实现简单,节约人力及物质成本。
附图说明
图1是现有技术中,基于1R1W存储器采用算法设计实现的2R2W存储器的报文缓存逻辑单元示意图;
图2是现有技术中,基于2R2W存储器算法定制设计实现的4R4W存储器的报文缓存逻辑单元示意图;
图3是现有技术中,基于2R2W存储器采用另一种算法设计实现的4R4W存储器的报文缓存架构示意图;
图4是图3中其中一个X?Y?的报文缓存逻辑单元示意图;
图5是本发明一实施方式中4R4W全共享报文的数据缓存处理方法的流程示意图;
图6是本发明第一实施方式中,通过定制设计形成的2R1W存储器的数字电路结构示意图;
图7是本发明第二实施方式的,通过定制设计形成的2R1W存储器读写分时操作示意图;
图8是本发明第三实施方式中,采用算法设计形成的2R1W存储器的报文缓存逻辑单元示意图;
图9a是本发明第四实施方式中,采用算法设计形成的2R1W存储器的报文缓存逻辑单元示意图;
图9b是对应图9a存储器块编号映射表的结构示意图;
图10是本发明第五实施方式中,提供的2R1W存储器的数据处理方法的流程示意图;
图11是本发明第五实施方式中,提供的2R1W存储器的报文缓存逻辑单元示意图;
图12是本发明是一具体实施方式中,4个Bank的报文缓存架构示意图;
图13是本发明是一具体实施方式中,4R4W存储器的报文缓存架构示意图;
图14是本发明一实施方式中提供的4R4W全共享报文的数据缓存处理系统的模块示意图。
具体实施方式
以下将结合附图所示的各实施方式对本发明进行详细描述。但这些实施方式并不限制本发明,本领域的普通技术人员根据这些实施方式所做出的结构、方法、或功能上的变换均包含在本发明的保护范围内。
如图5所示,本发明一实施方式提供的4R4W全共享报文的数据缓存处理方法,所述方法包括:
将2个2R1W存储器并行拼装为一个Bank存储单元;
直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
一个时钟周期下,当数据通过4个写端口写入到4R4W存储器时,
若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
一个时钟周期下,当数据从4R4W存储器读出时,
若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
所述4R4W存储器,即同时支持4读4写的存储器。
本发明优选实施方式中,建立所述2R1W存储器有五种方法。
如图6所示,第一种实施方式中,在6T SRAM的基础,把一根字线分割成左右两个,这样可以做成2个读端口同时操作或者1个写端口,这样从左边MOS管读出的数据和右边MOS管读出的数据可以同时进行,需要注意的是,右边MOS管读出的数据需要反相之后才可以用,同时为了不影响数据读取的速度,读出的感应放大器需要用伪差分放大器。这样,6T SRAM面积不变,唯一的代价是增加一倍的字线,从而保证总体的存储密度基本不变。
如图7所示,第二种实施方式中,通过定制设计形成的2R1W存储器读写操作流程示意图;
通过定制设计可以增加SRAM的端口,把一个字线切割成2个字线,将读端口增加到2个;还可以通过分时操作 的技术,即读操作在时钟的上升沿进行,而写操作在时钟的下降沿完成,这样也可以把一个基本的1读或者1写的SRAM扩展成1读和1写的SRAM类型,即1个读和1个写操作可以同时进行,存储密度基本不变。
如图8所示,第三种实施方式中本发明一实施方式中采用算法设计形成的2R1W存储器读写操作流程示意图;
本实施方式中,以SRAM2P为基础构建2R1W的SRAM为例,所述SRAM2P是一种能够支持1读和1读/写的SRAM类型,即可以对SRAM2P同时进行2个读操作,或者1个读和1个写操作。
本实施方式中,通过复制一份SRAM以SRAM2P为基础构建2R1W的SRAM;该示例中,右边的SRAM2P_1是左边SRAM2P_0的拷贝,具体操作的时候,把两块SRAM2P作为1读和1写存储器来使用;其中,写入数据时,同时往左右两个SRAM2P写入数据,读出数据时,A固定从SRAM2P_0读取,数据B固定从SRAM2P_1读取,这样就可以实现1个写操作和2个读操作并发进行。
如图9a、9b所示,第四种实施方式中,为另一实施方式中采用算法设计形成的2R1W存储器读写操作流程示意图;
该实施方式中,把逻辑上一整块的16384深度的SRAM分割成逻辑上4块4096深度的SRAM2P,编号依次为为0、1、2、3,再额外增加一块4096深度的SRAM,编号为4,作为解决读写冲突用,对于读数据A和读数据B,永远保证这2个读操作可以并发进行,当2个读操作的地址是处于不同的SRAM2P中时,因为任何一个SRAM2P都可以配置成1R1W类型,所以读写不会有冲突;当2个读操作的地址处于同一块SRAM2P中时,例如:均处于SRAM2P_0中,由于同一个SRAM2P最多只能提供2个端口同时操作,此时,其端口被2个读操作占用,如果恰好有一个写操作要写入SRAM2P_0,那么这时就把这个数据写入存储器第4块SRAM2P_4中。
该种实施方式中,需要有一个存储器块映射表记录哪一个存储器块存放有效数据,如图9b所示,存储器块映射表的深度和一个存储器块的深度相同,即都是4096个深度,每一个条目中在初始化后依次存放每个存储器块的编号,从0到4,图9a示例中,由于SRAM2P_0在写入数据的时候发生读写冲突,数据实际上是写入到SRAM2P_4中,此时,读操作同时会读取存储器映射表中对应的内容,原始内容为{0,1,2,3,4},修改之后变成{4,1,2,3,0},第一个块编号和第4个块编号对调,表示数据实际写入到SRAM2P_4中,同时SRAM2P_0变成了备份条目。
当读取数据的时候,需要首先读对应地址的存储器块编号映射表,查看有效数据存放在哪一个存储器块中,例如当要读取地址5123的数据,那么首先读取存储块编号映射表地址1027(5123-4096=1027)存放的内容,根据第二列的数字编号去读取对应存储块的地址1027的内容。
对于写数据操作,需要存储器块编号映射表提供1读和1写端口,对于2个读数据操作,需要存储器块编号映射表提供2个读端口,这样总共需要存储器块编号映射表提供3个读端口和1个写端口,而且这4个访问操作必须是同时进行。
如图10所示,第五种实施方式,即本发明的优选实施方式中,2R1W存储器的构建方法包括:
根据所述2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
多个所述SRAM2P存储器按照排列顺序依次为SRAM2P(0)、SRAM2P(1)……、SRAM2P(2m),每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为主存储器;
该发明的优选实施方式中,每块SRAM2P存储器的深度与宽度的乘积=(2R1W存储器的深度与宽度乘积)/2m。
以下为了描述方便,对m取值为2、2R1W存储器为16384深度、128宽度的SRAM存储器进行详细描述。
则在该具体示例中,多个所述SRAM2P存储器按照排列顺序依次为SRAM2P(0)、SRAM2P(1)、SRAM2P(2)、SRAM2P(3)、SRAM2P(4),其中,SRAM2P(0)、SRAM2P(1)、SRAM2P(2)、SRAM2P(3)为主存储器,SRAM2P(4)为辅助存储器,每个SRAM2P存储器的深度和宽度分别为4096和128,相应的,每个SRAM2P存储器均具有4096个指针地址;如果对每个SRAM2P存储器的指针地址均独立标识,则每个SRAM2P存储器的指针地址均为0~4095,若将全部的主存储器的地址依次排列,则全部的指针地址范围为:0~16383。该示例中,SRAM2P(4)用于解决端口冲突,且在该实施方式中,无需增加存储器块编号映射表即可以满足需求。
进一步的,在上述硬件框架基础上,所述方法还包括:
当数据写入2R1W存储器和/或从所述2R1W存储器读出时,根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
本发明优选实施方式中,其数据写入过程如下:
获取当前数据的写入地址为W(x,y),x表示写入数据所处于的SRAM2P存储器的排列位置,0≤x<2m,y表示写入数据所处于的SRAM2P存储器中的具体的指针地址,0≤y≤M;
获取与写入地址具有相同指针地址的其余主存储器中的数据,将其同时与当前写入数据做异或运算,并将异或运算结果写入到辅助存储器的相同指针地址中。
结合图11所示,本发明一具体示例中,本发明一具体示例中,将数据128比特全“1”写入到SRAM2P(0)中的指针地址“5”,即当前数据的写入地址为W(0,5),在写入数据过程中,除了直接将数据128比特全“1”写入到指定位置SRAM2P(0)中的指针地址“5”外,同时,需要读取其余主存储器在相同指针地址的数据,假设从SRAM2P(1)中的指针地址“5”读出的数据为128比特全“1”,从SRAM2P(2)中的指针地址“5”读出的数据为128比特全“0”,从SRAM2P(3)中的指针地址“5”读出的数据为128比特全“1”,则将数据128比特全“1”、128比特全“0”、128比特全“1”、128比特全“1”做异或运算,并将其异或运算的结果“1”同时写入到SRAM2P(4)中的指针地址“5”。如此,以保证2R1W存储器的2个读端口和1个写端口同时操作。
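上述异或写入与数据重建过程可用如下简单脚本验证(仅为示意,变量名为说明而设):

```python
ONES = (1 << 128) - 1            # 128比特全"1"
mains = [ONES, ONES, 0, ONES]    # SRAM2P(0)~SRAM2P(3)指针地址"5"处的数据
aux = mains[0] ^ mains[1] ^ mains[2] ^ mains[3]
assert aux == ONES               # 与写入SRAM2P(4)的异或运算结果一致
# 用其余主存储器与辅助存储器的数据即可重建SRAM2P(0)的数据:
assert mains[1] ^ mains[2] ^ mains[3] ^ aux == mains[0]
```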
进一步的,本发明优选实施方式中,其数据读出过程如下:
若当前两个读出数据的读出地址处于相同的SRAM2P存储器中,则
分别获取两个读出数据的读出地址为R1(x1,y1),R2(x2,y2),x1、y1均表示读出数据所处于的SRAM2P存储器的排列位置,0≤x1<2m,0≤x2<2m,y1、y2均表示读出数据所处于的SRAM2P存储器中的具体的指针地址,0≤y1≤M,0≤y2≤M;
任选其中一个读出地址R1(x1,y1)中存储的读出数据,从当前的指定读出地址中直接读出当前存储的数据;
获取与另一个读出地址具有相同指针地址的其余主存储器、以及辅助存储器中存储的数据,并对其做异或运算,将异或运算结果作为另一个读出地址的存储数据进行输出。
接续图11所示,本发明一具体示例中,读出的数据为2个,其指针地址分别为SRAM2P(0)中的指针地址“2”,以及SRAM2P(0)中的指针地址“5”,即当前数据的读出地址为R(0,2)和R(0,5);
在从2R1W存储器读出数据过程中,由于每一个SRAM2P只能保证1个读端口和1个写端口同时操作,读端口直接从SRAM2P(0)中的指针地址“2”中读取数据,但是另一读端口的请求无法满足。相应的,本发明采用异或运算的方式解决两个读端口同时读出数据的问题。
对于R(0,5)中的数据,分别读取其他三个主存储器以及辅助存储器的指针地址“5”的数据并对其做异或运算,接续上例,从SRAM2P(1)中的指针地址“5”读出的数据为“1”,从SRAM2P(2)中的指针地址“5”读出的数据为“0”,从SRAM2P(3)中的指针地址“5”读出的数据为128比特全“1”,从SRAM2P(4)中的指针地址“5”读出的数据为128比特全“1”,将数据128比特全“1”、128比特全“1”、128比特全“0”、128比特全“1”做异或运算,得到128比特“1”,并将其异或运算的结果128比特全“1”作为SRAM2P(0)中的指针地址“5”的存储数据进行输出,通过上述过程得到的数据,其结果与SRAM2P(0)中的指针地址“5”中存储的数据完全一致,如此,根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
本发明一实施方式中,若当前两个读出数据的读出地址处于不同的SRAM2P存储器中,则直接获取不同SRAM2P存储器中对应指针地址的数据分别独立进行输出。
接续图11所示,本发明一具体示例中,读出的数据为2个,其指针地址分别为SRAM2P(0)中的指针地址“5”,以及SRAM2P(1)中的指针地址“10”,即当前数据的读出地址为R(0,5)和R(1,10);
在从2R1W存储器读出数据过程中,由于每一个SRAM2P均能保证1个读端口和1个写端口同时操作,故,读出 数据过程中,直接从SRAM2P(0)中的指针地址“5”读取数据,以及直接从SRAM2P(1)中的指针地址“10”读出数据,如此,以保证2R1W存储器的2个读端口和1个写端口同时操作,在此不做详细赘述。
需要说明的是,如果逻辑上把每一个SRAM2P进一步切分,比如切分成4m个具有相同深度的SRAM2P,那么只需要增加额外1/4m的存储器面积就可以构建上述2R1W类型的SRAM;相应的,物理上SRAM的块数也增加了近2倍,在实际的布局布线中会占用不少的面积开销;当然,本发明并不以上述具体实施方式为限,其它采用异或运算以扩展存储器端口的方案也包括在本发明的保护范围内,在此不做详细赘述。
结合图12所示,对于本发明的4R4W存储器以2个16384深度和1152宽度的2R1W类型的SRAM并行拼装成一个Bank为例做具体介绍,一个Bank的容量大小是4.5M字节,总共有4个bank组成一个18M字节的4R4W存储器。
该示例中,数据写入4R4W存储器过程中,需要同时支持4个slice的同时写入,假设,每个slice的数据总线位宽是1152bits,同时每个slice支持6个100GE端口线速转发;在数据通道上最差的情况,对于小于等于144字节长度的报文数据,需要核心时钟频率跑到892.9MHz,对于大于144字节长度的报文,需要核心时钟频率跑到909.1MHz。
一个时钟周期下,若写入数据的位宽小于等于144字节,同时,需要满足4个Slice同时写入,才能满足带宽需求;如此,采用空间分割性,通过4个Slice的写入数据分别写入到4个Bank中,同时,将写于一个Bank中的数据进行复制,并分别写入到一个Bank的左右2个2R1W存储器中,如此,以满足数据的读出请求,以下将会详细描述。
一个时钟周期下,若写入数据的位宽大于144字节,同时,需要满足4个Slice同时写入,才能满足带宽需求;即:通过每个Slice的数据均需要占用整个Bank;如此,对于每个Slice而言,只需要在2个时钟周期下,采用乒乓操作即可以满足需求,例如:一个时钟周期下,将其中的两个数据分别写入到2个Bank中,第二个周期到来时,将另外两个数据分别写入到2个Bank中;其中,每个Bank中的两个2R1W存储器,分别对应存储任一个大于144字节的数据的高位和底位,在此不做详细赘述。如此,写入数据不会发生冲突。
其读取过程与写入过程相类似;一个时钟周期下,若读出数据的位宽小于等于144字节,最坏情况下,读出数据存储于同一个Bank中,由于本发明的每个Bank均由2个2R1W存储器拼接形成,而每个2R1W存储器均可以同时支持两个读出请求,同时,数据写入时,对数据进行拷贝分别存储至同一个Bank的左右2R1W存储器中,故,在该种情况下,也可以满足数据的读出请求。
一个时钟周期下,若读出数据的位宽大于144字节,最坏情况下,读出数据存储于同一个Bank中,与写入过程相类似,仅需要在两个时钟周期下,采用乒乓操作,即一个时钟周期下,从一个Bank的2个2R1W存储器读出两个数据,在第二个时钟周期下,从该相同Bank的2个2R1W存储器中读出剩余的两个数据,如此,同样可以满足读出请求,在此不做详细赘述。
本发明一优选实施方式中,所述方法还包括:当数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。具体的,为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
当然,在本发明的其他实施方式中,也可以设定一定的规则,当具有2个以上具有相同的最大深度的空闲缓存资源池时,按照各个Bank的排列顺序,按序写入到对应的Bank中,在此不做详细赘述。
结合图13所示,本发明一具体示例中,X0Y0的具体结构与图12所示相同,
其中,S0、S1、S2、S3代表4个slice,每个slice举例来说包含有6个100GE端口,从slice0、slice1、slice2以及slice3输入分别去往slice0、slice1、slice2以及slice3的报文均存入X0Y0,进一步的,读取报文的时候,slice0、slice1、slice2以及slice3均直接从X0Y0中直接读取相应的数据。如此,不同目的slice的端口之间实现缓存共享。而报文写入及读出的具体过程可参照图12的具体说明。
本发明的4R4W存储器,在14nm集成电路工艺下,其逻辑上总共个需要40个4096深度1152宽度的SRAM2P,总共占用面积22.115平方厘米,总的功耗为13.503Watts(核心电压=0.9V,结温=125摄氏度,工艺条件是最快),同时,不需要复杂的控制逻辑,只需要简单的异或运算就可实现多个读端口的操作;另外,也不需要额外的存储器块映射表和控制逻辑。更进一步的,所有的存储资源对于4个Slice或者说对于任意一个输入/输出端口而言都是可见的,所有的存储资源对于任意端口之间是完全共享的。
结合图14所示,本发明一实施方式提供的4R4W全共享报文的数据缓存处理系统,
所述系统包括:数据构建模块100,数据处理模块200;
所述数据构建模块100具体用于:将2个2R1W存储器并行拼装为一个Bank存储单元;
直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
所述数据处理模块200具体用于:当确定一个时钟周期下,数据通过4个写端口写入到4R4W存储器时,
若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
所述数据处理模块200还用于:当确定一个时钟周期下,当数据从4R4W存储器读出时,
若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
本发明优选实施方式中,数据构建模块100采用5种方式建立所述2R1W存储器。
如图6所示,第一种实施方式中,在6T SRAM的基础,数据构建模块100把一根字线分割成左右两个,这样可以做成2个读端口同时操作或者1个写端口,这样从左边MOS管读出的数据和右边MOS管读出的数据可以同时进行,需要注意的是,右边MOS管读出的数据需要反相之后才可以用,同时为了不影响数据读取的速度,读出的感应放大器需要用伪差分放大器。这样,6T SRAM面积不变,唯一的代价是增加一倍的字线,从而保证总体的存储密度基本不变。
如图7所示,第二种实施方式中,数据构建模块100通过定制设计可以增加SRAM的端口,把一个字线切割成2个字线,将读端口增加到2个;还可以通过分时操作的技术,即读操作在时钟的上升沿进行,而写操作在时钟的下降沿完成,这样也可以把一个基本的1读或者1写的SRAM扩展成1读和1写的SRAM类型,即1个读和1个写操作可以同时进行,存储密度基本不变。
如图8所示,第三种实施方式中,以SRAM2P为基础构建2R1W的SRAM为例,所述SRAM2P是一种能够支持1读和1读/写的SRAM类型,即可以对SRAM2P同时进行2个读操作,或者1个读和1个写操作。
本实施方式中,数据构建模块100通过复制一份SRAM以SRAM2P为基础构建2R1W的SRAM;该示例中,右边的SRAM2P_1是左边SRAM2P_0的拷贝,具体操作的时候,把两块SRAM2P作为1读和1写存储器来使用;其中,写入数据时,同时往左右两个SRAM2P写入数据,读出数据时,A固定从SRAM2P_0读取,数据B固定从SRAM2P_1读取,这样就可以实现1个写操作和2个读操作并发进行。
如图9a、9b所示,第四种实施方式中,数据构建模块100把逻辑上一整块的16384深度的SRAM分割成逻辑上4块4096深度的SRAM2P,编号依次为为0、1、2、3,再额外增加一块4096深度的SRAM,编号为4,作为解决读写冲突用,对于读数据A和读数据B,永远保证这2个读操作可以并发进行,当2个读操作的地址是处于不同的SRAM2P中时,因为任何一个SRAM2P都可以配置成1R1W类型,所以读写不会有冲突;当2个读操作的地址处于同一块SRAM2P中时,例如:均处于SRAM2P_0中,由于同一个SRAM2P最多只能提供2个端口同时操作,此时,其端口被2个读操作占用,如果恰好有一个写操作要写入SRAM2P_0,那么这时就把这个数据写入存储器第4块SRAM2P_4中。
该种实施方式中,需要有一个存储器块映射表记录哪一个存储器块存放有效数据,如图9b所示,存储器块映射 表的深度和一个存储器块的深度相同,即都是4096个深度,每一个条目中在初始化后依次存放每个存储器块的编号,从0到4,图9a示例中,由于SRAM2P_0在写入数据的时候发生读写冲突,数据实际上是写入到SRAM2P_4中,此时,读操作同时会读取存储器映射表中对应的内容,原始内容为{0,1,2,3,4},修改之后变成{4,1,2,3,0},第一个块编号和第4个块编号对调,表示数据实际写入到SRAM2P_4中,同时SRAM2P_0变成了备份条目。
当读取数据的时候,需要首先读对应地址的存储器块编号映射表,查看有效数据存放在哪一个存储器块中,例如当要读取地址5123的数据,那么首先读取存储块编号映射表地址1027(5123-4096=1027)存放的内容,根据第二列的数字编号去读取对应存储块的地址1027的内容。
对于写数据操作,需要存储器块编号映射表提供1读和1写端口,对于2个读数据操作,需要存储器块编号映射表提供2个读端口,这样总共需要存储器块编号映射表提供3个读端口和1个写端口,而且这4个访问操作必须是同时进行。
如图10所示,第五种实施方式,即本发明的优选实施方式中,数据构建模块100根据所述2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
多个所述SRAM2P存储器按照排列顺序依次为SRAM2P(0)、SRAM2P(1)……、SRAM2P(2m),每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为主存储器;
每块SRAM2P存储器的深度与宽度的乘积=(2R1W存储器的深度与宽度乘积)/2m。
以下为了描述方便,对m取值为2、2R1W存储器为16384深度、128宽度的SRAM存储器进行详细描述。
则在该具体示例中,多个所述SRAM2P存储器按照排列顺序依次为SRAM2P(0)、SRAM2P(1)、SRAM2P(2)、SRAM2P(3)、SRAM2P(4),其中,SRAM2P(0)、SRAM2P(1)、SRAM2P(2)、SRAM2P(3)为主存储器,SRAM2P(4)为辅助存储器,每个SRAM2P存储器的深度和宽度分别为4096和128,相应的,每个SRAM2P存储器均具有4096个指针地址;如果对每个SRAM2P存储器的指针地址均独立标识,则每个SRAM2P存储器的指针地址均为0~4095,若将全部的主存储器的地址依次排列,则全部的指针地址范围为:0~16383。该示例中,SRAM2P(4)用于解决端口冲突,且在该实施方式中,无需增加存储器块编号映射表即可以满足需求。
进一步的,在上述硬件框架基础上,当数据写入2R1W存储器和/或从所述2R1W存储器读出时,数据处理模块200具体用于:根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
本发明优选实施方式中,其数据写入过程如下:
获取当前数据的写入地址为W(x,y),x表示写入数据所处于的SRAM2P存储器的排列位置,0≤x<2m,y表示写入数据所处于的SRAM2P存储器中的具体的指针地址,0≤y≤M;
获取与写入地址具有相同指针地址的其余主存储器中的数据,将其同时与当前写入数据做异或运算,并将异或运算结果写入到辅助存储器的相同指针地址中。
进一步的,本发明优选实施方式中,数据处理模块200读出数据过程如下:
若当前两个读出数据的读出地址处于相同的SRAM2P存储器中,则
数据处理模块200具体用于:分别获取两个读出数据的读出地址为R1(x1,y1),R2(x2,y2),x1、y1均表示读出数据所处于的SRAM2P存储器的排列位置,0≤x1<2m,0≤x2<2m,y1、y2均表示读出数据所处于的SRAM2P存储器中的具体的指针地址,0≤y1≤M,0≤y2≤M;
数据处理模块200具体用于:任选其中一个读出地址R1(x1,y1)中存储的读出数据,从当前的指定读出地址中直接读出当前存储的数据;
数据处理模块200具体用于:获取与另一个读出地址具有相同指针地址的其余主存储器、以及辅助存储器中存储的数据,并对其做异或运算,将异或运算结果作为另一个读出地址的存储数据进行输出。
本发明一实施方式中,若当前两个读出数据的读出地址处于不同的SRAM2P存储器中,数据处理模块200则直接获取不同SRAM2P存储器中对应指针地址的数据分别独立进行输出。
需要说明的是,如果逻辑上把每一个SRAM2P进一步切分,比如切分成4m个具有相同深度的SRAM2P,那么只需要增加额外1/4m的存储器面积就可以构建上述2R1W类型的SRAM;相应的,物理上SRAM的块数也增加了近2倍,在实际的布局布线中会占用不少的面积开销;当然,本发明并不以上述具体实施方式为限,其它采用异或运算以扩展存储器端口的方案也包括在本发明的保护范围内,在此不做详细赘述。
本发明一优选实施方式中,所述数据处理模块200还用于:当数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。具体的,所述数据处理模块200还用于:为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
当然,在本发明的其他实施方式中,也可以设定一定的规则,当具有2个以上具有相同的最大深度的空闲缓存资源池时,按照各个Bank的排列顺序,按序写入到对应的Bank中,在此不做详细赘述。
结合图13所示,该具体示例中,X0Y0以及X1Y1的具体结构均与图12所示相同,数据写入及读出过程中,需根据其对应的转发端口进行存储,例如:S0、S1的数据仅能写入到X0Y0中,而S2、S3的数据仅能写入到X1Y1中,其写入过程不在具体赘述。
本发明的4R4W存储器,在14nm集成电路工艺下,其逻辑上总共个需要40个4096深度1152宽度的SRAM2P,总共占用面积22.115平方厘米,总的功耗为13.503Watts(核心电压=0.9V,结温=125摄氏度,工艺条件是最快),同时,不需要复杂的控制逻辑,只需要简单的异或运算就可实现多个读端口的操作;另外,也不需要额外的存储器块映射表和控制逻辑。更进一步的,所有的存储资源对于4个Slice或者说对于任意一个输入/输出端口而言都是可见的,所有的存储资源对于任意端口之间是完全共享的。
综上所述,本发明的4R4W全共享报文的数据缓存处理方法及处理系统,基于现有的SRAM类型,通过算法的方式搭建更多端口的SRAM,仅仅用最小的代价便可以最大限度的支持多端口SRAM;其实现过程中,避免采用复杂的控制逻辑和额外的多端口SRAM或者寄存器阵列资源,利用报文缓存的特殊性,通过空间分割和时间分割,仅需要简单的异或运算就可实现4R4W的报文缓存,同时,本发明的4R4W存储器,其所有的存储资源对于4个Slice或者说对于任意一个输入/输出端口而言都是可见的,所有的存储资源对于任意端口之间是完全共享的,本发明具有更低的功耗,更快的处理速度,以及节省更多的资源或面积,实现简单,节约人力及物质成本。
为了描述的方便,描述以上装置时以功能分为各种模块分别描述。当然,在实施本发明时可以把各模块的功能在同一个或多个软件和/或硬件中实现。
以上所描述的装置实施方式仅仅是示意性的,其中所述作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络模块上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施方式方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
应当理解,虽然本说明书按照实施方式加以描述,但并非每个实施方式仅包含一个独立的技术方案,说明书的这种叙述方式仅仅是为清楚起见,本领域技术人员应当将说明书作为一个整体,各实施方式中的技术方案也可以经适当组合,形成本领域技术人员可以理解的其他实施方式。
上文所列出的一系列的详细说明仅仅是针对本发明的可行性实施方式的具体说明,它们并非用以限制本发明的保护范围,凡未脱离本发明技艺精神所作的等效实施方式或变更均应包含在本发明的保护范围之内。

Claims (10)

  1. 一种4R4W全共享报文的数据缓存处理方法,其特征在于,所述方法包括:
    将2个2R1W存储器并行拼装为一个Bank存储单元;
    直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
    一个时钟周期下,当数据通过4个写端口写入到4R4W存储器时,
    若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
    若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
  2. 根据权利要求1所述的4R4W全共享报文的数据缓存处理方法,其特征在于,所述方法还包括:
    一个时钟周期下,当数据从4R4W存储器读出时,
    若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
    若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
  3. 根据权利要求2所述的4R4W全共享报文的数据缓存处理方法,其特征在于,所述方法还包括:
    当数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。
  4. 根据权利要求3所述的4R4W全共享报文的数据缓存处理方法,其特征在于,所述方法具体包括:
    为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
    若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
    若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
  5. 根据权利要求1至4任一项所述的4R4W全共享报文的数据缓存处理方法,其特征在于,所述方法还包括:
    根据2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
    每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为主存储器;
    当数据写入2R1W存储器和/或从所述2R1W存储器读出时,根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
  6. 一种4R4W全共享报文的数据缓存处理系统,其特征在于,所述系统包括:数据构建模块,数据处理模块;
    所述数据构建模块具体用于:将2个2R1W存储器并行拼装为一个Bank存储单元;
    直接基于4个所述Bank存储单元形成4R4W存储器的硬件框架;
    所述数据处理模块具体用于:当确定一个时钟周期下,数据通过4个写端口写入到4R4W存储器时,
    若数据的大小小于等于所述2R1W存储器的位宽,则将数据分别写入不同Bank中,同时,对写入的数据进行复制,分别写入至每个Bank的2个2R1W存储器中;
    若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,将数据分别写入不同Bank中,同时,将每个写入数据的高低位分别写入至每个Bank存储单元的2个2R1W存储器中。
  7. 根据权利要求6所述的4R4W全共享报文的数据缓存处理系统,其特征在于,
    所述数据处理模块还用于:
    当确定一个时钟周期下,数据从4R4W存储器读出时,
    若数据的大小小于等于所述2R1W存储器的位宽,则选择4R4W的存储器中匹配的读端口直接读出数据;
    若数据的大小大于所述2R1W存储器的位宽,则等待第二个时钟周期,当第二个时钟周期到来时,选择4R4W存储器中匹配的读端口直接读出数据。
  8. 根据权利要求7所述的4R4W全共享报文的数据缓存处理系统,其特征在于,
    所述数据处理模块还用于:
    当确认数据写入所述4R4W存储器时,根据每个Bank的剩余空闲资源选择数据的写入位置。
  9. 根据权利要求8所述的4R4W全共享报文的数据缓存处理系统,其特征在于,
    所述数据处理模块还用于:
    为每个Bank对应建立一空闲缓存资源池,所述空闲缓存资源池用于存储当前对应Bank的剩余的空闲指针,当数据发出写入所述4R4W存储器请求时,比较各个空闲缓存资源池的深度,
    若存在一个具有最大深度的空闲缓存资源池,则直接将数据写入到该最大深度的空闲缓存资源池对应的Bank中;
    若存在2个以上具有相同的最大深度的空闲缓存资源池,则将该数据随机写入到其中一个具有最大深度的空闲缓存资源池对应的Bank中。
  10. 根据权利要求6至9任一项所述的4R4W全共享报文的数据缓存处理系统,其特征在于,
    所述数据构建模块还用于:根据2R1W存储器的深度和宽度选择2m+1块具有相同深度及宽度的SRAM2P存储器构建2R1W存储器的硬件框架,m为正整数;
    每个SRAM2P存储器均具有M个指针地址,其中,多个所述SRAM2P存储器中的一个为辅助存储器,其余均为主存储器;
    当数据写入2R1W存储器和/或从所述2R1W存储器读出时,所述数据处理模块还用于:根据数据的当前指针位置,关联主存储器以及辅助存储器中的数据,对其做异或运算,完成数据的写入和读出。
PCT/CN2017/073642 2016-07-28 2017-02-15 4r4w全共享报文的数据缓存处理方法及数据处理系统 WO2018018874A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/319,447 US20190332313A1 (en) 2016-07-28 2017-02-15 Data buffer processing method and data buffer processing system for 4r4w fully-shared packet

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610605130.7A CN106302260B (zh) 2016-07-28 2016-07-28 4个读端口4个写端口全共享报文的数据缓存处理方法及数据处理系统
CN201610605130.7 2016-07-28

Publications (1)

Publication Number Publication Date
WO2018018874A1 true WO2018018874A1 (zh) 2018-02-01

Family

ID=57662840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/073642 WO2018018874A1 (zh) 2016-07-28 2017-02-15 4r4w全共享报文的数据缓存处理方法及数据处理系统

Country Status (3)

Country Link
US (1) US20190332313A1 (zh)
CN (1) CN106302260B (zh)
WO (1) WO2018018874A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106297861B (zh) * 2016-07-28 2019-02-22 盛科网络(苏州)有限公司 可扩展的多端口存储器的数据处理方法及数据处理系统
CN106302260B (zh) * 2016-07-28 2019-08-02 盛科网络(苏州)有限公司 4个读端口4个写端口全共享报文的数据缓存处理方法及数据处理系统
CN109344093B (zh) * 2018-09-13 2022-03-04 苏州盛科通信股份有限公司 缓存结构、读写数据的方法和装置
CN109617838B (zh) * 2019-02-22 2021-02-26 盛科网络(苏州)有限公司 多通道报文汇聚共享内存管理方法及系统
DE102019128331A1 (de) * 2019-08-29 2021-03-04 Taiwan Semiconductor Manufacturing Co., Ltd. Gemeinsam genutzter decodiererschaltkreis und verfahren
KR20210076630A (ko) * 2019-12-16 2021-06-24 삼성전자주식회사 메모리 장치의 데이터 기입 방법, 데이터 독출 방법 및 이를 포함하는 구동 방법
CN112071344B (zh) * 2020-09-02 2023-02-03 安徽大学 一种用于提高内存内计算线性度和一致性的电路
CN112787955B (zh) * 2020-12-31 2022-08-26 苏州盛科通信股份有限公司 Mac层数据报文的处理方法、设备和存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030039165A1 (en) * 2001-08-23 2003-02-27 Jeng-Jye Shau High performance semiconductor memory devices
CN103077123A (zh) * 2013-01-15 2013-05-01 华为技术有限公司 Data writing and reading method and device
CN104572573A (zh) * 2014-12-26 2015-04-29 深圳市国微电子有限公司 Data storage method, storage module and programmable logic device
CN106297861A (zh) * 2016-07-28 2017-01-04 盛科网络(苏州)有限公司 Data processing method and data processing system for an extensible multi-port memory
CN106302260A (zh) * 2016-07-28 2017-01-04 盛科网络(苏州)有限公司 Data buffer processing method and data processing system for 4R4W fully-shared packets

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7283556B2 (en) * 2001-07-31 2007-10-16 Nishan Systems, Inc. Method and system for managing time division multiplexing (TDM) timeslots in a network switch
US8861300B2 (en) * 2009-06-30 2014-10-14 Infinera Corporation Non-blocking multi-port memory formed from smaller multi-port memories
US8589851B2 (en) * 2009-12-15 2013-11-19 Memoir Systems, Inc. Intelligent memory system compiler
US8959291B2 (en) * 2010-06-04 2015-02-17 Lsi Corporation Two-port memory capable of simultaneous read and write
CN104484128A (zh) * 2014-11-27 2015-04-01 盛科网络(苏州)有限公司 Multi-read multi-write memory based on one-read one-write memory and implementation method thereof
CN104409098A (zh) * 2014-12-05 2015-03-11 盛科网络(苏州)有限公司 On-chip table entry with doubled capacity and implementation method thereof
CN104834501A (zh) * 2015-04-20 2015-08-12 江苏汉斯特信息技术有限公司 Register based on an L-architecture processor and register operation method

Also Published As

Publication number Publication date
US20190332313A1 (en) 2019-10-31
CN106302260A (zh) 2017-01-04
CN106302260B (zh) 2019-08-02

Similar Documents

Publication Publication Date Title
WO2018018874A1 (zh) Data buffer processing method and data processing system for 4R4W fully-shared packets
WO2018018875A1 (zh) Data processing method and data processing system for an extensible multi-port memory
TWI640003B (zh) Apparatus and method for logic/memory devices
US11947798B2 (en) Packet routing between memory devices and related apparatuses, methods, and memory systems
WO2018018876A1 (zh) Data processing method and data processing system for 2R1W memory
JP6564375B2 (ja) Memory configuration for implementing a high-throughput key-value store
CN105912484B (zh) High-bandwidth memory and low-fault differential XOR
US8862835B2 (en) Multi-port register file with an input pipelined architecture and asynchronous read data forwarding
US9311990B1 (en) Pseudo dual port memory using a dual port cell and a single port cell with associated valid data bits and related methods
US8862836B2 (en) Multi-port register file with an input pipelined architecture with asynchronous reads and localized feedback
US10580481B1 (en) Methods, circuits, systems, and articles of manufacture for state machine interconnect architecture using embedded DRAM
US20150378946A1 (en) High throughput register file memory
JPH065070A (ja) Serial access memory
US7248491B1 (en) Circuit for and method of implementing a content addressable memory in a programmable logic device
US7242633B1 (en) Memory device and method of transferring data in memory device
US9129661B2 (en) Single port memory that emulates dual port memory
TW202230352A (zh) 記憶體電路架構
Delgado-Frias et al. A programmable dynamic interconnection network router with hidden refresh
US20140293682A1 (en) Memory bitcell clusters employing localized generation of complementary bitlines to reduce memory area, and related systems and methods
Kaur et al. XMAT: A 6T XOR-MAT based 2R-1W SRAM for high bandwidth network applications
US9030887B2 (en) Semiconductor memory device and information processing apparatus
RAMESWARI et al. Implementation of Bram Multiported Memory
Taraate et al. SOC Prototyping
JP2000231546A (ja) Shared memory
KR19990075965A (ko) First-in first-out (FIFO) circuit

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 17833191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 17833191

Country of ref document: EP

Kind code of ref document: A1