WO2024001414A1 - Message caching method and apparatus, electronic device, and storage medium - Google Patents

Message caching method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024001414A1
WO2024001414A1 PCT/CN2023/087615 CN2023087615W
Authority
WO
WIPO (PCT)
Prior art keywords
cache
message
cache block
block
array
Prior art date
Application number
PCT/CN2023/087615
Other languages
English (en)
French (fr)
Inventor
王敏
徐金林
王越
唐梓函
Original Assignee
深圳市中兴微电子技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中兴微电子技术有限公司 filed Critical 深圳市中兴微电子技术有限公司
Publication of WO2024001414A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/06Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00Packet switching elements
    • H04L49/90Buffering arrangements
    • H04L49/9005Buffering arrangements using dynamic buffer space allocation

Definitions

  • Embodiments of the present application relate to the field of network communication technology, and in particular to a message caching method, device, electronic device, and storage medium.
  • In the traffic management process of a network processor, on-chip messages are usually stored by whole-packet caching: a message of any size is stored in its entirety in the cache space.
  • An embodiment of the present application provides a message caching method, which includes: dividing the cache space into an N*N cache array, where N is a natural number greater than zero and every cache block in the cache array has the same size; selecting a cache block for storing a message according to the size of the message to be stored and the number of free addresses in each cache block; and storing the message in a free address of the selected cache block.
  • An embodiment of the present application also provides a message caching device, including: a dividing module, configured to divide the cache space into an N*N cache array, where N is a natural number greater than zero and every cache block in the cache array has the same size; a selection module, configured to select the cache block used to store a message according to the size of the message to be stored and the number of free addresses in each cache block; and a storage module, configured to store the message in a free address of the selected cache block.
  • An embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above message caching method.
  • Embodiments of the present application also provide a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the above message caching method is implemented.
  • The message caching method proposed in this application divides the cache space into an N*N cache array, where N is a natural number greater than zero and every cache block in the array has the same size. According to the size of the message to be stored and the number of free addresses in each cache block, a cache block is selected to store the message, and the message is stored in a free address of the selected block.
  • Because every block in the N*N array has the same size, the block that will hold a message can be determined from the message size and the free-address count of each block. Storing the message in the free addresses of the selected block keeps the resources of every cache block fully utilized and avoids the situation where storing small messages wastes cache-space resources while other cache blocks still have ample storage. This effectively improves the utilization and balance of the cache space and reduces the waste of storage resources.
  • Figure 1 is a flowchart 1 of a message caching method provided according to an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of a cache array provided according to an embodiment of the present application.
  • Figure 3 is a schematic diagram of a message enqueuing operation provided according to an embodiment of the present application.
  • Figure 4 is a schematic diagram of linked list information storage according to an embodiment of the present application.
  • Figure 5 is a flow chart 2 of a message caching method provided according to an embodiment of the present application.
  • Figure 6 is a flow chart 3 of a message caching method provided according to an embodiment of the present application.
  • Figure 7 is a schematic diagram of a packet dequeuing operation provided according to an embodiment of the present application.
  • Figure 8 is a schematic diagram of a device provided according to another embodiment of the present application.
  • Figure 9 is a schematic structural diagram of an electronic device according to another embodiment of the present application.
  • One embodiment of the present application relates to a message caching method in which the cache space is divided into an N*N cache array. Since every cache block in the array has the same size, the block that stores a message can be determined from the size of the message to be stored and the number of free addresses in each cache block, and the message is stored in a free address of the selected block. This keeps every block's resources fully utilized and avoids wasting cache-space resources on small messages while other cache blocks still have ample storage.
  • The message caching method of this embodiment can be applied to traffic management in the design of network processor chips in the field of network communication technology, to manage the caching of messages to be stored.
  • Step 101 Divide the cache space into an N*N cache array; where N is a natural number greater than zero, and the size of each cache block in the cache array is the same.
  • Step 102 Select a cache block for storing the message based on the size of the message to be stored and the number of free addresses in each cache block.
  • Step 103 Store the message in the free address of the selected cache block.
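The three steps above can be sketched as a small software model. This is only an illustrative interpretation of the method, not the patented hardware implementation; the 192-byte block granularity is taken from the example given later in the description, and all class and function names are invented for illustration.

```python
import math

BLOCK_BYTES = 192  # assumed minimum management granularity (see example below)

class CacheArray:
    """Illustrative model of an N*N array of equal-sized cache blocks."""

    def __init__(self, n, depth):
        self.n = n
        # free[r][c] = number of free addresses left in block (r, c)
        self.free = [[depth] * n for _ in range(n)]

    def select_block(self):
        """Step 102 (simplified): pick the block with the most free
        addresses; ties go to the lowest (row, col) index."""
        best = max((self.free[r][c], -r, -c)
                   for r in range(self.n) for c in range(self.n))
        return (-best[1], -best[2]) if best[0] > 0 else None

    def store(self, msg_bytes):
        """Step 103: consume one free address per 192-byte slice of the
        message, returning the blocks that were written."""
        used = []
        for _ in range(math.ceil(msg_bytes / BLOCK_BYTES)):
            blk = self.select_block()
            if blk is None:
                raise MemoryError("cache full")
            self.free[blk[0]][blk[1]] -= 1
            used.append(blk)
        return used
```

With a 4*4 array, storing a 768 B message naturally spreads across four distinct blocks, which matches the full-bandwidth example in the description.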
  • In step 101, the on-chip cache space is divided into multiple equal-sized cache blocks, which are then grouped into an N*N cache array.
  • Each cache block is the minimum management granularity of the cache space, for example 192 bytes (B). Since all cache blocks have the same size, this embodiment effectively reduces the number of instantiations of the cache space (the cache block only needs to be instantiated once), reducing the area and power consumption of the system.
  • For example, the parameters of each cache block may be 4K (depth) * 192 B (width). These values can be chosen based on factors such as the data volume, the width of the message bus, and the number of read and write access ports.
  • the cache array in Figure 2 is only an illustration of the message caching method in this embodiment.
  • the size of the cache array can also be 8*8, and the parameters of the cache block are 1K*96B.
  • Those skilled in the art can select a corresponding cache array according to the layout and wiring inside the chip.
  • Suppose the data bus of each message is at most 768 B and there are three access ports: two for reading and one for writing.
  • Since the cache block in this embodiment is a single-port random access memory (RAM), supporting simultaneous access by two reads and one write requires at least three cache blocks so that different cache units can be allocated and read-write conflicts avoided; the cache array is therefore at least 3*3. If blocks with a depth of 4K are used, supporting the storage of 64K messages requires 16 blocks, and the cache array can be 4*4, that is, 4 groups * 4 banks; if the depth is 1K, supporting 64K messages requires 64 blocks, and the cache array can be 8*8, that is, 8 groups * 8 banks.
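The sizing arithmetic above can be checked with a short sketch. The function name and the minimum-dimension parameter are illustrative; the floor of 3 reflects the two-read one-write constraint stated for single-port RAM blocks.

```python
import math

def array_dim(total_addrs, block_depth, min_dim=3):
    """Smallest N for an N*N array of blocks of the given depth that
    provides total_addrs message addresses. N is at least min_dim so
    that two reads and one write can always target three different
    blocks (single-port RAM constraint)."""
    blocks = math.ceil(total_addrs / block_depth)
    n = math.ceil(math.sqrt(blocks))
    return max(n, min_dim)
```

This reproduces the two examples in the text: 64K addresses with depth-4K blocks needs 16 blocks (a 4*4 array), and with depth-1K blocks needs 64 blocks (an 8*8 array).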
  • The cache block in this embodiment can also be a dual-port RAM, enabling simultaneous reading and writing of data and meeting the higher cache-efficiency requirements of integrated circuits in practical applications.
  • In step 102, since the number of free addresses differs across the cache blocks in the array, the block chosen to store a message must have enough free addresses for the message. The cache block used to store the message is therefore selected according to the size of the message to be stored and the number of free addresses in each block, making full use of the on-chip cache space and improving its utilization.
  • The number of cache blocks L required to store the message is determined based on the size of the message to be stored, where L is a natural number greater than zero. For example, a large 768 B message can be given 4 cache blocks so that 768 B of data can be stored and read simultaneously at full bandwidth, while a small message is given 1 to 3 cache blocks. Then L rows of cache blocks are determined as candidate cache lines in the cache array, and in each candidate cache line one cache block is selected as the target cache block according to the ranking of the free-address counts of its blocks; specifically, the block with the largest number of free addresses is chosen as the target cache block.
  • Candidate cache lines are first determined in the cache array: the rows are ranked by the number of free addresses of their cache blocks, and based on this ranking, L rows are chosen as candidate cache lines. That is, the rows are sorted from most to least free addresses and the top L rows are selected in order, ensuring that the chosen candidate lines have the most free addresses and can accommodate the message to be stored.
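The row-then-block selection just described can be sketched as follows. This is an interpretation of the text, not the patent's hardware logic; the tie-breaking toward the smaller index mirrors the comparison rule given below, and the function name is invented.

```python
def pick_blocks(free, msg_bytes, block_bytes=192):
    """free: N*N matrix of per-block free-address counts.
    Returns one (row, col) target block per required cache block:
    the L rows with the most total free addresses are chosen as
    candidate cache lines, and in each chosen row the block with
    the most free addresses becomes the target block."""
    L = max(1, -(-msg_bytes // block_bytes))  # ceil division
    # Rows ranked by total free addresses, largest first;
    # the smaller row index wins ties.
    rows = sorted(range(len(free)), key=lambda r: (-sum(free[r]), r))[:L]
    targets = []
    for r in rows:
        c = min(range(len(free[r])), key=lambda c: (-free[r][c], c))
        targets.append((r, c))
    return targets
```

A small message (L = 1) lands in the fullest block of the freest row; a 768 B message (L = 4) is spread over one target block in each of the four freest rows.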
  • The ranking of the free-address counts of the rows of the cache array, and of the individual cache blocks within a candidate cache line, is obtained with a comparison algorithm, ensuring that the messages stored in the rows and columns of the cache array remain balanced.
  • A counter is provided in advance in each cache block of the cache array.
  • Taking a 4*4 array as an example, the counters of the cache blocks in each row are named data_cnt0~3, numbered from smallest to largest.
  • Each data_cnt is compared in turn with the other three. In each comparison, the larger of the two values scores a weight of 1 and the other scores 0; if the two values are equal, the counter with the smaller index scores 1 and the other scores 0. The accumulated weights of the data_cnt values give the ranking: the highest-ranked row has the most free addresses, and so on.
  • the sorting method of the number of free addresses of cache blocks in each column is the same as the sorting method of the number of free addresses of cache blocks in each row, and will not be described again here.
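One way to read the counter-comparison rule above is the following sketch. It is an interpretation of the described weighting scheme (the patent describes hardware comparators, not software); the function name is illustrative.

```python
def rank_by_weight(cnts):
    """Rank cache blocks by free-address counter using pairwise
    comparison weights: each counter scores 1 against every counter
    it exceeds, and on a tie the smaller index scores the 1.
    Returns block indices from most to least free addresses."""
    n = len(cnts)
    weights = []
    for i in range(n):
        w = 0
        for j in range(n):
            if i == j:
                continue
            if cnts[i] > cnts[j] or (cnts[i] == cnts[j] and i < j):
                w += 1
        weights.append(w)
    # The tie-break makes the weights a total order, so they are
    # always a permutation of 0..n-1.
    return sorted(range(n), key=lambda i: -weights[i])
```

Because ties are broken by index, equal counters never produce equal weights, so the ranking is always unambiguous.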
  • In step 103, after the target cache block is selected, the message is stored in its free addresses, completing the enqueue operation for the message.
  • The message caching method of this embodiment can be applied to the architecture shown in Figure 3, which includes a cache address management module and a storage module; the cache address management module further includes a linked-list information storage module and an address application module.
  • the structures of the storage module, linked list information storage module and address application module are all cache arrays of the same size.
  • The address application module receives the storage request for a message, that is, the address application request, and allocates the cache blocks that will store the message; the storage module stores the message in the free addresses of those cache blocks; the linked-list information storage module stores the address information of the cache blocks used to store the message.
  • Figure 3 shows a schematic diagram of a message enqueue operation, in which 1 denotes the address application request; 2 denotes storing, in the linked-list information storage module, the address information of the cache blocks obtained from the address application module; 2' denotes sending the addresses of the allocated cache blocks to the storage module; and 3 denotes storing the message in the free addresses of the allocated cache blocks, completing the enqueue operation.
  • In this embodiment the message is stored in the free addresses of the selected cache blocks.
  • The addresses of all cache blocks other than the first cache block are written into the first cache block.
  • The first cache block is the target block selected from the first candidate cache line, that is, the block with the most free addresses in the candidate line that itself has the most free addresses; the address of a cache block indicates where the message is stored.
  • For example, if the cache blocks storing the message are G0B1 (that is, bank1 of group0), G1B2, G2B1 and G3B0 in the figure, the corresponding linked-list information is stored in the cache array of the linked-list information storage module: G0B1 serves as the address, G1B2, G2B1 and G3B0 are spliced together as the data, and the spliced data is written at G0B1. In other words, the address information of the cache blocks obtained from the address application module is stored in the linked-list information storage module.
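The linked-list bookkeeping just described can be modelled in a few lines: the first block's address is the key, and the remaining block addresses are spliced together as the stored data. This is a software sketch of the idea only; in the patent the "table" is itself a cache array, and all names here are illustrative.

```python
def record_linked_list(table, blocks):
    """blocks: ordered list of (group, bank) addresses allocated for
    one message, e.g. [(0, 1), (1, 2), (2, 1), (3, 0)] for
    G0B1, G1B2, G2B1, G3B0. The first block is the head; the rest
    are spliced and written at the head's address."""
    head, rest = blocks[0], blocks[1:]
    table[head] = tuple(rest)
    return head

def recover_blocks(table, head):
    """Dequeue side: one read at the head address recovers every
    cache block holding the message."""
    return [head, *table[head]]
```

Because all follow-on addresses are written in one entry at the head address, the dequeue side needs only a single linked-list read to locate the whole message, which is why the enqueue application step adds no delay.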
  • In this way, the enqueue application step is not delayed and the overall system speed is not reduced.
  • The cache space is divided into an N*N cache array, where N is a natural number greater than zero and every cache block in the array has the same size. According to the size of the message to be stored and the free-address count of each block, the cache block used to store the message is selected, and the message is stored in the free addresses of the selected block.
  • Because every block has the same size, the block that will hold a message can be determined from the message size and each block's free-address count. This keeps the resources of every cache block fully utilized and avoids wasting cache-space resources on small messages while other cache blocks still have ample storage, effectively improving the utilization and balance of the cache space and reducing the waste of storage resources.
  • Another embodiment of the present application relates to a message caching method.
  • This embodiment is roughly the same as the first embodiment; the difference is that the cache block in this embodiment is a single-port RAM, so a storage conflict may occur when the messages to be stored include both a first message and a second message.
  • This embodiment provides a conflict avoidance mechanism during address application to solve the storage conflict problem.
  • the specific implementation flow chart of the message caching method in this embodiment is shown in Figure 5, including:
  • Step 501: Divide the cache space into an N*N cache array, where N is a natural number greater than zero, every cache block in the array has the same size, and the cache block type is single-port RAM.
  • Step 502 Obtain the priorities of the first message and the second message to be stored.
  • the first message is an on-chip message
  • The second message is an off-chip message. Because the access latency for off-chip messages is long, a pre-read operation is generally required when dequeuing, and off-chip messages must first be written back into the cache space; as a result, the messages to be stored can include both the first message and the second message.
  • Since the cache block in this embodiment is a single-port RAM and does not support a double-access scenario, when the messages to be stored include the first message and the second message, their priorities must first be obtained. The priority of a message is determined in advance according to business needs.
  • Step 503 Select a cache block for storing the message based on the size of the message to be stored and the number of free addresses in each cache block.
  • In this embodiment, the cache blocks requested for the first message and the second message must be completely different. Therefore, when selecting candidate cache lines from the cache array, the candidate lines for the first and second messages are determined according to their priorities. Each message may have multiple candidate cache lines, selected one at a time. For each selection, when the first message has a higher priority than the second message, the row of cache blocks with the most free addresses is chosen from the cache array as a candidate cache line for the first message, and the row with the second most free addresses is chosen as a candidate cache line for the second message.
  • The first message and the second message may nevertheless select the same candidate cache line.
  • When they select the same candidate cache line and the first message has the higher priority, the cache block with the most free addresses in that line is selected to store the first message, and the block with the second most free addresses is selected to store the second message.
  • For example, if both messages select the candidate cache line group1, the block in group1 with the most free addresses, such as bank0, is used to store the first message, and the block with the second most free addresses, such as bank1, is used to store the second message.
  • The above is only an example of selecting cache blocks when the first message has a higher priority than the second message; implementations of the present application are not limited to this, and the second message may instead have a higher priority than the first.
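The conflict-avoidance rule described above can be sketched as follows: within one candidate cache line, the higher-priority message takes the bank with the most free addresses and the other message takes the next one, so the two single-port-RAM writes never collide. The function name and parameter are illustrative, and the tie-break toward the smaller bank index follows the counter-comparison rule from the first embodiment.

```python
def allocate_two(free_row, first_has_priority=True):
    """free_row: free-address counts of the banks in one candidate
    cache line shared by two messages. Returns the pair
    (bank_for_first, bank_for_second); the two banks are always
    different, avoiding a write-write conflict on single-port RAM."""
    order = sorted(range(len(free_row)), key=lambda b: (-free_row[b], b))
    best, second = order[0], order[1]
    return (best, second) if first_has_priority else (second, best)
```

Swapping the priority flag simply swaps which message gets the freest bank, matching the note that either message may hold the higher priority.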
  • Step 504 Store the message in the free address of the selected cache block.
  • Step 504 is roughly the same as step 103 and will not be described again here.
  • In this embodiment the messages to be stored include a first message and a second message, and the storage requirements of both must be met even though the cache block type is single-port RAM.
  • By selecting, in each candidate cache line, the block with the most free addresses to store the first message and the block with the second most free addresses to store the second message, the storage conflict caused by the port restriction is effectively resolved.
  • Another embodiment of the present application relates to a message caching method. This embodiment is roughly the same as the first embodiment; the difference is that the cache block in this embodiment is a single-port RAM, so read-write conflicts may occur when there are dequeuing messages in the cache array. This embodiment provides a read-write conflict avoidance mechanism to solve that problem.
  • the specific implementation flow chart of the message caching method in this embodiment is shown in Figure 6, including:
  • Step 601 Divide the cache space into N*N cache arrays; where N is a natural number greater than zero, the size of each cache block in the cache array is the same, and the cache block type is single-port RAM.
  • Step 602 Determine whether there are dequeued packets in the cache array. If there are dequeued packets in the cache array, remove the cache block where the dequeued packet is located from the cache array.
  • In this embodiment, dequeue address recycling has a higher priority than enqueue address application. Therefore, when there are dequeuing messages in the cache array, that is, when cache block addresses need to be recycled, the cache blocks holding the dequeuing messages are removed from the cache array.
  • Figure 7 shows a schematic diagram of a message dequeue operation: when there are dequeuing messages in the cache array, if their addresses are G0B1 and G1B0, those addresses need to be recycled, and G0B1 and G1B0 are removed from the cache array.
  • Step 603 Select a cache block for storing the message based on the size of the message to be stored and the number of free addresses in each cache block.
  • In this embodiment, the cache blocks used to store the message are selected from the cache array after the removed blocks are excluded. For example, if the blocks being removed are G0B1 and G1B0, the blocks storing the message are selected from the cache blocks other than G0B1 and G1B0. The specific selection method is the same as in the first embodiment and is not repeated here.
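The exclusion step can be sketched by masking out the blocks whose addresses are being recycled before running the same row-then-block selection as in the first embodiment. This is an illustrative software model with invented names, not the hardware mechanism itself.

```python
def select_with_eviction(free, recycled, msg_bytes, block_bytes=192):
    """Read-write conflict avoidance for single-port RAM: blocks whose
    addresses are being recycled this cycle (dequeue reads) are masked
    to zero free addresses, so the enqueue write never targets them."""
    masked = [[0 if (r, c) in recycled else free[r][c]
               for c in range(len(free[r]))] for r in range(len(free))]
    L = max(1, -(-msg_bytes // block_bytes))  # ceil division
    rows = sorted(range(len(masked)), key=lambda r: (-sum(masked[r]), r))[:L]
    picks = []
    for r in rows:
        c = min(range(len(masked[r])), key=lambda c: (-masked[r][c], c))
        picks.append((r, c))
    return picks
```

Since a masked block reports zero free addresses, it can never rank first in its row, so reads (dequeue) and writes (enqueue) always land on different blocks in the same cycle.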
  • Step 604 Store the message in the free address of the selected cache block.
  • Step 604 is roughly the same as step 103 and will not be described again here.
  • The message dequeue operation in this embodiment can be implemented through the structure shown in Figure 7, which includes a cache address management module and a storage module; the cache address management module further includes a linked-list information storage module and an address recycling module.
  • the structures of the storage module, linked list information storage module and address recycling module are all cache arrays of the same size.
  • The address recycling module receives the dequeue request for a message, that is, the address recycling request; the storage module stores the message being dequeued; the linked-list information storage module stores the address information of the cache blocks holding the dequeuing message.
  • In Figure 7, 1 denotes the address recycling request; 2 denotes the linked-list information storage module obtaining the address of each cache block storing the message; 2' denotes sending the obtained cache block addresses to the storage module; and 3 denotes reading the message from the corresponding cache blocks of the storage module, completing the dequeue operation.
  • In this way, the dequeue application step is not delayed and the overall system speed is not reduced.
  • The second and third embodiments of the present application cover only two conflict scenarios that can occur during message caching: two reads, and one read plus one write. Other conflict scenarios can also arise, such as two reads plus one write, or two reads plus two writes; the message caching methods of the second and third embodiments can then be used in combination to resolve them.
  • A 4*4 cache array can resolve at most the two-read two-write conflict scenario. If there are more read and write sources than two reads and two writes, the conflict can be resolved by increasing the array size while adjusting the cache block depth; for example, an 8*8 cache array can resolve conflict scenarios of up to four reads and four writes.
  • In the example, bank0~3 all denote cache blocks, listed in order from most to least free addresses; the address being recycled may be any of bank0~3; N means there is no on-chip enqueue request; and the off-chip enqueue application has the lowest priority.
  • For example, when the message on bank0 is being dequeued, bank1 is the available bank with the most free addresses. If there is no on-chip message enqueue request at that moment, that is, N, the off-chip message can enqueue and apply for bank1, the bank with the most free addresses. If there is an on-chip enqueue request and on-chip messages have a higher priority than off-chip messages, the on-chip message applies for bank1 and the off-chip message can enqueue and apply for bank2, the bank with the second most free addresses. When messages on bank1~3 are dequeued, on-chip and off-chip messages apply for cache blocks in a similar way, which is not repeated here.
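The arbitration just described can be condensed into a small sketch. It assumes, as in the example, that the caller has already removed the bank being dequeued and ordered the remaining banks from most to least free addresses; the function name and return convention are invented for illustration.

```python
def arbitrate(banks_by_free, on_chip_request):
    """banks_by_free: available bank indices ordered from most to
    least free addresses (the bank being dequeued is excluded).
    Returns (bank_for_on_chip, bank_for_off_chip). With no on-chip
    request (the 'N' case), the off-chip message takes the freest
    bank; otherwise the on-chip message (higher priority here) takes
    it and the off-chip message takes the next one."""
    if not on_chip_request:
        return None, banks_by_free[0]
    return banks_by_free[0], banks_by_free[1]
```

This mirrors the table walked through above: with bank0 dequeuing, the off-chip message gets bank1 when there is no on-chip request, and bank2 otherwise.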
  • FIG. 8 is a schematic diagram of the message caching device according to this embodiment, including: a dividing module 801, a selection module 802 and a storage module 803.
  • the dividing module 801 is used to divide the cache space into an N*N cache array; where N is a natural number greater than zero, and the size of each cache block in the cache array is the same.
  • the selection module 802 is configured to select a cache block for storing the message based on the size of the message to be stored and the number of free addresses of each cache block.
  • the selection module 802 is also configured to determine the number L of cache blocks required to store the message according to the size of the message to be stored; determine L rows of cache blocks in the cache array as candidate cache lines; In each candidate cache line, a cache block is selected as the target cache block based on the sorting result of the number of free addresses of each cache block.
  • The selection module 802 is also configured to rank the rows of the cache array by the number of free addresses of their cache blocks and, based on this ranking, determine L rows of cache blocks in the cache array as candidate cache lines.
  • The selection module 802 is also configured to determine whether the candidate cache line of the first message is the same as that of the second message; when they are the same and the first message has a higher priority than the second message, the cache block with the most free addresses is selected to store the first message and the cache block with the second most free addresses is selected to store the second message.
  • The selection module 802 is also configured to select, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message from the cache array after the removed blocks are excluded.
  • the storage module 803 is used to store the message in the free address of the selected cache block.
  • the storage module 803 is also used to store the message in the free address of the selected target cache block.
  • This embodiment is a device embodiment corresponding to the above method embodiment, and this embodiment can be implemented in cooperation with the above method embodiment.
  • the relevant technical details and technical effects mentioned in the above embodiment are still valid in this embodiment. In order to reduce duplication, they will not be described again here. Correspondingly, the relevant technical details mentioned in this embodiment can also be applied to the above embodiments.
  • Each module involved in this embodiment is a logical module.
  • a logical unit can be a physical unit, or a part of a physical unit, or can be implemented as a combination of multiple physical units.
  • units that are not closely related to solving the technical problems raised in this application are not introduced in this embodiment, but this does not mean that other units do not exist in this embodiment.
  • Another embodiment of the present application relates to an electronic device, as shown in Figure 9, including at least one processor 901 and a memory 902 communicatively connected to the at least one processor 901. The memory 902 stores instructions executable by the at least one processor 901, and the instructions are executed by the at least one processor 901 so that the at least one processor 901 can perform the message caching method of the above embodiments.
  • the bus can include any number of interconnected buses and bridges.
  • the bus connects one or more processors and various circuits of the memory together.
  • the bus may also connect various other circuits together such as peripherals, voltage regulators, and power management circuits, which are all well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface between the bus and the transceiver.
  • a transceiver may be one element or may be multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium.
  • the data processed by the processor is transmitted over the wireless medium through the antenna. Further, the antenna also receives the data and transmits the data to the processor.
  • the processor is responsible for managing the bus and general processing, and can also provide a variety of functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions.
  • Memory can be used to store data used by the processor when performing operations.
  • Another embodiment of the present application relates to a computer-readable storage medium storing a computer program.
  • the above method embodiments are implemented when the computer program is executed by the processor.
  • The program is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of this application.
  • the aforementioned storage media include: U disk, mobile hard disk, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk and other media that can store program code. .


Abstract

Embodiments of the present application relate to the field of network communication technology and disclose a message caching method and apparatus, an electronic device, and a storage medium. The message caching method includes: dividing a cache space into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size; selecting, according to the size of a message to be stored and the number of free addresses in each cache block, a cache block for storing the message; and storing the message into the free addresses of the selected cache block.

Description

Message caching method and apparatus, electronic device, and storage medium
Related Application
This application claims priority to Chinese patent application No. 202210745290.7, filed on June 27, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of network communication technology, and in particular to a message caching method and apparatus, an electronic device, and a storage medium.
Background
In the traffic management process of a network processor, on-chip messages are usually stored by whole-packet caching: a message of any size is stored in its entirety in the cache space.
However, whole-packet caching must reserve storage space for the largest message, so when small messages are stored, the cache space is under-used and a large amount of storage resources is wasted.
Summary
An embodiment of the present application provides a message caching method, including: dividing a cache space into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size; selecting, according to the size of a message to be stored and the number of free addresses in each cache block, a cache block for storing the message; and storing the message into the free addresses of the selected cache block.
An embodiment of the present application also provides a message caching apparatus, including: a dividing module configured to divide a cache space into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size; a selection module configured to select, according to the size of a message to be stored and the number of free addresses in each cache block, a cache block for storing the message; and a storage module configured to store the message into the free addresses of the selected cache block.
An embodiment of the present application also provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the above message caching method.
An embodiment of the present application also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the above message caching method is implemented.
In the message caching method proposed by the present application, the cache space is divided into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size; a cache block for storing a message is selected according to the size of the message to be stored and the number of free addresses in each cache block, and the message is stored into the free addresses of the selected cache block. Because every cache block in the N*N cache array has the same size, the cache block for storing a message can be determined from the message size and the free-address counts of the cache blocks, and the message is stored into the free addresses of the selected cache block, so that the resources of each cache block are fully used. This avoids the situation in which storing small messages wastes resources in part of the cache space while other cache blocks still have plenty of room; that is, it effectively improves the utilization and balance of the cache space and reduces the waste of storage resources.
Brief Description of the Drawings
One or more embodiments are exemplarily illustrated by the figures in the corresponding drawings; these exemplary illustrations do not limit the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures are not drawn to scale.
Figure 1 is a first flowchart of a message caching method provided by an embodiment of the present application;
Figure 2 is a schematic structural diagram of a cache array provided by an embodiment of the present application;
Figure 3 is a schematic diagram of a message enqueue operation provided by an embodiment of the present application;
Figure 4 is a schematic diagram of linked-list information storage provided by an embodiment of the present application;
Figure 5 is a second flowchart of a message caching method provided by an embodiment of the present application;
Figure 6 is a third flowchart of a message caching method provided by an embodiment of the present application;
Figure 7 is a schematic diagram of a message dequeue operation provided by an embodiment of the present application;
Figure 8 is a schematic diagram of an apparatus provided by another embodiment of the present application;
Figure 9 is a schematic structural diagram of an electronic device provided by another embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art can understand that many technical details are set forth in the embodiments so that readers can better understand the present application; the technical solutions claimed in the present application can nevertheless be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and shall not limit the specific implementation of the present application, and the embodiments can be combined with and refer to each other as long as they do not contradict.
One embodiment of the present application relates to a message caching method. The cache space is divided into an N*N cache array in which every cache block has the same size, so the cache block for storing a message can be determined from the size of the message to be stored and the number of free addresses in each cache block, and the message is stored into the free addresses of the selected cache block. The resources of each cache block are thus fully used, avoiding the situation in which storing small messages wastes resources in part of the cache space while other cache blocks still have plenty of room.
The message caching method of this embodiment can be applied to the traffic-management direction of network processor chip design in the field of network communication technology, to implement cache management of messages to be stored.
A specific implementation flowchart of the message caching method of this embodiment is shown in Figure 1 and includes:
Step 101: divide the cache space into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size.
Step 102: select a cache block for storing the message according to the size of the message to be stored and the number of free addresses in each cache block.
Step 103: store the message into the free addresses of the selected cache block.
The implementation details of the message caching method of this embodiment are described below; the following details are provided only for ease of understanding and are not necessary for implementing this solution.
In step 101, the on-chip cache space is divided into multiple cache blocks of equal size, and the cache blocks are organized into an N*N cache array.
Each cache block is the minimum management granularity of the cache space: 192 bytes (B). Because every cache block has the same size, this embodiment can effectively reduce the number of instantiations of the cache space (the cache block only needs to be instantiated once), reducing the area and power consumption of the system.
In one example, the cache array of this embodiment can be as shown in Figure 2, where N=4, group denotes a row of the cache array, and bank denotes a column, so bank0, bank1, bank2, and bank3 denote the cache blocks within one group. The parameters of a cache block are 4K (depth) * 192B (width); the specific values are determined by factors such as the data volume, the message bus width, and the number of read/write access ports.
The cache array in Figure 2 is only one illustration of the message caching method of this embodiment; the cache array may also be 8*8, in which case the cache block parameters are 1K*96B. Those skilled in the art can choose a suitable cache array according to the layout and routing inside the chip.
For ease of understanding, an example of how to determine the cache block parameters is given below:
Assume the cache space needs to support the storage of 64K messages, each message data bus is within 768B, and there are three access ports: two read and one write.
If the cache blocks of this embodiment are single-port random access memory (RAM), then to support the above simultaneous two-read-one-write access, at least 3 cache blocks are needed so that different cache units can be allocated to avoid read/write conflicts; the cache array is therefore at least 3*3. With 4K-deep cache blocks, supporting 64K messages requires 16 blocks, so the cache array can be 4*4, i.e. 4 groups * 4 banks; with 1K-deep cache blocks, supporting 64K messages requires 64 blocks, so the cache array can be 8*8, i.e. 8 groups * 8 banks.
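The sizing arithmetic in this example can be sketched as follows. `cache_array_size` is a hypothetical helper name, and the rule it encodes (enough blocks to hold all messages, with the array dimension no smaller than the number of simultaneous access ports) is an assumed reading of the worked example rather than a formula stated in the application.

```python
import math

def cache_array_size(total_messages, block_depth, read_ports=2, write_ports=1):
    """Sketch of the sizing example: derive the N of an N*N cache array
    from the number of messages to hold, the depth of one cache block,
    and the number of simultaneous read/write ports."""
    blocks_needed = math.ceil(total_messages / block_depth)  # blocks to hold all messages
    n = max(read_ports + write_ports,                        # at least one block per port
            math.ceil(math.sqrt(blocks_needed)))             # square array covering all blocks
    return n

# 64K messages with 4K-deep blocks -> 16 blocks -> a 4*4 array
print(cache_array_size(64 * 1024, 4 * 1024))  # 4
# 64K messages with 1K-deep blocks -> 64 blocks -> an 8*8 array
print(cache_array_size(64 * 1024, 1 * 1024))  # 8
```

With very few messages the port constraint dominates, which reproduces the "at least 3*3" floor for two-read-one-write access.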
In one example, the cache blocks in this embodiment may also be dual-port RAM, allowing data to be read and written simultaneously and meeting the high cache-efficiency requirements of integrated circuits in practical applications.
In step 102, because the number of free addresses differs between the cache blocks of the cache array, and the number of free addresses of the cache block that stores a message must accommodate the message size, the cache block for storing the message is selected according to the size of the message to be stored and the number of free addresses in each cache block, so as to make full use of the on-chip cache space and improve its utilization.
In one example, the number L of cache blocks needed to store the message is determined from the size of the message to be stored, where L is a natural number greater than zero. For example, a large 768B message can be given 4 cache blocks at full bandwidth, allowing 768B of data to be stored and read simultaneously; a small message is given 1 to 3 cache blocks. Then L rows of the cache array are determined as candidate cache rows, and in each candidate cache row one cache block is selected as a target cache block according to the ranking of the cache blocks' free-address counts; specifically, the cache block with the most free addresses is selected as the target cache block.
In one example, after the number L of cache blocks needed to store the message is determined, the candidate cache rows are first determined in the cache array. Specifically, the rows of the cache array are ranked by their number of free addresses, and according to this ranking, L rows of cache blocks are determined as candidate cache rows; that is, the rows are sorted from most to fewest free addresses and L rows are chosen in turn, ensuring that the selected candidate cache rows have the most free addresses and can accommodate the size of the message to be stored.
Referring specifically to Figure 2, if the size of the message to be stored requires L=3 cache blocks, the sum of the free addresses of the cache blocks in group0 is determined, and likewise for group1, group2, and group3; the free-address counts of the four groups are then ranked, and the groups with the most, second-most, and third-most free addresses are selected in turn as the candidate cache rows. Assuming the candidate cache rows are group0, group1, and group2, the free-address counts of the cache blocks in group0 are ranked so that the cache block with the most free addresses is selected from group0, and likewise for group1 and group2; the three selected cache blocks serve as the target cache blocks for storing the message.
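The row-then-bank selection described above can be sketched in Python. `select_blocks` and `free_cnt` are illustrative names, and ties are broken toward the smaller index, matching the tie-breaking rule of the comparison algorithm; this is a minimal sketch, not a fixed implementation from the application.

```python
def select_blocks(free_cnt, L):
    """free_cnt[g][b] = number of free addresses in bank b of group g.
    Returns L (group, bank) pairs: the L groups with the most total free
    addresses, and within each, the bank with the most free addresses."""
    # rank groups by total free addresses, descending; smaller index wins ties
    groups = sorted(range(len(free_cnt)),
                    key=lambda g: (-sum(free_cnt[g]), g))[:L]
    picks = []
    for g in groups:
        # within the candidate row, pick the bank with the most free addresses
        b = min(range(len(free_cnt[g])), key=lambda b: (-free_cnt[g][b], b))
        picks.append((g, b))
    return picks

free = [[5, 9, 2, 1],   # group0: 17 free addresses in total
        [8, 8, 8, 8],   # group1: 32
        [1, 1, 1, 1],   # group2: 4
        [6, 0, 3, 3]]   # group3: 12
print(select_blocks(free, 3))  # [(1, 0), (0, 1), (3, 0)]
```

For L=3 the three groups with the most free addresses (group1, group0, group3) are chosen, and within each the fullest-free bank becomes the target cache block.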
In one implementation, the ranking of the free-address counts of the rows of the cache array, and the ranking of the free-address counts of the cache blocks within a candidate cache row, are obtained by a comparison algorithm, so as to ensure the balance of the messages stored across the rows and columns of the cache array.
In one example, a counter is set in advance for each cache block of the cache array, and the counters of the cache blocks in each row are named data_cnt0~3, numbered from small to large. After the number L of cache blocks needed to store the message is determined, each data_cnt is compared in turn with the other 3 data_cnts: in each comparison, the larger data_cnt scores a weight of 1 and the other scores 0; if the two data_cnt values are equal, the one with the smaller index scores 1 and the other scores 0. The weights of the data_cnts give the ranking result: the top-ranked entry has the most free addresses, and so on down. The free-address counts of the cache blocks in each column are ranked in the same way as those of each row, which is not repeated here.
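The pairwise-comparison ranking above can be sketched directly; `rank_by_comparison` is an illustrative name for this sketch, and the counters are modeled as a plain Python list rather than hardware registers.

```python
def rank_by_comparison(data_cnt):
    """Pairwise-comparison ranking: each counter is compared with every other;
    the larger value (or, on a tie, the smaller index) scores a weight of 1.
    Returns indices ordered from most to fewest free addresses."""
    n = len(data_cnt)
    weight = [0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if data_cnt[i] > data_cnt[j] or (data_cnt[i] == data_cnt[j] and i < j):
                weight[i] += 1
    # the tie-break makes every weight distinct, so this order is unambiguous
    return sorted(range(n), key=lambda i: -weight[i])

print(rank_by_comparison([3, 7, 7, 1]))  # [1, 2, 0, 3]
```

Because the tie-break always awards the point to exactly one side, the n weights are a permutation of 0..n-1, which is what makes this ranking cheap to realize as parallel comparators in hardware.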
In step 103, after the target cache blocks for storing the message are selected, the message is stored into the free addresses of the target cache blocks, completing the enqueue operation of the message.
Those skilled in the art can understand that message storage is dynamic: while messages are being enqueued, other messages are being dequeued. Therefore, in this embodiment, the cache blocks selected for storing a message according to the message size and the free-address counts of the cache blocks can, at a given moment, always completely store the message.
In one example, the message caching method of this embodiment can be applied in the architecture shown in Figure 3, which includes a cache address management module and a storage module, where the cache address management module further includes a linked-list information storage module and an address application module. The storage module, the linked-list information storage module, and the address application module all have the structure of cache arrays of the same size.
In one example, the address application module is configured to receive a storage request for a message, i.e. an address application request, and allocate the cache blocks for storing the message; the storage module is configured to store the message into the free addresses of those cache blocks; and the linked-list information storage module is configured to store the address information of the cache blocks used to store the message.
Figure 3 shows a schematic diagram of a message enqueue operation, where ① denotes the address application request; ② denotes storing, in the linked-list information storage module, the address information of the cache blocks applied for from the address application module; ②' denotes sending the applied cache block addresses to the storage module; and ③ denotes storing the message into the free addresses of the applied cache blocks, completing the enqueue operation.
In one example, after the cache blocks for storing the message are selected, and because storing one message may use 1 to 4 cache blocks, the addresses of the cache blocks other than the first cache block are written into the first cache block while the message is stored into the free addresses of the selected cache blocks. The first cache block is the target cache block selected from the first candidate cache row, i.e. the target cache block with the most free addresses in the candidate cache row with the most free addresses; a cache block's address indicates the storage location of the message. By writing the addresses of the other cache blocks into the first cache block, only the address of the first cache block needs to be obtained at dequeue time; the addresses of the remaining cache blocks, i.e. of all the cache blocks used for storage, can then be read from the first cache block, which makes queue management simpler.
Referring specifically to Figure 4, if the cache blocks used to store the message are G0B1 (i.e. bank1 of group0), G1B2, G2B1, and G3B0 in the figure, then in the cache array of the linked-list information storage module corresponding to the address application module, G0B1 serves as the address, G1B2, G2B1, and G3B0 are concatenated as the data, and the concatenated data are written into G0B1; that is, the address information of the cache blocks applied for from the address application module is stored in the linked-list information storage module. Therefore, during the dequeue operation of the message, the address of the first cache block can be obtained directly from the linked-list information storage module, the addresses of the other cache blocks can be obtained from the first cache block, and, according to the obtained addresses of the cache blocks storing the message, the stored message can be retrieved from the cache blocks in the cache array corresponding to the storage module.
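The head-block bookkeeping described for Figures 3 and 4 can be sketched with a small class. `LinkedListStore` and its method names are illustrative assumptions; the dictionary stands in for the linked-list information storage module's cache array, keyed by the head block's address.

```python
class LinkedListStore:
    """Sketch of the linked-list information store: the first (head) cache
    block's address keys the addresses of the remaining blocks, so a dequeue
    only needs the head address to recover every block of the message."""

    def __init__(self):
        self.table = {}  # head block address -> addresses of the other blocks

    def enqueue(self, block_addrs):
        head, *rest = block_addrs
        self.table[head] = rest      # e.g. G0B1 -> [G1B2, G2B1, G3B0]
        return head                  # only the head needs to be tracked

    def dequeue(self, head):
        rest = self.table.pop(head, [])
        return [head] + rest         # all cache block addresses of the message

store = LinkedListStore()
head = store.enqueue(["G0B1", "G1B2", "G2B1", "G3B0"])
print(store.dequeue(head))  # ['G0B1', 'G1B2', 'G2B1', 'G3B0']
```

Storing the tail addresses under the head mirrors the text's point that enqueue and linked-list storage proceed in parallel and that queue management reduces to tracking one address per message.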
In one implementation, because message enqueue and linked-list storage are performed simultaneously, the enqueue application step is not delayed and the overall system rate is not reduced.
In this embodiment, the cache space is divided into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size; a cache block for storing a message is selected according to the size of the message to be stored and the number of free addresses in each cache block, and the message is stored into the free addresses of the selected cache block. Because every cache block in the N*N cache array has the same size, the cache block for storing the message can be determined from the message size and the free-address counts of the cache blocks, and the message is stored into the free addresses of the selected cache block, so that the resources of each cache block are fully used. This avoids the situation in which storing small messages wastes resources in part of the cache space while other cache blocks still have plenty of room; that is, it effectively improves the utilization and balance of the cache space and reduces the waste of storage resources.
Another embodiment of the present application relates to a message caching method. This embodiment is substantially the same as the first embodiment, except that the cache block type in this embodiment is single-port RAM; therefore, when the messages to be stored include a first message and a second message, a storage conflict occurs. This embodiment provides a conflict-avoidance mechanism during address application to resolve the storage conflict. A specific implementation flowchart of the message caching method of this embodiment is shown in Figure 5 and includes:
Step 501: divide the cache space into an N*N cache array, where N is a natural number greater than zero, each cache block in the cache array has the same size, and the cache block type is single-port RAM.
Step 502: obtain the priorities of the first message and the second message to be stored.
Here, the first message is an on-chip message and the second message is an off-chip message. Because off-chip message storage has a long access cycle, a pre-read operation is generally needed at dequeue time, and the off-chip message must first be written back to the cache space; the messages to be stored may therefore include both a first message and a second message.
In one implementation, because the cache block type in this embodiment is single-port RAM, which does not support a two-read scenario, when the messages to be stored include a first message and a second message, the priorities of the first message and the second message must first be obtained. The message priorities are determined in advance according to service requirements.
Step 503: select the cache blocks for storing the messages according to the size of the messages to be stored and the number of free addresses in each cache block.
In one example, to avoid conflicts when the first message and the second message are stored, the cache blocks applied for by the first message and the second message should be completely different. Therefore, when candidate cache rows are selected from the cache array, the candidate cache rows of the first and second messages are determined according to their priorities. Each message may have multiple candidate cache rows, selected over several rounds. For the determination of each candidate cache row, in the case where the first message has a higher priority than the second message, the row of cache blocks with the most free addresses is selected from the cache array as a candidate cache row of the first message, and the row with the second-most free addresses is selected as a candidate cache row of the second message.
After the candidate cache rows of the first and second messages are selected, the two messages may have selected the same candidate cache row. In that case, for a candidate cache row selected by both messages, according to the ranking of the free-address counts of the cache blocks in that row, the cache block with the most free addresses is selected to store the first message and the cache block with the second-most free addresses is selected to store the second message. If the candidate cache rows of the first and second messages differ, then in the different candidate cache rows, the cache block with the most free addresses in the first message's candidate cache row is selected to store the first message, and the cache block with the most free addresses in the second message's candidate cache row is selected to store the second message.
Referring specifically to Figure 2, if the number of candidate cache rows needed by the first message is L=3 and by the second message is L=1, a candidate cache row for the second message must be determined at the same time as the first candidate cache row of the first message. If group0 has the most free addresses, group1 the second-most, and group2 the third-most, then group0 becomes a candidate cache row of the first message and group1 becomes the candidate cache row of the second message; the second and third candidate cache rows of the first message, namely group1 and group2, are then determined in turn. group1 is a candidate cache row selected by both messages, so within group1 the cache block with the most free addresses, e.g. bank0, is selected to store the first message, and the cache block with the second-most free addresses, e.g. bank1, is selected to store the second message.
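The within-row conflict avoidance for a shared candidate cache row can be sketched as follows. `assign_banks` is an illustrative name; it assumes ties are broken toward the smaller bank index, consistent with the comparison algorithm's tie rule.

```python
def assign_banks(bank_free):
    """Single-port RAM conflict avoidance within one shared candidate row:
    the higher-priority message gets the bank with the most free addresses,
    the lower-priority message the next-most, so they never hit one bank."""
    order = sorted(range(len(bank_free)), key=lambda b: (-bank_free[b], b))
    return order[0], order[1]  # (bank for first message, bank for second message)

# group1 from the example: bank0 has the most free addresses, bank1 the next
print(assign_banks([9, 7, 7, 2]))  # (0, 1)
```

Because the two messages always land on distinct banks, each single-port RAM sees at most one access, which is the whole point of the mechanism.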
The above describes the selection of cache blocks only with the case where the first message's priority is higher than the second message's as an example; the implementation of the present application is not limited to this, and the second message's priority may also be higher than the first message's.
Step 504: store the messages into the free addresses of the selected cache blocks.
Step 504 is substantially the same as step 103 and is not repeated here.
In this embodiment, the messages to be stored may include a first message and a second message, and the requirements for storing both must be met; when the cache block type is single-port RAM, a storage conflict arises. After the cache space is divided into an N*N cache array, the priorities of the first and second messages are obtained and it is determined whether the candidate cache rows of the first message and the second message are the same; when they are the same and the first message's priority is higher than the second message's, in each candidate cache row the cache block with the most free addresses is selected to store the first message and the cache block with the second-most free addresses is selected to store the second message. This effectively resolves the storage conflict caused by the port limitation.
Another embodiment of the present application relates to a message caching method. This embodiment is substantially the same as the first embodiment, except that the cache block type in this embodiment is single-port RAM; therefore, when there is a dequeuing message in the cache array, a read/write conflict occurs. This embodiment provides a read/write conflict-avoidance mechanism to resolve the read/write conflict. A specific implementation flowchart of the message caching method of this embodiment is shown in Figure 6 and includes:
Step 601: divide the cache space into an N*N cache array, where N is a natural number greater than zero, each cache block in the cache array has the same size, and the cache block type is single-port RAM.
Step 602: determine whether there is a dequeuing message in the cache array, and in the case where there is, remove the cache block where the dequeuing message resides from the cache array.
In one example, because message dequeue is unpredictable, dequeue address recycling has a higher priority than enqueue address application; therefore, when the cache array has a dequeuing message, i.e. cache block addresses need to be recycled, the cache block where the dequeuing message resides is removed from the cache array.
Referring to Figure 7, which shows a schematic diagram of a message dequeue operation: when the cache array has a dequeuing message, if the addresses of the dequeuing message are G0B1 and G1B0, then G0B1 and G1B0 need to be recycled and removed from the cache array.
Step 603: select the cache blocks for storing the message according to the size of the message to be stored and the number of free addresses in each cache block.
Specifically, according to the size of the message to be stored and the free-address counts of the cache blocks, the cache blocks for storing the message are selected from the cache array after the removal. For example, if the removed cache blocks are G0B1 and G1B0, the cache blocks for storing the message are selected from the cache blocks other than G0B1 and G1B0; the specific selection is the same as in the first embodiment and is not repeated here.
Step 604: store the message into the free addresses of the selected cache blocks.
Step 604 is substantially the same as step 103 and is not repeated here.
In one example, the message dequeue operation in this embodiment can be implemented by the structure shown in Figure 7, which includes a cache address management module and a storage module, where the cache address management module further includes a linked-list information storage module and an address recycling module. The storage module, the linked-list information storage module, and the address recycling module all have the structure of cache arrays of the same size.
In one example, the address recycling module is configured to receive a dequeue request for a message, i.e. an address recycling request; the storage module is configured to store the dequeuing message; and the linked-list information storage module is configured to store the address information of the cache blocks storing the dequeuing message.
As shown in Figure 7, ① denotes the address recycling request; ② denotes obtaining, from the linked-list information storage module, the addresses of the cache blocks storing the message; ②' denotes sending the obtained cache block addresses to the storage module; and ③ denotes reading the message out of the corresponding cache blocks of the storage module, completing the dequeue operation of the message.
In one implementation, because message dequeue and address recycling are performed simultaneously, the dequeue application step is not delayed and the overall system rate is not reduced.
In this embodiment, because both message enqueue and message dequeue scenarios occur, a read/write conflict, i.e. a conflict between enqueue and dequeue, arises when the cache block type is single-port RAM. After the cache space is divided into an N*N cache array, whenever the cache array has a dequeuing message, the cache block where the dequeuing message resides is removed from the cache array, and the cache blocks for storing a message are selected from the cache array after the removal according to the size of the message to be stored and the free-address counts of the cache blocks. This effectively resolves the read/write conflict caused by the port limitation.
The second and third embodiments of the present application cover only two scenarios that can occur during message caching: two reads, and one read plus one write. Various other conflict scenarios also exist, such as two reads and one write, or two reads and two writes; the message caching methods of the second and third embodiments can then be combined to resolve these conflicts.
In one implementation, a 4*4 cache array can resolve at most the two-read-two-write conflict scenario. If there are more read/write sources than two reads and two writes, this can be addressed by increasing the number of arrays and adjusting the depth of the cache blocks; for example, an 8*8 cache array can resolve at most a four-read-four-write conflict scenario.
In one example, in a two-read-one-write conflict scenario, i.e. a message dequeue, an on-chip message enqueue, and an off-chip message enqueue occur simultaneously, the cache block addresses that can be applied for under the conflict-avoidance mechanism of the message caching methods of the embodiments of the present application are shown in Table 1:
Table 1
Here, bank0~3 all denote cache blocks, ordered from the most to the fewest free addresses, and the address being recycled may be any one of banks 0~3. The on-chip enqueue application has two cases, bank0 and bank1; N indicates that there is no on-chip enqueue request; the off-chip message enqueue application has the lowest priority and has three cases: bank0, bank1, or bank2.
For ease of understanding, the table is explained as follows: when a message dequeues from bank0, bank0 is removed from the cache array, so bank1 becomes the bank with the most free addresses. If there is no on-chip enqueue request at this time, i.e. N, the off-chip message enqueue can apply for bank1, the bank with the most free addresses; if there is an on-chip enqueue request and the on-chip message's priority is higher than the off-chip message's, the on-chip message applies for bank1, and the off-chip message enqueue can apply for bank2, the bank with the second-most free addresses. When a message dequeues from bank1~3, the ways the on-chip and off-chip messages apply for the cache blocks are similar and are not repeated here.
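The Table 1 behavior for the two-read-one-write scenario can be sketched as a small arbitration function. `two_read_one_write` is an illustrative name for this sketch; the banks are assumed to be given already ranked from most to fewest free addresses, as in the table.

```python
def two_read_one_write(bank_rank, dequeue_bank, on_chip_req):
    """Sketch of Table 1's conflict avoidance: bank_rank lists banks from
    most to fewest free addresses; the dequeuing bank is excluded, the
    on-chip enqueue (higher priority) takes the best remaining bank, and
    the off-chip enqueue takes the next one."""
    avail = [b for b in bank_rank if b != dequeue_bank]
    if not on_chip_req:
        return None, avail[0]   # no on-chip request: off-chip gets the best remaining bank
    return avail[0], avail[1]   # (on-chip bank, off-chip bank)

print(two_read_one_write(["bank0", "bank1", "bank2", "bank3"], "bank0", True))
# ('bank1', 'bank2')
print(two_read_one_write(["bank0", "bank1", "bank2", "bank3"], "bank0", False))
# (None, 'bank1')
```

Every party ends up on a distinct bank, so the single-port RAMs each see one access per cycle, reproducing the rows of Table 1.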
It should be noted that the above examples in these implementations are given for ease of understanding and do not limit the technical solution of the present application.
The division of steps in the above methods is only for clarity of description; in implementation, steps may be combined into one step, or a step may be split into multiple steps, and as long as the same logical relationship is included, they fall within the protection scope of the present application. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm or flow, also falls within the protection scope of the application.
Another embodiment of the present application relates to a message caching apparatus. The details of the message caching apparatus of this embodiment are described below; the following details are provided only for ease of understanding and are not necessary for implementing this embodiment. Figure 8 is a schematic diagram of the message caching apparatus of this embodiment, which includes a dividing module 801, a selection module 802, and a storage module 803.
In one example, the dividing module 801 is configured to divide the cache space into an N*N cache array, where N is a natural number greater than zero and each cache block in the cache array has the same size.
The selection module 802 is configured to select the cache blocks for storing a message according to the size of the message to be stored and the number of free addresses in each cache block.
In one example, the selection module 802 is further configured to determine, according to the size of the message to be stored, the number L of cache blocks needed to store the message; determine L rows of cache blocks in the cache array as candidate cache rows; and, in each candidate cache row, select one cache block as a target cache block according to the ranking of the cache blocks' free-address counts.
In one example, the selection module 802 is further configured to rank the rows of cache blocks in the cache array by their free-address counts, and, according to this ranking, determine L rows of cache blocks in the cache array as the candidate cache rows.
In one example, when the cache block type is single-port RAM and the messages to be stored include a first message and a second message, the selection module 802 is further configured to determine whether the candidate cache rows of the first message and the second message are the same; when they are the same and the first message's priority is higher than the second message's, select, in each candidate cache row, the cache block with the most free addresses to store the first message, and the cache block with the second-most free addresses to store the second message.
In one example, when the cache block type is single-port RAM and after the cache block where a dequeuing message resides has been removed from the cache array, the selection module 802 is further configured to select the cache blocks for storing the message from the cache array after the removal, according to the size of the message to be stored and the free-address counts of the cache blocks.
The storage module 803 is configured to store the message into the free addresses of the selected cache blocks.
In one example, the storage module 803 is further configured to store the message into the free addresses of the selected target cache blocks.
This embodiment is an apparatus embodiment corresponding to the above method embodiments, and this embodiment can be implemented in cooperation with the above method embodiments. The related technical details and technical effects mentioned in the above embodiments remain valid in this embodiment and, to reduce repetition, are not repeated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the above embodiments.
Each module involved in this embodiment is a logical module; in practical applications, a logical unit may be one physical unit, part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present application, this embodiment does not introduce units that are not closely related to solving the technical problem raised by the present application, but this does not mean that no other units exist in this embodiment.
Another embodiment of the present application relates to an electronic device, as shown in Figure 9, including: at least one processor 901; and a memory 902 communicatively connected to the at least one processor 901, where the memory 902 stores instructions executable by the at least one processor 901, and the instructions are executed by the at least one processor 901 so that the at least one processor 901 can perform the message caching methods of the above embodiments.
The memory and the processor are connected by a bus, and the bus can include any number of interconnected buses and bridges; the bus connects one or more processors and the various circuits of the memory together. The bus may also connect various other circuits together, such as peripherals, voltage regulators, and power management circuits, which are all well known in the art and are therefore not described further herein. The bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other devices over a transmission medium. Data processed by the processor is transmitted over the wireless medium through the antenna; the antenna also receives data and transmits it to the processor.
The processor is responsible for managing the bus and general processing, and can also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory can be used to store data used by the processor when performing operations.
Another embodiment of the present application relates to a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, the above method embodiments are implemented.
That is, those skilled in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by instructing the related hardware through a program; the program is stored in a storage medium and includes several instructions to cause a device (which may be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application. The aforementioned storage media include: USB flash drives, removable hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
Those of ordinary skill in the art can understand that the above implementations are specific embodiments for realizing the present application, and that in practical applications various changes in form and detail can be made without departing from the spirit and scope of the present application.

Claims (10)

  1. A message caching method, comprising:
    dividing a cache space into an N*N cache array, wherein N is a natural number greater than zero and each cache block in the cache array has the same size;
    selecting, according to a size of a message to be stored and a number of free addresses in each cache block, a cache block for storing the message; and
    storing the message into the free addresses of the selected cache block.
  2. The message caching method according to claim 1, wherein the selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message comprises:
    determining, according to the size of the message to be stored, a number L of cache blocks needed to store the message, wherein L is a natural number greater than zero;
    determining L rows of cache blocks in the cache array as candidate cache rows; and
    selecting, in each of the candidate cache rows, one cache block as a target cache block according to a ranking result of the free-address counts of the cache blocks;
    wherein the storing the message into the free addresses of the selected cache block comprises:
    storing the message into the free addresses of the selected target cache block.
  3. The message caching method according to claim 2, wherein the determining L rows of cache blocks in the cache array as candidate cache rows comprises:
    ranking the rows of cache blocks in the cache array by their free-address counts; and
    determining, according to the ranking result of the free-address counts of the rows of cache blocks, L rows of cache blocks in the cache array as the candidate cache rows.
  4. The message caching method according to claim 3, wherein the ranking result of the free-address counts of the rows of cache blocks in the cache array, and the ranking result of the free-address counts of the cache blocks in the candidate cache rows, are obtained based on a comparison algorithm.
  5. The message caching method according to claim 4, wherein the type of the cache blocks is single-port RAM, and the messages to be stored comprise a first message and a second message;
    before the selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message, the method further comprises:
    obtaining priorities of the first message and the second message;
    wherein the determining L rows of cache blocks in the cache array as the candidate cache rows comprises:
    for the determination of each of the candidate cache rows, in a case where the priority of the first message is higher than that of the second message, selecting from the cache array the row of cache blocks with the most free addresses as a candidate cache row of the first message, and selecting from the cache array the row of cache blocks with the second-most free addresses as a candidate cache row of the second message.
  6. The message caching method according to claim 3, wherein after the selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message, the method further comprises:
    writing, among the cache blocks used for storing the message, the addresses of the cache blocks other than a first cache block into the first cache block, wherein the first cache block is the target cache block selected from the first of the candidate cache rows, and the address of a cache block is used to indicate the storage location of the message.
  7. The message caching method according to any one of claims 1 to 6, wherein the cache blocks are single-port RAM;
    before the selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message, the method further comprises:
    determining whether there is a dequeuing message in the cache array; and
    in a case where there is a dequeuing message in the cache array, removing the cache block where the dequeuing message resides from the cache array;
    wherein the selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message comprises:
    selecting, according to the size of the message to be stored and the number of free addresses in each cache block, the cache block for storing the message from the cache array after the removal of the cache block.
  8. A message caching apparatus, comprising:
    a dividing module, configured to divide a cache space into an N*N cache array, wherein N is a natural number greater than zero and each cache block in the cache array has the same size;
    a selection module, configured to select, according to a size of a message to be stored and a number of free addresses in each cache block, a cache block for storing the message; and
    a storage module, configured to store the message into the free addresses of the selected cache block.
  9. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can perform the message caching method according to any one of claims 1 to 7.
  10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the message caching method according to any one of claims 1 to 7.
PCT/CN2023/087615 2022-06-27 2023-04-11 Message caching method and apparatus, electronic device, and storage medium WO2024001414A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210745290.7 2022-06-27
CN202210745290.7A CN117354268A (zh) Message caching method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2024001414A1 true WO2024001414A1 (zh) 2024-01-04

Family

ID=89369752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087615 WO2024001414A1 (zh) 2023-04-11 Message caching method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN117354268A (zh)
WO (1) WO2024001414A1 (zh)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101094183A (zh) * 2007-07-25 2007-12-26 Hangzhou H3C Technologies Co., Ltd. Cache management method and apparatus
US7930481B1 (en) * 2006-12-18 2011-04-19 Symantec Operating Corporation Controlling cached write operations to storage arrays
CN103150122A (zh) * 2011-12-07 2013-06-12 Huawei Technologies Co., Ltd. Disk cache space management method and apparatus


Also Published As

Publication number Publication date
CN117354268A (zh) 2024-01-05

Similar Documents

Publication Publication Date Title
US7366865B2 (en) Enqueueing entries in a packet queue referencing packets
CN102096648B (zh) 基于fpga的实现多路突发数据业务缓存的系统及方法
US10248350B2 (en) Queue management method and apparatus
US7733892B2 (en) Buffer management method based on a bitmap table
US8982658B2 (en) Scalable multi-bank memory architecture
CN109388590B (zh) 提升多通道dma访问性能的动态缓存块管理方法和装置
US8325603B2 (en) Method and apparatus for dequeuing data
WO2009111971A1 (zh) 缓存数据写入系统及方法和缓存数据读取系统及方法
CN101499956B (zh) 分级缓冲区管理系统及方法
JP2016195375A (ja) 複数のリンクされるメモリリストを利用する方法および装置
US7627672B2 (en) Network packet storage method and network packet transmitting apparatus using the same
CN110058816B (zh) 一种基于ddr的高速多用户队列管理器及方法
CN112948293A (zh) 一种多用户接口的ddr仲裁器及ddr控制器芯片
CN112463415A (zh) 基于随机地址的多端口共享内存管理系统及方法
CN116724287A (zh) 一种内存控制方法及内存控制装置
EP3440547B1 (en) Qos class based servicing of requests for a shared resource
CN113126911B (zh) 基于ddr3 sdram的队列管理方法、介质、设备
US10067868B2 (en) Memory architecture determining the number of replicas stored in memory banks or devices according to a packet size
US8572349B2 (en) Processor with programmable configuration of logical-to-physical address translation on a per-client basis
US10031884B2 (en) Storage apparatus and method for processing plurality of pieces of client data
US9137167B2 (en) Host ethernet adapter frame forwarding
WO2024001414A1 (zh) 2024-01-04 Message caching method and apparatus, electronic device, and storage medium
US20070233958A1 (en) Cashe Device and Method for the Same
US7783796B2 (en) Method for releasing data of storage apparatus
CN109308247A (zh) 一种日志处理方法、装置、设备及一种网络设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23829604

Country of ref document: EP

Kind code of ref document: A1