CN114531488B - High-efficiency cache management system for Ethernet switch - Google Patents

Publication number: CN114531488B (application CN202111275730.9A)
Authority: CN (China)
Prior art keywords: descriptor, module, port, space, queue
Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status, assignees, or dates listed)
Other versions: CN114531488A (original Chinese-language publication)
Inventors: 赵文琦, 史阳春, 李龙飞, 李小波, 冯海强, 王剑峰, 杨靓
Original and current assignee: Xian Microelectronics Technology Institute
Application filed by Xian Microelectronics Technology Institute
Priority to CN202111275730.9A; application granted and published as CN114531488B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/215: Flow control; Congestion control using token-bucket
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS

Abstract

The invention discloses a high-efficiency buffer management system for an Ethernet switch. The buffer management module and the queue management module divide the buffer space; the ingress control module controls and monitors the buffer usage of the ingress-port CoS queues; received packets are classified by traffic class and placed into different CoS queues according to the classification result, realizing flow control and congestion handling for ports and queues; dynamic management realizes reasonable use of the buffer; and the egress queues are managed with a two-level linked-list technique.

Description

High-efficiency cache management system for Ethernet switch
Technical Field
The invention belongs to the field of computer networks and computer architecture, relates to a gigabit Ethernet switch cache management method, and in particular relates to an Ethernet-switch-oriented high-efficiency cache management system.
Background
In recent years, with the rapid development of communication services, Ethernet switches, as key devices for network communication, have developed very rapidly and are being perfected in every respect. Looking at current trends, Ethernet switches exhibit several development characteristics: higher speed, greater port density, and greater intelligence. Switches use three main switching modes: cut-through, store-and-forward, and fragment-free. Store-and-forward switches are more widely used because they can perform error detection on incoming packets, effectively improving network performance, and, more importantly, can support cooperation among ports running at different rates.
The performance of a store-and-forward Ethernet switch depends mainly on the buffer management unit, which performs packet storage and transmission control; its main functions include buffer space allocation and release and transmit queue management. The current mainstream implementation treats the whole buffer space as a single pool: space is allocated to store packets in the order of their requests and released after the packets are sent. Queue management determines how packets are sent out at the egress and includes enqueue and dequeue management: enqueue places a packet into a specified queue; dequeue takes a packet out of the specified queue and sends it.
In a shared-buffer switch, the buffer space may be managed on a per-packet basis or on a per-CELL basis. Per-packet storage wastes little space but has large switching delay: if a jumbo frame is being stored, packets at other ports cannot be processed until it is completely written into the buffer. Per-CELL storage completes only one CELL at a time, and the ports can be polled in turn, which avoids one port occupying the buffer for a long time and starving the others; it ensures that the data-storage and switching-control planes work in parallel and reduces switching delay, but at the cost of some wasted space.
With management of the buffer space as a whole, traffic bursts on a single port cannot be effectively suppressed; the port may occupy too much buffer and fail to release it in time, leaving other ports with no buffer space available. Traditional queue management takes the data out, places it into the egress queue, and, on dequeue, reads the data back out of the egress queue and sends it. The disadvantage of this approach is that the data must be buffered twice, which increases the switching delay.
Disclosure of Invention
The invention aims to provide an efficient cache management system for an Ethernet switch that overcomes the defects of the prior art.
In order to achieve the above purpose, the invention adopts the following technical scheme:
an Ethernet-switch-oriented high-efficiency cache management system comprises a cache management module, a queue management module, an ingress control module, an egress control module, a QoS control module and a register module;
the buffer management module is used for allocating and releasing buffer addresses for packets received from and transmitted to the MAC, passing packets to be enqueued to the queue management module, and discarding packets marked for discard to release their buffer space;
the queue management module is used for carrying out enqueue and dequeue allocation on the data packets distributed by the buffer management module;
the ingress control module is used for controlling and monitoring the buffer usage of the ingress-port CoS queues;
the egress control module is used for counting egress buffer usage to realize egress flow control;
the QoS control module is used for classifying received packets by traffic class, placing them into different CoS queues according to the classification result, and realizing flow control and congestion handling for ports and queues;
the register module is used for realizing the configuration of the cache management unit.
Further, a packet is discarded if no forwarding port can be found for it, if it is a jumbo frame exceeding the length the port can receive, or if other control logic gives a discard flag.
Further, the data buffer space of the buffer management module stores packets to be forwarded; when the receiving port receives a packet, the buffer management module allocates a corresponding space for it, generates descriptor information and sends the descriptor information to the queue management module; if the packet is to be discarded, it is discarded and the allocated storage space is released.
Further, the buffer memory of the buffer memory management module takes CELL as a minimum unit, and each CELL is 128 bytes in size.
Further, the egress side of the queue management module forms the output queues using a two-layer linked-list structure, wherein the first layer is the transmit-queue linked list and the second layer is the cache-mark linked list.
Further, when a stored packet is received, the descriptor management module writes the packet's descriptor into the transmit descriptor queue of a port; when the switching controller reads the descriptor from the transmit-port descriptor queue, the packet data is read from the data cache according to the content of each field in the descriptor and transmitted from the corresponding port.
Further, the queue management module comprises a cache request module, a descriptor write request control module, a descriptor read request control module, a descriptor management module, a descriptor cache module, a CELL write management module and a data packet write management module;
the buffer request module is used for recording the address allocated to the data packet by the buffer management module;
the descriptor management module is used for writing the information recorded by the cache request module into a corresponding descriptor queue;
the descriptor write request control module requests the corresponding transmit port to perform a descriptor-write-queue operation according to the forwarding-control result;
the descriptor read request control module is used for judging whether the transmit port can forward data and issuing a descriptor-queue read request;
the descriptor cache module is used for storing descriptor linked list information;
the CELL write management module is used for recording the CELL write-operation state of the current enqueue request and preparing for the port's next CELL write;
the data packet write management module is used for recording the packet write-operation state of the current enqueue request and preparing for the port's next packet write.
Further, if the buffer occupancy of an ingress-port CoS queue exceeds a threshold, a flow-control message is generated; if the peer processes the flow-control message, it stops sending packets.
Further, the ingress control module divides the cache space into a port guaranteed space, a shared space and a head space. The port guaranteed space provides a minimum guaranteed available space for the port; the shared space provides shared cache space to a port when its minimum guaranteed space is insufficient; the head space provides some additional buffering capacity when the minimum guaranteed space and the shared buffer space are both insufficient.
Further, the register module is used to configure the cache management unit, including the thresholds for the buffer space, CoS queues, and the flow-control and shaping functions in the ingress-control, egress-control and QoS-control modules.
Compared with the prior art, the invention has the following beneficial technical effects:
the invention relates to a high-efficiency buffer management system for an Ethernet exchanger, which divides a buffer management module by a queue management module, divides a buffer space, utilizes an inlet control module to control and monitor buffer use conditions of an inlet port CoS queue, classifies flows of received data packets, puts the data packets into different CoS queues according to classification results, realizes flow control and congestion treatment of the ports and the queues, realizes reasonable use of buffer for dynamic management, realizes management of an outlet queue by a double-chain table management technology, saves storage resources, reduces exchange delay, and ensures service quality by flow classification, flow supervision, shaping and congestion management.
Furthermore, the buffer space is divided into three parts: a port guaranteed space, a shared space and a head space. The port guaranteed space reserves a minimum buffer space for each port that other ports cannot occupy; after a port's guaranteed space is used up, the port can use the shared buffer, which ensures reasonable use of the buffer space by each port. The head space stores the small amount of data that can still arrive after the shared buffer is exhausted and the switch has sent a flow-control signal, thereby ensuring zero packet loss on a best-effort basis.
Furthermore, the buffer management unit realizes traffic classification, traffic policing, shaping, congestion management and related functions, achieving efficient non-blocking forwarding of data and guaranteeing quality of service.
Furthermore, incoming messages are counted independently at the ingress and egress ports, and thresholds, leaky buckets, WRED and classification policies are applied to the queues to realize flow control and congestion handling.
Drawings
FIG. 1 is a block diagram of the overall structure of an embodiment of the present invention.
Fig. 2 is a block diagram of a transmit queue module according to an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the attached drawing figures:
as shown in fig. 1, an Ethernet-switch-oriented high-efficiency cache management system includes a cache management module 1, a queue management module 2, an ingress control module 3, an egress control module 4, a QoS control module 5, and a register module 6;
the cache management module 1 is used for realizing allocation and release of cache space.
Stored in the data buffer space of the buffer management module 1 are the packets to be forwarded. When the receiving port receives a packet, the buffer management module 1 is responsible for allocating a corresponding space for it, generating descriptor information and sending the descriptor information to the queue management module. If no forwarding port can be found for the packet, if it is a jumbo frame exceeding the length the port can receive, or if other control logic gives a discard flag, the packet is discarded and the allocated memory space is freed. When a packet meets the forwarding conditions, the queue management module uses the descriptor information to take the packet out of the buffer space of the buffer management module 1 for forwarding, and the corresponding space is released to become new free space.
The cache of the cache management module 1 takes the CELL as its minimum unit, and each CELL is 128 bytes in size. The data cache space of the cache management module 1 is organized with a linked-list structure, where the linked list is a singly linked list limited to deletion at the head node and insertion at the tail node. The idle part of the data buffer space is organized as a singly linked list in units of CELLs: each time space is allocated, one node is deleted from the head of the list; each time space is released, a node is inserted at the tail of the list.
The singly linked list is implemented with a 16384 × 14-bit on-chip RAM: the depth of 16384 corresponds to the maximum number of free pages, i.e., the whole data cache space; the 14-bit content of each word is the address of the next word in the control RAM; and the position of each entry in the RAM corresponds to the position of one page in the cache storage space. In addition to the control RAM, two 14-bit registers record the positions of the head and tail pointers of the singly linked list, denoted Head and Tail.
The free-page list requires an initialization process before use. After initialization, Head contains 0; in the control RAM, word 0 contains 1, word 1 contains 2, and so on up to word 16382, which contains 16383; and Tail contains 16383.
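The free-page organization above can be sketched in software. This is a minimal illustrative model, not the patent's hardware: a "control RAM" array in which entry i holds the index of the next free page, plus Head and Tail registers; allocation deletes at the head, release inserts at the tail.

```python
DEPTH = 16384  # maximum number of free pages, as in the description

class FreePageList:
    def __init__(self, depth=DEPTH):
        # After initialization word i of the control RAM contains i + 1,
        # Head = 0 and Tail = depth - 1, matching the text above.
        self.next_ram = [i + 1 for i in range(depth)]
        self.head = 0
        self.tail = depth - 1
        self.free = depth

    def alloc(self):
        """Delete one node from the head of the list (allocate a CELL page)."""
        if self.free == 0:
            return None
        page = self.head
        self.head = self.next_ram[page]
        self.free -= 1
        return page

    def release(self, page):
        """Insert a node at the tail of the list (free a CELL page)."""
        if self.free == 0:
            self.head = page   # list was empty; page becomes both head and tail
        else:
            self.next_ram[self.tail] = page
        self.tail = page
        self.free += 1
```

Because both operations touch only one list end and one RAM word, they map naturally to single-cycle hardware updates of the Head/Tail registers.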
The release of the buffer space falls into two cases. In one, the receiving port parses the received packet and finds that it does not meet the forwarding conditions, so the space allocated for the packet must be released. In the other, the buffer space occupied by a packet must be released after every sending port has forwarded it. Note that when forwarding multicast or broadcast packets, the space must be freed only when all forwarding ports have completed data transmission.
The queue management module 2 forms the output queues at the egress using a two-layer linked-list structure. The first layer is the transmit-queue linked list and the second layer is the cache-mark linked list: the transmit-queue linked list guarantees the priority order of each port's packets, while, for each packet, the cache-mark linked list ensures that the order of pages in the buffer corresponds to that packet.
Each output port supports at most 24 transmit queues to ensure quality of service, and all transmit queues share one transmit-queue linked list. A transmit queue is maintained as a linked list in which each node is a pointer to a packet buffer identifier. Each buffer identifier contains packet information and a pointer to the next packet identifier, and has an associated page allocated in the data cache. Packets larger than 128 bytes require multiple buffer identifiers.
When receiving a stored data packet, the descriptor management module writes the descriptor of a data packet into a transmission descriptor queue of a certain port, namely, binds the stored data packet to a corresponding transmission port, and when the switching controller reads the descriptor of the data packet from the transmission port descriptor queue, the data of the packet can be read from a data cache according to the content of each field in the descriptor and transmitted from the corresponding port.
The storage locations of the packets a port receives are random within the data buffer; a packet may occupy one CELL or multiple CELLs, and those CELLs are not necessarily contiguous, so the descriptor structure must specify how CELLs are linked within a packet and how packets are linked to one another.
Therefore, the descriptor must record not only the storage locations of the packets but also the association between the CELLs of each packet and the association between packets. The first-layer transmit-queue linked list guarantees the priority order of each port's packets, so it records information such as each packet's head address. The second layer, the cache-mark linked list, ensures that the order of buffer pages corresponds to each packet, so it records information such as the packet's CELL addresses.
The descriptor linked list is associated with the data cache through address pointers. When a packet is forwarded, the descriptor-list information is written into the transmit port's descriptor queue; when a packet must be forwarded to several ports, the descriptor pointers of those ports can point to the same buffer area, saving storage space.
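The two-layer structure can be illustrated with a small model. This is a hedged sketch, not the patent's implementation: a first-layer frame list preserves packet order in a transmit queue, and a second-layer cell list chains the possibly non-contiguous CELLs of each packet; all names are illustrative.

```python
class TwoLayerQueue:
    """One transmit queue built from a frame list over a CELL list."""

    def __init__(self, num_cells=64):
        self.next_cell = [None] * num_cells  # cache-mark (CELL) linked list
        self.next_frame = {}   # frame list: head-cell addr -> next frame's head cell
        self.head = None       # oldest frame in the transmit queue
        self.tail = None       # newest frame in the transmit queue

    def enqueue(self, cells):
        """Link a packet's CELLs, then append the frame to the transmit queue."""
        for a, b in zip(cells, cells[1:]):
            self.next_cell[a] = b          # chain CELLs in storage order
        first = cells[0]
        self.next_frame[first] = None
        if self.tail is None:
            self.head = first              # queue was empty
        else:
            self.next_frame[self.tail] = first
        self.tail = first

    def dequeue(self):
        """Pop the oldest frame and walk its CELL chain in order."""
        if self.head is None:
            return None
        nxt_frame = self.next_frame.pop(self.head)
        cells, c = [], self.head
        while c is not None:
            cells.append(c)
            c = self.next_cell[c]
        self.head = nxt_frame
        if self.head is None:
            self.tail = None
        return cells
```

Dequeue only follows pointers, so non-contiguous CELL placement costs nothing extra at transmit time, which is the point of the second-layer list.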
In this design there are 30 transmit ports, so the on-chip descriptor-queue space is divided into 30 port queues, each holding up to 16383 descriptors. If port priority is configured, a transmit port can support up to 24 transmit queues; each packet is then placed into the corresponding transmit queue according to its priority, and all queues share the port descriptor queue.
For multicast transmission, when the memory controller receives a packet-end signal from an input port, it prepares the descriptor queues that bind the currently stored packet to the transmit ports. During descriptor binding, if multicast transmission is encountered, i.e. the packet is to be output from multiple transmit ports, the packet's attributes must be bound to all of those transmit-port descriptor queues. In the control RAM of the buffer space, the number of transmit-port queues is counted with a 5-bit counter. When descriptor binding is performed in descriptor-queue management, once a packet-end flag is received and the switching controller confirms that the current packet can be forwarded, a descriptor for the packet is written into each corresponding transmit-port queue. When the switch controller has finished sending the packet on a port, the packet descriptor must be released: the corresponding data-buffer space is found from the frame address in the descriptor, the 5-bit count in the control RAM is decremented by 1, and the descriptor is then released.
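The multicast release rule above amounts to reference counting. The sketch below is illustrative, not the patent's RTL: a 5-bit per-frame count of bound transmit-port queues is decremented as each port finishes sending, and the buffer is freed only when the count reaches zero; the callback stands in for the buffer manager.

```python
class MulticastRefCount:
    def __init__(self, free_cb):
        self.count = {}         # frame address -> bound port-queue count
        self.free_cb = free_cb  # invoked when the last port finishes sending

    def bind(self, frame_addr, num_ports):
        """Bind a stored packet to num_ports transmit-port descriptor queues."""
        assert 1 <= num_ports < 32, "the count is held in a 5-bit field"
        self.count[frame_addr] = num_ports

    def port_done(self, frame_addr):
        """One port finished sending; free the buffer on the last decrement."""
        self.count[frame_addr] -= 1
        if self.count[frame_addr] == 0:
            del self.count[frame_addr]
            self.free_cb(frame_addr)
```

This is why the descriptor pointers of several ports can safely share one buffer area: the space outlives every individual port's transmission.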
As shown in fig. 2, the queue management module 2 includes a buffer request module 7, a descriptor write request control module 8, a descriptor read request control module 9, a descriptor management module 10, a descriptor buffer module 11, a CELL write management module 12, and a packet write management module 13;
the buffer request module 7 is configured to record addresses allocated to the data packets by the buffer management module, including addresses of all CELLs of each data packet, and record relevant information of the CELLs, such as SOF, EOF, and BE, and the descriptor management module 10 is configured to write the information recorded by the buffer request module 7 into a corresponding descriptor queue, so as to implement association of the data packets.
The descriptor write request control module 8 requests the corresponding transmitting port to perform the write descriptor queue operation according to the relevant result of the forwarding control.
The descriptor read request control module 9 is configured to determine whether the transmit port can forward data and to issue a descriptor-queue read request; when the packet has been sent, it completes the handshake between the forwarding module and the descriptor management module, gives a send-completion flag, and requests the buffer management module 1 to release the buffer space.
The descriptor management module is a core module for queue management, and mainly completes write control of the descriptor queue and read control of the descriptor queue.
The descriptor buffer module 11 is configured to store descriptor linked list information. The cache is updated as packets are enqueued and dequeued.
The CELL write management module is used for recording the CELL write operation state of the current request enqueue and preparing for the next CELL write of the port.
The data packet write management module is used for recording the data packet write operation state of the current request enqueue and preparing for the next data packet write of the port.
The transmit-queue write control arbitrates the write requests of the 30 port queues, writes the requesting packets into the queue for the arbitrated port, and updates the corresponding queue pointers and descriptors. This process is handled by a state machine. If port 2 raises a request first, the state machine jumps to the last-address state of port 2. In that state, according to the configured queue mode and the queue number of the packet, if the request is the first CELL request of the packet, the frame linked list must be updated: the address space allocated this time is written into the next-frame-address field of the descriptor pointed to by the frame-list tail pointer, and the next-page field of the descriptor pointed to by the cache-mark-list tail pointer must also be updated. If the request is not the first CELL request of a frame, only the cache-mark-list descriptor needs updating. After this operation, the state machine jumps to the current-address state of port 2, in which the tail pointer and descriptor queue of the corresponding queue are updated according to the configured queue mode and the queue number of the request. If the request is the first request of the packet, the descriptors and tail pointers of both the queue linked list and the cache-mark linked list must be updated; otherwise only those of the cache-mark linked list.
The transmit-queue read (send) state machine describes how a port's transmit queue sends. The scheduler monitors the bandwidth usage of each egress-port CoS queue; the monitoring mechanism classifies the CoS queues into different scheduling groups, and bandwidth is monitored per CoS queue. Minimum bandwidth monitoring provides a minimum bandwidth guarantee to each egress port's CoS queues; maximum bandwidth monitoring enforces the maximum bandwidth limit of each egress port's CoS queues. Both are implemented through a leaky-bucket mechanism.
A normal port has only 8 standard CoS queues, which are scheduled by the S2 scheduler. A high-speed port may support 24 CoS queues: the first 8 are scheduled by the S2 scheduler and the other 16 by the S1 scheduler, and this two-stage scheduling method guarantees the quality of service of the high-speed port.
The egress queue scheduling supports four CoS queue scheduling algorithms: strict priority (SP), round-robin (RR), weighted round-robin (WRR) and weighted deficit round-robin (WDRR), selected by the user as desired.
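Of the four algorithms, weighted deficit round-robin is the least obvious, so here is a minimal sketch under assumed quantum values (not the patent's implementation): each CoS queue accumulates a quantum proportional to its weight and may send packets while its deficit covers the packet length.

```python
from collections import deque

def wdrr_round(queues, quanta, deficits):
    """One WDRR scheduling round over the CoS queues of a port.

    queues:   list of deques of packet lengths in bytes
    quanta:   per-queue quantum added each round (proportional to weight)
    deficits: per-queue deficit counters, updated in place
    Returns the packet lengths sent, in service order.
    """
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0            # an idle queue keeps no credit
            continue
        deficits[i] += quanta[i]       # earn this round's quantum
        while q and q[0] <= deficits[i]:
            pkt = q.popleft()          # send while the deficit covers the packet
            deficits[i] -= pkt
            sent.append(pkt)
    return sent
```

Unlike plain WRR, counting bytes rather than packets keeps the bandwidth split fair even when queues carry very different packet sizes.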
The ingress control module 3 is configured to control and monitor the buffer usage of the ingress CoS queues. If a threshold is exceeded, a flow-control message is generated; if the peer processes the flow-control message, it stops sending packets, relieving the pressure at the ingress.
The buffer space is divided into three parts: port guaranteed space, shared space, and head space;
the port guaranteed space provides the port with minimum guaranteed available space. Configured by registers. There is only one set of register configurations, all port space being the same. The shared space, when the minimum guaranteed space is insufficient, provides the shared cache space for the ports, including the total shared cache space, and the maximum available shared cache space for each port. All ports may be configured dynamically or may be configured with static thresholds. Head space, when the minimum guaranteed space and shared buffer space are insufficient, provides some additional buffer capacity. The method comprises the steps of including a port head space and a global head space, wherein the priority group head space is used for storing a period from when a flow control frame is sent to the opposite end to stop sending a message, and partial messages sent by the opposite end can be received, and the global head space is used as a shared head space of all ports if the head space is not independently allocated for each port. If this space is used, each port is only allowed to hold one message.
The caching rule is that, when a packet is received, the port guaranteed space is used first, then the shared space, and finally the head space. When a packet is sent, the head space is released first, then the shared space, and finally the port guaranteed space.
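The caching rule above can be sketched as per-port accounting over the three regions, with illustrative register values (not from the patent): admission charges the guaranteed space first, then shared, then head; release refunds in the reverse order.

```python
class PortBufferAccount:
    """CELL accounting for one port across the three buffer regions."""

    def __init__(self, guaranteed, shared_max, headroom):
        self.limits = [guaranteed, shared_max, headroom]
        self.used = [0, 0, 0]   # [guaranteed, shared, head]

    def admit(self, cells):
        """Charge a packet of `cells` CELLs; False means no space (drop)."""
        plan, need = [], cells
        for i in range(3):      # guaranteed -> shared -> head
            take = min(need, self.limits[i] - self.used[i])
            plan.append(take)
            need -= take
        if need > 0:
            return False        # all three regions exhausted
        for i in range(3):
            self.used[i] += plan[i]
        return True

    def release(self, cells):
        """Return `cells` CELLs: head space first, then shared, then guaranteed."""
        for i in (2, 1, 0):
            back = min(cells, self.used[i])
            self.used[i] -= back
            cells -= back
```

Releasing the head space first restores the flow-control margin as quickly as possible, while holding on to the guaranteed space longest preserves each port's minimum reservation.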
Each port has an independent leaky-bucket mechanism that monitors buffer usage and triggers flow control at the ingress port to realize ingress traffic shaping. BUCKET_COUNT is the number of tokens currently in the bucket, initially 0; when a message arrives, its byte size is converted into the corresponding number of tokens, which are added to the bucket. Every T_REFRESH cycles, REFRESH_COUNT tokens are taken out of the bucket (BUCKET_COUNT -= REFRESH_COUNT). The granularity of each token is selected by METER_GRANULARITY. When BUCKET_COUNT reaches the DISCARD_THD watermark, the MMU notifies the port to discard arriving messages.
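A minimal software model of this leaky bucket, with illustrative parameter values: arriving bytes become tokens added to BUCKET_COUNT, REFRESH_COUNT tokens leak out every refresh period, and reaching DISCARD_THD makes arrivals be discarded.

```python
class LeakyBucket:
    def __init__(self, refresh_count, discard_thd, granularity=1):
        self.bucket_count = 0               # tokens currently in the bucket
        self.refresh_count = refresh_count  # tokens leaked per refresh period
        self.discard_thd = discard_thd      # watermark that triggers discard
        self.granularity = granularity      # bytes per token (METER_GRANULARITY)

    def on_packet(self, nbytes):
        """Account an arriving message; return True if it must be discarded."""
        if self.bucket_count >= self.discard_thd:
            return True                     # watermark reached: port discards
        self.bucket_count += -(-nbytes // self.granularity)  # ceiling division
        return False

    def on_refresh(self):
        """Called every T_REFRESH cycles: leak REFRESH_COUNT tokens."""
        self.bucket_count = max(0, self.bucket_count - self.refresh_count)
```

The sustained rate a port may use is thus REFRESH_COUNT tokens per T_REFRESH cycles, while DISCARD_THD bounds the burst it can absorb above that rate.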
The egress control module 4 is used to control the maximum throughput.
The principle of ingress control is to avoid packet loss as far as possible; the principle of egress control is to achieve throughput as high as possible.
Egress control is achieved by setting a threshold for each port. The egress port is associated with CoS queues, each with its own threshold that determines whether a message is admitted to the queue or discarded for that destination port. Like the ingress control, the egress control also has two cache resources: minimum guaranteed space and shared space. When a packet is sent, the shared space is released first, reducing the impact on other ports' use of the cache.
The QoS control module 5 performs flow classification on the received data packet, puts the data packet into different CoS queues according to the classification result, and realizes flow control and congestion processing of ports and queues.
The QoS control module supports both conventional flow control and service-related flow control. Conventional flow control back-pressures the whole port by means of PAUSE frames. Service-related flow control is finer-grained, down to each CoS queue inside the port; flow control between CoS queues is independent, allowing finer per-queue control. For example, high-priority flows can be controlled so as to avoid packet loss as far as possible, while low-priority flows are given a lower threshold and, once it is exceeded, are discarded directly without flow control.
After a packet enters the buffer and a queue is created, the QoS control module updates the relevant internal resource registers. Based on these resource-statistics registers, it determines whether a port enters a flow-control state, head-of-line blocking, a weighted random early detection (WRED) state, or continues forwarding.
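The WRED decision mentioned above can be sketched as follows. This is the standard WRED shape rather than the patent's exact logic, with illustrative thresholds: below a minimum threshold packets are forwarded, above a maximum they are dropped, and in between the drop probability rises linearly with the average queue depth.

```python
import random

def wred_drop(avg_qdepth, min_thd, max_thd, max_p, rng=random.random):
    """Return True if the arriving packet should be dropped."""
    if avg_qdepth < min_thd:
        return False                  # queue is short: always forward
    if avg_qdepth >= max_thd:
        return True                   # tail-drop region: always drop
    # linear ramp of drop probability between the two thresholds
    p = max_p * (avg_qdepth - min_thd) / (max_thd - min_thd)
    return rng() < p
```

Dropping probabilistically before the queue is full desynchronizes TCP senders, which is why WRED is preferred over pure tail drop for congestion management.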
The register module 6 is used for realizing the configuration of the cache management unit. The method mainly comprises the steps of controlling the buffer space, the CoS queue and the threshold value of the flow control and shaping functions in each module of the inlet control, the outlet control and the QoS control.
In use, after power-on the user configures these registers according to the application's requirements.
In the invention, the buffer space is not used entirely as a shared buffer; it is divided into three parts: port guaranteed space, shared space, and head space. The port guaranteed space reserves a minimum buffer for each port that other ports cannot occupy; only after a port has used up its guaranteed space does it draw on the shared buffer, ensuring that every port makes reasonable use of the buffer space. The head space stores the small amount of data that can still arrive after the shared buffer is exhausted and the switch has sent a flow control signal, minimizing packet loss on a best-effort basis. In addition, the cache management unit implements traffic classification, traffic policing, shaping, and congestion management, achieving efficient non-blocking forwarding of data and guaranteeing quality of service.
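The three-part division above implies a fixed admission order: guaranteed space first, then shared space, and finally head space for the packets still in flight after flow control has been asserted. A minimal software sketch, assuming plain dicts for the hardware counters (all names are hypothetical):

```python
def ingress_admit(port, cells, shared, head):
    """Illustrative ingress admission over the three-part buffer division:
    port guaranteed space, then shared space, then head space. `port`,
    `shared` and `head` are hypothetical dicts standing in for hardware
    counters; `cells` is the packet size in CELLs."""
    free_guaranteed = port["guaranteed_size"] - port["guaranteed_used"]
    if cells <= free_guaranteed:
        port["guaranteed_used"] += cells
        return "accepted_guaranteed"
    if cells <= free_guaranteed + shared["free"]:
        port["guaranteed_used"] = port["guaranteed_size"]
        shared["free"] -= cells - free_guaranteed
        return "accepted_shared"
    # Shared buffer exhausted: assert flow control; the head space absorbs
    # the small amount of data that is still arriving.
    port["flow_control"] = True
    if cells <= head["free"]:
        head["free"] -= cells
        return "accepted_head"
    return "dropped"
```

The head space is what turns "flow control asserted" into best-effort lossless behaviour: the PAUSE takes effect only after a propagation delay, and the head space covers the data already on the wire.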
The invention discloses a high-efficiency cache management system for Ethernet switches, which divides the buffer space and reserves a minimum guaranteed space for each switch port to achieve dynamic management of the cache unit. A linked-list structure manages the buffer space and the queues, reducing resource occupation with high reliability. Incoming messages are counted independently at the input and output ports, and strategies such as thresholds, leaky bucket, WRED, and queue classification implement flow control and congestion handling. The egress scheduler adopts a two-stage scheduling strategy with SP/RR/WRR/WDRR queue scheduling algorithms to guarantee the service priority of port messages. Through traffic classification, traffic shaping, congestion management, and related techniques, messages are forwarded efficiently under both normal and congested conditions.
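Of the SP/RR/WRR/WDRR algorithms named above, weighted deficit round robin (WDRR) is the most involved; a minimal sketch follows. The quantum value and the queue representation (deques of packet lengths in bytes) are assumptions for the sketch, not values from the patent.

```python
from collections import deque

def wdrr_schedule(queues, weights, quantum=512):
    """Illustrative weighted deficit round robin (WDRR). Each queue holds
    packet lengths in bytes; each round, a queue's deficit counter grows by
    quantum * weight, and it may send packets as long as the deficit covers
    them. Returns the transmit order as (queue_index, packet_length) pairs."""
    deficits = [0] * len(queues)
    order = []
    while any(queues):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0  # empty queues do not bank credit
                continue
            deficits[i] += quantum * weights[i]
            while q and q[0] <= deficits[i]:
                deficits[i] -= q[0]
                order.append((i, q.popleft()))
    return order
```

Unlike plain WRR, WDRR divides bandwidth by bytes rather than by packet count, so queues carrying large frames cannot starve queues carrying small ones.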
As shown in fig. 1, inside the broken line is the hardware implementation architecture of the high-efficiency cache management unit provided by the invention; the other modules of the gigabit Ethernet switch connected outside the broken line include a receiving arbitration module, a sending arbitration and cache module, a MAC module, and a PHY module.
The numbered blocks of fig. 1 are described below.
The buffer management module 1, of which there is 1, completes the allocation and release of buffer addresses for data packets received from and transmitted to the MAC, including recording the forwarding ports of multicast and broadcast data.
The queue management module 2, of which there is 1, completes the enqueue and dequeue operations of data packets.
The ingress control module 3, of which there is 1, implements allocation and flow control of the ingress cache.
The egress control module 4, of which there is 1, counts egress cache usage and implements egress flow control.
The QoS control module 5, of which there is 1, implements traffic classification, flow control, and congestion management.
The register module 6, of which there is 1, implements the configuration of each functional module.
As shown in fig. 2, inside the dashed line is the architecture of the queue management module, and outside the dashed line are the other modules connected to it.
The cache request module 7, of which there are 30, records the CELL address allocated to a data packet by the buffer management module and the packet information related to the CELL, such as SOF, EOF, and BE.
The descriptor write request control module 8, of which there are 30, generates egress write-queue information including write requests, addresses, and packet-related information.
The descriptor read request control module 9, of which there are 30, implements the egress queue read request function.
The descriptor management module 10, of which there are 30, implements enqueue and dequeue management for each egress.
The descriptor cache module 11, of which there is 1, stores the descriptor queue linked list.
The CELL write management module 12, of which there are 30, records the write state of CELL-related information.
The packet write management module 13, of which there are 30, records the write state of packet-related information.
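The descriptor cache module above keeps the per-port descriptor queues as linked lists inside one shared descriptor memory. A minimal sketch of that structure, assuming a free-entry pool plus per-port head/tail pointers (the entry layout and all names are hypothetical):

```python
class DescriptorLinkedList:
    """Illustrative model of per-port descriptor queues held as linked
    lists in a single shared descriptor memory: one `next` link per entry,
    a free-entry pool, and head/tail pointers per port."""

    def __init__(self, num_entries, num_ports):
        self.next = [None] * num_entries       # link field of each entry
        self.payload = [None] * num_entries    # descriptor stored per entry
        self.free = list(range(num_entries))   # pool of unused entries
        self.head = [None] * num_ports
        self.tail = [None] * num_ports

    def enqueue(self, port, descriptor):
        """Append a descriptor to the port's transmit queue."""
        idx = self.free.pop()
        self.payload[idx] = descriptor
        self.next[idx] = None
        if self.head[port] is None:
            self.head[port] = idx          # queue was empty
        else:
            self.next[self.tail[port]] = idx
        self.tail[port] = idx

    def dequeue(self, port):
        """Remove and return the oldest descriptor, or None if empty."""
        idx = self.head[port]
        if idx is None:
            return None
        self.head[port] = self.next[idx]
        if self.head[port] is None:
            self.tail[port] = None
        self.free.append(idx)              # recycle the entry
        return self.payload[idx]
```

Sharing one descriptor memory across all ports via linked lists is what lets the design avoid dedicating a worst-case-sized queue to every port, which is the resource saving the linked-list structure is credited with.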
The invention can be used in Ethernet switches supporting a store-and-forward architecture, and is especially suitable for high-performance layer-2 and layer-3 Ethernet switches.
With the buffer management provided by the invention, messages can be forwarded efficiently under both normal and congested conditions.
In this scheme, the logic functions of each module of the invention are described in VHDL, integrated with the MAC and PHY modules of a gigabit Ethernet switch, and verified at the system level on an FPGA. The verification results show that the invention achieves the designed functions. For data exchange under both normal and congested transmission conditions, line-speed forwarding of messages is achieved with a low packet loss rate.

Claims (6)

1. An Ethernet switch-oriented high-efficiency cache management system is characterized by comprising a cache management module (1), a queue management module (2), an inlet control module (3), an outlet control module (4), a QoS control module (5) and a register module (6);
the buffer management module (1) is used for completing the allocation and release of the buffer address of the data packet received and transmitted from the MAC, transmitting the data packet to be allocated to the queue management module (2), and discarding the data packet to be discarded to release the buffer space;
the queue management module (2) is used for enqueuing and dequeuing the data packets distributed by the cache management module (1);
the entrance control module (3) is used for controlling and monitoring the buffer use condition of the CoS queue of the entrance port;
the outlet control module (4) is used for counting the use of the outlet cache to realize outlet flow control;
the QoS control module (5) is used for carrying out flow classification on the received data packets, putting the data packets into different CoS queues according to classification results, and realizing flow control and congestion treatment of ports and the queues;
the register module (6) is used for realizing the configuration of the cache management unit; the data packets to be discarded include packets for which no forwarding port can be found, jumbo frames exceeding the specified length that the port cannot receive, and packets given a discard flag by other control logic; the ingress control module (3) divides the cache space into a port guaranteed space, a shared space and a head space, wherein the port guaranteed space provides a minimum guaranteed available space for the port; the shared space provides a shared cache space for the port when the minimum guaranteed space is insufficient; the head space provides some extra buffering capacity when the minimum guaranteed space and the shared cache space are insufficient; the data cache space of the buffer management module (1) stores the data packets to be forwarded; when a receiving port receives a data packet, the buffer management module (1) allocates a corresponding space for the data packet and at the same time generates descriptor information and sends it to the queue management module; if the data packet is to be discarded, the data packet is discarded and the allocated storage space is released; the cache of the buffer management module (1) takes the CELL as the minimum unit, and each CELL is 128 bytes in size.
2. An ethernet switch oriented cache management system according to claim 1, wherein the exit of the queue management module (2) forms an output array using a two-layer linked list structure, the first layer being a transmit queue linked list and the second layer being a cache tag linked list.
3. An ethernet switch oriented cache management system according to claim 1, wherein upon receipt of a stored data packet, the descriptor management module writes the descriptor of the data packet to the transmit descriptor queue of a port; when the switch controller reads the descriptor of the data packet from the transmit port descriptor queue, the data of the packet is read from the data cache according to the contents of each field in the descriptor and transmitted from the corresponding port.
4. An ethernet switch oriented cache management system according to claim 1, wherein the queue management module (2) comprises a cache request module (7), a descriptor write request control module (8), a descriptor read request control module (9), a descriptor management module (10), a descriptor cache module (11), a CELL write management module (12) and a packet write management module (13);
the buffer request module (7) is used for recording the address allocated to the data packet by the buffer management module;
the descriptor management module (10) is used for writing the information recorded by the cache request module (7) into a corresponding descriptor queue;
the descriptor write request control module (8) requests a corresponding sending port to perform descriptor write queue operation according to the relevant result of forwarding control;
the descriptor read request control module (9) is used for judging whether the sending transmission port can forward data or not and giving a descriptor queue read request;
the descriptor cache module (11) is used for storing descriptor linked list information;
the CELL writing management module (12) is used for recording the CELL writing operation state of the current request enqueue and preparing for the next CELL writing of the port;
the data packet write management module (13) is used for recording the data packet write operation state of the current request enqueue and preparing for the next data packet write of the port.
5. An ethernet switch oriented cache management system according to claim 1, wherein the cache of the ingress CoS queue generates a flow control message if its threshold is exceeded; when the opposite end processes the flow control message, it stops sending data packets.
6. An ethernet switch oriented cache management system according to claim 1, wherein the register module (6) is configured to implement the configuration of the cache management unit, including thresholds for cache space, CoS queues and flow control and shaping functions in each of the ingress control, egress control and QoS control modules.
CN202111275730.9A 2021-10-29 2021-10-29 High-efficiency cache management system for Ethernet switch Active CN114531488B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111275730.9A CN114531488B (en) 2021-10-29 2021-10-29 High-efficiency cache management system for Ethernet switch


Publications (2)

Publication Number Publication Date
CN114531488A CN114531488A (en) 2022-05-24
CN114531488B true CN114531488B (en) 2024-01-26

Family

ID=81619272

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111275730.9A Active CN114531488B (en) 2021-10-29 2021-10-29 High-efficiency cache management system for Ethernet switch

Country Status (1)

Country Link
CN (1) CN114531488B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002052B (en) * 2022-07-18 2022-10-25 井芯微电子技术(天津)有限公司 Layered cache controller, control method and control equipment
CN115242729B (en) * 2022-09-22 2022-11-25 沐曦集成电路(上海)有限公司 Cache query system based on multiple priorities

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106789734A (en) * 2016-12-21 2017-05-31 中国电子科技集团公司第三十二研究所 Control system and method for macro frame in exchange control circuit
CN106789729A (en) * 2016-12-13 2017-05-31 华为技术有限公司 Buffer memory management method and device in a kind of network equipment
CN107220200A (en) * 2017-06-15 2017-09-29 西安微电子技术研究所 Time triggered Ethernet data management system and method based on dynamic priority
US10298496B1 (en) * 2017-09-26 2019-05-21 Amazon Technologies, Inc. Packet processing cache
CN113110943A (en) * 2021-03-31 2021-07-13 中国人民解放军战略支援部队信息工程大学 Software defined switching structure and data switching method based on the same


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Storage Efficient Edge Caching with Time Domain Buffer Sharing at Base Stations; Zhanyuan Xie et al.; ICC 2019 - 2019 IEEE International Conference on Communications (ICC); full text *
Buffer queue technology for tiered cloud storage services; Liu Jingtao; Wang Zhefeng; Lian Xinke; Zhang Xinjian; Command Information System and Technology (No. 05); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant