US20050235129A1 - Switch memory management using a linked list structure - Google Patents
- Publication number
- US20050235129A1
- Authority
- US
- United States
- Prior art keywords
- pointer
- free
- memory
- memory location
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/10—Packet switching elements characterised by the switching fabric construction
- H04L49/103—Packet switching elements characterised by the switching fabric construction using a shared central buffer; using a shared memory
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99956—File allocation
- Y10S707/99957—Garbage collection
Definitions
- the invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, Ethernet, Fast Ethernet, and Gigabit Ethernet environments, generally known as LANs.
- the invention relates to a new switching architecture geared to power efficient and cost sensitive markets, and which can be implemented on a semiconductor substrate such as a silicon chip.
- Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, "switches", which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media that have been used for computer networks. Switches, as they relate to computer networking and to Ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network.
- Basic Ethernet wirespeed is up to 10 megabits per second.
- Fast Ethernet is up to 100 megabits per second.
- Gigabit Ethernet is capable of transmitting data over a network at a rate of up to 1,000 megabits per second.
- Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub.
- Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks.
- Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge.
- Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address.
- the spanning tree algorithm defines a protocol for preventing data loops.
- Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and by offloading the host CPU so that instruction decisions do not delay packet forwarding.
- the invention is directed to a scheme for reducing clock speed and power consumption in a network chip.
- the invention is a memory management method.
- the method has the steps of assigning pointers to free memory locations and linking the pointers to one another creating a linked list of free memory locations having a beginning and an end.
- a free head pointer is assigned to a memory location indicating the beginning of free memory locations and a free tail pointer is assigned to a memory location indicating the end of free memory locations.
- An initial data pointer is assigned to the memory location assigned to the free head pointer and an end of data pointer is assigned to a last data memory location.
- the free head pointer is assigned to a next memory location linked to the last data memory location assigned to the end of data pointer.
- the next memory location indicates the beginning of free memory locations.
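The allocation steps above can be sketched in a few lines of Python (a minimal illustration; the segment count and names such as `next_seg` and `allocate` are assumptions, not taken from the patent):

```python
# Free segments form a linked list. free_head marks the beginning of free
# memory and free_tail the end. A packet consumes segments starting at the
# free head; the free head pointer then moves to the segment linked after
# the packet's last (end-of-data) segment.
NUM_SEGMENTS = 8

# next_seg[i] is the segment linked after segment i (None at the list end).
next_seg = [i + 1 for i in range(NUM_SEGMENTS - 1)] + [None]
free_head = 0                 # beginning of free memory locations
free_tail = NUM_SEGMENTS - 1  # end of free memory locations

def allocate(n):
    """Assign n segments from the head of the free list to one packet.
    Returns (initial_data_pointer, end_of_data_pointer)."""
    global free_head
    start = free_head           # initial data pointer = old free head
    end = start
    for _ in range(n - 1):
        end = next_seg[end]     # follow links to the packet's last segment
    free_head = next_seg[end]   # free head -> next location after end of data
    return start, end

pkt_start, pkt_end = allocate(3)
```

After `allocate(3)`, the free head pointer has advanced to the segment linked after the packet's end-of-data segment, as the method describes.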
- the invention is a memory management method.
- the method has the steps of assigning pointers to free memory locations and linking the pointers to one another creating a linked list of free memory locations having a beginning and an end.
- a free head pointer is assigned to a memory location indicating the beginning of free memory locations and a free tail pointer is assigned to a memory location indicating the end of free memory locations.
- the memory location assigned to the free tail pointer is linked to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory.
- the free tail pointer is assigned to the last data memory location assigned to the end of data pointer.
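The freeing steps can be sketched the same way (again a hedged illustration with assumed names and segment numbers; the patent gives no code):

```python
# Returning a packet's segments to free memory: the segment at the free tail
# is linked to the packet's initial segment, and the free tail pointer then
# moves to the packet's last (end-of-data) segment.
next_seg = {0: 2, 2: 3, 3: None,        # packet occupies 0 -> 2 -> 3
            1: None,                    # another packet occupies 1
            4: 5, 5: 6, 6: 7, 7: None}  # segments 4..7 are free
free_head, free_tail = 4, 7
pkt_initial, pkt_end = 0, 3

def free_packet(initial_ptr, end_ptr):
    global free_tail
    next_seg[free_tail] = initial_ptr  # old free tail links to packet's head
    free_tail = end_ptr                # packet's last segment is the new tail
    next_seg[free_tail] = None

free_packet(pkt_initial, pkt_end)
```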
- Another embodiment of the invention is a memory management system.
- the system has a pointer assignor that assigns pointers to free memory locations and a linker that links said pointers to one another thereby creating a linked list of free memory locations having a beginning and an end.
- a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of free memory locations and a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of free memory locations.
- An initial data pointer assignor assigns an initial data pointer to the memory location assigned to the free head pointer and an end of data pointer assignor assigns an end of data pointer to a last data memory location.
- the free head pointer assignor assigns the free head pointer to a next memory location linked to the last data memory location assigned to said end of data pointer. The next memory location indicates the beginning of free memory locations.
- the invention in another embodiment is a memory management system.
- the system has a pointer assignor that assigns pointers to free memory locations, and a linker that links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end.
- a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of free memory locations, and a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of free memory locations.
- the linker links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory, and the free tail pointer assignor assigns the free tail pointer to the last data memory location assigned to said end of data pointer.
- a linker means links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end, and a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations.
- a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations, and an initial data pointer assignor means assigns an initial data pointer to the memory location assigned to the free head pointer.
- An end of data pointer assignor means assigns an end of data pointer to a last data memory location.
- the free head pointer assignor means assigns the free head pointer to a next memory location linked to the last data memory location assigned to the end of data pointer, wherein the next memory location indicates the beginning of free memory locations.
- the invention is a memory management system having a pointer assignor means for assigning pointers to free memory locations and a linker means for linking the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end.
- a free head pointer assignor means assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations
- a free tail pointer assignor means assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations.
- the linker means links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory.
- the free tail pointer assignor means assigns the free tail pointer to the last data memory location assigned to the end of data pointer.
- the invention is in another embodiment a memory management device having a pointer assignor that assigns pointers to free memory locations of a memory.
- the pointer assignor is in communication with the memory.
- a linker links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end.
- the linker is also in communication with the memory.
- a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations.
- the free head pointer assignor is in communication with the memory.
- a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations.
- the free tail pointer assignor is in communication with the memory.
- An initial data packet pointer assignor assigns an initial data packet pointer to the memory location assigned to the free head pointer.
- the initial data packet pointer assignor is in communication with the memory.
- An end of data packet pointer assignor assigns an end of data packet pointer to a last data memory location in the memory, and the end of data packet pointer assignor is in communication with the memory.
- the free head pointer assignor assigns the free head pointer to a next memory location linked to the last data packet memory location assigned to the end of data packet pointer, wherein the next memory location indicates said beginning of free memory locations.
- the invention is a memory management device having a pointer assignor that assigns pointers to free memory locations of a memory.
- the pointer assignor is in communication with the memory.
- a linker links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end.
- the linker is in communication with the memory.
- a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations.
- the free head pointer assignor is also in communication with the memory.
- a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations.
- the free tail pointer assignor is in communication with the memory.
- the linker links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory, and the free tail pointer assignor assigns the free tail pointer to the last data memory location assigned to the end of data pointer.
- FIG. 1 is a general block diagram of elements of the present invention.
- FIG. 2 illustrates the data flow on the CPS channel of a network switch according to the present invention.
- FIG. 3A illustrates a linked list structure of Packet Buffer Memory.
- FIG. 3B illustrates a linked list structure of Packet Buffer Memory with two data packets.
- FIG. 3C illustrates a linked list structure of Packet Buffer Memory after the memory occupied by one data packet is freed.
- FIG. 3D illustrates a linked list structure of Packet Buffer Memory after the memory occupied by another data packet is freed.
- FIG. 4A is a flow diagram of the steps to assign data to memory locations in a linked list of free memory.
- FIG. 4B is a flow diagram of the steps to add memory locations designated for data to a linked list of free memory.
- FIG. 5 is an illustration of a system for managing a linked list of free memory.
- FIG. 1 is an example of a block diagram of a switch 100 of the present invention.
- switch 100 has 12 ports, 102(1)-102(12), which can be fully integrated IEEE compliant ports.
- Each of these 12 ports 102(1)-102(12) can be 10BASE-T/100BASE-TX/FX ports, each having a physical element (PHY), which can be compliant with IEEE standards.
- Each of the ports 102(1)-102(12), in one example of the invention, has a port speed that can be forced to a particular configuration or set so that auto-negotiation will determine the optimal speed for each port independently.
- Each PHY of each of the ports can be connected to a twisted-pair interface using TXOP/N and RXIP/N as transmit and receive protocols, or a fiber interface using FXOP/N and FXIP/N as transmit and receive protocols.
- Each of the ports 102(1)-102(12) has a Media Access Controller (MAC) connected to each corresponding PHY.
- each MAC is a fully compliant IEEE 802.3 MAC.
- Each MAC can operate at 10 Mbps or 100 Mbps and supports both a full-duplex mode, which allows for data transmission and reception simultaneously, and a half duplex mode, which allows data to be either transmitted or received, but not both at the same time.
- Flow control is provided by each of the MACs.
- When flow control is implemented, the flow of incoming data packets is managed or controlled to reduce the chances of system resources being exhausted.
- Although the present embodiment can be a non-blocking, wire speed switch, the memory space available may limit data transmission speeds. For example, during periods of packet flooding (i.e., packet broadcast storms), the available memory can be exhausted rather quickly.
- the present invention can implement two different types of flow control. In full-duplex mode, the present invention can, for example, implement the IEEE 802.3x flow control. In half-duplex mode, the present invention can implement a collision backpressure scheme.
- each port has a latency block connected to the MAC.
- Each of the latency blocks has transmit and receive FIFOs which provide an interface to main packet memory. In this example, if a packet does not successfully transmit from one port to another port within a preset time, the packet will be dropped from the transmit queue.
- a gigabit interface 104 can be provided on switch 100 .
- Gigabit interface 104 can support a Gigabit Media Independent Interface (GMII) and a Ten Bit Interface (TBI).
- the GMII can be fully compliant to IEEE 802.3ab.
- the GMII can pass data at a rate of 8 bits every 8 ns resulting in a throughput of 2 Gbps including both transmit and receive data.
- gigabit interface 104 can be configured to be a TBI, which is compatible with many industry standard fiber drivers. Since in some embodiments of the invention the MDIO/MDC interfaces (optical interfaces) are not supported, the gigabit PHY (physical layer) is set into the proper mode by the system designer.
- Gigabit interface 104 like ports 102 ( 1 )- 102 ( 12 ), has a PHY, a Gigabit Media Access Controller (GMAC) and a latency block.
- the GMAC can be a fully compliant IEEE 802.3z MAC operating at 1 Gbps full-duplex only and can connect to a fully compliant GMII or TBI interface through the PHY.
- The GMAC provides full-duplex flow control mechanisms and a low cost stacking solution for either twisted pair or TBI mode, using in-band signaling for management. This low cost stacking solution allows for a ring structure to connect each switch utilizing only one gigabit port.
- a CPU interface 106 is provided on switch 100 .
- CPU interface 106 is an asynchronous 8 or 16 bit I/O device interface. Through this interface a CPU can read internal registers, receive packets, transmit packets and allow for interrupts.
- CPU interface 106 also allows for a Spanning Tree Protocol to be implemented.
- a chip select pin is available, allowing a single CPU to control two switches. In this example, an interrupt pin, which is driven low (i.e., driven to the active state) and requires a pull-up resistor, will allow multiple switches to be controlled by a single CPU.
- a switching fabric 108 is also located on switch 100 in one example of the present invention.
- Switching fabric 108 can allow for full wire speed operation of all ports.
- a hybrid or virtual shared memory approach can also be implemented to minimize bandwidth and memory requirements. This architecture allows for efficient and low latency transfer of packets through the switch and also supports address learning and aging features, VLAN, port trunking and port mirroring.
- Memory interfaces 110, 112 and 114 can be located on switch 100 and allow for the separation of data and control information.
- Packet buffer memory interface (PBM) 110 handles packet data storage, while the transmit queue memory interface (TXM) 112 keeps a list of packets to be transmitted, and the address table/control memory interface (ATM) 114 handles the address table and header information.
- Each of these interfaces can use memory such as SSRAM that can be configured in various total amounts and chip sizes.
- PBM 110 is located on switch 100 and can have an external packet buffer memory (not shown) that is used to store the packet during switching operations.
- packet buffer memory is made up of multiple 256-byte buffers. Therefore, one packet may span several buffers within memory. This structure allows for efficient memory usage and minimizes bandwidth overhead.
- the packet buffer memory can be configurable so that up to 4 Mbytes of memory per chip can be used for a total of 8 Mbytes per 24+2 ports. In this example, efficient memory usage is maintained by allocating 256-byte blocks, which allows storage for up to 32K packets.
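The buffer arithmetic above can be checked with a short sketch (Python used only for illustration; `buffers_needed` is an assumed helper name, not from the patent):

```python
# With 256-byte blocks, a packet spans ceil(length / 256) buffers, and
# 8 Mbytes of packet buffer memory holds 32K such blocks.
BLOCK_BYTES = 256

def buffers_needed(packet_bytes):
    # ceiling division without floating point
    return -(-packet_bytes // BLOCK_BYTES)

total_blocks = (8 * 1024 * 1024) // BLOCK_BYTES  # 32K blocks
```

A minimum-size 64-byte Ethernet frame fits in one buffer, while a maximum-size 1518-byte frame spans six linked buffers.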
- PBM 110 can be 64 bits wide and can use either a 64 bit wide memory or two 32 bit wide memories and can run at 100 MHz.
- TXM 112 is located on switch 100 and can have an external transmit queue memory (not shown). TXM 112, in this example, maintains 4 priority queues per port and allows for 64K packets per chip and up to 128K packets per system. TXM 112 can run at a speed of up to 100 MHz.
- ATM 114 can be located on switch 100 and can have an external address table/control memory (not shown) used to store the address table and header information corresponding to each 256 byte section of PBM 110 .
- Address table/control memory allows up to 16K unique unicast addresses. The remaining available memory is used for control information.
- ATM 114, in this example, runs at up to 133 MHz.
- Switch 100, in one example of the invention, has a Flow Control Manager 116 that manages the flow of packet data. As each port sends more and more data to the switch, Flow Control Manager 116 can monitor the amount of memory being used by each port 102(1)-102(12) of switch 100 and by the switch as a whole. In this example, if one of the ports 102(1)-102(12) or the switch as a whole is using more memory than a threshold predetermined by a register setting, predefined by the manufacturer or by a user, Flow Control Manager 116 will issue commands over the ATM Bus requesting the port or switch to slow down, and may eventually drop packets if necessary.
- In addition to Flow Control Manager 116, switch 100 also has a Start Point Manager (SPM) 118 connected to Switching Fabric 108, a Forwarding Manager (FM) 120 connected to Switching Fabric 108, and an Address Manager (AM) 122 connected to Switching Fabric 108.
- Start Point Manager (SPM) 118, through Switching Fabric 108, in one example of the present invention, keeps track of which blocks of memory in PBM 110 are being used and which blocks of memory are free.
- Forwarding Manager 120 can, for example, forward packet data through Switching Fabric 108 to appropriate ports for transmission.
- AM 122 can, through Switching Fabric 108 , manage the address table including learning source addresses, assigning headers to packets and keeping track of these addresses.
- AM 122 uses aging to remove addresses from the address table that have not been used for a specified time period or after a sequence of events.
- An expansion port 124 can also be provided on switch 100 to connect two switches together. This will allow for full wire speed operation on twenty-five 100 M ports (includes one CPU port) and two gigabit ports.
- the expansion port 124 in this example, allows for 4.6 Gbps of data to be transmitted between switches.
- An LED controller 126 can also be provided on switch 100 .
- LED controller 126 activates appropriate LEDs to give a user necessary status information.
- Each of the ports 102(1)-102(12), in one example of the invention, has 4 separate LEDs, which provide per-port status information.
- the LEDs are fully programmable and are made up of port LEDs and other LEDs.
- Each of the four port LEDs can have a default state. An example of the default operation of each of the port LEDs is shown below.
- each of the port LEDs can be programmed through registers. These registers can be set up, in one example of the invention, by a CPU. By having programmable registers that control LEDs, full customization of the system architecture can be realized including the programmability of the blink rate.
- Each of the LEDs can have a table, as shown below, associated with the LED, where register bits RAx, RBx and RCx can be set to provide a wide range of information.
- register bits RAx, RBx and RCx can be set to determine when LED ON, LED BLINK and LED OFF are activated or deactivated.
- Registers 128 are located on switch 100 in this example of the present invention. Registers 128 are full registers that allow for configuration, status and Remote Monitoring (RMON) management. In this example, Registers 128 are arranged into groups and offsets. There are 32 address groups each of which can contain up to 64 registers.
- FIG. 2 is an illustration of one embodiment of the invention having a PBM Bus, an ATM Bus, and a TXM Bus for communications with other portions of the switch.
- PBM 110 is connected to the PBM Bus and an external PBM Memory.
- TXM 112 is connected to the TXM Bus and an external TXM Memory.
- ATM 114 is connected to the ATM Bus and an external ATM Memory.
- Each of the transmit (TX) and receive (RX) portions of ports 102(1)-102(12) are connected to the PBM Bus, ATM Bus and TXM Bus for communications.
- FM 120 is connected to each of the ports 102(1)-102(12) directly and is also connected to the ATM Bus for communications with other portions of the switch.
- SPM 118 and AM 122 are also connected to the ATM Bus for communications with other portions of the switch.
- An explanation of the operation of switch 100 for the transmission of a unicast packet (i.e., a packet destined for a single port for output), in one example of the invention, is made with reference to FIG. 2 as follows.
- Switch 100 is initialized following the release of a hardware reset pin. A series of initialization steps will occur including the initialization of external buffer memory and the address table. All ports on the switch will then be disabled and the CPU will enable packet traffic by setting an enable register. As links become available on the ports (ports 102 ( 1 )- 102 ( 12 ) and gigabit port 104 ), an SPT protocol will confirm these ports and the ports will become activated. After the initialization process is concluded normal operation of Switch 100 can begin.
- a PORT_ACTIVE command is issued by the CPU. This indicates that the port is ready to transmit and receive data packets. If for some reason a port goes down or becomes disabled, a PORT_INACTIVE command is issued by the CPU.
- a packet from an external source on port 102(1) is received at the receive (RX) PHY of port 102(1).
- the RX MAC of port 102(1) will not start processing the packet until a Start of Frame Delimiter (SFD) for the packet is detected.
- the RX MAC will place the packet into a receive (RX) FIFO of the latency block of port 102(1).
- port 102(1) will request an empty receive buffer from the SPM.
- the RX FIFO latency block of port 102(1) sends packets received in the RX FIFO to the external PBM Memory through the PBM Bus and PBM 110 until the end of packet is reached.
- the PBM Memory in this example, is made up of 256 byte buffers. Therefore, one packet may span several buffers within the packet buffer memory if the packet size is greater than 256 bytes. Connections between packet buffers can be maintained through a linked list system in one example of the present invention.
- a linked list system allows for efficient memory usage and minimized bandwidth overhead and will be explained in further detail with relation to FIG. 3A - FIG. 3D .
- the port will also send the source address to Address Manager (AM) 122 and request a filtering table from AM 122 .
- If the packet is "good", as is determined through normal, standard procedures known to those of ordinary skill in the art, such as valid length and IEEE standard packet checking such as a Cyclic Redundancy Check, the port writes the header information to the ATM memory through the ATM Bus and ATM 114.
- AM 122 sends a RECEP_COMPL command over the ATM Bus signifying that packet reception is complete.
- Other information is also sent along with the RECEP_COMPL command, such as the start address and the filtering table, which indicates which ports the packet is to be sent out on. For example, a filtering table having a string such as "011111111111" would send the packet to all ports except port 1 and would have a count of 11. The count is simply the number of ports to which the packet is to be sent, as indicated by the number of "1"s.
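The count in the example above is just the number of "1" bits in the filtering table string; a one-line sketch (the function name is illustrative):

```python
# A "1" in position i of the filtering table means the packet goes out on
# port i; the count is the number of destination ports.
def port_count(filter_table: str) -> int:
    return filter_table.count("1")

count = port_count("011111111111")  # all ports except port 1
```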
- Forwarding Manager (FM) 120 constantly monitors the ATM Bus to determine if a RECEP_COMPL command has been issued. Once FM 120 has determined that a RECEP_COMPL command has been issued, FM 120 will use the filtering table to send packets to the appropriate ports. It is noted that a packet will not be forwarded if one of the following conditions is met:
- the RECEP_COMPL command includes information such as a filter table, a start pointer, priority information and other miscellaneous information.
- FM 120 will read the filter table to determine if the packet is to be transmitted from one of its ports. If it is determined that the packet is to be transmitted from one of its ports, FM 120 will send the RECEP_COMPL command information directly to the port. In this case, the RECEP_COMPL command information is sent to the TX FIFO of port 102(12).
- the RECEP_COMPL command information is transferred to TXM Memory through the TXM Bus and TXM 112 .
- the TXM memory contains a queue of packets to be transmitted.
- TXM Memory is allocated on a per-port basis, so that if there are ten ports there are ten queues within the TXM Memory, allocated one to each port. As each port's transmitter becomes idle, the port will read the next RECEP_COMPL command information stored in the TXM Memory.
- the TX FIFO of port 102(12) will receive, as part of the RECEP_COMPL command information, a start pointer which will point to a header in ATM memory across the ATM Bus, which in turn points to the location of a packet in the PBM Memory over the PBM Bus.
- the port will at this point request to load the packet into the transmit (TX) FIFO of port 102(12) and send it out through the MAC and PHY of port 102(12).
- If the port is in half-duplex mode, it is possible that a collision could occur and force the packet transmission to start over. If this occurs, the port simply re-requests the bus master, reloads the packet, and starts over again. If, however, the number of consecutive collisions becomes excessive, the packet will be dropped from the transmission queue.
- the port will signal FM 120 that it is done with the current buffer. FM 120 will then decrement a counter which indicates how many more ports must transmit the packet. For example, if a packet is destined for eleven ports for output, the counter, in this example, is set to 11. Each time the packet is successfully transmitted, FM 120 decrements the counter by one. When the counter reaches zero, this indicates that all designated ports have successfully transmitted the packet. FM 120 will then issue a FREE command over the ATM Bus, indicating that the memory occupied by the packet in the PBM Memory is no longer needed and can now be freed for other use.
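The counter behavior described above can be sketched as follows (a simplified single-packet model with assumed names; the real forwarding manager tracks many packets concurrently):

```python
# The counter starts at the number of destination ports; a FREE command is
# issued only when the last port reports successful transmission.
def transmit_to_all(num_ports):
    counter = num_ports
    commands = []
    for _ in range(num_ports):
        counter -= 1                 # one port finished transmitting
        if counter == 0:
            commands.append("FREE")  # packet memory can now be reclaimed
    return commands

cmds = transmit_to_all(11)
```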
- Multicast and broadcast packets are handled exactly like unicast packets with the exception that their filter tables will indicate that all or most ports should transmit the packet. This will force the forwarding managers to transmit the packet out on all or most of their ports.
- FIG. 3A is an illustration of a PBM Memory structure in one example of the invention.
- PBM Memory Structure 300 is a linked list of 256-byte segments 302, 304, 306, 308, 310, 312, 314 and 316.
- segment 302 is the free_head indicating the beginning of the free memory linked list and segment 316 is the free_tail indicating the last segment of free memory.
- Packet 1 occupies segments 302, 306 and 308, and packet 2 occupies segment 304.
- Segments 310, 312, 314 and 316 are free memory.
- Segment 310 is the free_head indicating the beginning of free memory and segment 316 is the free_tail indicating the end of free memory.
- FIG. 3C illustrates the PBM Memory after the memory occupied by packet 1 has been freed. The free memory linked list now runs with segment 310 linking to segment 312, segment 312 linking to segment 314, segment 314 linking to segment 316, segment 316 linking to segment 302, segment 302 linking to segment 306, and segment 306 linking to segment 308, where segment 308 is the free_tail.
- FIG. 3D in this example simply illustrates the PBM Memory after packet 2 has been transmitted successfully and the Forwarding Manager has issued a FREE command over the ATM Bus.
- the SPM will detect the FREE command and then add the memory space occupied by packet 2 in the PBM Memory to the free memory linked list.
- segment 308 is linked to the memory occupied by packet 2 , segment 304 , and segment 304 is identified as the free_tail.
- FIG. 4A is an illustration of the method steps taken in one embodiment of the invention. The steps are described in relation to FIGS. 3A-3D .
- In step 400, free memory locations 302, 304, 306, 308, 310, 312, 314 and 316 are linked to one another, forming a linked list of free memory.
- memory location 302 has a pointer linking memory location 302 to memory location 304 .
- memory location 304 has a pointer which links memory location 304 to memory location 306 .
- the memory locations in this embodiment of the invention are initially sequentially linked to one another where memory location 302 is linked to memory location 304 , memory location 304 is linked to memory location 306 , memory location 306 is linked to memory location 308 , etc.
- a free_head pointer is assigned to an initial memory location of the linked list of free memory. As can be seen in the example illustrated in FIG. 3A , the free_head pointer is assigned to memory location 302 .
- a free tail pointer is assigned to a last memory location of the linked list.
- the free tail is assigned to memory location 316 , which in this case is the last memory location of the linked list of free memory.
- An initial data pointer is assigned to the memory location assigned to the free head pointer. For example, referring back to FIG. 3A, if a first packet of data were to be saved, the initial data pointer would be assigned to memory location 302, where the free head pointer was assigned. In step 408, the next memory location linked to the memory location assigned to the initial data pointer would be assigned to store more data. For example, if the data required three free memory locations, memory locations 304 and 306 would also be assigned to data packet 1. At memory location 306, the last data location, an end of data pointer would be assigned. The free head pointer would then be assigned to memory location 308, indicating the beginning of free memory, as described in step 414.
- a free head pointer initially points to memory location 302 and packet 2 occupies memory location 304 .
- the initial data pointer is assigned to memory location 302 .
- packet 1 needs three memory locations and in this case, memory location 306 was linked to memory location 302 and memory location 308 was linked to memory location 306 .
- the data pointers for packet 1 are assigned to memory locations 302 , 306 and 308 .
- the end of data pointer is assigned to memory location 308 .
- In step 414, the free head pointer is assigned to the next memory location linked to the last data memory location 308, which is, in this case, memory location 310. Therefore, the free head pointer is assigned to memory location 310 and the free tail pointer remains at memory location 316.
- FIG. 4B illustrates the method steps in one embodiment of the invention for freeing memory taken up by data packets and adding this memory to the linked list of free memory.
- In step 416, the memory location assigned to the free tail pointer is linked to the memory location assigned to the initial data pointer.
- the free tail pointer was assigned to memory location 316 .
- the memory location assigned to the free tail pointer 316 is linked to the memory location assigned to the initial data pointer, memory location 302 .
- the free tail pointer is assigned to the memory location assigned to the end of data pointer.
- the end of data pointer for packet 1 was memory location 308 .
- the free tail pointer is assigned to the memory location assigned to the end of data pointer, memory location 308 .
- the free head pointer in this example remains assigned to memory location 310 .
- packet 2 is indicated as being free memory.
- the memory location 308 assigned to the free tail is linked to memory location 304, assigned to the initial data pointer. Since data packet 2 only occupies one memory location, the end of data pointer is also assigned to memory location 304.
- the free tail pointer is assigned to the memory location assigned to the end of data pointer.
- the free tail pointer is assigned to memory location 304 which was assigned to the end of data pointer.
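The freeing steps of FIG. 4B can likewise be sketched. The helper name is illustrative; the segment numbers replay FIG. 3C, where packet 1's chain (302, 306, 308) is returned to a free list whose tail was 316. Note that freeing is constant-time: two pointer updates, regardless of how many segments the packet occupied.

```python
def free_chain(next_ptr, free_tail, data_head, data_end):
    """Steps 416-418: append a packet's segment chain to the free list.

    `next_ptr` maps each segment to the next segment (or None).
    Returns the new free tail.
    """
    next_ptr[free_tail] = data_head   # step 416: old free tail links to the data head
    return data_end                   # step 418: end of data becomes the new free tail


# Replaying FIG. 3C: packet 1 occupied 302 -> 306 -> 308; the free list was
# 310 -> 312 -> 314 -> 316 with free_head 310 and free_tail 316.
next_ptr = {310: 312, 312: 314, 314: 316, 316: None,   # free memory
            302: 306, 306: 308, 308: None}             # packet 1's chain
free_head, free_tail = 310, 316
free_tail = free_chain(next_ptr, free_tail, 302, 308)
# The free list now runs 310, 312, 314, 316, 302, 306, 308 with free_tail 308,
# matching FIG. 3C; free_head remains 310.
```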
- FIG. 5 is an illustration of a system for managing a linked list memory.
- FIG. 5 will be described with reference to FIGS. 3A-3D .
- a pointer assigner 500 is responsible for linking memory locations to one another as depicted in FIG. 3A .
- pointers are assigned to memory locations 302 , 304 , and 306 - 316 .
- Linker 502 directs which pointers are assigned to which memory locations. For example, in FIG. 3A , the pointer of memory location 302 is linked to memory location 304 . The memory location 304 is linked to memory location 306 . The memory location 306 is linked to memory location 308 , etc.
- the system also has a free head pointer assignor 504 and a free tail pointer assignor 506 .
- the free head pointer assignor 504 assigns a free head pointer to the beginning of a linked list structure. In this case, the free head pointer assignor assigns a free head pointer to memory location 302 as depicted in FIG. 3A .
- the free tail pointer assignor 506 assigns a free tail pointer to the last memory location of the linked list as depicted in FIG. 3A . In this case, the free tail pointer assignor assigns the free tail pointer to memory location 316.
- the system also has an initial data pointer assignor 508 which assigns an initial data pointer to the memory location the free head pointer is assigned to. For example, in FIG. 3B , packet 2 occupied memory location 304 and the free head pointer was assigned to memory location 302. The initial data pointer assignor 508 assigned the initial data pointer to memory location 302.
- Data assignor 510 assigns a sufficient number of memory locations to store data packets until an end of data has been found. In this case, data packet 1 needs three memory locations. Therefore, the data assignor assigns the remaining packet data to memory locations 306 and 308.
- the end of data pointer assignor 512 determines when the end of data has been reached and assigns an end of data pointer to the memory location occupied by the end of data. In this case, the end of data pointer assignor 512 assigns an end of data pointer to memory location 308 indicating the end of data as shown in FIG. 3B .
- the free head pointer assignor 504 then reassigns the free head pointer to the next memory location in the linked list, following the memory location to which the end of data pointer is assigned. In this case, the end of data pointer is assigned to memory location 308. Therefore, the free head pointer is assigned to the next memory location in the linked list, memory location 310.
- linker 502 links the memory location that the free tail pointer was pointing to with the memory location to which the initial data pointer was assigned.
- the free tail pointer was initially assigned to memory location 316 and the initial data pointer was assigned to memory location 302 . Therefore, memory location 316 is linked to memory location 302 by linker 502 .
- the free tail pointer assignor 506 assigns the free tail pointer to the memory location assigned to the end of data pointer, memory location 308.
- the free tail pointer is assigned to memory location 308 .
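The division of labor among the FIG. 5 blocks can be sketched as a single Python class whose methods loosely correspond to the numbered assignors. The mapping and all names are illustrative assumptions; the patent describes hardware blocks, not software.

```python
class LinkedListManager:
    """Illustrative mapping of the FIG. 5 blocks onto one structure."""

    def __init__(self, segments):
        # pointer assigner 500 / linker 502: build the sequential free list
        self.next = {s: n for s, n in zip(segments, segments[1:])}
        self.next[segments[-1]] = None
        self.free_head = segments[0]    # free head pointer assignor 504
        self.free_tail = segments[-1]   # free tail pointer assignor 506

    def store(self, n_segments):
        """Roles of assignors 508-512, plus the free head update by 504."""
        initial = self.free_head               # initial data pointer assignor 508
        end = initial
        for _ in range(n_segments - 1):        # data assignor 510 walks the links
            end = self.next[end]
        # end of data pointer assignor 512 has now located `end`
        self.free_head = self.next[end]        # assignor 504 reassigns the free head
        self.next[end] = None
        return initial, end

    def release(self, initial, end):
        """Linker 502 and assignor 506 return a chain to the free list."""
        self.next[self.free_tail] = initial    # linker 502 splices the chain in
        self.free_tail = end                   # free tail pointer assignor 506
```

A short scenario: storing a three-segment packet from a fresh eight-segment list and then releasing it leaves all eight segments reachable from the free head again, with the released chain appended at the tail.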
- the above-discussed configuration of the invention is, in a preferred embodiment, embodied on a semiconductor substrate, such as silicon, with appropriate semiconductor manufacturing techniques and based upon a circuit layout which would, based upon the embodiments discussed above, be apparent to those skilled in the art.
- a person of skill in the art with respect to semiconductor design and manufacturing would be able to implement the various modules, interfaces, and tables, buffers, etc. of the present invention onto a single semiconductor substrate, based upon the architectural description discussed above. It would also be within the scope of the invention to implement the disclosed elements of the invention in discrete electronic components, thereby taking advantage of the functional aspects of the invention without maximizing the advantages through the use of a single semiconductor substrate.
Description
- This is a Continuation of application Ser. No. 09/855,670, filed May 16, 2001, which claims priority to U.S. Provisional Patent Application Ser. No. 60/237,764 filed on Oct. 3, 2000 and U.S. Provisional Patent Application Ser. No. 60/242,701 filed on Oct. 25, 2000. The disclosures of the prior applications identified above are hereby incorporated by reference.
- 1. Field of the Invention
- The invention relates to a method and apparatus for high performance switching in local area communications networks such as token ring, ATM, ethernet, fast ethernet, and gigabit ethernet environments, generally known as LANs. In particular, the invention relates to a new switching architecture geared to power efficient and cost sensitive markets, and which can be implemented on a semiconductor substrate such as a silicon chip.
- 2. Description of the Related Art
- As computer performance has increased in recent years, the demands on computer networks have significantly increased; faster computer processors and higher memory capabilities need networks with high bandwidth capabilities to enable high speed transfer of significant amounts of data. The well-known ethernet technology, which is based upon numerous IEEE ethernet standards, is one example of computer networking technology which has been able to be modified and improved to remain a viable computing technology. A more complete discussion of prior art networking systems can be found, for example, in SWITCHED AND FAST ETHERNET, by Breyer and Riley (Ziff-Davis, 1996), and numerous IEEE publications relating to IEEE 802 standards. Based upon the Open Systems Interconnect (OSI) 7-layer reference model, network capabilities have grown through the development of repeaters, bridges, routers, and, more recently, “switches”, which operate with various types of communication media. Thickwire, thinwire, twisted pair, and optical fiber are examples of media which have been used for computer networks. Switches, as they relate to computer networking and to ethernet, are hardware-based devices which control the flow of data packets or cells based upon destination address information which is available in each packet. A properly designed and implemented switch should be capable of receiving a packet and switching the packet to an appropriate output port at what is referred to as wirespeed or linespeed, which is the maximum speed capability of the particular network. Basic ethernet wirespeed is up to 10 megabits per second, and Fast Ethernet is up to 100 megabits per second. Gigabit Ethernet is capable of transmitting data over a network at a rate of up to 1,000 megabits per second.
As speed has increased, design constraints and design requirements have become more and more complex with respect to following appropriate design and protocol rules and providing a low cost, commercially viable solution.
- Referring to the OSI 7-layer reference model discussed previously, the higher layers typically have more information. Various types of products are available for performing switching-related functions at various levels of the OSI model. Hubs or repeaters operate at layer one, and essentially copy and “broadcast” incoming data to a plurality of spokes of the hub. Layer two switching-related devices are typically referred to as multiport bridges, and are capable of bridging two separate networks. Bridges can build a table of forwarding rules based upon which MAC (media access controller) addresses exist on which ports of the bridge, and pass packets which are destined for an address which is located on an opposite side of the bridge. Bridges typically utilize what is known as the “spanning tree” algorithm to eliminate potential data loops; a data loop is a situation wherein a packet endlessly loops in a network looking for a particular address. The spanning tree algorithm defines a protocol for preventing data loops. Layer three switches, sometimes referred to as routers, can forward packets based upon the destination network address. Layer three switches are capable of learning addresses and maintaining tables thereof which correspond to port mappings. Processing speed for layer three switches can be improved by utilizing specialized high performance hardware, and off loading the host CPU so that instruction decisions do not delay packet forwarding.
- The invention is directed to a scheme for reducing clock speed and power consumption in a network chip.
- In one embodiment, the invention is a memory management method. The method has the steps of assigning pointers to free memory locations and linking the pointers to one another creating a linked list of free memory locations having a beginning and an end. A free head pointer is assigned to a memory location indicating the beginning of free memory locations and a free tail pointer is assigned to a memory location indicating the end of free memory locations. An initial data pointer is assigned to the memory location assigned to the free head pointer and an end of data pointer is assigned to a last data memory location. The free head pointer is assigned to a next memory location linked to the last data memory location assigned to the end of data pointer. The next memory location indicates the beginning of free memory locations.
- In another embodiment, the invention is a memory management method. The method has the steps of assigning pointers to free memory locations and linking the pointers to one another creating a linked list of free memory locations having a beginning and an end. A free head pointer is assigned to a memory location indicating the beginning of free memory locations and a free tail pointer is assigned to a memory location indicating the end of free memory locations. The memory location assigned to the free tail pointer is linked to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory. The free tail pointer is assigned to the last data memory location assigned to the end of data pointer.
- Another embodiment of the invention is a memory management system. The system has a pointer assignor that assigns pointers to free memory locations and a linker that links said pointers to one another thereby creating a linked list of free memory locations having a beginning and an end. A free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of free memory locations and a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of free memory locations. An initial data pointer assignor assigns an initial data pointer to the memory location assigned to the free head pointer and an end of data pointer assignor assigns an end of data pointer to a last data memory location. The free head pointer assignor assigns the free head pointer to a next memory location linked to the last data memory location assigned to said end of data pointer. The next memory location indicates the beginning of free memory locations.
- The invention in another embodiment is a memory management system. The system has a pointer assignor that assigns pointers to free memory locations, and a linker that links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end. A free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of free memory locations, and a free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of free memory locations. The linker links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory and the free tail pointer assignor assigns the free tail pointer to the last data memory location assigned to said end of data pointer.
- Another embodiment of the invention is a memory management system having a pointer assignor means for assigning pointers to free memory locations. A linker means links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end, and a free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations. A free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations, and an initial data pointer assignor means assigns an initial data pointer to the memory location assigned to the free head pointer. An end of data pointer assignor means assigns an end of data pointer to a last data memory location. The free head pointer assignor means assigns the free head pointer to a next memory location linked to the last data memory location assigned to the end of data pointer, wherein the next memory location indicates the beginning of free memory locations.
- In another embodiment, the invention is a memory management system having a pointer assignor means for assigning pointers to free memory locations and a linker means for linking the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end. A free head pointer assignor means assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations, and a free tail pointer assignor means assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations. The linker means links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory. The free tail pointer assignor means assigns the free tail pointer to the last data memory location assigned to the end of data pointer.
- The invention is in another embodiment a memory management device having a pointer assignor that assigns pointers to free memory locations of a memory. The pointer assignor is in communication with the memory. A linker links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end. The linker is also in communication with the memory. A free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations. The free head pointer assignor is in communication with the memory. A free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations. The free tail pointer assignor is in communication with the memory. An initial data packet pointer assignor assigns an initial data packet pointer to the memory location assigned to the free head pointer. The initial data packet pointer assignor is in communication with the memory. An end of data packet pointer assignor assigns an end of data packet pointer to a last data memory location in the memory, and the end of data packet pointer assignor is in communication with the memory. The free head pointer assignor assigns the free head pointer to a next memory location linked to the last data packet memory location assigned to the end of data packet pointer, wherein the next memory location indicates said beginning of free memory locations.
- In another embodiment, the invention is a memory management device having a pointer assignor that assigns pointers to free memory locations of a memory. The pointer assignor is in communication with the memory. A linker links the pointers to one another thereby creating a linked list of free memory locations having a beginning and an end. The linker is in communication with the memory. A free head pointer assignor assigns a free head pointer to a memory location indicating the beginning of the linked list of free memory locations. The free head pointer assignor is also in communication with the memory. A free tail pointer assignor assigns a free tail pointer to a memory location indicating the end of the linked list of free memory locations. The free tail pointer assignor is in communication with the memory. The linker links the memory location assigned to the free tail pointer to the memory location assigned to an initial data pointer when memory locations occupied by data are to be indicated as free memory, and the free tail pointer assignor assigns the free tail pointer to the last data memory location assigned to the end of data pointer.
- The objects and features of the invention will be more readily understood with reference to the following description and the attached drawings, wherein:
- FIG. 1 is a general block diagram of elements of the present invention;
- FIG. 2 illustrates the data flow on the CPS channel of a network switch according to the present invention;
- FIG. 3A illustrates a linked list structure of Packet Buffer Memory;
- FIG. 3B illustrates a linked list structure of Packet Buffer Memory with two data packets;
- FIG. 3C illustrates a linked list structure of Packet Buffer Memory after the memory occupied by one data packet is freed;
- FIG. 3D illustrates a linked list structure of Packet Buffer Memory after the memory occupied by another data packet is freed;
- FIG. 4A is a flow diagram of the steps to assign data to memory locations in a linked list of free memory;
- FIG. 4B is a flow diagram of the steps to add memory locations designated for data to a linked list of free memory; and
- FIG. 5 is an illustration of a system for managing a linked list of free memory.
FIG. 1 is an example of a block diagram of a switch 100 of the present invention. In this example, switch 100 has 12 ports, 102(1)-102(12), which can be fully integrated IEEE compliant ports. Each of these 12 ports 102(1)-102(12) can be 10BASE-T/100BASE-TX/FX ports each having a physical element (PHY), which can be compliant with IEEE standards. Each of the ports 102(1)-102(12), in one example of the invention, has a port speed that can be forced to a particular configuration or set so that auto-negotiation will determine the optimal speed for each port independently. Each PHY of each of the ports can be connected to a twisted-pair interface using TXOP/N and RXIP/N as transmit and receive protocols, or a fiber interface using FXOP/N and FXIP/N as transmit and receive protocols. - Each of the ports 102(1)-102(12) has a Media Access Controller (MAC) connected to each corresponding PHY. In one example of the invention, each MAC is a fully compliant IEEE 802.3 MAC. Each MAC can operate at 10 Mbps or 100 Mbps and supports both a full-duplex mode, which allows for data transmission and reception simultaneously, and a half duplex mode, which allows data to be either transmitted or received, but not both at the same time.
- Flow control is provided by each of the MACs. When flow control is implemented, the flow of incoming data packets is managed or controlled to reduce the chances of system resources being exhausted. Although the present embodiment can be a non-blocking, wire speed switch, the memory space available may limit data transmission speeds. For example, during periods of packet flooding (i.e. packet broadcast storms), the available memory can be exhausted rather quickly. In order to enhance the operability of the switch in these types of situations, the present invention can implement two different types of flow control. In full-duplex mode, the present invention can, for example, implement the IEEE 802.3x flow control. In half-duplex mode, the present invention can implement a collision backpressure scheme.
- In one example of the present invention each port has a latency block connected to the MAC. Each of the latency blocks has transmit and receive FIFOs which provide an interface to main packet memory. In this example, if a packet does not successfully transmit from one port to another port within a preset time, the packet will be dropped from the transmit queue.
- In addition to ports 102(1)-102(12), a
gigabit interface 104 can be provided on switch 100. Gigabit interface 104 can support a Gigabit Media Independent Interface (GMII) and a Ten Bit Interface (TBI). The GMII can be fully compliant with IEEE 802.3ab. The GMII can pass data at a rate of 8 bits every 8 ns, resulting in a throughput of 2 Gbps including both transmit and receive data. In addition to the GMII, gigabit interface 104 can be configured as a TBI, which is compatible with many industry standard fiber drivers. Since in some embodiments of the invention the MDIO/MDC interfaces (optical interfaces) are not supported, the gigabit PHY (physical layer) is set into the proper mode by the system designer. -
Gigabit interface 104, like ports 102(1)-102(12), has a PHY, a Gigabit Media Access Controller (GMAC) and a latency block. The GMAC can be a fully compliant IEEE 802.3z MAC operating at 1 Gbps full-duplex only and can connect to a fully compliant GMII or TBI interface through the PHY. In this example, GMAC 108 provides full-duplex flow control mechanisms and a low cost stacking solution for either twisted pair or TBI mode using in-band signaling for management. This low cost stacking solution allows for a ring structure to connect each switch utilizing only one gigabit port. - A
CPU interface 106 is provided on switch 100. In one example of the present invention, CPU interface 106 is an asynchronous 8 or 16 bit I/O device interface. Through this interface a CPU can read internal registers, receive packets, transmit packets and allow for interrupts. CPU interface 106 also allows for a Spanning Tree Protocol to be implemented. In one example of the present invention, a chip select pin is available, allowing a single CPU to control two switches. In this example, an interrupt pin, which when driven low (i.e., driven to the active state) requires a pull-up resistor, will allow multiple switches to be controlled by a single CPU. - A switching
fabric 108 is also located on switch 100 in one example of the present invention. Switching fabric 108 can allow for full wire speed operation of all ports. A hybrid or virtual shared memory approach can also be implemented to minimize bandwidth and memory requirements. This architecture allows for efficient and low latency transfer of packets through the switch and also supports address learning and aging features, VLAN, port trunking and port mirroring. - Memory interfaces 110, 112 and 114 can be located on
switch 100 and allow for the separation of data and control information. Packet buffer memory interface (PBM) 110 handles packet data storage while the transmit queue memory interface (TXM) 112 keeps a list of packets to be transmitted and address table/control memory interface (ATM) 114 handles the address table and header information. Each of these interfaces can use memory such as SSRAM that can be configured in various total amounts and chip sizes. -
PBM 110 is located on switch 100 and can have an external packet buffer memory (not shown) that is used to store the packet during switching operations. In one example of the invention, packet buffer memory is made up of multiple 256 byte buffers. Therefore, one packet may span several buffers within memory. This structure allows for efficient memory usage and minimizes bandwidth overhead. The packet buffer memory can be configurable so that up to 4 Mbytes of memory per chip can be used for a total of 8 Mbytes per 24+2 ports. In this example, efficient memory usage is maintained by allocating 256 byte blocks, which allows storage for up to 32K packets. PBM 110 can be 64 bits wide and can use either a 64 bit wide memory or two 32 bit wide memories and can run at 100 MHz.
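The sizing quoted above can be checked with a short back-of-envelope sketch. The constants come from the text; the helper function and the use of 1518 bytes (the standard maximum Ethernet frame size, not stated in the text) are assumptions for illustration.

```python
BUF_BYTES = 256                  # 256 byte buffers, per the text
PER_CHIP = 4 * 1024 * 1024       # up to 4 Mbytes of packet buffer memory per chip
TOTAL = 2 * PER_CHIP             # 8 Mbytes total per 24+2 ports


def buffers_needed(packet_len):
    """A packet spans one 256-byte buffer per started 256 bytes."""
    return -(-packet_len // BUF_BYTES)   # ceiling division


# 8 Mbytes / 256 bytes = 32768 blocks, i.e. "storage for up to 32K packets"
# (assuming one block per packet, the best case).
blocks = TOTAL // BUF_BYTES

# A maximum-size standard Ethernet frame spans several linked buffers.
max_frame_buffers = buffers_needed(1518)
```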
TXM 112 is located on switch 100 and can have an external transmit queue memory (not shown). TXM 112, in this example, maintains 4 priority queues per port and allows for 64K packets per chip and up to 128K packets per system. TXM 112 can run at a speed of up to 100 MHz. -
ATM 114 can be located on switch 100 and can have an external address table/control memory (not shown) used to store the address table and header information corresponding to each 256 byte section of PBM 110. Address table/control memory allows up to 16K unique unicast addresses. The remaining available memory is used for control information. ATM 114, in this example, runs at up to 133 MHz. -
Switch 100, in one example of the invention, has a Flow Control Manager 116 that manages the flow of packet data. As each port sends more and more data to the switch, Flow Control Manager 116 can monitor the amount of memory being used by each port 102(1)-102(12) of switch 100 and the switch as a whole. In this example, if one of the ports 102(1)-102(12) or the switch as a whole is using too much memory, as determined by a register setting predefined by the manufacturer or by a user, Flow Control Manager 116 will issue commands over the ATM Bus requesting the port or switch to slow down and may eventually drop packets if necessary. - In addition to
Flow control manager 116, switch 100 also has a Start Point Manager (SPM) 118 connected to Switching Fabric 108, a Forwarding Manager (FM) 120 connected to Switching Fabric 108 and an Address Manager (AM) 122 connected to Switching Fabric 108. - Start Point Manager (SPM) 118, through
Switching Fabric 108 in one example of the present invention, keeps track of which blocks of memory in PBM 110 are being used and which blocks of memory are free. - Forwarding
Manager 120 can, for example, forward packet data through Switching Fabric 108 to appropriate ports for transmission. - Address Manager (AM) 122 can, through
Switching Fabric 108, manage the address table including learning source addresses, assigning headers to packets and keeping track of these addresses. In one example of the invention, AM 122 uses aging to remove addresses from the address table that have not been used for a specified time period or after a sequence of events. - An
expansion port 124 can also be provided on switch 100 to connect two switches together. This will allow for full wire speed operation on twenty-five 100M ports (includes one CPU port) and two gigabit ports. The expansion port 124, in this example, allows for 4.6 Gbps of data to be transmitted between switches. - An
LED controller 126 can also be provided on switch 100. LED controller 126 activates appropriate LEDs to give a user necessary status information. Each of the ports 102(1)-102(12), in one example of the invention, has 4 separate LEDs, which provide per port status information. The LEDs are fully programmable and are made up of port LEDs and other LEDs. Each LED can include a default state for each of the four port LEDs. An example of the default operation of each of the port LEDs is shown below.

LED 0, Speed Indicator: OFF = 10 Mbps or no link; ON = 100 Mbps.
LED 1, Full/Half/Collision Duplex: OFF = The port is in half duplex or no link; BLINK = The port is in half duplex and a collision has occurred; ON = The port is in full duplex.
LED 2, Link/Activity Indicator: OFF = Indicates that the port does not have link; BLINK = Link is present and receive or transmit activity is occurring on the media; ON = Link present without activity.
LED 3, Alert Condition: OFF = No alert conditions, port is operating normally; ON = The port has detected an isolate condition.

- In addition to the default operations for the port LEDs, each of the port LEDs can be programmed through registers. These registers can be set up, in one example of the invention, by a CPU. By having programmable registers that control LEDs, full customization of the system architecture can be realized, including the programmability of the blink rate.
- Each of the LEDs can have a table, as shown below, associated with the LED, where register bits RAx, RBx and RCx can be set to provide a wide range of information.
- For example, register bits RAx, RBx and RCx can be set to determine when LEDON, LEDBLINK and LEDOFF are activated or deactivated. In addition to the port LEDs, there are additional LEDs which indicate the status of the switch.
-
Registers 128 are located on switch 100 in this example of the present invention. Registers 128 are full registers that allow for configuration, status and Remote Monitoring (RMON) management. In this example, Registers 128 are arranged into groups and offsets. There are 32 address groups, each of which can contain up to 64 registers. -
FIG. 2 is an illustration of one embodiment of the invention having a PBM Bus, an ATM Bus, and a TXM Bus for communications with other portions of the switch. In this example, PBM 110 is connected to the PBM Bus and an external PBM Memory; TXM 112 is connected to the TXM Bus and an external TXM Memory; and ATM 114 is connected to the ATM Bus and an external ATM Memory. Each of the transmit (TX) and receive (RX) portions of ports 102(1)-102(12) is connected to the PBM Bus, ATM Bus and TXM Bus for communications. -
FM 120 is connected to each of the ports 102(1)-102(12) directly and is also connected to the ATM Bus for communications with other portions of the switch. SPM 118 and AM 122 are also connected to the ATM Bus for communications with other portions of the switch. - The operation of
switch 100 for transmission of a unicast packet (i.e., a packet destined for a single port for output) in one example of the invention is described with reference to FIG. 2 as follows. - In this example,
Switch 100 is initialized following the release of a hardware reset pin. A series of initialization steps will occur, including the initialization of external buffer memory and the address table. All ports on the switch will then be disabled and the CPU will enable packet traffic by setting an enable register. As links become available on the ports (ports 102(1)-102(12) and gigabit port 104), an SPT protocol will confirm these ports and the ports will become activated. After the initialization process is concluded, normal operation of Switch 100 can begin. - In this example, once a port has been initialized and activated, a PORT_ACTIVE command is issued by the CPU. This indicates that the port is ready to transmit and receive data packets. If for some reason a port goes down or becomes disabled, a PORT_INACTIVE command is issued by the CPU.
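The initialization flow above — reset, disable all ports, CPU enables traffic, then per-port activation — can be sketched as a small state machine. The class and method names are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch of the initialization sequence described above.
class Port:
    def __init__(self, number):
        self.number = number
        self.active = False

class Switch:
    def __init__(self, n_ports=12):
        self.ports = [Port(i + 1) for i in range(n_ports)]
        self.traffic_enabled = False

    def hardware_reset(self):
        # Initialization steps: external buffer memory and the address
        # table are set up, and all ports start out disabled.
        for port in self.ports:
            port.active = False
        self.traffic_enabled = False

    def cpu_enable(self):
        # The CPU enables packet traffic by setting an enable register.
        self.traffic_enabled = True

    def port_active(self, number):
        # PORT_ACTIVE: issued once a link is confirmed on the port.
        if self.traffic_enabled:
            self.ports[number - 1].active = True

    def port_inactive(self, number):
        # PORT_INACTIVE: the port went down or was disabled.
        self.ports[number - 1].active = False
```

A port only becomes active after the CPU has set the enable register, mirroring the order of steps in the text.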
- During unicast transmission, a packet from an external source on port 102(1) is received at the receive (RX) PHY of port 102(1).
- In this example, the RX MAC of port 102(1) will not start processing the packet until a Start of Frame Delimiter (SFD) for the packet is detected. When the SFD is detected by the RX MAC portion of port 102(1), the RX MAC will place the packet into a receive (RX) FIFO of the latency block of port 102(1). As the RX FIFO becomes filled, port 102(1) will request an empty receive buffer from the SPM. Once access to the ATM Bus is granted, the RX FIFO Latency block of port 102(1) sends packets received in the RX FIFO to the external PBM Memory through the PBM Bus and
PBM 110 until the end of the packet is reached. - The PBM Memory, in this example, is made up of 256-byte buffers. Therefore, one packet may span several buffers within the packet buffer memory if the packet size is greater than 256 bytes. Connections between packet buffers can be maintained through a linked list system in one example of the present invention. A linked list system allows for efficient memory usage and minimized bandwidth overhead, and will be explained in further detail with relation to
FIGS. 3A-3D. - At the same time packets are being sent to the external PBM Memory, the port will also send the source address to Address Manager (AM) 122 and request a filtering table from
AM 122. - If the packet is “good”, as is determined through normal, standard procedures known to those of ordinary skill in the art, such as valid length and IEEE standard packet checking such as a Cyclic Redundancy Check, the port writes the header information to the ATM memory through the ATM Bus and
ATM 114. AM 122 sends a RECEP_COMPL command over the ATM Bus signifying that packet reception is complete. Other information is also sent along with the RECEP_COMPL command, such as the start address and the filtering table, which indicates which ports the packet is to be sent out on. For example, a filtering table having a string such as “011111111111” would send the packet to all ports except port 1 and would have a count of 11. The count is simply the number of ports to which the packet is to be sent, as indicated by the number of “1”s. - Forwarding Manager (FM) 120 is constantly monitoring the ATM Bus to determine if a RECEP_COMPL command has been issued. Once
FM 120 has determined that a RECEP_COMPL command has been issued, Forwarding Manager (FM) 120 will use the filtering table to send packets to appropriate ports. It is noted that a packet will not be forwarded if one of the following conditions is met: -
- a. The packet contains a CRC error
- b. The PHY signals a receive error
- c. The packet is less than 64 bytes
- d. The packet is greater than 1518 bytes or 1522 bytes depending on register settings
- e. The packet is only forwarded to the receiving port
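The drop conditions a-e above, together with the filtering-table count described earlier, can be sketched as follows. The function names and packet fields are illustrative assumptions, not from the patent.

```python
# Sketch of the forwarding checks listed above; field names are illustrative.
MIN_FRAME = 64

def filter_ports(filter_table):
    """A '1' in position i means the packet is sent out on port i+1, so
    '011111111111' excludes port 1 and has a count of 11."""
    return {i + 1 for i, bit in enumerate(filter_table) if bit == "1"}

def should_forward(packet, max_frame=1518):
    """Apply drop conditions a-e before forwarding a received packet."""
    if packet["crc_error"]:                 # a. CRC error
        return False
    if packet["rx_error"]:                  # b. PHY signals a receive error
        return False
    if packet["length"] < MIN_FRAME:        # c. runt frame (< 64 bytes)
        return False
    if packet["length"] > max_frame:        # d. oversize (1518 or 1522 per registers)
        return False
    ports = filter_ports(packet["filter_table"])
    if ports == {packet["rx_port"]}:        # e. destined only to the receiving port
        return False
    return bool(ports)
```

The maximum-frame register setting is modeled here as the `max_frame` parameter.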
- The RECEP_COMPL command includes information such as a filter table, a start pointer, priority information and other miscellaneous information.
FM 120 will read the filter table to determine if the packet is to be transmitted from one of its ports. If it is determined that the packet is to be transmitted from one of its ports, FM 120 will send the RECEP_COMPL command information directly to the port. In this case, the RECEP_COMPL command information is sent to the TX FIFO of port 102(12). - If the port is busy, the RECEP_COMPL command information is transferred to TXM Memory through the TXM Bus and
TXM 112. The TXM Memory contains a queue of packets to be transmitted. TXM Memory is allocated on a per port basis, so that if there are ten ports there are ten queues within the TXM Memory, one allocated to each port. As each port's transmitter becomes idle, the port will read the next RECEP_COMPL command information stored in the TXM Memory. The TX FIFO of port 102(12) will receive, as part of the RECEP_COMPL command information, a start pointer which points to a header in ATM memory across the ATM Bus, which in turn points to the location of a packet in the PBM Memory over the PBM Bus. The port will at this point request to load the packet into the transmit (TX) FIFO of port 102(12) and send it out through the MAC and PHY of port 102(12). - If the port is in half duplex mode, it is possible that a collision could occur and force the packet transmission to start over. If this occurs, the port simply re-requests bus access and reloads the packet and starts over again. If, however, the number of consecutive collisions becomes excessive, the packet will be dropped from the transmission queue.
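The per-port transmit queue and half-duplex retry behavior above can be sketched as follows. The class name is an assumption, and since the patent does not state the collision limit, a conventional Ethernet limit of 16 attempts is assumed here.

```python
from collections import deque

# Assumed attempt limit; conventional Ethernet uses 16, the patent only
# says "excessive" consecutive collisions cause a drop.
MAX_ATTEMPTS = 16

class TxPort:
    def __init__(self):
        self.queue = deque()        # RECEP_COMPL entries waiting to transmit

    def enqueue(self, recep_compl):
        self.queue.append(recep_compl)

    def transmit_next(self, collides):
        """Try to send the next queued packet; `collides(attempt)` reports
        whether a collision forces the transmission to start over."""
        if not self.queue:
            return None
        entry = self.queue.popleft()
        for attempt in range(MAX_ATTEMPTS):
            if not collides(attempt):
                return entry        # transmitted successfully
        return None                 # excessive collisions: packet dropped
```

A full-duplex port would simply pass a `collides` callback that always returns False.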
- Once the port successfully transmits a packet, the port will signal
FM 120 that it is done with the current buffer. FM 120 will then decrement a counter which indicates how many more ports must transmit the packet. For example, if a packet is destined for eleven ports for output, the counter, in this example, is set to 11. Each time the packet is successfully transmitted, FM 120 decrements the counter by one. When the counter reaches zero, this indicates that all designated ports have successfully transmitted the packet. FM 120 will then issue a FREE command over the ATM Bus indicating that the memory occupied by the packet in the PBM Memory is no longer needed and can now be freed for other use. - When
SPM 118 detects a FREE command over the ATM Bus, steps are taken to indicate that the space taken by the packet is now free memory. - Multicast and broadcast packets are handled exactly like unicast packets with the exception that their filter tables will indicate that all or most ports should transmit the packet. This will force the forwarding managers to transmit the packet out on all or most of their ports.
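The per-packet counter described above behaves like a reference count: it starts at the filter-table count and a FREE is issued when it reaches zero. A minimal sketch, with illustrative names:

```python
# Sketch of FM 120's per-packet transmit counter described above.
class ForwardingCounter:
    def __init__(self, filter_table):
        # e.g. "011111111111" -> a count of 11 destination ports
        self.remaining = filter_table.count("1")
        self.freed = False

    def port_done(self):
        """Called each time a port finishes transmitting the packet;
        returns True once a FREE would be issued over the ATM Bus."""
        self.remaining -= 1
        if self.remaining == 0:
            self.freed = True
        return self.freed
```

Multicast and broadcast traffic needs no special handling here: a filter table with more “1”s simply starts the counter higher.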
-
FIG. 3A is an illustration of a PBM Memory structure in one example of the invention. PBM Memory Structure 300 is a linked list of 256-byte segments. In this example, segment 302 is the free_head, indicating the beginning of the free memory linked list, and segment 316 is the free_tail, indicating the last segment of free memory. - In
FIG. 3B two packets have been received and stored in the PBM Memory. Packet 1 occupies segments 302, 306 and 308, and packet 2 occupies segment 304. Segments 310 through 316 remain free memory. Segment 310 is the free_head indicating the beginning of free memory and segment 316 is the free_tail indicating the end of free memory. - In
FIG. 3C packet 1 has been fully transmitted and the Forwarding Manager (FM) has issued a FREE command. Since packet 1 is already in a linked list format, the SPM can add the memory occupied by packet 1 to the free memory linked list. The free_head, segment 310, remains the same. However, the free_tail is changed. This is accomplished by linking segment 316 to the beginning of packet 1, segment 302, and designating the last segment of packet 1, segment 308, as the free_tail. As a result, there is a linked list starting with segment 310 linking to segment 312, segment 312 linking to segment 314, segment 314 linking to segment 316, segment 316 linking to segment 302, segment 302 linking to segment 306 and segment 306 linking to segment 308, where segment 308 is the free_tail. -
FIG. 3D in this example simply illustrates the PBM Memory after packet 2 has been transmitted successfully and the Forwarding Manager has issued a FREE command over the ATM Bus. The SPM will detect the FREE command and then add the memory space occupied by packet 2 in the PBM Memory to the free memory linked list. In this example, segment 308 is linked to the memory occupied by packet 2, segment 304, and segment 304 is identified as the free_tail. -
FIG. 4A is an illustration of the method steps taken in one embodiment of the invention. The steps are described in relation with FIGS. 3A-3D. In step 400, free memory locations are linked to one another. For example, in FIG. 3A, memory location 302 has a pointer linking memory location 302 to memory location 304. Likewise, memory location 304 has a pointer which links memory location 304 to memory location 306. As can be seen in FIG. 3A, the memory locations in this embodiment of the invention are initially sequentially linked to one another, where memory location 302 is linked to memory location 304, memory location 304 is linked to memory location 306, memory location 306 is linked to memory location 308, etc. - In
step 402, a free_head pointer is assigned to an initial memory location of the linked list of free memory. As can be seen in the example illustrated in FIG. 3A, the free_head pointer is assigned to memory location 302. - In
step 404, a free_tail pointer is assigned to a last memory location of the linked list. In the example illustrated in FIG. 3A, the free_tail pointer is assigned to memory location 316, which in this case is the last memory location of the linked list of free memory. - In
step 406, an initial data pointer is assigned to the memory location assigned to the free_head pointer. For example, referring back to FIG. 3A, if a first packet of data were to be saved, the initial data pointer would be assigned to memory location 302, where the free_head pointer was assigned. In step 408, the next memory location linked to the memory location assigned to the initial data pointer would be assigned to store more data. For example, if the data were to require three free memory locations, memory locations 302, 304 and 306 would be assigned to store data packet 1. At memory location 306, an end of data pointer would be assigned to this last data location. The free_head pointer would then be assigned to memory location 308, indicating the beginning of free memory, as described in step 414. - In another example illustrated in
FIG. 3B, a free_head pointer initially points to memory location 302 and packet 2 occupies memory location 304. When data packet 1 is to be saved, the initial data pointer is assigned to memory location 302. In this example, packet 1 needs three memory locations; in this case, memory location 306 was linked to memory location 302 and memory location 308 was linked to memory location 306. Thus, the data pointers for packet 1 are assigned to memory locations 302, 306 and 308, and the end of data pointer is assigned to memory location 308. - In
step 414, the free_head pointer is assigned to the next memory location linked to the last data memory location 308, which is, in this case, memory location 310. Therefore, the free_head pointer is assigned to memory location 310 and the free_tail pointer is maintained as memory location 316. -
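Steps 400-414 can be sketched as a small allocator in which segments are linked through a next-pointer table and a packet consumes segments starting at the free_head. The class and attribute names are illustrative; the segment numbers mirror FIG. 3A.

```python
# Runnable sketch of the allocation steps (FIG. 4A); names are assumptions.
class LinkedBuffers:
    def __init__(self, segments):
        # Step 400: link the free segments sequentially.
        self.next = {s: n for s, n in zip(segments, segments[1:])}
        self.next[segments[-1]] = None
        self.free_head = segments[0]     # step 402: free_head at first segment
        self.free_tail = segments[-1]    # step 404: free_tail at last segment

    def allocate(self, n_segments):
        """Steps 406-414: take n segments from the head of the free list,
        returning (initial data pointer, end of data pointer)."""
        start = self.free_head           # step 406: initial data pointer
        end = start
        for _ in range(n_segments - 1):
            end = self.next[end]         # step 408: follow links for more data
        self.free_head = self.next[end]  # step 414: free_head moves past the data
        self.next[end] = None            # end of data pointer terminates the chain
        return start, end
```

With the FIG. 3A pool, a three-segment packet lands in 302, 304 and 306 and the free_head moves to 308, matching the example in the text.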
FIG. 4B illustrates the method steps in one embodiment of the invention for freeing memory taken up by data packets and adding this memory to the linked list of free memory. - In
step 416, the memory location assigned to the initial data pointer is linked to the memory location assigned to the free_tail pointer. For example, in FIG. 3C, packet 1 previously occupied memory locations 302, 306 and 308, and the free_tail pointer was assigned to memory location 316. In accordance with step 416 as described in FIG. 4B, the memory location assigned to the free_tail pointer, memory location 316, is linked to the memory location assigned to the initial data pointer, memory location 302. - In
step 418, the free_tail pointer is assigned to the memory location assigned to the end of data pointer. For example, in FIG. 3C, the end of data pointer for packet 1 was memory location 308. In this example, the free_tail pointer is assigned to the memory location assigned to the end of data pointer, memory location 308. The free_head pointer in this example remains assigned to memory location 310. - In
FIG. 3D, packet 2 is indicated as being free memory. Thus, in accordance with step 416, the memory location 308 assigned to the free_tail is linked to the initial data pointer memory location 304. Since data packet 2 only occupies one memory location, the end of data pointer is also assigned to memory location 304. - In
step 418, the free_tail pointer is assigned to the memory location assigned to the end of data pointer. In the example depicted in FIG. 3D, the free_tail pointer is assigned to memory location 304, which was assigned to the end of data pointer. -
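Steps 416-418 can be sketched as follows, with illustrative names: the freed packet is appended to the free list by linking the old free_tail to the packet's initial segment and moving the free_tail to its end-of-data segment. The segment numbers mirror FIG. 3C.

```python
# Self-contained sketch of the freeing steps (FIG. 4B); names are assumptions.
def free_packet(next_ptr, free_tail, start, end):
    """Steps 416-418: append segments start..end to the free list and
    return the new free_tail."""
    next_ptr[free_tail] = start   # step 416: link old free_tail to initial data
    next_ptr[end] = None          # freed chain now terminates the free list
    return end                    # step 418: free_tail becomes end of data

# Mirroring FIG. 3C: packet 1 occupies 302 -> 306 -> 308, and the free
# list is 310 -> 312 -> 314 -> 316.
next_ptr = {310: 312, 312: 314, 314: 316, 316: None,
            302: 306, 306: 308, 308: None}
free_tail = free_packet(next_ptr, 316, 302, 308)
```

Because the packet is already a linked list, freeing it costs only one pointer write and one tail update, regardless of packet size.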
FIG. 5 is an illustration of a system for managing a linked list memory. FIG. 5 will be described with reference to FIGS. 3A-3D. A pointer assigner 500 is responsible for linking memory locations to one another as depicted in FIG. 3A. In this example, pointers are assigned to memory locations 302 through 316. -
Linker 502 directs which pointers are assigned to which memory locations. For example, in FIG. 3A, the pointer of memory location 302 is linked to memory location 304. The memory location 304 is linked to memory location 306. The memory location 306 is linked to memory location 308, etc. - The system also has a free
head pointer assignor 504 and a free tail pointer assignor 506. The free head pointer assignor 504 assigns a free head pointer to the beginning of a linked list structure. In this case, the free head pointer assignor assigns a free head pointer to memory location 302 as depicted in FIG. 3A. The free tail pointer assignor 506 assigns a free tail pointer to the last memory location of the linked list as depicted in FIG. 3A. In this case, the free tail pointer assignor assigns the free tail pointer to memory location 316. - The system also has an initial
data pointer assignor 508 which assigns an initial data pointer to the memory location the free head pointer is assigned to. For example, in FIG. 3B, packet 2 occupied memory location 304 and the free head pointer was assigned to memory location 302. The initial data pointer assignor 508 assigns the initial data pointer to memory location 302. -
Data assignor 510 assigns a sufficient number of memory locations to store data packets until an end of data has been found. In this case, data packet 1 needs three memory locations. Therefore, the data assignor assigns the packet data to memory locations 302, 306 and 308. - The end of
data pointer assignor 512 determines when the end of data has been reached and assigns an end of data pointer to the memory location occupied by the end of data. In this case, the end of data pointer assignor 512 assigns an end of data pointer to memory location 308, indicating the end of data as shown in FIG. 3B. - The free
head pointer assignor 504 then reassigns the free head pointer to the next memory location in the linked list, following the memory location to which the end of data pointer is assigned. In this case, the end of data pointer is assigned to memory location 308. Therefore, the free head pointer is assigned to the next memory location in the linked list, memory location 310. - In the case that memory is to be freed, as illustrated in
FIG. 3C, linker 502 links the memory location to which the free tail pointer was pointing to the memory location to which the initial data pointer was assigned. In this case, as illustrated in FIG. 3C, the free tail pointer was initially assigned to memory location 316 and the initial data pointer was assigned to memory location 302. Therefore, memory location 316 is linked to memory location 302 by linker 502. Finally, in order to indicate the memory locations occupied by packet 1 as free memory, the free tail pointer assignor 506 assigns the free tail pointer to the memory location assigned to the end of data pointer, memory location 308. Thus, the free tail pointer is assigned to memory location 308. - The above-discussed configuration of the invention is, in a preferred embodiment, embodied on a semiconductor substrate, such as silicon, with appropriate semiconductor manufacturing techniques and based upon a circuit layout which would, based upon the embodiments discussed above, be apparent to those skilled in the art. A person of skill in the art with respect to semiconductor design and manufacturing would be able to implement the various modules, interfaces, tables, buffers, etc. of the present invention onto a single semiconductor substrate, based upon the architectural description discussed above. It would also be within the scope of the invention to implement the disclosed elements of the invention in discrete electronic components, thereby taking advantage of the functional aspects of the invention without maximizing the advantages through the use of a single semiconductor substrate.
- Although the invention has been described based upon these preferred embodiments, it would be apparent to those skilled in the art that certain modifications, variations, and alternative constructions may be made, while remaining within the spirit and scope of the invention. In order to determine the metes and bounds of the invention, therefore, reference should be made to the appended claims.
Claims (32)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/150,152 US20050235129A1 (en) | 2000-10-03 | 2005-06-13 | Switch memory management using a linked list structure |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US23776400P | 2000-10-03 | 2000-10-03 | |
US24270100P | 2000-10-25 | 2000-10-25 | |
US09/855,670 US6988177B2 (en) | 2000-10-03 | 2001-05-16 | Switch memory management using a linked list structure |
US11/150,152 US20050235129A1 (en) | 2000-10-03 | 2005-06-13 | Switch memory management using a linked list structure |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/855,670 Continuation US6988177B2 (en) | 2000-10-03 | 2001-05-16 | Switch memory management using a linked list structure |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050235129A1 true US20050235129A1 (en) | 2005-10-20 |
Family
ID=27399015
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/855,670 Expired - Lifetime US6988177B2 (en) | 2000-10-03 | 2001-05-16 | Switch memory management using a linked list structure |
US11/150,152 Abandoned US20050235129A1 (en) | 2000-10-03 | 2005-06-13 | Switch memory management using a linked list structure |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/855,670 Expired - Lifetime US6988177B2 (en) | 2000-10-03 | 2001-05-16 | Switch memory management using a linked list structure |
Country Status (1)
Country | Link |
---|---|
US (2) | US6988177B2 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030236904A1 (en) * | 2002-06-19 | 2003-12-25 | Jonathan Walpole | Priority progress multicast streaming for quality-adaptive transmission of data |
US20040179533A1 (en) * | 2003-03-13 | 2004-09-16 | Alcatel | Dynamic assignment of re-assembly queues |
US20050083930A1 (en) * | 2003-10-20 | 2005-04-21 | Jen-Kai Chen | Method of early buffer release and associated MAC controller |
US7035988B1 (en) * | 2003-03-17 | 2006-04-25 | Network Equipment Technologies, Inc. | Hardware implementation of an N-way dynamic linked list |
US20060193260A1 (en) * | 2005-02-24 | 2006-08-31 | George H A | Preemptive packet flow controller |
WO2011140515A1 (en) * | 2010-05-07 | 2011-11-10 | Qualcomm Incorporated | Linked-list management of LLR-memory |
US9262554B1 (en) | 2010-02-16 | 2016-02-16 | Pmc-Sierra Us, Inc. | Management of linked lists within a dynamic queue system |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7523284B1 (en) | 2004-08-10 | 2009-04-21 | American Megatrends, Inc. | Method and apparatus for providing memory management within a system management mode |
US7751421B2 (en) * | 2004-12-29 | 2010-07-06 | Alcatel Lucent | Traffic generator and monitor |
US7882280B2 (en) | 2005-04-18 | 2011-02-01 | Integrated Device Technology, Inc. | Packet processing switch and methods of operation thereof |
US7775428B2 (en) * | 2005-05-06 | 2010-08-17 | Berkun Kenneth A | Systems and methods for generating, reading and transferring identifiers |
US7817652B1 (en) * | 2006-05-12 | 2010-10-19 | Integrated Device Technology, Inc. | System and method of constructing data packets in a packet switch |
US7747904B1 (en) | 2006-05-12 | 2010-06-29 | Integrated Device Technology, Inc. | Error management system and method for a packet switch |
US7706387B1 (en) | 2006-05-31 | 2010-04-27 | Integrated Device Technology, Inc. | System and method for round robin arbitration |
US8078657B2 (en) * | 2007-01-03 | 2011-12-13 | International Business Machines Corporation | Multi-source dual-port linked list purger |
KR20140106576A (en) * | 2011-12-14 | 2014-09-03 | 옵티스 셀룰러 테크놀리지, 엘엘씨 | Buffer resource management method and telecommunication equipment |
CN105159837A (en) * | 2015-08-20 | 2015-12-16 | 广东睿江科技有限公司 | Memory management method |
Citations (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5159678A (en) * | 1990-06-11 | 1992-10-27 | Supercomputer Systems Limited Partnership | Method for efficient non-virtual main memory management |
US5303302A (en) * | 1992-06-18 | 1994-04-12 | Digital Equipment Corporation | Network packet receiver with buffer logic for reassembling interleaved data packets |
US5390173A (en) * | 1992-10-22 | 1995-02-14 | Digital Equipment Corporation | Packet format in hub for packet data communications system |
US5414704A (en) * | 1992-10-22 | 1995-05-09 | Digital Equipment Corporation | Address lookup in packet data communications link, using hashing and content-addressable memory |
US5423015A (en) * | 1988-10-20 | 1995-06-06 | Chung; David S. F. | Memory structure and method for shuffling a stack of data utilizing buffer memory locations |
US5459717A (en) * | 1994-03-25 | 1995-10-17 | Sprint International Communications Corporation | Method and apparatus for routing messagers in an electronic messaging system |
US5473607A (en) * | 1993-08-09 | 1995-12-05 | Grand Junction Networks, Inc. | Packet filtering for data networks |
US5524254A (en) * | 1992-01-10 | 1996-06-04 | Digital Equipment Corporation | Scheme for interlocking line card to an address recognition engine to support plurality of routing and bridging protocols by using network information look-up database |
US5555398A (en) * | 1994-04-15 | 1996-09-10 | Intel Corporation | Write back cache coherency module for systems with a write through cache supporting bus |
US5568477A (en) * | 1994-12-20 | 1996-10-22 | International Business Machines Corporation | Multipurpose packet switching node for a data communication network |
US5644784A (en) * | 1995-03-03 | 1997-07-01 | Intel Corporation | Linear list based DMA control structure |
US5696899A (en) * | 1992-11-18 | 1997-12-09 | Canon Kabushiki Kaisha | Method and apparatus for adaptively determining the format of data packets carried on a local area network |
US5724358A (en) * | 1996-02-23 | 1998-03-03 | Zeitnet, Inc. | High speed packet-switched digital switch and method |
US5748631A (en) * | 1996-05-09 | 1998-05-05 | Maker Communications, Inc. | Asynchronous transfer mode cell processing system with multiple cell source multiplexing |
US5781549A (en) * | 1996-02-23 | 1998-07-14 | Allied Telesyn International Corp. | Method and apparatus for switching data packets in a data network |
US5787084A (en) * | 1996-06-05 | 1998-07-28 | Compaq Computer Corporation | Multicast data communications switching system and associated method |
US5790539A (en) * | 1995-01-26 | 1998-08-04 | Chao; Hung-Hsiang Jonathan | ASIC chip for implementing a scaleable multicast ATM switch |
US5802287A (en) * | 1993-10-20 | 1998-09-01 | Lsi Logic Corporation | Single chip universal protocol multi-function ATM network interface |
US5825772A (en) * | 1995-11-15 | 1998-10-20 | Cabletron Systems, Inc. | Distributed connection-oriented services for switched communications networks |
US5828653A (en) * | 1996-04-26 | 1998-10-27 | Cascade Communications Corp. | Quality of service priority subclasses |
US5831980A (en) * | 1996-09-13 | 1998-11-03 | Lsi Logic Corporation | Shared memory fabric architecture for very high speed ATM switches |
US5838915A (en) * | 1995-06-21 | 1998-11-17 | Cisco Technology, Inc. | System for buffering data in the network having a linked list for each of said plurality of queues |
US5845081A (en) * | 1996-09-03 | 1998-12-01 | Sun Microsystems, Inc. | Using objects to discover network information about a remote network having a different network protocol |
US5887187A (en) * | 1993-10-20 | 1999-03-23 | Lsi Logic Corporation | Single chip network adapter apparatus |
US5892922A (en) * | 1997-02-28 | 1999-04-06 | 3Com Corporation | Virtual local area network memory access system |
US5909686A (en) * | 1997-06-30 | 1999-06-01 | Sun Microsystems, Inc. | Hardware-assisted central processing unit access to a forwarding database |
US5987507A (en) * | 1998-05-28 | 1999-11-16 | 3Com Technologies | Multi-port communication network device including common buffer memory with threshold control of port packet counters |
US6011795A (en) * | 1997-03-20 | 2000-01-04 | Washington University | Method and apparatus for fast hierarchical address lookup using controlled expansion of prefixes |
US6041053A (en) * | 1997-09-18 | 2000-03-21 | Microsfot Corporation | Technique for efficiently classifying packets using a trie-indexed hierarchy forest that accommodates wildcards |
US6061351A (en) * | 1997-02-14 | 2000-05-09 | Advanced Micro Devices, Inc. | Multicopy queue structure with searchable cache area |
US6119196A (en) * | 1997-06-30 | 2000-09-12 | Sun Microsystems, Inc. | System having multiple arbitrating levels for arbitrating access to a shared memory by network ports operating at different data rates |
US6175902B1 (en) * | 1997-12-18 | 2001-01-16 | Advanced Micro Devices, Inc. | Method and apparatus for maintaining a time order by physical ordering in a memory |
US6185185B1 (en) * | 1997-11-21 | 2001-02-06 | International Business Machines Corporation | Methods, systems and computer program products for suppressing multiple destination traffic in a computer network |
US6292492B1 (en) * | 1998-05-20 | 2001-09-18 | Csi Zeitnet (A Cabletron Systems Company) | Efficient method and apparatus for allocating memory space used for buffering cells received on several connections in an asynchronous transfer mode (ATM) switch |
US6389513B1 (en) * | 1998-05-13 | 2002-05-14 | International Business Machines Corporation | Disk block cache management for a distributed shared memory computer system |
US20020059165A1 (en) * | 1999-12-02 | 2002-05-16 | Ants Software | Lock-free list |
US6425032B1 (en) * | 1999-04-15 | 2002-07-23 | Lucent Technologies Inc. | Bus controller handling a dynamically changing mix of multiple nonpre-emptable periodic and aperiodic devices |
US6430666B1 (en) * | 1998-08-24 | 2002-08-06 | Motorola, Inc. | Linked list memory and method therefor |
US20020120664A1 (en) * | 2000-11-17 | 2002-08-29 | Horn Robert L. | Scalable transaction processing pipeline |
US20020176357A1 (en) * | 2000-10-03 | 2002-11-28 | Altima Communications, Inc. | Switch having flow control management |
US6614793B1 (en) * | 1998-10-06 | 2003-09-02 | Stmicroelectronics Limited | Device for segmentation and transmission of messages stored as blocks of variable length |
US6625591B1 (en) * | 2000-09-29 | 2003-09-23 | Emc Corporation | Very efficient in-memory representation of large file system directories |
US6735207B1 (en) * | 2000-06-13 | 2004-05-11 | Cisco Technology, Inc. | Apparatus and method for reducing queuing memory access cycles using a distributed queue structure |
US6820086B1 (en) * | 1996-10-18 | 2004-11-16 | Hewlett-Packard Development Company, L.P. | Forming linked lists using content addressable memory |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4899334A (en) | 1987-10-19 | 1990-02-06 | Oki Electric Industry Co., Ltd. | Self-routing multistage switching network for fast packet switching system |
US5253248A (en) | 1990-07-03 | 1993-10-12 | At&T Bell Laboratories | Congestion control for connectionless traffic in data networks via alternate routing |
GB9023867D0 (en) | 1990-11-02 | 1990-12-12 | Mv Ltd | Improvements relating to a fault tolerant storage system |
JPH04189023A (en) | 1990-11-22 | 1992-07-07 | Victor Co Of Japan Ltd | Pulse synchronizing circuit |
JPH04214290A (en) | 1990-12-12 | 1992-08-05 | Mitsubishi Electric Corp | Semiconductor memory device |
JPH05183828A (en) | 1991-12-27 | 1993-07-23 | Sony Corp | Electronic equipment |
US5499295A (en) | 1993-08-31 | 1996-03-12 | Ericsson Inc. | Method and apparatus for feature authorization and software copy protection in RF communications devices |
US5579301A (en) | 1994-02-28 | 1996-11-26 | Micom Communications Corp. | System for, and method of, managing voice congestion in a network environment |
FR2725573B1 (en) | 1994-10-11 | 1996-11-15 | Thomson Csf | METHOD AND DEVICE FOR CONTROLLING CONGESTION OF SPORADIC EXCHANGES OF DATA PACKETS IN A DIGITAL TRANSMISSION NETWORK |
US5664116A (en) | 1995-07-07 | 1997-09-02 | Sun Microsystems, Inc. | Buffering of data for transmission in a computer communication system interface |
US5940596A (en) | 1996-03-25 | 1999-08-17 | I-Cube, Inc. | Clustered address caching system for a network switch |
US5802052A (en) | 1996-06-26 | 1998-09-01 | Level One Communication, Inc. | Scalable high performance switch element for a shared memory packet or ATM cell switch fabric |
US5898687A (en) | 1996-07-24 | 1999-04-27 | Cisco Systems, Inc. | Arbitration mechanism for a multicast logic engine of a switching fabric circuit |
GB9618132D0 (en) | 1996-08-30 | 1996-10-09 | Sgs Thomson Microelectronics | Improvements in or relating to an ATM switch |
US5842038A (en) | 1996-10-10 | 1998-11-24 | Unisys Corporation | Optimized input/output memory access request system and method |
JP3123447B2 (en) | 1996-11-13 | 2001-01-09 | 日本電気株式会社 | Switch control circuit of ATM exchange |
EP0849917B1 (en) | 1996-12-20 | 2005-07-20 | International Business Machines Corporation | Switching system |
US6233246B1 (en) | 1996-12-30 | 2001-05-15 | Compaq Computer Corporation | Network switch with statistics read accesses |
DE19703833A1 (en) | 1997-02-01 | 1998-08-06 | Philips Patentverwaltung | Coupling device |
US6452933B1 (en) | 1997-02-07 | 2002-09-17 | Lucent Technologies Inc. | Fair queuing system with adaptive bandwidth redistribution |
US5920566A (en) | 1997-06-30 | 1999-07-06 | Sun Microsystems, Inc. | Routing in a multi-layer distributed network element |
US6246680B1 (en) | 1997-06-30 | 2001-06-12 | Sun Microsystems, Inc. | Highly integrated multi-layer switch element architecture |
US6115378A (en) | 1997-06-30 | 2000-09-05 | Sun Microsystems, Inc. | Multi-layer distributed network element |
US6088356A (en) | 1997-06-30 | 2000-07-11 | Sun Microsystems, Inc. | System and method for a multi-layer network element |
US6014380A (en) | 1997-06-30 | 2000-01-11 | Sun Microsystems, Inc. | Mechanism for packet field replacement in a multi-layer distributed network element |
US6021132A (en) | 1997-06-30 | 2000-02-01 | Sun Microsystems, Inc. | Shared memory management in a switched network element |
US6094435A (en) | 1997-06-30 | 2000-07-25 | Sun Microsystems, Inc. | System and method for a quality of service in a multi-layer network element |
US6016310A (en) | 1997-06-30 | 2000-01-18 | Sun Microsystems, Inc. | Trunking support in a high performance network device |
US5918074A (en) | 1997-07-25 | 1999-06-29 | Neonet Llc | System architecture for and method of dual path data processing and management of packets and/or cells and the like |
JP2959539B2 (en) | 1997-10-01 | 1999-10-06 | 日本電気株式会社 | Buffer control method and device |
- 2001-05-16 US US09/855,670 patent/US6988177B2/en not_active Expired - Lifetime
- 2005-06-13 US US11/150,152 patent/US20050235129A1/en not_active Abandoned
Patent Citations (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5423015A (en) * | 1988-10-20 | 1995-06-06 | Chung; David S. F. | Memory structure and method for shuffling a stack of data utilizing buffer memory locations |
US5159678A (en) * | 1990-06-11 | 1992-10-27 | Supercomputer Systems Limited Partnership | Method for efficient non-virtual main memory management |
US5524254A (en) * | 1992-01-10 | 1996-06-04 | Digital Equipment Corporation | Scheme for interlocking line card to an address recognition engine to support plurality of routing and bridging protocols by using network information look-up database |
US5303302A (en) * | 1992-06-18 | 1994-04-12 | Digital Equipment Corporation | Network packet receiver with buffer logic for reassembling interleaved data packets |
US5390173A (en) * | 1992-10-22 | 1995-02-14 | Digital Equipment Corporation | Packet format in hub for packet data communications system |
US5414704A (en) * | 1992-10-22 | 1995-05-09 | Digital Equipment Corporation | Address lookup in packet data communications link, using hashing and content-addressable memory |
US5696899A (en) * | 1992-11-18 | 1997-12-09 | Canon Kabushiki Kaisha | Method and apparatus for adaptively determining the format of data packets carried on a local area network |
US5473607A (en) * | 1993-08-09 | 1995-12-05 | Grand Junction Networks, Inc. | Packet filtering for data networks |
US5887187A (en) * | 1993-10-20 | 1999-03-23 | Lsi Logic Corporation | Single chip network adapter apparatus |
US5802287A (en) * | 1993-10-20 | 1998-09-01 | Lsi Logic Corporation | Single chip universal protocol multi-function ATM network interface |
US5459717A (en) * | 1994-03-25 | 1995-10-17 | Sprint International Communications Corporation | Method and apparatus for routing messagers in an electronic messaging system |
US5555398A (en) * | 1994-04-15 | 1996-09-10 | Intel Corporation | Write back cache coherency module for systems with a write through cache supporting bus |
US5568477A (en) * | 1994-12-20 | 1996-10-22 | International Business Machines Corporation | Multipurpose packet switching node for a data communication network |
US5790539A (en) * | 1995-01-26 | 1998-08-04 | Chao; Hung-Hsiang Jonathan | ASIC chip for implementing a scaleable multicast ATM switch |
US5644784A (en) * | 1995-03-03 | 1997-07-01 | Intel Corporation | Linear list based DMA control structure |
US5838915A (en) * | 1995-06-21 | 1998-11-17 | Cisco Technology, Inc. | System for buffering data in the network having a linked list for each of said plurality of queues |
US5825772A (en) * | 1995-11-15 | 1998-10-20 | Cabletron Systems, Inc. | Distributed connection-oriented services for switched communications networks |
US5781549A (en) * | 1996-02-23 | 1998-07-14 | Allied Telesyn International Corp. | Method and apparatus for switching data packets in a data network |
US5724358A (en) * | 1996-02-23 | 1998-03-03 | Zeitnet, Inc. | High speed packet-switched digital switch and method |
US5828653A (en) * | 1996-04-26 | 1998-10-27 | Cascade Communications Corp. | Quality of service priority subclasses |
US5748631A (en) * | 1996-05-09 | 1998-05-05 | Maker Communications, Inc. | Asynchronous transfer mode cell processing system with multiple cell source multiplexing |
US5787084A (en) * | 1996-06-05 | 1998-07-28 | Compaq Computer Corporation | Multicast data communications switching system and associated method |
US5845081A (en) * | 1996-09-03 | 1998-12-01 | Sun Microsystems, Inc. | Using objects to discover network information about a remote network having a different network protocol |
US5831980A (en) * | 1996-09-13 | 1998-11-03 | Lsi Logic Corporation | Shared memory fabric architecture for very high speed ATM switches |
US6820086B1 (en) * | 1996-10-18 | 2004-11-16 | Hewlett-Packard Development Company, L.P. | Forming linked lists using content addressable memory |
US6061351A (en) * | 1997-02-14 | 2000-05-09 | Advanced Micro Devices, Inc. | Multicopy queue structure with searchable cache area |
US5892922A (en) * | 1997-02-28 | 1999-04-06 | 3Com Corporation | Virtual local area network memory access system |
US6011795A (en) * | 1997-03-20 | 2000-01-04 | Washington University | Method and apparatus for fast hierarchical address lookup using controlled expansion of prefixes |
US5909686A (en) * | 1997-06-30 | 1999-06-01 | Sun Microsystems, Inc. | Hardware-assisted central processing unit access to a forwarding database |
US6119196A (en) * | 1997-06-30 | 2000-09-12 | Sun Microsystems, Inc. | System having multiple arbitrating levels for arbitrating access to a shared memory by network ports operating at different data rates |
US6041053A (en) * | 1997-09-18 | 2000-03-21 | Microsoft Corporation | Technique for efficiently classifying packets using a trie-indexed hierarchy forest that accommodates wildcards |
US6185185B1 (en) * | 1997-11-21 | 2001-02-06 | International Business Machines Corporation | Methods, systems and computer program products for suppressing multiple destination traffic in a computer network |
US6175902B1 (en) * | 1997-12-18 | 2001-01-16 | Advanced Micro Devices, Inc. | Method and apparatus for maintaining a time order by physical ordering in a memory |
US6389513B1 (en) * | 1998-05-13 | 2002-05-14 | International Business Machines Corporation | Disk block cache management for a distributed shared memory computer system |
US6292492B1 (en) * | 1998-05-20 | 2001-09-18 | Csi Zeitnet (A Cabletron Systems Company) | Efficient method and apparatus for allocating memory space used for buffering cells received on several connections in an asynchronous transfer mode (ATM) switch |
US5987507A (en) * | 1998-05-28 | 1999-11-16 | 3Com Technologies | Multi-port communication network device including common buffer memory with threshold control of port packet counters |
US6430666B1 (en) * | 1998-08-24 | 2002-08-06 | Motorola, Inc. | Linked list memory and method therefor |
US6614793B1 (en) * | 1998-10-06 | 2003-09-02 | Stmicroelectronics Limited | Device for segmentation and transmission of messages stored as blocks of variable length |
US6425032B1 (en) * | 1999-04-15 | 2002-07-23 | Lucent Technologies Inc. | Bus controller handling a dynamically changing mix of multiple nonpre-emptable periodic and aperiodic devices |
US20020059165A1 (en) * | 1999-12-02 | 2002-05-16 | Ants Software | Lock-free list |
US6735207B1 (en) * | 2000-06-13 | 2004-05-11 | Cisco Technology, Inc. | Apparatus and method for reducing queuing memory access cycles using a distributed queue structure |
US6625591B1 (en) * | 2000-09-29 | 2003-09-23 | Emc Corporation | Very efficient in-memory representation of large file system directories |
US20020176357A1 (en) * | 2000-10-03 | 2002-11-28 | Altima Communications, Inc. | Switch having flow control management |
US20020120664A1 (en) * | 2000-11-17 | 2002-08-29 | Horn Robert L. | Scalable transaction processing pipeline |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030236904A1 (en) * | 2002-06-19 | 2003-12-25 | Jonathan Walpole | Priority progress multicast streaming for quality-adaptive transmission of data |
US20040179533A1 (en) * | 2003-03-13 | 2004-09-16 | Alcatel | Dynamic assignment of re-assembly queues |
US7420983B2 (en) * | 2003-03-13 | 2008-09-02 | Alcatel Lucent | Dynamic assignment of re-assembly queues |
US7035988B1 (en) * | 2003-03-17 | 2006-04-25 | Network Equipment Technologies, Inc. | Hardware implementation of an N-way dynamic linked list |
US20050083930A1 (en) * | 2003-10-20 | 2005-04-21 | Jen-Kai Chen | Method of early buffer release and associated MAC controller |
US20060193260A1 (en) * | 2005-02-24 | 2006-08-31 | George H A | Preemptive packet flow controller |
US7870311B2 (en) * | 2005-02-24 | 2011-01-11 | Wind River Systems, Inc. | Preemptive packet flow controller |
US9262554B1 (en) | 2010-02-16 | 2016-02-16 | Pmc-Sierra Us, Inc. | Management of linked lists within a dynamic queue system |
WO2011140515A1 (en) * | 2010-05-07 | 2011-11-10 | Qualcomm Incorporated | Linked-list management of LLR memory |
Also Published As
Publication number | Publication date |
---|---|
US20020042787A1 (en) | 2002-04-11 |
US6988177B2 (en) | 2006-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7656907B2 (en) | Method and apparatus for reducing clock speed and power consumption | |
US20050235129A1 (en) | Switch memory management using a linked list structure | |
US6851000B2 (en) | Switch having flow control management | |
EP1313291B1 (en) | Apparatus and method for header processing | |
US6842457B1 (en) | Flexible DMA descriptor support | |
US7339938B2 (en) | Linked network switch configuration | |
US6430626B1 (en) | Network switch with a multiple bus structure and a bridge interface for transferring network data between different buses | |
EP1181792B1 (en) | Stacked network switch configuration | |
US7764674B2 (en) | Address resolution snoop support for CPU | |
US20080247394A1 (en) | Cluster switching architecture | |
EP1195955B1 (en) | Switch transferring data using data encapsulation and decapsulation | |
US6907036B1 (en) | Network switch enhancements directed to processing of internal operations in the network switch | |
US6084878A (en) | External rules checker interface | |
US7120155B2 (en) | Switch having virtual shared memory | |
US7420977B2 (en) | Method and apparatus of inter-chip bus shared by message passing and memory access | |
US7031302B1 (en) | High-speed stats gathering in a network switch | |
EP1338974A2 (en) | Method and apparatus of inter-chip bus shared by message passing and memory access | |
EP1248415B1 (en) | Switch having virtual shared memory | |
EP1212867B1 (en) | Constructing an address table in a network switch |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA. Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001. Effective date: 20160201 |
AS | Assignment | Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001. Effective date: 20170120 |
AS | Assignment | Owner name: BROADCOM CORPORATION, CALIFORNIA. Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001. Effective date: 20170119 |