CN115086253A - Ethernet switching chip and high-bandwidth message forwarding method

Ethernet switching chip and high-bandwidth message forwarding method

Info

Publication number
CN115086253A
Authority
CN
China
Prior art keywords
message
loopback
processing engine
direction processing
port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210685527.7A
Other languages
Chinese (zh)
Other versions
CN115086253B (en)
Inventor
何志川
赵仕中
钱超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Centec Communications Co Ltd
Original Assignee
Suzhou Centec Communications Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Centec Communications Co Ltd filed Critical Suzhou Centec Communications Co Ltd
Priority to CN202210685527.7A
Publication of CN115086253A
Application granted
Publication of CN115086253B
Legal status: Active

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/35 - Switches specially adapted for specific applications
    • H04L49/351 - Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04L45/00 - Routing or path finding of packets in data switching networks
    • H04L45/54 - Organization of routing tables
    • H04L45/74 - Address processing for routing
    • H04L45/745 - Address table lookup; Address filtering
    • H04L69/00 - Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 - Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides an Ethernet switching chip and a high-bandwidth message forwarding method. In the switching chip, the outgoing direction processing engine is further connected to the incoming direction processing engine through a plurality of loopback channels that form aggregation groups. When an original message arrives, it is analyzed by the incoming direction processing engine, subjected to priority scheduling by the cache scheduling engine, and then sent to the outgoing direction processing engine. When the outgoing direction processing engine receives the message and its destination port is a loopback port, it determines a target channel according to the message content and the port information of the source port, loops the message back to the incoming direction processing engine, and the looped-back message is then forwarded normally. In this scheme, a plurality of loopback channels forming a plurality of aggregation groups are arranged between the outgoing direction processing engine and the incoming direction processing engine; combined with the channel determination mechanism of the outgoing direction processing engine, the loopback load can be shared and the loopback bandwidth can be increased while message loopback is still achieved.

Description

Ethernet switching chip and high-bandwidth message forwarding method
Technical Field
The invention relates to the technical field of network communication, in particular to an Ethernet switching chip and a high-bandwidth message forwarding method.
Background
The switching chip is one of the core chips of a switch and determines the performance of the switch. The main function of a switch is to provide high-performance, low-latency switching within a subnetwork, and this high-performance switching is mainly carried out by the switching chip.
In existing switching chips, a service to be processed can only be looped back through a single designated loopback channel. With this prior-art approach, if the bandwidth of the designated loopback channel is small while the bandwidth required by the service is large, looping back only through that channel degrades the processing efficiency of the service. The structure and loopback handling of existing switching chips are therefore unfavorable for efficient message processing.
Disclosure of Invention
The objects of the present invention include, for example, providing an Ethernet switching chip and a high-bandwidth message forwarding method that can share loopback load and increase loopback bandwidth.
Embodiments of the invention may be implemented as follows:
in a first aspect, the present invention provides an Ethernet switching chip comprising an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine, wherein the incoming direction processing engine is connected with the cache scheduling engine, the cache scheduling engine is connected with the outgoing direction processing engine, the outgoing direction processing engine is further connected with the incoming direction processing engine through a plurality of loopback channels, and the plurality of loopback channels form a plurality of aggregation groups;
the incoming direction processing engine is used for analyzing an original message when the original message is received, and sending the original message and analysis information of the original message to the cache scheduling engine;
the cache scheduling engine is used for caching the received messages and the analysis information, and sequentially sending the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing;
the outbound direction processing engine is used for determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port when the message is received and the destination port of the message is a loopback port, and looping the message back to the inbound direction processing engine through the target channel;
the incoming direction processing engine is also used for analyzing the loopback message when the loopback message is obtained, and sending the loopback message and analysis information of the loopback message to the cache scheduling engine;
and the outbound direction processing engine is also used for sending the message through a network channel when the message is received and the destination port of the message is the device port of the next hop device.
In an alternative embodiment, each aggregation group includes at least one loopback channel;
the outbound direction processing engine is configured to:
determining the service type of the message according to the message content of the message, determining the aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
In an alternative embodiment, the outbound direction processing engine is configured to:
when a message is received, the message is edited, CRC operation is carried out according to the edited message content and port information of a source port to obtain a hash value, and a target channel is determined from loopback channels included in a matched aggregation group according to the hash value.
In an alternative embodiment, the outbound direction processing engine is configured to:
when receiving the message, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is externally packaged, so that the message is edited.
In an optional embodiment, the parsing information of the original packet includes a first destination port, the parsing information of the loopback packet includes a second destination port, and the ingress direction processing engine is configured to:
when an original message is received, searching a forwarding table according to a source port of the original message to obtain a first destination port of the original message, wherein the first destination port is a loopback port; and/or
When a loopback message is received, searching a forwarding table according to a source port of the loopback message to obtain a second destination port of the loopback message, wherein the second destination port is an equipment port of next hop equipment;
when the port type of the source port of the original message and/or the loopback message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
In an optional embodiment, the cache scheduling engine is configured to:
and for the plurality of cached messages and the analysis information, obtaining priority information of each message, carrying out priority scheduling processing according to the sequence of the priority from high to low, and then sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine.
In an optional embodiment, a common loopback channel is provided between the outbound direction processing engine and the inbound direction processing engine, where the common loopback channel is one of the multiple loopback channels, or the common loopback channel is a channel other than the multiple loopback channels;
the outbound direction processing engine is further configured to:
and when the message is received and the destination port of the message is a loopback port, detecting whether the loopback port enables the aggregation group, if the aggregation group is enabled, executing the step of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message and looping back the message to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, looping back the message to the incoming direction processing engine through the common loopback channel.
In a second aspect, the present invention provides a high bandwidth packet forwarding method, applied to an ethernet switching chip, where the ethernet switching chip includes an ingress direction processing engine, a cache scheduling engine, and an egress direction processing engine, the ingress direction processing engine is connected to the cache scheduling engine, the cache scheduling engine is connected to the egress direction processing engine, the egress direction processing engine is further connected to the ingress direction processing engine through a plurality of loopback channels, and the plurality of loopback channels form a plurality of aggregation groups, where the method includes:
when receiving an original message, the incoming direction processing engine analyzes the original message and sends the original message and analysis information of the original message to the cache scheduling engine;
the cache scheduling engine caches the received messages and the analysis information, and sequentially sends the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing;
when the outbound direction processing engine receives the message and the destination port of the message is a loopback port, determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port, and looping the message back to the inbound direction processing engine through the target channel;
when the inbound direction processing engine obtains the loopback message, analyzing the loopback message, sending the loopback message and analysis information of the loopback message to the cache scheduling engine, and sending the loopback message to the outbound direction processing engine through the cache scheduling engine;
and the outbound direction processing engine sends the message through a network channel when receiving the message and the destination port of the message is the equipment port of the next hop equipment.
In an alternative embodiment, each aggregation group includes at least one loopback channel;
the step of determining a target channel from the plurality of loopback channels according to the message content of the message and the source port comprises:
determining the service type of the message according to the message content of the message, determining the aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
In an optional embodiment, the step of determining a target channel from loopback channels included in the matched aggregation group according to the packet content and the port information of the source port includes:
when a message is received, the message is edited, CRC operation is carried out according to the edited message content and port information of a source port to obtain a hash value, and a target channel is determined from loopback channels included in a matched aggregation group according to the hash value.
The beneficial effects of the embodiment of the invention include, for example:
the utility model provides an Ethernet exchange chip and a high bandwidth message forwarding method, the exchange chip comprises an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine which are connected in sequence, and the outgoing direction processing engine is also connected with the incoming direction processing engine through a plurality of loopback channels forming a polymerization group. When an original message arrives, the original message is sent to an outgoing direction processing engine after being analyzed by an incoming direction processing engine and scheduled and processed by the priority of a cache scheduling engine, and when the outgoing direction processing engine receives the message and a target port is a loopback port, a target channel is determined from a plurality of loopback channels according to the message content of the message and port information of a source port, the message is looped back to the incoming direction processing engine, and then the looped message is normally forwarded. In the scheme, a plurality of loopback channels forming an aggregation group are arranged between an outgoing direction processing engine and an incoming direction processing engine, and a channel determination mechanism of the outgoing direction processing engine is combined, so that loopback load can be shared and loopback bandwidth can be improved on the basis of successfully realizing message loopback.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a block diagram of an ethernet switch chip in the prior art;
fig. 2 is a block diagram of an ethernet switching chip according to an embodiment of the present disclosure;
fig. 3 is a second block diagram of an ethernet switch chip according to the embodiment of the present application;
fig. 4 is a third block diagram of an ethernet switch chip according to the embodiment of the present application;
fig. 5 is a flowchart of a high bandwidth message forwarding method according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
Furthermore, the appearances of the terms "first," "second," and the like, if any, are used solely to distinguish one from another and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
Fig. 1 is a block diagram of an ethernet switching chip used in the prior art. As shown in fig. 1, the ethernet switching chip includes an ingress direction processing engine, a cache scheduling engine, and an egress direction processing engine. Wherein the outbound direction processing engine and the inbound direction processing engine are connected by a single loopback channel.
In the prior-art Ethernet switching chip structure, a message of one service type can only be looped back through this single loopback channel when the message is forwarded. Therefore, when the bandwidth required by the message is large, for example 200G, while the bandwidth of the single loopback channel is small, for example 100G, the single loopback channel cannot meet the service requirement and the loopback processing efficiency of the message suffers.
Based on the above research findings, the present application provides an ethernet switching chip, in which multiple loopback channels forming an aggregation group are arranged between an egress processing engine and an ingress processing engine, and a channel determination mechanism of the egress processing engine is combined, so that a loopback load can be shared and a loopback bandwidth can be increased on the basis of successfully implementing a message loopback.
Fig. 2 is a block diagram of an Ethernet switching chip according to an embodiment of the present application. In this embodiment, the Ethernet switching chip includes an incoming direction processing engine (ingress processing engine, IPE), a cache scheduling engine (buffer scheduling engine, BSR), and an outgoing direction processing engine (egress processing engine, EPE). The incoming direction processing engine is connected with the cache scheduling engine, and the cache scheduling engine is connected with the outgoing direction processing engine. The outgoing direction processing engine is also connected with the incoming direction processing engine through a plurality of loopback channels, and the plurality of loopback channels form a plurality of aggregation groups (Channel Agg). Fig. 2 schematically shows only three loopback channels; practical applications are not limited to this number.
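As an illustration of the topology just described, the following C sketch models the loopback channels and aggregation groups between the EPE and the IPE. All structure and field names here (loopback_channel, channel_agg, switch_chip_cfg, the size limits) are assumptions made for this example and are not defined by the patent.

```c
/* Illustrative sketch (not from the patent): one possible way to model the
 * loopback channels and aggregation groups (Channel Agg) that connect the
 * egress processing engine (EPE) back to the ingress engine (IPE). */
#include <stdint.h>

#define MAX_CHANNELS_PER_GROUP 8
#define MAX_AGG_GROUPS         4

struct loopback_channel {
    uint8_t  channel_id;     /* number of the channel inside its group   */
    uint32_t bandwidth_gbps; /* e.g. 100 for a 100G loopback channel     */
};

struct channel_agg {
    uint8_t  group_id;                             /* aggregation group id   */
    uint8_t  num_channels;                         /* >= 1 channel per group */
    struct loopback_channel channels[MAX_CHANNELS_PER_GROUP];
};

struct switch_chip_cfg {
    struct channel_agg agg_groups[MAX_AGG_GROUPS]; /* plurality of groups    */
    uint8_t  num_agg_groups;
    int      common_loopback_channel;              /* fallback, -1 if absent */
};
```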
In this embodiment, when receiving the original message, the incoming direction processing engine analyzes the original message and sends the original message and its analysis information to the cache scheduling engine. The analysis information of the original message includes a first destination port of the original message, and the first destination port is a loopback port.
The cache scheduling engine is used for caching the received messages and the analysis information, and sequentially sending the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing. The cache scheduling engine may be configured to cache the received original packet, and may also cache the received loopback packet. When the inbound processing engine continuously sends the message to the cache scheduling engine, the cache scheduling engine can realize the caching of the message, and can schedule the message according to the priority to send the message to the outbound processing engine for processing.
The outbound direction processing engine is used for determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port when the message is received and the destination port of the message is the loopback port, and looping back the message to the inbound direction processing engine through the target channel. That is, the first destination port of the original packet is a loopback port, so that when the outbound direction processing engine receives the packet for the first time, that is, receives the original packet, the destination channel can be determined for the original packet, and the original packet is looped back, and the packet looped back to the inbound direction processing engine is called a loopback packet.
The incoming direction processing engine is also used for analyzing the loopback message when the loopback message is obtained, and sending the loopback message and the analysis information of the loopback message to the cache scheduling engine. The analysis information of the loopback message includes a second destination port, and the second destination port is an equipment port of next-hop equipment.
As can be seen from the above, the cache scheduling engine caches the messages and the analysis information thereof for all the received messages, including the original message received for the first time and the loopback message received for the second time, and sequentially sends the messages and the analysis information to the outgoing direction processing engine after the priority scheduling processing.
And the outbound direction processing engine is also used for sending the message through the network channel when the message is received and the destination port of the message is the device port of the next hop device. As can be seen from the above, the destination port of the loopback message is the device port of the next hop device, so that the outbound direction processing engine sends the message from the network channel when the received message is the loopback message.
In the Ethernet switching chip provided in this embodiment, a plurality of loopback channels are provided between the outgoing direction processing engine and the incoming direction processing engine, and these loopback channels are bound into aggregation groups. Combined with the channel determination mechanism of the outgoing direction processing engine, service traffic can be distributed across the multiple loopback channels while message loopback is still achieved, which increases the loopback bandwidth and realizes loopback load sharing.
The Ethernet switching chip provided by this embodiment is particularly suitable for Ethernet environments with high data-transmission requirements, such as data center networks and industrial networks.
In this embodiment, when the original message arrives, the incoming direction processing engine searches a forwarding table according to the source port of the original message and obtains the first destination port of the original message. As noted above, the first destination port of the original message is a loopback port.
In the process, if the port type of the source port of the original message is a three-layer interface and the MAC address is a routing MAC, the forwarding table searched is a routing table, otherwise the forwarding table searched is a two-layer forwarding table.
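A minimal sketch of the table-selection rule above is given below; route_lookup() and fdb_lookup() are hypothetical stand-ins for the routing-table and two-layer forwarding-table (FDB) searches, not APIs defined by the patent.

```c
/* Sketch only: choosing which forwarding table to search, as described above.
 * route_lookup() and fdb_lookup() are hypothetical placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t port_id_t;

enum port_type { PORT_TYPE_L2, PORT_TYPE_L3_INTERFACE };

static port_id_t route_lookup(const uint8_t *msg, size_t len) { (void)msg; (void)len; return 1; }
static port_id_t fdb_lookup(const uint8_t *msg, size_t len)   { (void)msg; (void)len; return 2; }

/* Search the routing table when the source port is a three-layer interface and
 * the MAC address is the routing MAC; otherwise search the two-layer FDB. */
static port_id_t resolve_dest_port(enum port_type src_port_type,
                                   bool mac_is_routing_mac,
                                   const uint8_t *msg, size_t len)
{
    if (src_port_type == PORT_TYPE_L3_INTERFACE && mac_is_routing_mac)
        return route_lookup(msg, len);
    return fdb_lookup(msg, len);
}
```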
The incoming direction processing engine sends the original message, together with the analysis information that includes its destination port, to the cache scheduling engine. When performing priority scheduling on the cached messages and their analysis information, the cache scheduling engine obtains the priority information of each message and then sends each message and its corresponding analysis information to the outgoing direction processing engine in order of priority, from highest to lowest.
In this embodiment, the priority information of each message may be preset; for example, the priority may be set according to the service type of the message, the size of the message, and the like.
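A minimal sketch of this cache-and-schedule step follows, assuming a simple sort on a preset priority field; the structure and function names are illustrative and not taken from the patent.

```c
/* Minimal sketch (assumed structure, not the patent's implementation) of the
 * cache-and-schedule step: buffered messages are handed to the outgoing
 * direction processing engine in order of descending priority. */
#include <stdint.h>
#include <stdlib.h>

struct buffered_msg {
    uint8_t  priority;    /* preset per message, e.g. by service type or size */
    void    *data;        /* message content                                  */
    void    *parse_info;  /* analysis information from the ingress engine     */
};

static int by_priority_desc(const void *a, const void *b)
{
    const struct buffered_msg *ma = a, *mb = b;
    return (int)mb->priority - (int)ma->priority;   /* higher priority first */
}

/* Order the cached messages so they can be sent to the egress engine from
 * highest priority to lowest. */
static void schedule_by_priority(struct buffered_msg *buf, size_t n)
{
    qsort(buf, n, sizeof(*buf), by_priority_desc);
}
```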
In this embodiment, a plurality of loopback channels between the egress processing engine and the ingress processing engine are divided into a plurality of aggregation groups, and each aggregation group includes at least one loopback channel. Wherein each aggregation group has a group id.
After receiving the message and its analysis information from the cache scheduling engine, the outbound direction processing engine learns that the destination port is a loopback port. A common loopback channel is also provided between the outgoing direction processing engine and the incoming direction processing engine; the common loopback channel is either one of the plurality of loopback channels, as shown in fig. 3, or a channel other than the plurality of loopback channels, as shown in fig. 4.
When the outgoing direction processing engine receives a message whose destination port is a loopback port, it detects whether the loopback port has the aggregation group enabled. If the aggregation group is enabled, a target channel is determined from the plurality of loopback channels according to the message content and the source port of the message, and the message is looped back to the incoming direction processing engine through the target channel. If the aggregation group is not enabled, the message is looped back to the incoming direction processing engine through the common loopback channel.
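The decision just described might be sketched as follows. All names here (pick_loopback_channel, select_channel_by_hash, the parameters) are placeholders for illustration; the hash-based selection itself is sketched further below.

```c
/* Sketch of the egress-side decision for a message whose destination port is
 * a loopback port. All function and parameter names are placeholders. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hash-based selection within the matched aggregation group. */
int select_channel_by_hash(const uint8_t *msg, size_t len, uint16_t src_port);

int pick_loopback_channel(bool agg_enabled_on_port,
                          int common_loopback_channel,
                          const uint8_t *msg, size_t len, uint16_t src_port)
{
    if (agg_enabled_on_port)
        /* Aggregation group enabled: spread the load over its member channels. */
        return select_channel_by_hash(msg, len, src_port);

    /* Aggregation group not enabled: fall back to the common loopback channel. */
    return common_loopback_channel;
}
```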
In this embodiment, a common loopback channel is further disposed between the outbound direction processing engine and the inbound direction processing engine, so that loopback processing of the message can still be carried out even when aggregation is not enabled.
In this embodiment, when the loopback port has the aggregation group enabled, the egress direction processing engine determines the service type of the message according to the message content of the original message, determines an aggregation group whose service type matches from the multiple aggregation groups, and determines the target channel from the loopback channels included in the matched aggregation group according to the message content and the source port.
In this embodiment, messages of different service types are looped back through different aggregation groups; for example, messages of one service type may correspond to one aggregation group, or messages of several service types may share one aggregation group. In short, messages belonging to the same service type can be looped back through the same aggregation group. The loopback bandwidth can therefore be increased through the multiple loopback channels aggregated in the group, so that the message traffic load is shared.
In this embodiment, after the matched aggregation group has been determined based on the service type of the message, a target channel still needs to be determined from the loopback channels included in that aggregation group. Optionally, the outgoing direction processing engine in this embodiment may be configured to edit the message when it is received, perform a CRC (Cyclic Redundancy Check) operation on the edited message content and the port information of the source port to obtain a hash value, and determine the target channel from the loopback channels included in the matched aggregation group according to the hash value.
In this embodiment, when editing the message, the outbound direction processing engine may replace an IP address in the message, replace a MAC address in the message, or apply an outer encapsulation to the message.
A hash value is obtained by performing the CRC operation on the edited message content and the port information of the source port, and the matched aggregation group contains a plurality of loopback channels. Each loopback channel can be assigned a number within its group; a remainder (modulo) calculation is then performed on the hash value, and the target channel is determined by matching the remainder against the channel numbers.
Because messages of the same service type are similar in message content and in the port information of the source port, computing the hash value from the message content and the port information of the source port and then determining the target channel from that hash makes it more likely that messages of the same service type are assigned to the same loopback channel for loopback processing, which keeps the loopback handling consistent.
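For illustration, the hash-and-remainder selection described above could look like the sketch below. The specific CRC-32 polynomial (0xEDB88320, reflected) and the byte layout of the port information are assumptions of this sketch and are not specified by the patent.

```c
/* Illustrative only: CRC-32 over the edited message content plus the source
 * port, followed by a remainder over the group's channel count. */
#include <stddef.h>
#include <stdint.h>

static uint32_t crc32_update(uint32_t crc, const uint8_t *data, size_t len)
{
    crc = ~crc;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Hash the edited message content together with the source-port information,
 * then take the remainder over the number of channels in the matched group
 * (num_channels must be >= 1). */
static unsigned pick_channel_in_group(const uint8_t *edited_msg, size_t msg_len,
                                      uint16_t src_port, unsigned num_channels)
{
    uint8_t port_bytes[2] = { (uint8_t)(src_port >> 8), (uint8_t)(src_port & 0xFF) };
    uint32_t h = crc32_update(0, edited_msg, msg_len);
    h = crc32_update(h, port_bytes, sizeof port_bytes);
    return h % num_channels;   /* channel number within the aggregation group */
}
```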
With the target channel determined, the outgoing direction processing engine loops the message back to the incoming direction processing engine through that channel, and the incoming direction processing engine receives the looped-back message from the target channel and analyzes it. As before, when the port type of the source port of the message is a three-layer interface and the MAC address is a routing MAC, the destination port is obtained by searching the routing table; otherwise it is obtained by searching the FDB table. At this point the lookup yields a normal forwarding behavior, and the destination port is the device port of the next-hop device. The incoming direction processing engine then sends the message and the analysis information containing its destination port to the cache scheduling engine.
When the cache scheduling engine receives the message and the analysis information thereof for the second time, the message and the analysis information are cached in the same way, and are sent to the outgoing direction processing engine after being subjected to priority scheduling processing.
When the outgoing direction processing engine receives the message for the second time, it finds that the destination port is the device port of the next-hop device; it then edits the message and sends the edited message out through the network channel of the switching chip.
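Putting the two passes together, the egress-side dispatch might be sketched as below. Every helper name (is_loopback_port, epe_loop_back, epe_send_to_network) is hypothetical and only mirrors the behaviour described above.

```c
/* Sketch of the two-pass behaviour of the outgoing direction processing
 * engine. The helper functions are hypothetical stand-ins. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct msg { const uint8_t *data; size_t len; uint16_t src_port; };

bool is_loopback_port(uint16_t port);      /* is the destination a loopback port?            */
void epe_loop_back(struct msg *m);         /* first pass: pick target channel, loop back     */
void epe_send_to_network(struct msg *m);   /* second pass: edit, send out the network channel */

void epe_process(struct msg *m, uint16_t dest_port)
{
    if (is_loopback_port(dest_port))
        epe_loop_back(m);           /* original message: loop back to the ingress engine     */
    else
        epe_send_to_network(m);     /* loopback message: forward to the next-hop device port */
}
```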
The embodiment of the present application further provides a high bandwidth message forwarding method, which is applied to the ethernet switching chip.
Referring to fig. 5, which is a flowchart of the high-bandwidth packet forwarding method provided in this embodiment, the method steps defined by the flow related to the forwarding method may be implemented by the ethernet switch chip. The specific flow shown in fig. 5 is explained below.
S101, when receiving an original message, the incoming direction processing engine analyzes the original message and sends the original message and the analysis information of the original message to the cache scheduling engine.
And S102, the cache scheduling engine caches the received messages and the analysis information, and sequentially sends the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing.
And S103, when the outbound direction processing engine receives the message and the destination port of the message is a loopback port, determining a target channel from the loopback channels according to the message content of the message and the port information of the source port, and looping the message back to the inbound direction processing engine through the target channel.
And S104, when the inbound direction processing engine obtains the loopback message, analyzing the loopback message, sending the loopback message and the analysis information of the loopback message to the cache scheduling engine, and sending the loopback message to the outbound direction processing engine through the cache scheduling engine.
And S105, the outgoing direction processing engine sends the message through a network channel when the message is received and the destination port of the message is the device port of the next hop device.
The high-bandwidth message forwarding method provided in this embodiment is applied to the Ethernet switching chip described above, and performs message loopback through the multiple loopback channels that form multiple aggregation groups between the outgoing direction processing engine and the incoming direction processing engine of the chip. The outgoing direction processing engine can determine a target channel from the multiple loopback channels based on the message content of the message and the port information of the source port, thereby looping the message back. On the basis of the high-bandwidth aggregation groups, this forwarding method achieves message loopback through the channel determination mechanism, avoids complicated loopback logic processing, and can increase the loopback bandwidth and share the loopback load.
In one possible implementation manner, the outbound direction processing engine, when determining the target channel, may implement the following:
determining the service type of the message according to the message content of the message, determining the aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
In a possible implementation manner, the step of determining, by the outbound direction processing engine, the target channel in the matched aggregation group may be implemented by:
when a message is received, the message is edited, CRC operation is carried out according to the edited message content and port information of a source port to obtain a hash value, and a target channel is determined from loopback channels included in a matched aggregation group according to the hash value.
In a possible implementation manner, the step of the outbound direction processing engine editing the packet may be implemented by the following manner:
when receiving the message, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is externally packaged, so that the message is edited.
In a possible implementation manner, the parsing information of the original packet includes a first destination port, the parsing information of the loopback packet includes a second destination port, and the step of parsing the original packet and/or the loopback packet by the ingress direction processing engine may be implemented by the following manners:
when an original message is received, searching a forwarding table according to a source port of the original message to obtain a first destination port of the original message, wherein the first destination port is a loopback port; and/or
When a loopback message is received, searching a forwarding table according to a source port of the loopback message to obtain a second destination port of the loopback message, wherein the second destination port is an equipment port of next hop equipment;
when the port type of the source port of the original message and/or the loopback message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
In a possible implementation manner, the step of performing, by the cache scheduling engine, priority scheduling processing may be implemented by:
and for the plurality of cached messages and the analysis information, obtaining priority information of each message, carrying out priority scheduling processing according to the sequence of the priority from high to low, and then sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine.
In a possible implementation manner, a common loopback channel is provided between the outbound direction processing engine and the inbound direction processing engine, where the common loopback channel is one of the multiple loopback channels, or the common loopback channel is a channel other than the multiple loopback channels, and the message forwarding method may further include the following steps:
the method comprises the steps that the outgoing direction processing engine detects whether a loopback port enables an aggregation group or not when a message is received and a target port of the message is the loopback port, if the aggregation group is enabled, a target channel is determined from a plurality of loopback channels according to message content and a source port of the message, the message is looped back to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, the message is looped back to the incoming direction processing engine through the common loopback channel.
For the description of the method steps of the high-bandwidth message forwarding method provided in this embodiment, reference may be made to the description related to the ethernet switching chip in the foregoing embodiment, and details are not described here.
In summary, the embodiments of the present invention provide an Ethernet switching chip and a high-bandwidth message forwarding method. The switching chip includes an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine connected in sequence, and the outgoing direction processing engine is further connected to the incoming direction processing engine through a plurality of loopback channels that form aggregation groups. When an original message arrives, it is analyzed by the incoming direction processing engine, subjected to priority scheduling by the cache scheduling engine, and then sent to the outgoing direction processing engine. When the outgoing direction processing engine receives the message and its destination port is a loopback port, it determines a target channel from the plurality of loopback channels according to the message content and the port information of the source port, loops the message back to the incoming direction processing engine, and the looped-back message is then forwarded normally. In this scheme, a plurality of loopback channels forming a plurality of aggregation groups are arranged between the outgoing direction processing engine and the incoming direction processing engine; combined with the channel determination mechanism of the outgoing direction processing engine, the loopback load can be shared and the loopback bandwidth can be increased while message loopback is still achieved.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An Ethernet switching chip, comprising an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine, wherein the incoming direction processing engine is connected with the cache scheduling engine, the cache scheduling engine is connected with the outgoing direction processing engine, the outgoing direction processing engine is further connected with the incoming direction processing engine through a plurality of loopback channels, and the plurality of loopback channels form a plurality of aggregation groups;
the incoming direction processing engine is used for analyzing an original message when the original message is received, and sending the original message and analysis information of the original message to the cache scheduling engine;
the cache scheduling engine is used for caching the received messages and the analysis information, and sequentially sending the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing;
the outbound direction processing engine is used for determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port when the message is received and the destination port of the message is a loopback port, and looping the message back to the inbound direction processing engine through the target channel;
the incoming direction processing engine is also used for analyzing the loopback message when the loopback message is obtained, and sending the loopback message and analysis information of the loopback message to the cache scheduling engine;
and the outbound direction processing engine is also used for sending the message through a network channel when the message is received and the destination port of the message is the device port of the next hop device.
2. The ethernet switching chip of claim 1, wherein each aggregation group comprises at least one loopback channel;
the outbound direction processing engine is configured to:
determining the service type of the message according to the message content of the message, determining the aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
3. The ethernet switching chip of claim 2, wherein the egress processing engine is configured to:
when a message is received, the message is edited, CRC operation is carried out according to the edited message content and port information of a source port to obtain a hash value, and a target channel is determined from loopback channels included in a matched aggregation group according to the hash value.
4. The ethernet switching chip of claim 3, wherein said egress processing engine is configured to:
when receiving the message, the IP address in the message is replaced, or the MAC address in the message is replaced, or the message is externally packaged, so that the message is edited.
5. The ethernet switching chip according to claim 1, wherein the parsing information of the original packet includes a first destination port, the parsing information of the loopback packet includes a second destination port, and the ingress direction processing engine is configured to:
when an original message is received, searching a forwarding table according to a source port of the original message to obtain a first destination port of the original message, wherein the first destination port is a loopback port; and/or
When a loopback message is received, searching a forwarding table according to a source port of the loopback message to obtain a second destination port of the loopback message, wherein the second destination port is an equipment port of next hop equipment;
when the port type of the source port of the original message and/or the loopback message is a three-layer interface and the MAC address is a routing MAC, the searched forwarding table is a routing table, otherwise, the searched forwarding table is a two-layer forwarding table.
6. The ethernet switching chip of claim 1, wherein the cache scheduling engine is configured to:
and for the plurality of cached messages and the analysis information, obtaining priority information of each message, carrying out priority scheduling processing according to the sequence of the priority from high to low, and then sequentially sending each message and the corresponding analysis information to the outgoing direction processing engine.
7. The ethernet switching chip according to claim 1, wherein a common loopback channel is provided between the outbound direction processing engine and the inbound direction processing engine, the common loopback channel is one of the plurality of loopback channels, or the common loopback channel is a channel other than the plurality of loopback channels;
the outbound direction processing engine is further configured to:
and when the message is received and the destination port of the message is a loopback port, detecting whether the loopback port enables the aggregation group, if the aggregation group is enabled, executing the step of determining a target channel from the plurality of loopback channels according to the message content and the source port of the message and looping back the message to the incoming direction processing engine through the target channel, and if the aggregation group is not enabled, looping back the message to the incoming direction processing engine through the common loopback channel.
8. A high bandwidth message forwarding method is characterized in that the method is applied to an Ethernet switch chip, the Ethernet switch chip comprises an incoming direction processing engine, a cache scheduling engine and an outgoing direction processing engine, the incoming direction processing engine is connected with the cache scheduling engine, the cache scheduling engine is connected with the outgoing direction processing engine, the outgoing direction processing engine is further connected with the incoming direction processing engine through a plurality of loopback channels, and the loopback channels form a plurality of aggregation groups, and the method comprises the following steps:
when receiving an original message, the incoming direction processing engine analyzes the original message and sends the original message and analysis information of the original message to the cache scheduling engine;
the cache scheduling engine caches the received messages and the analysis information, and sequentially sends the messages and the analysis information to the outgoing direction processing engine after priority scheduling processing;
when the outbound direction processing engine receives the message and the destination port of the message is a loopback port, determining a target channel from the plurality of loopback channels according to the message content of the message and the port information of the source port, and looping the message back to the inbound direction processing engine through the target channel;
when the inbound direction processing engine obtains the loopback message, analyzing the loopback message, sending the loopback message and analysis information of the loopback message to the cache scheduling engine, and sending the loopback message to the outbound direction processing engine through the cache scheduling engine;
and the outbound direction processing engine sends the message through a network channel when receiving the message and the destination port of the message is the equipment port of the next hop equipment.
9. The high bandwidth message forwarding method of claim 8, wherein each aggregation group comprises at least one loopback channel;
the step of determining the target channel from the plurality of loopback channels according to the message content and the source port of the message comprises the following steps:
determining the service type of the message according to the message content of the message, determining the aggregation group matched with the service type from a plurality of aggregation groups according to the service type, and determining a target channel from loopback channels included in the matched aggregation group according to the message content and port information of a source port.
10. The method according to claim 9, wherein the step of determining a target channel from the loopback channels included in the matched aggregation group according to the packet content and the port information of the source port comprises:
when a message is received, the message is edited, CRC operation is carried out according to the edited message content and port information of a source port to obtain a hash value, and a target channel is determined from loopback channels included in a matched aggregation group according to the hash value.
CN202210685527.7A 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method Active CN115086253B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210685527.7A CN115086253B (en) 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210685527.7A CN115086253B (en) 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method

Publications (2)

Publication Number Publication Date
CN115086253A true CN115086253A (en) 2022-09-20
CN115086253B CN115086253B (en) 2024-03-29

Family

ID=83253301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210685527.7A Active CN115086253B (en) 2022-06-16 2022-06-16 Ethernet exchange chip and high-bandwidth message forwarding method

Country Status (1)

Country Link
CN (1) CN115086253B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070183415A1 (en) * 2006-02-03 2007-08-09 Utstarcom Incorporated Method and system for internal data loop back in a high data rate switch
CN101534253A (en) * 2009-04-09 2009-09-16 中兴通讯股份有限公司 Message forwarding method and device
US10171368B1 (en) * 2013-07-01 2019-01-01 Juniper Networks, Inc. Methods and apparatus for implementing multiple loopback links
CN103368775A (en) * 2013-07-09 2013-10-23 杭州华三通信技术有限公司 Traffic backup method and core switching equipment
CN108134747A (en) * 2017-12-22 2018-06-08 盛科网络(苏州)有限公司 The realization method and system of Ethernet switching chip, its multicast mirror image flow equalization
CN108683617A (en) * 2018-04-28 2018-10-19 新华三技术有限公司 Message diversion method, device and shunting interchanger
JP6436262B1 (en) * 2018-07-03 2018-12-12 日本電気株式会社 Network management apparatus, network system, method, and program
WO2022105289A1 (en) * 2020-11-23 2022-05-27 北京锐安科技有限公司 Flow forwarding method, service card and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张峥栋: "基于CE网络多链路聚合的探针测试设计" [Probe test design based on multi-link aggregation in a CE network], 《信息与电脑(理论版)》 [Information & Computer (Theory Edition)], no. 05, 15 March 2019 (2019-03-15) *
杨勇等: "一种网络设备内部的单端口环路检测技术" [A single-port loop detection technique inside a network device], 《通讯世界》 [Telecom World], no. 03, 25 March 2020 (2020-03-25) *

Also Published As

Publication number Publication date
CN115086253B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
CN102970227B (en) The method and apparatus of VXLAN message repeating is realized in ASIC
CN113382442B (en) Message transmission method, device, network node and storage medium
CN109510780B (en) Flow control method, switching chip and network equipment
CN101573913B (en) Method and apparatus for improved multicast routing
US20050237973A1 (en) Wireless communications apparatus, and routing control and packet transmission technique in wireless network
US20030016679A1 (en) Method and apparatus to perform network routing
US7602809B2 (en) Reducing transmission time for data packets controlled by a link layer protocol comprising a fragmenting/defragmenting capability
CN110943935B (en) Method, device and system for realizing data transmission
CN101184053B (en) LAN export link selecting method, device and routing device
CN103812787A (en) Electric power telecommunication network message prior forwarding method
US20230379244A1 (en) Ultra reliable segment routing
CN112383450A (en) Network congestion detection method and device
US20240106751A1 (en) Method and apparatus for processing detnet data packet
CN108667746B (en) Method for realizing service priority in deep space delay tolerant network
CN107846433A (en) A kind of synchronous methods, devices and systems of session information
US20170195227A1 (en) Packet storing and forwarding method and circuit, and device
CN113556784B (en) Network slice realization method and device and electronic equipment
US8488489B2 (en) Scalable packet-switch
US20100158033A1 (en) Communication apparatus in label switching network
CN115086253B (en) Ethernet exchange chip and high-bandwidth message forwarding method
CN112637705B (en) Method and device for forwarding in-band remote measurement message
CN111490941B (en) Multi-protocol label switching MPLS label processing method and network equipment
CN111600798B (en) Method and equipment for sending and obtaining assertion message
CN113518046A (en) Message forwarding method and frame type switching equipment
US9246820B1 (en) Methods and apparatus for implementing multiple loopback links

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant